Is it cryptographically insecure to use fixed-length AES-GCM messages?

Are there any weaknesses in encrypting fixed-length messages? Should a random amount of padding be added to each message to reduce the odds of some sort of attack?
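One relevant fact: AES-GCM ciphertexts are exactly plaintext length plus a 16-byte tag, so without padding the ciphertext length leaks the exact message length. A common mitigation is not random padding but padding every message up to a fixed bucket size before encryption, so the length reveals only the bucket. A minimal sketch (the bucket size and the 0x80-then-zeros padding scheme are illustrative assumptions, not a standard for GCM):

```python
# Sketch: length-hiding padding applied to the plaintext *before* AES-GCM.
# Pads with a 0x80 marker followed by zero bytes up to a fixed bucket size
# (ISO/IEC 7816-4 style), so ciphertext length only reveals the bucket.

def pad_to_bucket(msg: bytes, bucket: int = 256) -> bytes:
    # Always add at least one padding byte, then round up to the bucket size.
    padded_len = ((len(msg) + 1 + bucket - 1) // bucket) * bucket
    return msg + b"\x80" + b"\x00" * (padded_len - len(msg) - 1)

def unpad(padded: bytes) -> bytes:
    # Strip trailing zeros, then the 0x80 marker; safe even if the
    # message itself ends in zero bytes, thanks to the marker.
    return padded.rstrip(b"\x00")[:-1]
```

This hides the exact length at the cost of bandwidth; it does not change any security property of AES-GCM itself.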

c# – Enumerating all fixed-length paths from a vertex in a graph

Graph-related classes

Base implementations of the relevant classes in my graph library. I’m using C# 9 in a .NET 5 project. I opted to use structs for my Vertex and Edge types to disallow null, which simplifies the equality and comparison implementations. I’ve also chosen to keep all collections as generic as possible using IEnumerable<T>, and I’ve tried to use LINQ where I could because it’s convenient.


A simple directed graph with some basic functionalities.

public class DirectedGraph<T, U>
    where T : IComparable<T>, IEquatable<T>
    where U : IComparable<U>, IEquatable<U>
{
    public LinkedList<Vertex<T>> Vertices { get; set; } = new();
    public LinkedList<Edge<T, U>> Edges { get; set; } = new();

    public virtual void Add(T state)
        => Vertices.AddLast(new Vertex<T>(state));

    public virtual void SetEdge(Vertex<T> from, Vertex<T> to, U weight)
    {
        if (Vertices.Contains(from) && Vertices.Contains(to))
            Edges.AddLast(new Edge<T, U>(from, to, weight));
    }

    public virtual IEnumerable<Edge<T, U>> GetEdgesFrom(Vertex<T> from)
        => Edges.Where(x => x.From.Equals(from));

    public virtual IEnumerable<Edge<T, U>> GetEdgesTo(Vertex<T> to)
        => Edges.Where(x => x.To.Equals(to));

    // Method to be reviewed
    public IEnumerable<IEnumerable<Edge<T, U>>> GetPathsWithLengthFrom(int length, Vertex<T> vertex)
    {
        // Using breadth-first search
        if (length == 1)
        {
            // This seems like a reasonable base case
            return GetEdgesFrom(vertex).Select(x => Enumerable.Empty<Edge<T, U>>().Append(x));
        }
        else if (length > 1)
        {
            var pathsSoFar = GetPathsWithLengthFrom(length - 1, vertex);
            var newPaths = Enumerable.Empty<IEnumerable<Edge<T, U>>>();
            foreach (var path in pathsSoFar)
            {
                // Better way to duplicate paths or other approach altogether?
                var pathEnd = path.Last().To;
                var nextPieces = GetEdgesFrom(pathEnd);
                foreach (var nextPiece in nextPieces)
                    newPaths = newPaths.Append(path.Append(nextPiece));
            }
            return newPaths;
        }
        throw new ArgumentOutOfRangeException(nameof(length), "Path length must be greater than or equal to 1.");
    }
}
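For reference, the recursion in GetPathsWithLengthFrom can be sketched language-agnostically; here is a minimal Python equivalent over a plain adjacency-list dict (a hypothetical representation, not the library's API):

```python
def paths_of_length(adj, start, length):
    """All edge-paths of exactly `length` edges starting at `start`.

    `adj` maps a vertex to a list of its successors; an edge is a
    (from, to) pair, mirroring the Edge<T, U> struct's From/To fields.
    """
    if length < 1:
        raise ValueError("Path length must be greater than or equal to 1.")
    if length == 1:
        # Base case: every outgoing edge is a length-1 path.
        return [[(start, nxt)] for nxt in adj.get(start, [])]
    paths = []
    for path in paths_of_length(adj, start, length - 1):
        end = path[-1][1]                       # vertex the path currently ends at
        for nxt in adj.get(end, []):
            paths.append(path + [(end, nxt)])   # copy-and-extend, like Append above
    return paths
```

Note that, like the C# version, this enumerates walks (vertices may repeat), not simple paths.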


Abridged version of the vertex class with basic interface implementations. In my actual project I also implemented all the comparison and equality operators (<, >, ==, etc.).

public struct Vertex<T> : IComparable<Vertex<T>>, IComparable<T>, IEquatable<Vertex<T>>, IEquatable<T>
    where T : IComparable<T>, IEquatable<T>
{
    public Vertex(T state)
        => State = state;

    public T State { get; set; }

    #region Interface implementations and operator overloads
    public int CompareTo(Vertex<T> other)
        => State.CompareTo(other.State);

    public int CompareTo(T other)
        => State.CompareTo(other);

    public bool Equals(Vertex<T> other)
        => CompareTo(other) == 0;

    public bool Equals(T other)
        => CompareTo(other) == 0;

    public override int GetHashCode()
        => State.GetHashCode();
    #endregion
}


public struct Edge<T, U>
    where T : IComparable<T>, IEquatable<T>
    where U : IComparable<U>, IEquatable<U>
{
    public Edge(Vertex<T> from, Vertex<T> to, U weight)
        => (From, To, Weight) = (from, to, weight);

    public Vertex<T> From { get; set; }
    public Vertex<T> To { get; set; }
    public U Weight { get; set; }
}

Goal of the graph library

Initially, I wanted to use graphs to enumerate all possible ways to choose k elements from a set of n total elements, but I discovered better ways to achieve that goal.

That being said, I created a graph with customizable states and weights for a poker calculator. The customizable weight could for example keep track of which cards were drawn or which hands are likely to arise given the current game state and the remaining cards in the deck.

Finally, writing a small graph theory library from scratch helps me learn more about algorithms and data structures.

Why C# and not a faster language like C/C++?

I’m just more familiar with C#, and I like LINQ. C# also has a ton of abstractions that I would likely have to import or write myself in C/C++ (the more I have to write myself, the more likely I am to make mistakes), not to mention memory management with pointers and references.

But if C# turns out to be too slow, even after optimisations, I might switch over to C/C++ anyway.

python – Create a multi-dimensional array having fixed-length nested arrays from a dataframe

I am working to transform my dataframe into an array of fixed-size segments that I can feed to a convolutional neural net. Specifically, I would like to transform the df into a list of m arrays, each containing segments of shape (1,5,4). So in the end, I would have an (m,1,5,4) array.

To clarify my question, here is an MWE. Suppose this is my df:

df = {
    'id': (1,1,1,1,1,1,1,1,1,1,1,1),
    'speed': (17.63,17.63,0.17,1.41,0.61,0.32,0.18,0.43,0.30,0.46,0.75,0.37),
    'acc': (0.00,-0.09,1.24,-0.80,-0.29,-0.14,0.25,-0.13,0.16,0.29,-0.38,0.27),
    'jerk': (0.00,0.01,-2.04,0.51,0.15,0.39,-0.38,0.29,0.13,-0.67,0.65,0.52),
    'bearing': (29.03,56.12,18.49,11.85,36.75,27.52,81.08,51.06,19.85,10.76,14.51,24.27),
    'label': (3,3,3,3,3,3,3,3,3,3,3,3),
}

df = pd.DataFrame.from_dict(df)

To do this, I use this function:

def df_transformer(dataframe, chunk_size=5):
    grouped = dataframe.groupby('id')

    # initialize accumulators
    X, y = np.zeros((0, 1, chunk_size, 4)), np.zeros((0,))

    # loop over segments (id)
    for _, group in grouped:

        inputs = group.loc[:, 'speed':'bearing'].values
        label = group.loc[:, 'label'].values[0]

        # calculate number of splits
        N = len(inputs) // chunk_size

        if N > 0:
            inputs = np.array_split(inputs, [chunk_size]*N)
        else:
            inputs = [inputs]

        # loop over splits
        for inpt in inputs:
            inpt = np.pad(inpt, ((0, chunk_size-len(inpt)), (0, 0)), mode='constant')
            # add each inputs split to accumulators
            X = np.concatenate((X, inpt[np.newaxis, np.newaxis]), axis=0)
            y = np.concatenate((y, label[np.newaxis]), axis=0)

    return X, y

The df above has 12 rows, so if transformed correctly to the intended form, I should get an array of shape (3,1,5,4). In the above function, segments with fewer than 5 rows are zero-padded to make a (1,5,4) segment.

This function has 2 bugs that I’m yet to figure out how to fix.

BUG-1: the function introduces an all-zero array (which is totally unintended), like this:

X, y = df_transformer(df[:10])

array([[[[ 1.763e+01,  0.000e+00,  0.000e+00,  2.903e+01],
         [ 1.763e+01, -9.000e-02,  1.000e-02,  5.612e+01],
         [ 1.700e-01,  1.240e+00, -2.040e+00,  1.849e+01],
         [ 1.410e+00, -8.000e-01,  5.100e-01,  1.185e+01],
         [ 6.100e-01, -2.900e-01,  1.500e-01,  3.675e+01]]],

       [[[ 0.000e+00,  0.000e+00,  0.000e+00,  0.000e+00],
         [ 0.000e+00,  0.000e+00,  0.000e+00,  0.000e+00],
         [ 0.000e+00,  0.000e+00,  0.000e+00,  0.000e+00],
         [ 0.000e+00,  0.000e+00,  0.000e+00,  0.000e+00],
         [ 0.000e+00,  0.000e+00,  0.000e+00,  0.000e+00]]],

       [[[ 3.200e-01, -1.400e-01,  3.900e-01,  2.752e+01],
         [ 1.800e-01,  2.500e-01, -3.800e-01,  8.108e+01],
         [ 4.300e-01, -1.300e-01,  2.900e-01,  5.106e+01],
         [ 3.000e-01,  1.600e-01,  1.300e-01,  1.985e+01],
         [ 4.600e-01,  2.900e-01, -6.700e-01,  1.076e+01]]]])

The all-zero second array shouldn’t be there; I should get only the first and last arrays in that case.

BUG-2: the function ONLY works for a df with 10 rows or fewer (i.e. twice chunk_size or less). Passing the entire df above always fails with an error, like so:

X, y = df_transformer(df)

Traceback (most recent call last):
  File "", line 49, in <module>
    X , y = df_transformer(df)
  File "", line 38, in df_transformer
    inpt = np.pad(
  File "<__array_function__ internals>", line 5, in pad
  File "/Users/IT/anaconda3/lib/python3.8/site-packages/numpy/lib/", line 748, in pad
    pad_width = _as_pairs(pad_width, array.ndim, as_index=True)
  File "/Users/IT/anaconda3/lib/python3.8/site-packages/numpy/lib/", line 519, in _as_pairs
    raise ValueError("index can't contain negative values")
ValueError: index can't contain negative values
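Both symptoms trace back to the array_split call. Assuming the split list is meant to be [chunk_size] * N (as the function computes it from N = len(inputs) // chunk_size), NumPy splits at the index chunk_size twice, yielding an empty middle chunk (BUG-1); and with 12 rows the final chunk gets 7 rows, so the pad width chunk_size - len(inpt) goes negative (BUG-2). A quick check of that hypothesis:

```python
import numpy as np

# Reproduce the split the function performs for a 10-row and a 12-row group,
# assuming the index list passed to array_split is [chunk_size] * N.
chunk_size = 5
a10, a12 = np.arange(10), np.arange(12)

# N = 10 // 5 = 2, so the split indices are [5, 5]: the repeated index
# produces a zero-length middle chunk, which becomes the all-zero array.
chunks10 = np.array_split(a10, [chunk_size] * (len(a10) // chunk_size))
print([len(c) for c in chunks10])

# N = 12 // 5 = 2, split indices again [5, 5]: the last chunk has 7 rows,
# so chunk_size - len(inpt) = -2 and np.pad raises the ValueError above.
chunks12 = np.array_split(a12, [chunk_size] * (len(a12) // chunk_size))
print([len(c) for c in chunks12])
```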

Expected output: in this case, the output should have shape (3,1,5,4).

[[[[ 1.763e+01  0.000e+00  0.000e+00  2.903e+01]
   [ 1.763e+01 -9.000e-02  1.000e-02  5.612e+01]
   [ 1.700e-01  1.240e+00 -2.040e+00  1.849e+01]
   [ 1.410e+00 -8.000e-01  5.100e-01  1.185e+01]
   [ 6.100e-01 -2.900e-01  1.500e-01  3.675e+01]]]

 [[[ 3.200e-01 -1.400e-01  3.900e-01  2.752e+01]
   [ 1.800e-01  2.500e-01 -3.800e-01  8.108e+01]
   [ 4.300e-01 -1.300e-01  2.900e-01  5.106e+01]
   [ 3.000e-01  1.600e-01  1.300e-01  1.985e+01]
   [ 4.600e-01  2.900e-01 -6.700e-01  1.076e+01]]]

 [[[ 7.500e-01 -3.800e-01  6.500e-01  1.451e+01]
   [ 3.700e-01  2.700e-01  5.200e-01  2.427e+01]
   [ 0.000e+00  0.000e+00  0.000e+00  0.000e+00]
   [ 0.000e+00  0.000e+00  0.000e+00  0.000e+00]
   [ 0.000e+00  0.000e+00  0.000e+00  0.000e+00]]]]

Note: the idea is to split the df into equal chunks of chunk_size=5 rows; where the last chunk for an id cannot make up 5 rows, it should be zero-padded.

For a couple of days I have been struggling to fix these bugs, without success. Can someone help with a fix please?
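Not an authoritative answer, but one way to fix both bugs is to pass split *boundaries* to np.array_split instead of a repeated section size: the boundaries chunk_size, 2*chunk_size, … produce ceil(len/chunk_size) chunks, never an empty one, and only the final chunk can be short. A sketch assuming the intent described above (the name df_transformer_fixed is my own):

```python
import numpy as np
import pandas as pd

def df_transformer_fixed(dataframe, chunk_size=5):
    X, y = [], []
    for _, group in dataframe.groupby('id'):
        inputs = group.loc[:, 'speed':'bearing'].values
        label = group['label'].values[0]
        # Split at boundaries chunk_size, 2*chunk_size, ...: no empty chunk,
        # and any leftover rows form one final short chunk.
        for chunk in np.array_split(inputs, range(chunk_size, len(inputs), chunk_size)):
            # Only the last chunk can be short; zero-pad it up to chunk_size rows.
            chunk = np.pad(chunk, ((0, chunk_size - len(chunk)), (0, 0)), mode='constant')
            X.append(chunk[np.newaxis])   # shape (1, chunk_size, 4)
            y.append(label)
    return np.stack(X), np.array(y)
```

On the 12-row df above this yields X of shape (3,1,5,4) with no spurious all-zero segment, and it no longer raises for groups longer than twice chunk_size.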

Real Analysis – Two measures that agree on a set of fixed-length intervals in $\mathbb{R}$

Let $\mu$, $\nu$ be two $\sigma$-finite Borel measures on $\mathbb{R}$.
Let $\mathcal{B}$ be the Borel $\sigma$-algebra generated by the open sets in $\mathbb{R}$.

Prove or refute: $$\forall t \in \mathbb{R} \quad \mu((t, t+1)) = \nu((t, t+1)) \implies \forall A \in \mathcal{B} \quad \mu(A) = \nu(A)$$

My attempt:

I've tried to prove: $$\forall t, s \in \mathbb{R} \quad \mu((t, t+1) \cap (s, s+1)) = \nu((t, t+1) \cap (s, s+1))$$

Then, using the $\pi$-$\lambda$ theorem, I would be able to prove $\sigma(\mathcal{I}) \subseteq D$,

where $\mathcal{I} := \{I \mid \exists t, r \in \mathbb{R},\ 0 \le r \le 1 \text{ s.t. } I = (t, t+r)\} \cup \{\emptyset\}$

and $$D := \{A \in \mathcal{B} \mid \mu(A \cap I) = \nu(A \cap I) \text{ for every bounded closed interval } I \subset \mathbb{R}\}$$

Very similar to what was done here.

I could not prove it, so I tried to find a counterexample.
I noticed that by changing the hypothesis to:

$$\forall t \in \mathbb{R} \quad \mu((t, t+1)) = \nu((t, t+1)) \implies \forall A \in \mathcal{B} \quad \mu(A) = \nu(A)$$

I could find a counterexample by defining $\mu$ to be the standard Lebesgue measure, and $\nu$ to be a measure that counts the number of integers, that is to say

$$\forall A \in \mathcal{B} \quad \nu(A) = |\mathbb{N} \cap A|.$$

I could not modify the counterexample to fit the initial hypothesis, and because it is not valid for the following hypothesis:

$$\forall t \in \mathbb{R} \quad \mu((t, t+1)) = \nu((t, t+1)) \implies \forall A \in \mathcal{B} \quad \mu(A) = \nu(A)$$

I started thinking that it was "too" artificial.

How should I proceed?

BTW: this was homework for an "introduction to real analysis" class, but it is not anymore.

arrays – Algorithm for generating a fixed-length repetitive sequence

I'm trying to create an algorithm to generate test cases. Each test case is an array of $n$ randomly generated natural numbers produced using a pseudo-random function. The domain for $n$ is $0 < n \le 256$.

Each array is generated using a "repeatability/noise factor" $r$. If $r = n$, the array elements will all be computed individually with the pseudo-random function (hence the function is called $n$ times). At the lower limit, $r = 1$, all elements of the array are generated at once (the function is called once). As another example, if $r = n - 1$, the first two elements will be computed at the same time, while the remaining elements will be computed individually.

It is better to spread the repetition over the elements than to concentrate it in a single run: $[a, a, b, b, c]$ is better than $[a, a, a, b, c]$, although both cases call the pseudo-random function only 3 times. As such, if $n$ is an even number and $r = \frac{n}{2}$, then the array must be composed of $r$ groups of 2 elements computed together.

If my explanation is not clear, some example data might help:

Given $n = 5$, $r = 5$, the resulting array is $[a, b, c, d, e]$

Given $n = 5$, $r = 4$, the resulting array is $[a, a, b, c, d]$

Given $n = 5$, $r = 3$, the resulting array is $[a, a, b, b, c]$

Given $n = 5$, $r = 2$, the resulting array is $[a, a, a, b, b]$

Given $n = 5$, $r = 1$, the resulting array is $[a, a, a, a, a]$
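The examples above pin down the group sizes completely: with $q = \lfloor n/r \rfloor$ and $m = n \bmod r$, the first $m$ groups get $q + 1$ elements and the remaining $r - m$ groups get $q$. A sketch in Python (the `rand` callable standing in for the pseudo-random function is an assumption; any zero-argument generator works):

```python
def repetitive_sequence(n, r, rand):
    """Build an n-element array using exactly r calls to rand().

    The first n % r groups get one extra element, which spreads the
    repetition evenly: [a, a, b, b, c] rather than [a, a, a, b, c].
    """
    if not 1 <= r <= n:
        raise ValueError("need 1 <= r <= n")
    q, m = divmod(n, r)
    out = []
    for i in range(r):
        value = rand()                            # one pseudo-random draw per group
        out.extend([value] * (q + 1 if i < m else q))
    return out
```

For example, with a generator yielding a, b, c, … in order, n = 5 and r = 3 produce [a, a, b, b, c], matching the table above.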