fourier analysis – Does this formula correspond to a series representation of the Dirac delta function $\delta(x)$?

Consider the following formula, which defines a piecewise function that I believe corresponds to a series representation of the Dirac delta function $\delta(x)$. The parameter $f$ is the evaluation frequency, assumed to be a positive integer, and the evaluation limit $N$ must be selected such that $M(N)=0$, where $M(x)=\sum\limits_{n\le x}\mu(n)$ is the Mertens function.
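The admissible evaluation limits, i.e. the zeros of the Mertens function, can be computed directly. A quick sketch (naive trial-division Möbius function, purely illustrative and not optimized):

```python
def mobius(n):
    """Moebius function mu(n) via trial factorization."""
    if n == 1:
        return 1
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:   # squared prime factor => mu(n) = 0
                return 0
            result = -result
        p += 1
    if n > 1:               # leftover prime factor
        result = -result
    return result

def mertens(x):
    """Mertens function M(x) = sum of mu(n) for n <= x."""
    return sum(mobius(n) for n in range(1, x + 1))

# Admissible evaluation limits N are the zeros of M:
zeros = [N for N in range(1, 160) if mertens(N) == 0]
# the sequence begins 2, 39, 40, 58, 65, 93, 101, ...
```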

(1) $\quad\delta(x)=\underset{N,f\to\infty}{\text{lim}}\ 2\sum\limits_{n=1}^N\frac{\mu(n)}{n}\sum\limits_{k=1}^{f n}\begin{cases}\cos\left(\frac{2 k \pi (x+1)}{n}\right) & x\geq 0 \\ \cos\left(\frac{2 k \pi (1-x)}{n}\right) & x<0\end{cases},\quad M(N)=0$

The following figure illustrates formula (1) above evaluated at $N=39$ and $f=4$. The discrete red dots in Figure (1) below illustrate the evaluation of formula (1) at integer values of $x$. I believe formula (1) always evaluates to exactly $2f$ at $x=0$ and to exactly zero at the other integer values of $x$.
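This claim about integer arguments is easy to check numerically. A sketch of the $x\ge 0$ branch of formula (1) at $N=39$, $f=4$ (with a naive Möbius helper, included only so the snippet is self-contained):

```python
import math

def mobius(n):
    """Moebius function mu(n) via trial factorization."""
    if n == 1:
        return 1
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def delta_series(x, N=39, f=4):
    """Formula (1), x >= 0 branch, at evaluation limit N (with M(N)=0)."""
    total = 0.0
    for n in range(1, N + 1):
        inner = sum(math.cos(2 * k * math.pi * (x + 1) / n)
                    for k in range(1, f * n + 1))
        total += mobius(n) / n * inner
    return 2 * total

# delta_series(0) should give exactly 2*f = 8 (up to rounding),
# and other integers should give 0.
```

The exactness at integers follows because each inner sum runs over $f$ full periods of $\cos(2\pi k(x+1)/n)$, which collapses to a divisor sum of $\mu$ over $x+1$.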


Figure (1): Illustration of formula (1) for $\delta(x)$

Now consider formula (2) below, derived from the integral $f(0)=\int_0^{\infty}\delta(x)\,f(x)\,dx$ where $f(x)=e^{-\left|x\right|}$ and formula (1) above for $\delta(x)$ was used to evaluate the integral. Formula (2) below can also be evaluated as illustrated in formula (3) below.

(2) $\quad e^{-\left|0\right|}=1=\underset{N,f\to\infty}{\text{lim}}\ 4\sum\limits_{n=1}^N\mu(n)\sum\limits_{k=1}^{f n}\frac{n \cos\left(\frac{2 \pi k}{n}\right)-2 \pi k \sin\left(\frac{2 \pi k}{n}\right)}{4 \pi^2 k^2+n^2}\,,\quad M(N)=0$

(3) $\quad e^{-\left|0\right|}=1=\underset{N\to\infty}{\text{lim}}\ \mu(1)\left(\coth\left(\frac{1}{2}\right)-2\right)+4\sum\limits_{n=2}^N\frac{\mu(n)}{4 e \left(e^n-1\right) n}\left(-2 e^{n+1}+e^n n+e^2 n-e \left(e^n-1\right) \left(e^{-\frac{2 i \pi}{n}}\right)^{\frac{i n}{2 \pi}} B_{e^{-\frac{2 i \pi}{n}}}\left(1-\frac{i n}{2 \pi},-1\right)+e \left(e^n-1\right) \left(e^{-\frac{2 i \pi}{n}}\right)^{-\frac{i n}{2 \pi}} B_{e^{-\frac{2 i \pi}{n}}}\left(\frac{i n}{2 \pi}+1,-1\right)+\left(e^n-1\right) \left(B_{e^{\frac{2 i \pi}{n}}}\left(1-\frac{i n}{2 \pi},-1\right)-e^2 B_{e^{\frac{2 i \pi}{n}}}\left(\frac{i n}{2 \pi}+1,-1\right)\right)+2 e\right),\quad M(N)=0$

Here $B_z(a,b)$ denotes the incomplete beta function.

The following table illustrates formula (3) above evaluated for several values of $N$ corresponding to zeros of the Mertens function $M(x)$. Note formula (3) seems to converge to $e^{-\left|0\right|}=1$ as the magnitude of the evaluation limit $N$ increases.

| $n$ | $N=n^{\text{th}}$ zero of $M(x)$ | Evaluation of formula (3) for $e^{-\lvert 0\rvert}$ |
|---:|---:|---|
| 10 | 150 | $0.973479 + 5.498812269991985\times 10^{-17}\,i$ |
| 20 | 236 | $0.982236 - 5.786047752866836\times 10^{-17}\,i$ |
| 30 | 358 | $0.988729 - 6.577233629689039\times 10^{-17}\,i$ |
| 40 | 407 | $0.989363 + 2.6889189402888207\times 10^{-17}\,i$ |
| 50 | 427 | $0.989387 + 4.472005325912989\times 10^{-17}\,i$ |
| 60 | 785 | $0.995546 + 6.227857765313369\times 10^{-18}\,i$ |
| 70 | 825 | $0.995466 - 1.6606923419056456\times 10^{-17}\,i$ |
| 80 | 893 | $0.995653 - 1.1882293286557667\times 10^{-17}\,i$ |
| 90 | 916 | $0.995653 - 3.521050901644269\times 10^{-17}\,i$ |
| 100 | 1220 | $0.997431 - 1.2549006768893629\times 10^{-16}\,i$ |

Finally, consider the following three formulas derived from the Fourier convolution $f(y)=\int\limits_{-\infty}^\infty\delta(x)\,f(y-x)\,dx$, where all three convolutions were evaluated using formula (1) above for $\delta(x)$.

(4) $\quad e^{-\left|y\right|}=\underset{N,f\to\infty}{\text{lim}}\ 4\sum\limits_{n=1}^N\mu(n)\sum\limits_{k=1}^{f n}\frac{1}{4 \pi^2 k^2+n^2}\begin{cases} n \cos\left(\frac{2 k \pi (y+1)}{n}\right)-2 k \pi e^{-y} \sin\left(\frac{2 k \pi}{n}\right) & y\geq 0 \\ n \cos\left(\frac{2 k \pi (y-1)}{n}\right)-2 k \pi e^{y} \sin\left(\frac{2 k \pi}{n}\right) & y<0\end{cases},\quad M(N)=0$

(5) $\quad e^{-y^2}=\underset{N,f\to\infty}{\text{lim}}\ \sqrt{\pi}\sum\limits_{n=1}^N\frac{\mu(n)}{n}\sum\limits_{k=1}^{f n} e^{-\frac{\pi k (\pi k+2 i n y)}{n^2}}\left(\left(1+e^{\frac{4 i \pi k y}{n}}\right)\cos\left(\frac{2 \pi k}{n}\right)-\sin\left(\frac{2 \pi k}{n}\right)\left(\text{erfi}\left(\frac{\pi k}{n}+i y\right)+e^{\frac{4 i \pi k y}{n}}\,\text{erfi}\left(\frac{\pi k}{n}-i y\right)\right)\right),\quad M(N)=0$

(6) $\quad\sin(y)\,e^{-y^2}=\underset{N,f\to\infty}{\text{lim}}\ \frac{1}{2}\left(i\sqrt{\pi}\right)\sum\limits_{n=1}^{N}\frac{\mu(n)}{n}\sum\limits_{k=1}^{f n} e^{-\frac{(2 \pi k+n)^2+8 i \pi k n y}{4 n^2}}\left(-\left(e^{\frac{2 \pi k}{n}}-1\right)\left(-1+e^{\frac{4 i \pi k y}{n}}\right)\cos\left(\frac{2 \pi k}{n}\right)+\sin\left(\frac{2 \pi k}{n}\right)\left(\text{erfi}\left(\frac{\pi k}{n}+i y+\frac{1}{2}\right)-e^{\frac{4 i \pi k y}{n}}\left(e^{\frac{2 \pi k}{n}}\,\text{erfi}\left(-\frac{\pi k}{n}+i y+\frac{1}{2}\right)+\text{erfi}\left(\frac{\pi k}{n}-i y+\frac{1}{2}\right)\right)+e^{\frac{2 \pi k}{n}}\,\text{erfi}\left(-\frac{\pi k}{n}-i y+\frac{1}{2}\right)\right)\right),\quad M(N)=0$

Formulas (4), (5), and (6) defined above are illustrated in the following three figures, where the blue curves are the reference functions, the orange curves represent formulas (4), (5), and (6) evaluated at $f=4$ and $N=39$, and the green curves represent formulas (4), (5), and (6) evaluated at $f=4$ and $N=101$. The three figures below suggest formulas (4), (5), and (6) converge to the corresponding reference functions for $y\in\mathbb{R}$ as the evaluation limit $N$ is increased. Note formula (6) above for $\sin(y)\,e^{-y^2}$, illustrated in Figure (4) below, seems to converge much faster than formulas (4) and (5), perhaps because formula (6) represents an odd function whereas formulas (4) and (5) both represent even functions.


Figure (2): Illustration of formula (4) for $e^{-\left|y\right|}$ evaluated at $N=39$ (orange curve) and $N=101$ (green curve) overlaid on the reference function in blue


Figure (3): Illustration of formula (5) for $e^{-y^2}$ evaluated at $N=39$ (orange curve) and $N=101$ (green curve) overlaid on the reference function in blue


Figure (4): Illustration of formula (6) for $\sin(y)\,e^{-y^2}$ evaluated at $N=39$ (orange curve) and $N=101$ (green curve) overlaid on the reference function in blue

Question (1): Is it true that formula (1) above is an example of a series representation of the Dirac delta function $\delta(x)$?

Question (2): What is the class or space of functions $f(x)$ for which the integral $f(0)=\int\limits_{-\infty}^\infty\delta(x)\,f(x)\,dx$ and the Fourier convolution $f(y)=\int\limits_{-\infty}^\infty\delta(x)\,f(y-x)\,dx$ are both valid when formula (1) above for $\delta(x)$ is used to evaluate them?

Question (3): Is formula (1) above for $\delta(x)$ an example of what is referred to as a tempered distribution, or is it more general than a tempered distribution?

Formula (1) above for $\delta(x)$ is based on the nested Fourier series representation of $\delta(x+1)+\delta(x-1)$. The conditional convergence requirement $M(N)=0$ for formula (1) arises because the nested Fourier series representation of $\delta(x+1)+\delta(x-1)$ only evaluates to zero at $x=0$ when $M(N)=0$.

Whereas the Fourier convolution $f(y)=\int\limits_{-\infty}^\infty\delta(x)\,f(y-x)\,dx$ evaluated with $\delta(x)$ defined in formula (1) above seems to converge for $y\in\mathbb{R}$, Mellin convolutions such as $f(y)=\int\limits_0^\infty\delta(x-1)\,f\left(\frac{y}{x}\right)\frac{dx}{x}$ and $f(y)=\int\limits_0^\infty\delta(x-1)\,f(y x)\,dx$ evaluated using the nested Fourier series representation of $\delta(x+1)+\delta(x-1)$ typically seem to converge on a half-plane (e.g. $\Re(s)>0$ or $\Re(s)<0$, depending on the function $f(x)$), though in some cases these Mellin convolutions are globally convergent for $s\in\mathbb{C}$. I'll note that, in general, formulas derived from Fourier convolutions evaluated using formula (1) above for $\delta(x)$ seem to be more complicated than formulas derived from Mellin convolutions using the nested Fourier series representation of $\delta(x+1)+\delta(x-1)$, which I suspect is at least partially related to the extra complexity of the piecewise nature of formula (1).

See this answer I posted to my own question, What is the Relationship Between the Distributional and Fourier Series Frameworks for Prime Counting Functions?, for more information on the nested Fourier series representation of $\delta(x+1)+\delta(x-1)$ and examples of formulas derived from Mellin convolutions using this representation. See my question on the nested Fourier series representation of $h(s)=\frac{i s}{s^2-1}$ for information on the more general topic of nested Fourier series representations of other non-periodic functions.

formal languages – Shortest unambiguous representation of a graph over an alphabet

I just started reading a book on theoretical computer science and here are a couple of beginner questions about graphs, which I am struggling to answer:

Given the graph with the matrix representation, what is its shortest possible unambiguous representation over the Boolean alphabet? How about over an arbitrary alphabet?

Thank you :]

fa.functional analysis – General construction of enveloping C*-algebra, left/right-regular representation, etc

In a number of contexts (e.g. groups, crossed products, groupoids, Fell bundles) there are similar constructions of enveloping C*-algebras and left/right-regular representations that incorporate intertwining operators, subrepresentations, cyclicity, matrix coefficient functions, etc. Is there a generalized version of this construction?

reference request – Irreducibility of the adjoint representation in positive characteristic

Let $G$ be a simple, simply connected algebraic group over an algebraically closed field $k$ of characteristic $p>0$. Let $\mathrm{Ad}$ be the adjoint representation of $G$ on $\mathrm{Lie}(G)$.

Given the Dynkin diagram of $G$, for which $p$ is $\mathrm{Ad}$ irreducible?

I am happy to assume that $p$ is a very good prime for $G$.

usability – Why is tree representation of data becoming unpopular?

The number 1 answer for this question begins by stating:

“People don’t generally use hierarchical structures ‘in the real world’ — it seems to be something that has been forced upon them, a technical remnant of the past.”

What?!? That’s crazy talk. This is an old question but I wanted to answer it because it’s spreading a bit of misinformation. Hierarchical systems are actually pretty awesome and efficient. I don’t know what this guy is talking about.

He also says that:

“What needs to be understood is the way that people recognise and organise things. Our brains don’t work in a hierarchical way (without generating a lot of heat). Instead, we recognise things by similarity — similarity of appearance, smell, taste, touch, etc. We see an apple, and we know it’s an apple straight away. We don’t have to think about it — in a sense, it’s a one dimensional way of thinking.”

This is crazy too. Our brains don't organize information in a hierarchical system, but if they did, they would work much more efficiently. Our brains literally cannot work in a hierarchical way, so no, it won't generate a lot of heat; they simply can't do it. And our brains do not work by similarity either. I take that back: they do a little bit of classification. For example, they might put red memories next to red memories, but this system of classification is a minor, secondary method used by the brain. The brain actually stores information in chronological order. That's it. Do you guys think that's efficient at all? This is why, when you forget something, no matter how much you think about the events that occurred after you lost it, you will NEVER remember where you placed it. But simply walk to the place you were right before you lost it, and you'll remember where you put the lost object in a second. Every memory is linked in one direction: forward. You can't remember things backward, unfortunately. Try it.

Storing things chronologically might be appropriate sometimes but many times, it’s not very efficient. Yet, that’s how Evolution set things up. By the way, tagging has its good points but it has its bad points too. But that’s a topic for another time.

I just wanted to make sure people understood that hierarchical filing systems are not some remnant of the past. They are an excellent way of organizing information. You can divide the files into 2, 3, or 10 different folders (4 is preferred) based upon a sensible category. Doing it this way, you can quickly whittle down the number of files until you finally locate your file. For example, if you have 10,000 files, you only need about 7 folder levels to find the file you want.

So, what's the problem? The problem is that the amount of information has increased exponentially. I don't mean it has increased exponentially from 20 years ago to today. No. It has increased exponentially each year for the past 2 decades. This is the problem of today: TMI, Too Much Information. Maybe we should rename it Too Many Files. Whatever. This problem actually started at the advent of the internet. We had things called portals, like Yahoo, AOL, and MSN. These were websites that helped people find the websites they needed. Need to go shopping? Yahoo gave you a comprehensive list of where to click.

It was essentially a list of websites divided via a hierarchical system. But the problem even then was that the number of websites was literally growing exponentially. The portal maintainers could not keep up with reviewing which websites were the best and posting them. Search engines were needed, and needed quickly. Now here comes the ingenious part. It wasn't intentionally ingenious, just lucky, but it would still be an ingenious arrangement if it had been done intentionally. Search engines found a source of labor that could quickly find the website you were looking for. Nope, it wasn't immigrants or out-sourcing the labor to India. The source of labor was the Department of Y.O.U. You and everyone else found the websites for you and everyone else! You and everyone else did the work for the search engines! The websites you and everyone else visit most often are tracked by the search engines, and that pushes them up higher in the site rankings. Basically, Google is taking your work and you pay them for it! Apparently, it's billions of dollars a year. Ironic, isn't it? Or maybe the correct word is sick? I don't know.

If you read an introductory book on information science, it will immediately explain that hierarchical file systems are a great way to organize data. However, the main reason hierarchical file systems are becoming less popular is that they start to become inefficient as the number of files grows large.

I'll go through a simple explanation. I forget the exact number, but after a certain number of levels, it gets mentally strenuous and time-consuming to retrieve just one file. For example, say you have 50,000 files, which is slightly large for a personal computer. You'll need at most 8 levels of folders to reach the deepest folder if each folder node branches into 4 other folder nodes (4x4x4x4x4x4x4x4 = 65,536). You'll probably need fewer folder levels, but I am giving you the worst-case scenario. Just going through 4 levels is time-consuming, but imagine having to go through 8 folder levels to reach the file that you want!
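The worst-case arithmetic above can be sketched as a small calculation: with $b$-way branching, you need the smallest number of levels whose leaf capacity reaches the file count. (The helper name below is mine, purely for illustration.)

```python
def levels_needed(files, branching=4):
    """Smallest number of folder levels whose leaves can hold `files` files,
    assuming every folder branches into `branching` subfolders."""
    levels = 0
    capacity = 1
    while capacity < files:
        capacity *= branching
        levels += 1
    return levels

# 4**8 = 65,536 >= 50,000, so 50,000 files need at most 8 levels of 4-way folders
```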

This is why tagging has been becoming a more important system of file organization. The number of files has increased dramatically. In servers, it's not surprising to find tens of millions of files. Are you going to search through them via a folder system? That would be nuts. Tagging is a way to get to the information directly. The only problem is that tagging comes with its own issues, like creating unique names.

decimal expansion – Is there a numerical representation in which each rational has only one representation?

In positional number systems, there are always rational numbers that have multiple representations. For example, in base 10, 1 can be written as $1$ or as $0.\overline{9}$. Are there numerical representations in which every rational number has exactly one representation?

Beginner – Graphical representation of various mathematical functions in Python

The task of this code is to graph various algebraic, logarithmic, and trigonometric functions and relations in Python using the matplotlib.pyplot module. The process of turning code into graphics works like this: first, I build a list of xs using set_width(width). Then I iterate through the list, substituting each x into the function's equation, which produces a list of ys of the same length. With the xs and the ys in hand, I can pass the two lists to plt.plot() and display the result. The exceptions to this process are the logarithmic and square-root functions, due to math domain errors.

import matplotlib.pyplot as plt
import numpy as np
import math

def set_width(width):
    """Sets how many xs will be included in the graphs ("width" of the graph)"""
    return list(range(-width, width + 1))

def linear(width):
    """Graphs a linear function via slope intercept form"""
    xs = set_width(width)

    def ys(m=1.0, b=0):
        return [m * x + b for x in xs]

    # "xs" and "ys" are not labeled "domain" and "range" because "all real
    # numbers" will be limited to just a certain list of xs and ys

    plt.plot(xs, ys())
    plt.plot(xs, ys(3, 2))
    plt.plot(xs, ys(5, -3))

def quadratic(width):
    """Graphs a quadratic function via vertex form"""
    xs = set_width(width)

    def ys(a=1.0, h=0, k=0):
        return [a * (x - h) ** 2 + k for x in xs]

    plt.plot(xs, ys())
    plt.plot(xs, ys(1, 10, -50))
    plt.plot(xs, ys(-4))

def exponential(width):
    """Graphs an exponential function"""
    xs = set_width(width)

    def ys(a=1.0, b=2.0, h=0, k=0):
        return [a * b ** (x - h) + k for x in xs]

    plt.plot(xs, ys())
    plt.plot(xs, ys(3, 2, 4, 20))
    plt.plot(xs, ys(5, 0.75))

def absolute(width):
    """Graphs an absolute function"""
    xs = set_width(width)

    def ys(a=1.0, h=0, k=0):
        return [a * abs(x - h) + k for x in xs]

    plt.plot(xs, ys())
    plt.plot(xs, ys(4, 7))
    plt.plot(xs, ys(-0.5, -4, -15))

def square_root(width):
    """Graphs a square root function"""
    def transform(a=1.0, h=0, k=0):
        xs = [x for x in set_width(width) if x - h >= 0]
        ys = [a * np.sqrt(x - h) + k for x in xs]
        return xs, ys

    parent = transform()
    plt.plot(parent[0], parent[1])
    twice_r5 = transform(2, 5)
    plt.plot(twice_r5[0], twice_r5[1])
    half_l2_u5 = transform(.5, -2, 5)
    plt.plot(half_l2_u5[0], half_l2_u5[1])

def cube_root(width):
    """Graphs a cube root function"""
    xs = set_width(width)

    def ys(a=1.0, h=0, k=0):
        return [a * np.cbrt(x - h) + k for x in xs]

    plt.plot(xs, ys())
    plt.plot(xs, ys(-3, 0, 1))
    plt.plot(xs, ys(2, 4, -3))

def sideways_parabola(height):
    """Graphs a sideways parabola (quadratic relation)"""
    ys = set_width(height)

    def xs(a=1.0, h=0, k=0):
        return [a * (y - k) ** 2 + h for y in ys]

    plt.plot(xs(), ys)
    plt.plot(xs(3, 3, 3), ys)
    plt.plot(xs(-2, -7, 0), ys)

def logarithms(width):
    """Graphs a logarithmic function"""
    def transform(b=2.0, a=1.0, h=0, k=0):
        xs = [x for x in set_width(width) if x - h > 0]
        ys = [a * math.log(x - h, b) + k for x in xs]
        return xs, ys

    parent = transform()
    plt.plot(parent[0], parent[1])
    three_r3 = transform(3, 2, 1000)
    plt.plot(three_r3[0], three_r3[1])

def sine(width):
    """Graphs a sine function"""
    xs = set_width(width)

    def ys(a=1.0, h=0, k=0):
        return [a * np.sin(x - h) + k for x in xs]

    plt.plot(xs, ys())
    plt.plot(xs, ys(3, 5))
    plt.plot(xs, ys(0.5, 0, -3))

def cosine(width):
    """Graphs a cosine function"""
    xs = set_width(width)

    def ys(a=1.0, h=0, k=0):
        return [a * np.cos(x - h) + k for x in xs]

    plt.plot(xs, ys())
    plt.plot(xs, ys(-1))
    plt.plot(xs, ys(2, 7, 9))

def tangent(width):
    """Graphs the tangent function"""
    xs = set_width(width)

    def ys(a=1.0, h=0, k=0):
        return [a * math.tan(x - h) + k for x in xs]

    plt.plot(xs, ys())
    plt.plot(xs, ys(1, -10))
    plt.plot(xs, ys(6, -8, 20))
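One design note on `square_root` and `logarithms`: returning generator expressions is fragile, because a generator can be consumed only once, and the `ys` generator lazily drains the `xs` generator it wraps. A minimal, plotting-free sketch of the pitfall (names here are illustrative, not from the code above):

```python
def as_generators(width):
    """Returns lazy generators; draining ys also drains xs."""
    xs = (x for x in range(-width, width + 1) if x >= 0)
    ys = (x ** 0.5 for x in xs)   # ys pulls values from xs on demand
    return xs, ys

xs, ys = as_generators(5)
y_list = list(ys)   # consumes ys, which consumes xs ...
x_list = list(xs)   # ... so xs is now empty: the x data would be missing

def as_lists(width):
    """Returning lists is safe: both sequences can be reused."""
    xs = [x for x in range(-width, width + 1) if x >= 0]
    ys = [x ** 0.5 for x in xs]
    return xs, ys
```

This is why the list-comprehension versions are used throughout the review: both sequences survive being passed to plt.plot() (and can be indexed with `[0]`/`[1]`, whereas calling a tuple like `parent(0)` raises a `TypeError`).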


proof explanation – Rudin, Riesz Representation Step X: Why do we need $|a|$?

In this last step, we show that $\Lambda f = \int_X f \, d\mu$. It suffices to show that $\Lambda f \le \int_X f \, d\mu$ when $f$ is real. Let me briefly describe this step to give my question context, though the question only concerns the inequalities at the end.

Let $K$ be the support of $f$. Choose $(a, b)$ to be an interval containing the range of $f$, choose $\epsilon>0$, and choose $y_i$ such that $y_i - y_{i-1} < \epsilon$ and $$y_0 < a < y_1 < \cdots < y_n = b.$$
Let $E_i = \{x : y_{i-1} < f(x) \le y_i\} \cap K$. Then there are open sets $E_i \subset V_i$ such that $$\mu(V_i) < \mu(E_i) + \frac{\epsilon}{n}$$ and $f(x) < y_i + \epsilon$ for $x \in V_i$. Then, using the previous steps, we find functions $h_i \prec V_i$ such that $\sum h_i = 1$. So $\mu(K) \le \sum \Lambda h_i$, and $h_i f \le (y_i + \epsilon) h_i$. We now get a long chain of equalities and inequalities.

$$\Lambda f = \sum^n \Lambda(h_i f) \le \sum^n (y_i+\epsilon)\Lambda h_i \\
= \sum^n (|a|+y_i+\epsilon)\Lambda h_i - |a|\sum^n \Lambda h_i \\
\le \sum^n (|a|+y_i+\epsilon)\left(\mu(E_i)+\frac{\epsilon}{n}\right) - |a|\,\mu(K) \\
= \sum^n (y_i-\epsilon)\mu(E_i) + 2\epsilon\,\mu(K) + \frac{\epsilon}{n}\sum^n (|a|+y_i+\epsilon) \\
\le \int_X f\,d\mu + \epsilon\left(2\mu(K)+|a|+b+\epsilon\right).$$

My question is: why do we inject $|a|$ into the inequality? What is it for? It looks like we could transform the second sum of the fourth line into the fifth line without $|a|$ being there.

cv.complex variables – Reference request for the integral representation of the Hadamard product of two infinite series

Define $F(x) = \sum_{n \geq 1} f_{n} x^{n}$ and $G(x) = \sum_{n \geq 1} g_{n} x^{n}$. Then the Hadamard product of $F$ and $G$ is

$$H(x) := (F \ast G)(x) = \sum_{n \geq 1} f_{n} g_{n} x^{n}.$$

The author of the Riesz equivalent of the Riemann hypothesis and the Hadamard product states that

$$H(x) = \frac{1}{2 \pi} \int_{0}^{2 \pi} F\left(\sqrt{x}\, e^{it}\right) G\left(\sqrt{x}\, e^{-it}\right) \mathrm{d}t.$$

However, no reference or proof of this identity was given. Does anyone know where I can find a proof of, or a reference for, this identity?
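For what it's worth, the identity is easy to sanity-check numerically. A sketch with $F = G$ taken to be the geometric series $x/(1-x)$ (so $f_n = g_n = 1$ and $H(x) = x/(1-x)$), approximating the contour average by a Riemann sum:

```python
import cmath

def F(z):
    # geometric series: sum_{n>=1} z^n = z/(1-z) for |z| < 1
    return z / (1 - z)

def hadamard_average(x, steps=2000):
    """(1/2pi) * integral_0^{2pi} F(sqrt(x) e^{it}) F(sqrt(x) e^{-it}) dt,
    approximated by an equally spaced Riemann sum (spectrally accurate
    for smooth periodic integrands)."""
    r = cmath.sqrt(x)
    total = 0.0
    for j in range(steps):
        t = 2 * cmath.pi * j / steps
        total += (F(r * cmath.exp(1j * t)) * F(r * cmath.exp(-1j * t))).real
    return total / steps

# hadamard_average(0.25) should match H(0.25) = 0.25/0.75 = 1/3
```

This only checks one example, of course, but the same sketch works for any pair of series with radius of convergence exceeding $\sqrt{x}$.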