reference request – Approximating band limited functions from scattered data : any mathematical or physical applications?

Approximating a function from scattered data can be done with the radial basis function (RBF) method, as long as the function lies in the native space of the RBF. By approximation, I mean that the interpolant converges to the function being approximated as the data points become dense in the domain.
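
(For concreteness, the RBF interpolant in question has the standard form below, possibly plus a low-degree polynomial term; $\Phi$ is the chosen radial kernel, $x_1,\dots,x_N$ are the scattered data sites, and the coefficients $c_j$ are fixed by the interpolation conditions.)

$$s(x)=\sum_{j=1}^{N} c_j\,\Phi(x-x_j),\qquad s(x_k)=f(x_k),\quad k=1,\dots,N.$$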

All band-limited functions lie in the native spaces of most RBFs. But the problem here is that we are approximating functions from a more regular class by functions from a less regular class, which is usually not helpful in applications.

Suppose there is a method to approximate band-limited functions from their scattered data by functions belonging to the same class (functions of the same bandwidth or lower). That means the final interpolant is also band limited and belongs to the same class. (The interpolant converges pointwise to the function being approximated as the data points become dense.)

Question: Are there any mathematical applications for this type of approximation of band-limited functions from scattered data? Can it be applied to solving any math problems? I'd be happy to learn of any cases where this can be used.

mathematical physics – gamma functions as infinite series

I have been trying to evaluate this series but couldn't make a single step of progress. Can someone please give me a clue on how to proceed?

The problem is to prove the following identity:

$$\frac{1}{n+1}+m\frac{1}{n+2}+\frac{m(m+1)}{2!}\frac{1}{n+3}+\frac{m(m+1)(m+2)}{3!}\frac{1}{n+4}+\cdots=\frac{\Gamma(n+1)\Gamma(1-m)}{\Gamma(n-m+2)}$$

where $n>-1$ and $m<1$.
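
(One possible line of attack, sketched under the assumption that sum and integral may be interchanged: write each $\frac{1}{n+k+1}$ as $\int_0^1 x^{n+k}\,dx$ and recognize the binomial series. The conditions $n>-1$ and $m<1$ are exactly what make the final integral converge.)

$$\sum_{k=0}^{\infty}\frac{m(m+1)\cdots(m+k-1)}{k!}\,x^{k}=(1-x)^{-m},$$
$$\text{so the sum becomes }\int_0^1 x^{n}(1-x)^{-m}\,dx=B(n+1,\,1-m)=\frac{\Gamma(n+1)\,\Gamma(1-m)}{\Gamma(n-m+2)}.$$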

mathematical optimization – Parameter estimation of a stochastic volatility jump diffusion model

Is it possible to estimate the parameters of a stochastic volatility jump diffusion model from return data in Mathematica, using a maximum likelihood estimation approach?

For example, see here:

https://onlinelibrary.wiley.com/doi/10.1111/1540-6261.00566

My model consists of two Poisson processes, but unlike the paper above, they appear only in the return dynamics. I would like to estimate the parameters mu, kappa, sigma, rho, theta and the intensity. Can I handle this in Mathematica?
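
In principle, if you can evaluate the log-density of one return observation under your model, you can hand the summed log-likelihood to NMaximize. Below is only a bare skeleton of that approach: `returns` and `logDensity` are hypothetical placeholders you would have to supply, and for SVJD-type models the transition density is usually the hard part (typically obtained by Fourier inversion of the model's characteristic function).

(* Skeleton only: `returns` is your data, and `logDensity[r, ...]` is a
   hypothetical user-supplied log-density of a single return r given the
   parameters; neither is defined here. *)
logLik[mu_?NumericQ, kappa_?NumericQ, theta_?NumericQ, sigma_?NumericQ,
   rho_?NumericQ, lambda_?NumericQ] :=
  Total[logDensity[#, mu, kappa, theta, sigma, rho, lambda] & /@ returns];

fit = NMaximize[{logLik[mu, kappa, theta, sigma, rho, lambda],
    kappa > 0, theta > 0, sigma > 0, -1 < rho < 1, lambda > 0},
   {mu, kappa, theta, sigma, rho, lambda}]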

blockchain – How does the mathematical problem get harder for bitcoin mining over time?

The easiest way I can think of to explain this is to give you a die and say, "If you roll less than a 6, I'll give you $100." It'll be pretty easy to get the money, right? The only way you can lose is if you roll a 6.

Now if I change the game to "roll less than a 2", it's harder – less likely that you'll win on your first go – because you have to roll a 1.

So let's say I'm aiming for you to win $100 a minute, and you're fast enough to roll about 3 times a minute. You start rolling, and I notice that with the difficulty set at 5 you're winning a lot more often than once a minute, so I dial the difficulty number down (making it harder). Say we reach a point where, with the difficulty set at "below 3", you are winning once a minute on average. Then you have a real streak of bad luck and it takes you 4 minutes to roll less than 3 – you just kept rolling 6s for the first 3 minutes. I figure I'll go easy on you: because you're having a bad streak, I raise the difficulty number again to make it easier to win. Then you start doing really well, so I start reducing the number again.

On a regular basis, I'll look back over the average time it took you to win recent games and decide whether to adjust the difficulty, tweaking it up or down to try to keep you winning once a minute. If you get faster at rolling, I'll make the game harder. If you get your buddy to roll with you, I'll make it harder still. If you put the die away for a day and then roll a win, it'll massively skew the average time per win, and I'll probably adjust things so it's easier.
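
Written as code, that adjustment rule might look like the sketch below (a rough analogue of Bitcoin's actual rule, which rescales the target once every 2016 blocks and clamps the change to a factor of four in either direction; the numbers in the usage line are made up).

retarget[oldThreshold_, actualTime_, expectedTime_] :=
  oldThreshold*Clip[actualTime/expectedTime, {1/4, 4}]

retarget[1000, 40, 20]  (* the last batch took twice as long as expected, so the roll-under threshold doubles and winning gets easier *)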


This example uses a six-sided die; the numbers involved in Bitcoin are much, much higher. At the time of writing the difficulty is about 15,000,000,000,000, and a hash can be anything up to around 2^256; the difficulty measures how many times smaller than the maximum target (about 2^224, roughly 2.7 × 10^67, or 27,000 vigintillion) the current target is. But the premise remains the same: it's a die with an astronomical number of sides, and you have to roll less than the maximum target divided by about 15 trillion. And you can buy devices that roll up to 16 trillion times a second. With your big rewards you can buy loads of these devices and fill a warehouse with them, increasing your odds of being the one to roll the win – if you can afford the power bill.

Rolling the die is generating the hash code for the transactions the miner has decided to pack into a block.

In essence, the miner packs a block with transactions and calculates the hash. If it's greater than the difficulty, they change one number in the block header called the nonce, rather than going to the effort of packing another block of transactions – far more computationally expensive, though it would also have the desired effect of changing the hashcode, which is the overall goal. So we change the nonce and redo the hash: is it less than the difficulty? No. Change the nonce, redo the hash. Check the difficulty/change the nonce/hash/check/change/hash/check/change...
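
As a toy version of that loop (just a sketch in Mathematica: the block string and the target are made up, the target is deliberately generous so the loop finishes quickly, and real mining hashes an 80-byte block header twice with SHA-256):

block = "some packed transactions";  (* stand-in for a real block header *)
target = 2^248;                      (* deliberately easy toy target: about 1 try in 256 succeeds *)
nonce = 0;
While[Hash[block <> IntegerString[nonce], "SHA256"] >= target, nonce++];
nonce  (* the first nonce whose hash "rolls under" the target *)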

Eventually something happens:

  • we find a hash less than the current difficulty – quick! Announce it to the network, get the world to verify the work and accept it, and we get the reward
  • someone else finds a block of transactions whose hash is less than the difficulty; they get the reward, all the transactions they packed are completed/paid, and we need to start over with a new block of transactions
  • we run out of nonces (there are only about 4 billion) because we're hashing so fast. Block headers have a timestamp that ticks up, which also changes the hashcode, but if we hash faster than about 4 gigahashes per second we'll exhaust the 4 billion nonces before the timestamp ticks up by a second. In that case we modify one of the transactions slightly, or swap it out for another – any valid change to the block, even a single byte, produces a different hashcode – so we make that change and carry on (probably back with the nonce-changing method)

So who is the authority on what the difficulty should be? Same as anything else with Bitcoin – everyone. The network decides on the difficulty, and because all (or most) of the network are good actors, you can't cheat. You can't write a program that claims a win with some made-up hash, because everyone else checks the proof of work, and they won't accept it if you faked the hashcode or lied about getting lower than the difficulty. It's like everyone else in the casino watched you roll: if you rolled a 6 but shouted "I rolled a 1!", they ignore you.

functional programming – Simplifying the following mathematical expression using a computer?

I have the following beastly expression, typed up very nicely in LaTeX formatting as you can see. What is the easiest way to get a computer to simplify this expression for me? I have zero programming experience. I installed SageMath, but it seems pretty complicated.

$W_{(1,1)}(t,v)=\frac{-t^{-2k}v^k}{3}\left(\frac{v^{3/2}-v^{-3/2}}{t^{3/2}-t^{-3/2}}\right)\left(\frac{v^{1/2}-v^{-1/2}}{t^{1/2}-t^{-1/2}}\right)+\frac{t^{-2k}v^k}{4}\left(\frac{v-v^{-1}}{t-t^{-1}}\right)^2+\frac{t^{-2k}v^k}{12}\left(\frac{v^{1/2}-v^{-1/2}}{t^{1/2}-t^{-1/2}}\right)-\frac{t^{-k}v^k}{4}\left(\frac{v^2-v^{-2}}{t^2-t^{-2}}\right)+\frac{t^{-k}v^k}{8}\left(\frac{v-v^{-1}}{t-t^{-1}}\right)^2+\frac{t^{-k}v^k}{4}\left(\frac{v-v^{-1}}{t-t^{-1}}\right)\left(\frac{v^{1/2}-v^{-1/2}}{t^{1/2}-t^{-1/2}}\right)^2-\frac{t^{-k}v^k}{8}\left(\frac{v^{1/2}-v^{-1/2}}{t^{1/2}-t^{-1/2}}\right)^4-\frac{v^kt^{k}}{4}\left(\frac{v^2-v^{-2}}{t^2-t^{-2}}\right)+\frac{v^kt^{k}}{3}\left(\frac{v^{3/2}-v^{-3/2}}{t^{3/2}-t^{-3/2}}\right)\left(\frac{v^{1/2}-v^{-1/2}}{t^{1/2}-t^{-1/2}}\right)+\frac{v^kt^{k}}{8}\left(\frac{v-v^{-1}}{t-t^{-1}}\right)^2-\frac{v^kt^{k}}{4}\left(\frac{v-v^{-1}}{t-t^{-1}}\right)\left(\frac{v^{1/2}-v^{-1/2}}{t^{1/2}-t^{-1/2}}\right)^2+\frac{v^kt^{k}}{24}\left(\frac{v^{1/2}-v^{-1/2}}{t^{1/2}-t^{-1/2}}\right)^4$
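
For what it's worth, here is one way the expression could be entered (shown in Mathematica; the same idea – name the repeated bracket and call the simplifier – carries over to SageMath). The helper q[z] stands for the factor $(v^{z}-v^{-z})/(t^{z}-t^{-z})$ that appears throughout, and Simplify can be swapped for FullSimplify if you are prepared to wait.

q[z_] := (v^z - v^(-z))/(t^z - t^(-z));
w = -t^(-2 k) v^k/3 q[3/2] q[1/2] + t^(-2 k) v^k/4 q[1]^2 +
    t^(-2 k) v^k/12 q[1/2] - t^(-k) v^k/4 q[2] + t^(-k) v^k/8 q[1]^2 +
    t^(-k) v^k/4 q[1] q[1/2]^2 - t^(-k) v^k/8 q[1/2]^4 -
    t^k v^k/4 q[2] + t^k v^k/3 q[3/2] q[1/2] + t^k v^k/8 q[1]^2 -
    t^k v^k/4 q[1] q[1/2]^2 + t^k v^k/24 q[1/2]^4;
Simplify[w]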

mathematical optimization – maximize an expression within an interval.How?

I have tried to maximize this expression in the interval -1 <= x <= 1, but I can't get the correct syntax.

Maximize[{(4 - 3 x - Sqrt[16 - 24 x + 9 x^2 - x^3])^(1/3) + (4 - 3 x + Sqrt[16 - 24 x + 9 x^2 - x^3])^(1/3), x, -1 <= x <= 1]

Please help me with the syntax.
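
For reference, a call along the following lines should run (a sketch only): Maximize and NMaximize expect {objective, constraints} as the first argument and the variable as the second, and CubeRoot is used because (...)^(1/3) would return a complex principal root where 4 - 3 x - Sqrt[...] dips below zero on this interval.

NMaximize[{CubeRoot[4 - 3 x - Sqrt[16 - 24 x + 9 x^2 - x^3]] +
    CubeRoot[4 - 3 x + Sqrt[16 - 24 x + 9 x^2 - x^3]], -1 <= x <= 1}, x]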

mathematical programming – What is the difference between a fraction and a float?

Computers usually deal with floating-point numbers rather than with fractions. The main difference is that floating-point numbers have limited accuracy, but are much faster to perform arithmetic with (and are the only type of non-integer numbers supported natively in hardware).

Floating-point numbers are stored in “scientific notation” with a fixed accuracy, which depends on the datatype. Roughly speaking, they are stored in the form $\alpha \cdot 2^\beta$, where $1 \leq \alpha < 2$, $\beta$ is an integer, and both are stored in a fixed number of bits. This limits the accuracy of $\alpha$ and the range of $\beta$: if $\alpha$ is stored using $a$ bits (as $1.x_1\ldots x_a$) then it always expresses a fraction whose denominator is $2^a$, and if $\beta$ is stored using $b$ bits then it is always in the range $-2^{b-1},\ldots,2^{b-1}-1$.
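
A concrete way to see the difference (shown here in Mathematica, which has both machine floats and exact rationals): SetPrecision[..., Infinity] exposes the exact dyadic fraction a machine number really stores.

SetPrecision[0.1, Infinity]                                          (* 3602879701896397/36028797018963968 *)
SetPrecision[0.1 + 0.2, Infinity] == SetPrecision[0.3, Infinity]     (* False: the float sum is not exactly the float 0.3 *)
1/10 + 2/10 == 3/10                                                  (* True: exact rational arithmetic has no rounding *)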

Due to the limited accuracy of floating-point numbers, arithmetic on these numbers is only approximate, leading to numerical inaccuracies. When developing algorithms, you have to keep that in mind. There is actually an entire area in computer science, numerical analysis, devoted to such issues.

mathematical optimization – Speeding up a Table with NMinimize

I have a parametric system of differential equations (a and b are the parameters). For every value of the parameters in a given range I want to find the right initial conditions to fit some observations. With this aim I defined a table that runs over the parameters a and b, and for every parameter pair I use NMinimize to minimize a chi-square variable over the initial-condition space {x0, vx0, y0, vy0}. I wonder if there is a way to speed up this process. The main part of the code is this:

Table[{a, b,
   NMinimize[{chisquare[x0, vx0, y0, vy0][a][b],
      800 < x0 < 1100, 400 < vx0 < 900, 1350 < y0 < 1750, 10 < vy0 < 110},
     {x0, vx0, y0, vy0}][[1]]},
  {a, arange}, {b, orange}]

NMinimize takes 146 seconds for a single pair of parameters {a, b}. Do you think there is a more efficient way to do this?
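
One common speed-up, sketched under the assumption that chisquare is an ordinary (thread-safe) user-defined function: evaluate the (a, b) grid in parallel, since the minimizations are independent of each other. Restricting chisquare to numeric arguments (a definition of the form chisquare[x0_?NumericQ, ...]) also stops NMinimize from wasting time trying to evaluate it symbolically.

LaunchKernels[];
DistributeDefinitions[chisquare];  (* make the objective available on the subkernels *)
results = ParallelTable[
   {a, b,
    NMinimize[{chisquare[x0, vx0, y0, vy0][a][b],
       800 < x0 < 1100, 400 < vx0 < 900, 1350 < y0 < 1750, 10 < vy0 < 110},
      {x0, vx0, y0, vy0}][[1]]},
   {a, arange}, {b, orange}];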

mathematical optimization – SemidefiniteProgramming for operator norms: Stuck at the edge of dual feasibility

I’m trying to calculate operator norms of linear transformations on the space of matrices. For instance, find the norm of $f(A)=XA$ by optimizing the following:

$$\max_{\|A\|=1} \|XA\|$$

This looks like a semidefinite program, but I’m having trouble solving it with SemidefiniteOptimization. The simplest failing example is to find the operator norm of $f(A)=5A$ in one dimension. It fails with “Stuck at the edge of dual feasibility”. Any suggestions?

Constraints
$$
A \succeq 0 \\
I \succeq A \\
x I \succeq -5 A
$$

Objective

$$
\min_{A,x} x
$$

d = 1;
ii = IdentityMatrix[d];
(* Symbolic symmetric d-by-d matrix *)
ClearAll[a];
X = 5*ii;
A = Array[a[Min[#1, #2], Max[#1, #2]] &, {d, d}];
vars = DeleteDuplicates[Flatten[A]];

cons0 = VectorGreaterEqual[{A, 0}, {"SemidefiniteCone", d}];
cons1 = VectorGreaterEqual[{ii, A}, {"SemidefiniteCone", d}];
cons2 = VectorGreaterEqual[{x ii, -X.A}, {"SemidefiniteCone", d}];
SemidefiniteOptimization[x, cons0 && cons1 && cons2, {x}~Join~vars]
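
Not a fix for the solver message, but for what it's worth: one standard, strictly feasible way to obtain the spectral norm $\|X\|$ (which equals the operator norm of $A\mapsto XA$ for the spectral norm on matrices) as an SDP is to minimize $x$ subject to the block matrix with $xI$ on the diagonal and $X$, $X^\top$ off the diagonal being positive semidefinite. A sketch of that formulation, reusing d, ii and X from above:

blockMat = ArrayFlatten[{{x ii, X}, {Transpose[X], x ii}}];
SemidefiniteOptimization[x,
  VectorGreaterEqual[{blockMat, 0}, {"SemidefiniteCone", 2 d}], {x}]
(* x should come out as about 5. for X = 5 IdentityMatrix[1] *)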

algorithm – Code for finding mathematical expressions

My question:

Is there an algorithm that allows us to find formulas or mathematical expressions for things?


For example:

Say I wanted to find an approximate formula for the perimeter of an ellipse in terms of a and b (where a and b have their usual meanings for an ellipse). What kind of algorithm would help me arrive at a mathematical function (expression/formula) in terms of a and b?
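
As a concrete illustration of the kind of algorithm involved (symbolic regression, i.e. searching a space of candidate expressions against data), here is a hedged sketch in Mathematica. It uses the scaling P(a, b) = a P(1, b/a) to reduce to one variable r = b/a, generates exact perimeters via the complete elliptic integral of the second kind, and then lets FindFormula search for a formula; a plain least-squares Fit over a chosen basis is the simpler alternative. The step size and the basis are arbitrary choices.

(* Perimeter of an ellipse with semi-axes a >= b > 0 *)
perimeter[a_, b_] := 4 a EllipticE[1 - b^2/a^2];

(* One-variable data: r = b/a runs over (0, 1] *)
data = Table[{r, perimeter[1, r]}, {r, 0.05, 1, 0.05}];

FindFormula[data, r]       (* symbolic-regression style search for a formula *)
Fit[data, {1, r, r^2}, r]  (* ordinary least squares over a fixed basis *)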