## equation solving – complicated polynomial equation taking forever

I'm trying to solve the following rather complex polynomial equation for x, but it either takes too long or I get an error message. Any suggestions?

```
With[{u = Log[k (1 + x) E^-r]/k + (r + s^2/2) - s,
      v = Log[k (1 + x) E^-r]/k + (r + s^2/2)},
 FindRoot[(g + 0.4 (1/s) u - 0.16 (1/s) u^3 + 0.025 (1/s) u^5 -
      0.003 (1/s) u^7) (0.4 (1/s) v - 0.16 (1/s) v^3 +
      0.025 (1/s) v^5 - 0.003 (1/s) v^7) == x, {x, 1}]]
```

(Note that `FindRoot` requires a starting value, e.g. `{x, 1}`, not just `{x}`.)
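For what it's worth, a fixed-point equation of this shape can also be bracketed and solved numerically outside Mathematica. Here is a minimal Python/SciPy sketch; the parameter values for g, k, r, s and the bracketing interval are illustrative assumptions, not values from the question:

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative parameter values -- assumptions, substitute your own
g, k, r, s = 0.5, 1.2, 0.05, 0.3

def u(x, shift):
    # Inner expression: Log[k (1 + x) E^-r]/k + (r + s^2/2) - shift
    return np.log(k * (1 + x) * np.exp(-r)) / k + (r + s**2 / 2) - shift

def series(t):
    # Odd truncated series: 0.4 t/s - 0.16 t^3/s + 0.025 t^5/s - 0.003 t^7/s
    return (0.4 * t - 0.16 * t**3 + 0.025 * t**5 - 0.003 * t**7) / s

def F(x):
    # Right-hand side minus x: a root of F solves the original equation
    return (g + series(u(x, s))) * series(u(x, 0.0)) - x

# For these parameters F changes sign on (0, 1), so brentq can bisect to the root
root = brentq(F, 0.0, 1.0)
print(root)
```

Scanning F for a sign change and then calling `brentq` is usually far more robust than handing a huge symbolic expression to a general solver.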

## algebraic geometry – Multiplicity of a polynomial in positive characteristic

Let $$\mathbb{K}$$ be a field of characteristic $$p > 0$$.
Let $$f \in \mathbb{K}[x_1, \dots, x_n]$$ be a multivariate polynomial and let $$q \in \mathbb{K}^n$$. Is there a computational method for determining the multiplicity of $$f$$ at $$q$$ without explicitly computing, via Gröbner bases, with the powers of the vanishing ideal of $$q$$?
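For the multiplicity of a single polynomial $$f$$ at a point, no Gröbner bases are needed at all: translate $$q$$ to the origin and take the least total degree of a monomial with nonzero coefficient. A minimal SymPy sketch over $$\mathbb{Q}$$ (the function name is mine; for characteristic $$p$$ one would pass `modulus=p` to `Poly`):

```python
import sympy as sp

def multiplicity(f, gens, q):
    # Taylor-shift f so that q moves to the origin, then take the
    # least total degree of a monomial with nonzero coefficient.
    shifted = sp.expand(f.subs([(v, v + c) for v, c in zip(gens, q)],
                               simultaneous=True))
    return min(sum(m) for m in sp.Poly(shifted, *gens).monoms())

x, y = sp.symbols('x y')
f = y**2 - x**3                          # cuspidal cubic
print(multiplicity(f, (x, y), (0, 0)))   # 2: the cusp is a double point
print(multiplicity(f, (x, y), (1, 1)))   # 1: a smooth point of the curve
```

The shift is a linear-time substitution, so this scales with the number of terms of $$f$$ rather than with any ideal-theoretic computation.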

## calculus – Show that \$f(x) = xe^{x^2}\$ is invertible and determine the 6th-degree Maclaurin polynomial of \$f^{-1}(x)\$

It is not necessary to compute $$f^{-1}$$ analytically, because the derivative of $$f^{-1}$$ can be expressed in terms of the derivative of $$f$$; indeed,

$$(f^{-1})'(y) = \frac{1}{f'(f^{-1}(y))}$$

From this relation you can compute the higher derivatives of $$f^{-1}$$.

In your case you have that

$$f(0) = 0$$

so $$f^{-1}(0) = 0$$, which means that

$$(f^{-1})'(0) = \frac{1}{f'(0)}$$

For the second derivative, you have

$$(f^{-1})''(y) = \left(\frac{1}{f'(f^{-1}(y))}\right)' = -\frac{f''(f^{-1}(y))}{\left(f'(f^{-1}(y))\right)^2}\,(f^{-1})'(y) =$$

$$= -\frac{f''(f^{-1}(y))}{\left(f'(f^{-1}(y))\right)^3}$$

So

$$(f^{-1})''(0) = -\frac{f''(0)}{(f'(0))^3}$$
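This derivative-by-derivative computation can be automated by series reversion: substitute an ansatz for $$f^{-1}$$ into $$f$$ and match coefficients. Since $$f(x) = xe^{x^2}$$ is odd, its inverse is odd too, so only odd terms appear. A SymPy sketch (the symbol names are mine):

```python
import sympy as sp

y, b3, b5 = sp.symbols('y b3 b5')

# f is odd, so its inverse is odd: ansatz g(y) = y + b3*y^3 + b5*y^5
# (leading coefficient 1 because (f^{-1})'(0) = 1/f'(0) = 1)
g = y + b3*y**3 + b5*y**5
composed = g * sp.exp(g**2)               # f(g(y)), with f(x) = x e^{x^2}

# Require f(g(y)) = y up to order y^7 and solve for the coefficients
residual = sp.expand(sp.series(composed, y, 0, 7).removeO() - y)
sol = sp.solve([residual.coeff(y, 3), residual.coeff(y, 5)], [b3, b5])
inverse_series = g.subs(sol)
# The 6th-degree Maclaurin polynomial of f^{-1}: y - y^3 + (5/2) y^5
print(inverse_series)
```

Note that the result is consistent with the formulas above: $$f''(0) = 0$$, so $$(f^{-1})''(0) = 0$$ and there is no $$y^2$$ term.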

## Performance tuning – Optimal way to extract the "positive part" of a multivariate polynomial

I have multivariate polynomials with numerical coefficients, e.g.

```
p - s - p q^2 s^2 + 3 r s^2 + 3 r^2 s^2 - p r^2 s^2 - 2 q r^2 s^2 - 2 r^3 s^2 + s^3
```

and I would like to take only the sum of the monomials with positive coefficients.

Although for my purposes

```
FromCoefficientRules[Select[CoefficientRules[poly], Last[#] > 0 &], Variables[poly]]
```

seems to be quite fast, it translates the polynomial into another form and back again, so I suspect there is a more efficient way to do it, probably using some tricks with the internal representation of polynomials.

Is there one?
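I can't speak to Mathematica's internal representation, but for comparison, the same filter over the monomials of the expanded expression looks like this in Python/SymPy (a sketch using the example polynomial above, not a Mathematica answer):

```python
import sympy as sp

p, q, r, s = sp.symbols('p q r s')
poly = (p - s - p*q**2*s**2 + 3*r*s**2 + 3*r**2*s**2
        - p*r**2*s**2 - 2*q*r**2*s**2 - 2*r**3*s**2 + s**3)

# Keep only the monomials whose numerical coefficient is positive
positive_part = sp.Add(*[term for term in sp.Add.make_args(sp.expand(poly))
                         if term.as_coeff_Mul()[0] > 0])
print(positive_part)  # the four monomials with positive coefficients
```

Either way, the operation is a single linear pass over the term list, so the cost of any approach is dominated by whatever conversion to and from the internal form is required.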

## real analysis – smooth function whose average value on any ball \$B(x, r)\$ is a polynomial in \$r\$

Let $$f$$ be a smooth function on $$\mathbb{R}^N$$, and suppose that for every $$x \in \mathbb{R}^N$$, $$\frac{1}{\mathrm{Vol}(B(x, r))} \int_{B(x, r)} f$$ is a polynomial in $$r$$ (denote this polynomial by $$p_x(r)$$), where $$B(x, r)$$ is the closed ball with center $$x$$ and radius $$r$$.

For example, any harmonic function satisfies this condition. In general, can we translate this condition into a differential equation for $$f$$? And, fixing such an $$f$$, is the degree of $$p_x(r)$$ independent of $$x$$? For example, for a harmonic function the degree is always zero.
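As a sanity check on the harmonic case, one can verify symbolically that the average of a harmonic function over a disk equals its value at the center, i.e. that $$p_x(r)$$ has degree zero. A SymPy sketch for $$N = 2$$ with the harmonic test function $$f(u, v) = u^2 - v^2$$ (my choice of example):

```python
import sympy as sp

a, b, R, rho, th = sp.symbols('a b R rho theta', positive=True)

# Harmonic test function f(u, v) = u^2 - v^2, averaged over the disk
# of radius R centered at (a, b), written in polar coordinates
f = (a + rho*sp.cos(th))**2 - (b + rho*sp.sin(th))**2
avg = sp.integrate(f * rho, (rho, 0, R), (th, 0, 2*sp.pi)) / (sp.pi * R**2)
print(sp.simplify(avg))  # a**2 - b**2: independent of R, as the mean value property predicts
```

The cross terms and the $$\cos^2 - \sin^2$$ term integrate to zero over a full angular period, which is exactly why the radius drops out.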

## divisibility – Polynomial division first!

$$x^{28} + 9x^{26} + 39x^{24} + 95x^{22} + 76x^{20} - 384x^{18} - 1928x^{16} - 4868x^{14} - 7712x^{12} - 6144x^{10} + 4864x^{8} + 24320x^{6} + 39936x^{4} + 36864x^{2} + 16384$$

This polynomial appeared in an example. It should have six factors of the form
$$(x^4 - 4x^2 \cos(4\pi c_{\color{blue}{\text{r}}}) + 4)$$
and one of the form $$(x^4 - 4x^2 \cosh(4\pi c_{\color{red}{\text{n}}}) + 4)$$.

Is it possible to carry out the polynomial division first and then determine the values of $$\cos(4\pi c)$$?

I was simply thinking of multiplying out the factors to get a set of equations, much as with elementary symmetric polynomials. Is this approach known to work in such cases?
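The coefficient-matching approach can at least be set up mechanically. Note that six factors plus one, each of degree 4, matches the degree 28, and the constant terms give $$4^7 = 16384$$, which agrees with the polynomial above. A SymPy sketch with unknowns $$t_i = \cos(4\pi c_i)$$ (or $$\cosh$$ for one of them; symbol names are mine):

```python
import sympy as sp

x = sp.symbols('x')
t = sp.symbols('t1:8')  # t_i stands for cos(4 pi c_i), or cosh for one factor

# Expand the product of the seven conjectured quartic factors
product = sp.expand(sp.prod([x**4 - 4*ti*x**2 + 4 for ti in t]))
poly = sp.Poly(product, x)

# Each coefficient is a polynomial in the elementary symmetric functions
# of the t_i; e.g. the x^26 coefficient equals -4*(t1 + ... + t7),
# so matching the given polynomial forces t1 + ... + t7 = -9/4.
c26 = poly.coeff_monomial(x**26)
print(c26)
```

Matching every power of $$x$$ in this way produces exactly the symmetric-function system you describe, which can then be attacked numerically or via the degree-7 resolvent in $$z = x^2$$.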

## quotient of an algebra over a polynomial ring

Given a finite-dimensional algebra $$A$$ over a polynomial ring $$K[x]$$, and an element $$a$$ of this algebra, is it possible to define the quotient of $$A$$ by the ideal generated by $$a$$ as an algebra over $$K$$?

As a motivating example, let $$K[x_1, x_2]$$ be the polynomial ring in two variables over $$K$$, and consider two polynomials $$p_1, p_2$$ in this ring. We can form the quotient ring by the ideal generated by $$p_1$$ and $$p_2$$; suppose it is finite-dimensional as an algebra over $$K$$.

Alternatively, suppose we first write $$K[x_1, x_2]$$ as $$K[x_2][x_1]$$ and think of $$p_1$$ as a polynomial in $$x_1$$ with coefficients in $$K[x_2]$$. Then we can form the quotient ring by $$p_1$$, and we get a (let us suppose) finite-dimensional algebra over $$K[x_2]$$, of which $$p_2$$ is an element. I would then like to use a construction of the type described in the first paragraph to reproduce the algebra of the second paragraph. Is this possible?

## algorithms – How do you solve a general linear Diophantine equation in polynomial time (with a minimization constraint)?

Given

$$a_1 X_1 + \dots + a_n X_n = b$$

where $$a_i, b \in \Bbb{Z}$$. How do you get a clear description of the solution set in polynomial time?

Also, what I really want is to do the above while minimizing:

$$X_1 + \dots + X_n, \qquad X_i \in \Bbb{N}$$

All my $$X_i$$ are bounded to small sets such as $$\{0, 2\}$$ for small instances of my problem, but I can't very well try $$2^n$$ or more possibilities, now can I? That would not be polynomial time.
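With bounded variables you can avoid the $$2^n$$ enumeration by dynamic programming over the achievable left-hand-side values. This is pseudo-polynomial (polynomial in $$n$$ and in the range of partial sums, not in the bit size of the input), which is often the best one can hope for, since the general problem contains subset-sum. A Python sketch (function name and example data are mine):

```python
def min_sum_solution(a, b, upper):
    """Minimize X_1 + ... + X_n subject to sum(a_i X_i) = b, 0 <= X_i <= upper[i].
    DP over achievable partial sums; pseudo-polynomial in the value range."""
    dp = {0: 0}  # achievable partial sum -> minimal sum of X_i so far
    for ai, ui in zip(a, upper):
        nxt = {}
        for val, cost in dp.items():
            for xi in range(ui + 1):
                v, c = val + ai * xi, cost + xi
                if v not in nxt or c < nxt[v]:
                    nxt[v] = c
        dp = nxt
    return dp.get(b)  # None if infeasible

print(min_sum_solution([3, 5], 11, [4, 4]))  # 3, attained by X = (2, 1)
```

Each variable contributes at most (number of reachable sums) × (its bound) transitions, so for small bounds like $$\{0, 2\}$$ this stays modest even for fairly large $$n$$.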

## reference request – Probability that a random monic polynomial has few real zeros

In the paper https://arxiv.org/pdf/math/0006113.pdf it is shown that the probability that a random polynomial $$a_0 + a_1x + \cdots + a_n x^n$$ has $$o(\log n / \log\log n)$$ real zeros is $$n^{-b + o(1)}$$ as $$n \to \infty$$, where the coefficients $$a_i$$ are independent and identically distributed with zero mean and unit variance, and where $$b > 0$$ is an absolute constant (see Theorem 1.2 on page 2).

Question: Are there similar results in the literature for monic polynomials? That is, are there estimates of the probability that a random monic polynomial $$a_0 + a_1x + \cdots + a_{n-1} x^{n-1} + x^n$$ has few real zeros?

What I know: I am not sure that one can simply deduce the desired estimate in the monic case from the nonmonic case. However, it seems possible that the method used in the above paper could be modified to give the desired estimate in the monic case, so I wonder whether this has already been done. Also, if $$f(x_0) = 0$$ with $$x_0 \neq 0$$ and $$F(x, y)$$ is the homogenization of $$f(x)$$, then $$F(x_0, 1) = 0$$, implying that $$g(y) := F(1, y)$$ satisfies $$g(1/x_0) = 0$$. So the question above is related to the following one: are there estimates of the probability that a random polynomial of the form $$1 + a_1x + \cdots + a_nx^n$$ has few real zeros?
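The reversal trick at the end is easy to check numerically: the polynomial with reversed coefficients has exactly the reciprocals of the nonzero roots, so "few real zeros" transfers between the monic and constant-term-1 normalizations. A small NumPy illustration (my example polynomial):

```python
import numpy as np

# x^2 - 3x + 2 has roots 1 and 2; reversing the coefficients gives
# 2x^2 - 3x + 1, whose roots are the reciprocals 1 and 1/2
p = np.array([1.0, -3.0, 2.0])
roots = np.roots(p)
rev_roots = np.roots(p[::-1])
print(np.sort(roots), np.sort(rev_roots))
```

Since reciprocation maps real nonzero roots to real nonzero roots, counting real zeros is invariant under this reversal (up to the possible root at 0).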

## pde – How to perform a Fourier transform on polynomial functions

I am currently trying to solve the following PDE, which captures the temperature change (T) as a function of time (t) and position in the tank (x):

$$\left(\rho C_p\right)_m \frac{\partial T_m}{\partial t} + G\, C_{p,f} \frac{\partial T_m}{\partial x} = \frac{\partial}{\partial x}\left(k_m \frac{\partial T_m}{\partial x}\right)$$

which can be written more simply as:

$$A \frac{\partial T}{\partial t} + B \frac{\partial T}{\partial x} = C \frac{\partial^2 T}{\partial x^2}$$

With the following initial condition:

$$T(x, 0) = 20\,^{\circ}\mathrm{C}$$

And the following boundary conditions:

$$\frac{\partial T}{\partial x}(0, t) = 0$$

$$\frac{\partial T}{\partial x}(L, t) = 0$$

From a literature search, I have results where the charging process of the packed-bed system was fitted as follows:

t = 1 hour

Polynomial solution: $$y = 13728x^6 - 50447x^5 + 67135x^4 - 37308x^3 + 6810x^2 - 445.52x + 554.58$$

t = 1.5 hours

Polynomial solution: $$y = -2710.5x^6 + 4928.9x^5 + 2182.7x^4 - 7837.1x^3 + 3379.5x^2 - 480.39x + 566.51$$

t = 2 hours

Polynomial solution: $$y = -5493.2x^6 + 21095x^5 - 29059x^4 + 17302x^3 - 4758.7x^2 + 535.14x + 533.75$$

t = 2.5 hours

Polynomial solution: $$y = 1090.7x^6 - 2259.7x^5 + 742.42x^4 + 496.34x^3 - 319.33x^2 + 50.762x + 548.15$$

t = 3 hours

Polynomial solution: $$y = 872.72x^6 - 2893.5x^5 + 3281x^4 - 1720.8x^3 + 435.16x^2 - 46.906x + 551.42$$

I was hoping it might be possible to combine these polynomial fits and approximate them by a sum of sinusoids, perhaps via a Fourier transform, so that I could build a mathematical model to predict this charging behaviour over time. I am completely stuck, so any help would be gratefully received.
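With zero-flux boundary conditions at both ends, the natural "sum of sinusoids" is a cosine series on $$[0, L]$$, since every $$\cos(k\pi x/L)$$ has zero slope at $$x = 0$$ and $$x = L$$. A Python sketch that projects the $$t = 1\ \mathrm{h}$$ polynomial from above onto that basis by least squares ($$L = 1$$ is an assumption on my part):

```python
import numpy as np

# t = 1 h polynomial fit from the question, sampled on x in [0, 1] (L = 1 assumed)
coeffs = [13728, -50447, 67135, -37308, 6810, -445.52, 554.58]  # descending powers
x = np.linspace(0.0, 1.0, 1001)
y = np.polyval(coeffs, x)

# Neumann BCs suggest y(x) ~ a_0 + sum_k a_k cos(k pi x); fit by least squares
K = 10
A = np.column_stack([np.cos(k * np.pi * x) for k in range(K + 1)])
a, *_ = np.linalg.lstsq(A, y, rcond=None)

rms = np.sqrt(np.mean((A @ a - y) ** 2))
print(a[:3], rms)
```

Repeating this for each time level gives time-dependent coefficients $$a_k(t)$$, which is the form a separation-of-variables solution of the PDE would take; whether interpolating the $$a_k(t)$$ yields a genuinely predictive model depends on the physics (the advection term $$B\,\partial T/\partial x$$ couples the modes), so treat this purely as a fitting sketch.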