linear algebra – Does the derivative of a function affect its linear independence?

I have a (maybe too simple) question: I have to prove whether different sets of functions are or aren't linearly independent. The case that is giving me trouble is $\{\sin x, \cos x, 1\}$. I have found out about the Wronskian, so I guess this implies that if a combination of functions (i.e. $a\sin x+b\cos x+c=0$) is linearly independent, then so is its derivative? Is that approach right? My idea was to differentiate twice and then solve the resulting system of equations for $a, b, c$ to show that $a=b=c=0$, and thus that the three functions are linearly independent. Is this also right?
Thank you very much.
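For reference, a minimal sympy sketch that computes the Wronskian of this particular set (a symbolic sanity check of the criterion, not a proof of the general claim about derivatives; sympy is assumed to be available):

```python
# Wronskian of {sin x, cos x, 1}: determinant of the matrix whose k-th row holds
# the k-th derivatives of the functions.
import sympy as sp

x = sp.symbols('x')
funcs = [sp.sin(x), sp.cos(x), sp.Integer(1)]

W = sp.Matrix([[sp.diff(f, x, k) for f in funcs] for k in range(len(funcs))]).det()
print(sp.simplify(W))  # prints -1, which is nonzero, so the three functions are linearly independent
```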

multivariable calculus – How to define the derivative of a map between two manifolds?

What is shown below is a reference from the text Differential Forms by Victor Guillemin and Peter Haine. I point out that if you know the definitions of manifold and tangent space, you can read only the part of the image beyond Theorem 4.2.10.

[Image: excerpt from Guillemin and Haine, Differential Forms, including Theorem 4.2.10]

So I am effectively asking why the derivative of the map $g$ at the points of $X$ does not depend on the extension $\tilde g$, nor, moreover, on the particular parametrization: indeed, if $\phi$ and $\psi$ are two different local patches about $p$, then $d\tilde g(p)=d(g\circ\phi)\big(\phi^{-1}(p)\big)$ and $d\tilde g(p)=d(g\circ\psi)\big(\psi^{-1}(p)\big)$, but by the chain rule
$$
d\tilde g(p)=d(g\circ\phi)\big(\phi^{-1}(p)\big)=d(g\circ\psi)\big(\psi^{-1}(p)\big)\cdot d(\psi^{-1}\circ\phi)\big(\phi^{-1}(p)\big)=d\tilde g(p)\cdot d(\psi^{-1}\circ\phi)\big(\phi^{-1}(p)\big)
$$

and unfortunately I think that $d(\psi^{-1}\circ\phi)\big(\phi^{-1}(p)\big)\neq 1$ in general. So could someone help me, please?

real analysis – Limit definition of the second derivative

I'm given that a function $f: \mathbb{R} \to \mathbb{R}$ is differentiable, and the second derivative $f''(a)$ exists at a point $x=a$. I want to prove that
$$\lim_{h \to 0} \frac{f(a+h) + f(a-h) - 2 f(a)}{h^2} = f''(a)$$
without using L'Hôpital's rule or Taylor's theorem. I've so far managed to show that
$$f''(a) = \lim_{h \to 0} \lim_{t \to 0} \frac{f(a+h) - f(a+h-t) - f(a) + f(a-t)}{ht}$$
and that you may interchange the limiting processes $t \to 0$ and $h \to 0$. However, I can't find a way to show that you may take $t \to h$ and $h \to 0$ and end up with the same limit (which would complete the proof). Can anyone give me a hint on how I could prove that this is valid?
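For intuition, here is a small numerical check (not a proof; the sample function and the point are my own illustrative choices) that the symmetric quotient above really does converge to $f''(a)$:

```python
# Numerical check of (f(a+h) + f(a-h) - 2 f(a)) / h^2 -> f''(a), for f(x) = exp(x)*sin(x), a = 0.7.
import math

def f(x):
    return math.exp(x) * math.sin(x)

def fpp(x):
    # exact second derivative of exp(x)*sin(x) is 2*exp(x)*cos(x)
    return 2 * math.exp(x) * math.cos(x)

a = 0.7
for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    quotient = (f(a + h) + f(a - h) - 2 * f(a)) / h**2
    print(h, quotient, fpp(a))  # the quotient approaches f''(a) as h shrinks
```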

multivariable calculus – When can we interchange operations involving partial derivative and integral, and how does Hamiltonian formulation affect that

Say we have an integral

$$ B=\int dt\, \frac{\partial}{\partial q} f(t,q,\dot{q}), $$

where $t$ is an independent parameter, $q=q(t)$ is a dependent variable, $\dot{q}=dq/dt$ and $f$ is an arbitrary function of them all. For example, in Hamiltonian formalism, $q$ is a canonical variable and $t$ is the evolution time parameter along the orbit of motion.

Question (1): generally speaking in calculus, when facing such types of integrals, when are we allowed to interchange the partial derivative $\partial/\partial q$ with the integral sign above, so that we may put $\partial/\partial q$ outside, to yield $\frac{\partial}{\partial q} \int dt\, f(t,q,\dot{q})$?

Question (2): if we are now talking about $q$ as a Hamiltonian canonical variable and $t$ as the time evolution parameter along the orbit, does the answer to Question (1) change (since now $\partial q/\partial t$ will vanish)?

Question (3): and if we think of the integral $\int dt\, (\cdots)$ as an anti-derivative operation, is it strictly the "anti" of a full derivative ($d/dt$) or of the partial derivative ($\partial/\partial t$)? I ask this because it may help in understanding when we can exchange such operations. For example, in a Hamiltonian dynamical system where $t$ is the time parameter along the orbit of motion, the canonical variable $q$ can have $dq/dt\neq 0$, but it will always have $\partial q/\partial t=0$ by definition; then $q$ does not explicitly depend on $t$, so we can probably exchange the order, I think (but I am not sure).
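For Question (1) in the classical setting, where $q$ is treated as an independent parameter of the integrand rather than as a trajectory $q(t)$, the relevant statement is the Leibniz rule for differentiating under the integral sign: over a fixed finite interval it suffices that $f$ and $\partial f/\partial q$ be continuous (improper integrals additionally need a domination condition). A small sympy sketch with an illustrative integrand of my own choosing, which does not address the subtlety of $q$ depending on $t$:

```python
# Differentiation under the integral sign with respect to a parameter q, checked symbolically.
# The integrand exp(-q*t) and the limits (0, 1) are illustrative choices.
import sympy as sp

t, q = sp.symbols('t q', positive=True)
f = sp.exp(-q * t)

lhs = sp.integrate(sp.diff(f, q), (t, 0, 1))   # integral of the partial derivative
rhs = sp.diff(sp.integrate(f, (t, 0, 1)), q)   # derivative of the integral
print(sp.simplify(lhs - rhs))                  # prints 0: the two orders agree here
```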

When can you not use the second derivative test?

According to Google, it's when f'(x) doesn't exist. I was given the following functions:

y = -(1/3)x^3 -4x + 16x

y = xe^(-x/4)

y = -cos(x-4)

y = -x^2 + 8x

I was able to automatically rule out the first and last because they’re polynomials. Then I was stuck with the middle two. But they both have first derivatives. AND second derivatives. So, is there any other way to determine when I can’t use the second derivative test?
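As a concrete check (my own sketch, not part of the question): the second derivative test at a critical point $c$ is inconclusive exactly when $f''(c)=0$ or $f''(c)$ does not exist, so one can simply compute $f''$ at each critical point, e.g. with sympy:

```python
# For each function, find the critical points (f'(c) = 0) and evaluate f'' there;
# the second derivative test is conclusive only where f''(c) != 0.
import sympy as sp

x = sp.symbols('x')
for f in [x * sp.exp(-x / 4), -sp.cos(x - 4)]:
    crit = sp.solve(sp.Eq(sp.diff(f, x), 0), x)
    print(f, [(c, sp.diff(f, x, 2).subs(x, c)) for c in crit])
    # here f'' turns out to be nonzero at every critical point found, so the test is usable for both
```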

oa.operator algebras – What's the matrix of the logarithm of the derivative operator ($\ln D$)? What is the role of this operator in various math fields?

This paper gives some great results:

$(\ln D)\, 1 = -\ln x -\gamma$

$(\ln D)\, x^n = x^n (\psi(n+1)-\ln x)$

$(\ln D)\, \ln x = -\zeta(2) -(\gamma+\ln x)\ln x$

I wonder, what is its matrix, or otherwise, is there a method of applying it to a function?
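One common way to give $\ln D$ meaning on power functions, which I am assuming (without having checked) matches the paper's convention, is $\ln D = \frac{d}{d\alpha}D^{\alpha}\big|_{\alpha=0}$, where $D^{\alpha}x^{n}=\frac{\Gamma(n+1)}{\Gamma(n+1-\alpha)}x^{n-\alpha}$ is the fractional derivative acting on powers. A sympy sketch reproducing the second identity above under that assumption:

```python
# Interpret ln(D) as d/d(alpha) of the fractional derivative D^alpha at alpha = 0,
# with D^alpha x^n = Gamma(n+1)/Gamma(n+1-alpha) * x^(n-alpha), and compare with the
# claimed identity (ln D) x^n = x^n * (psi(n+1) - ln x).
import sympy as sp

x, n, a = sp.symbols('x n alpha', positive=True)
frac_deriv = sp.gamma(n + 1) / sp.gamma(n + 1 - a) * x**(n - a)

lnD_xn = sp.diff(frac_deriv, a).subs(a, 0)
claimed = x**n * (sp.polygamma(0, n + 1) - sp.log(x))   # polygamma(0, .) is the digamma psi
print(sp.simplify(lnD_xn - claimed))                    # prints 0
```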

What is its intuitive role in various fields of math?

linear algebra – Derivative of trace uniformly bounded

I have that $\boldsymbol{\Omega}(\boldsymbol{\theta})$ is symmetric, positive definite and uniformly bounded, where $\boldsymbol{\theta} \in \boldsymbol{\Theta}$ with $\boldsymbol{\Theta}$ a compact subset of $\mathbb{R}^{n}$ and $n$ fixed.

I want to show that $\frac{\partial}{\partial\boldsymbol{\theta}'}\operatorname{trace}\big(\boldsymbol{\Omega}(\boldsymbol{\theta})^{-1}\boldsymbol{\Omega}(\boldsymbol{\theta}_0)\big)$ is $O(1)$ uniformly on $\boldsymbol{\Theta}$, where $\boldsymbol{\theta}_{0} \in \boldsymbol{\Theta}$.

For each $j=1,2,\dots,n$, does it hold that

$$\frac{\partial}{\partial\theta_{j}}\operatorname{trace}\big(\boldsymbol{\Omega}(\boldsymbol{\theta})^{-1}\boldsymbol{\Omega}(\boldsymbol{\theta}_0)\big)=\operatorname{trace}\Big(\frac{\partial}{\partial\theta_{j}}\big(\boldsymbol{\Omega}(\boldsymbol{\theta})^{-1}\boldsymbol{\Omega}(\boldsymbol{\theta}_0)\big)\Big)=\operatorname{trace}\Big(-\boldsymbol{\Omega}(\boldsymbol{\theta})^{-1}\boldsymbol{\Omega}(\boldsymbol{\theta}_0)\,\frac{\partial\boldsymbol{\Omega}(\boldsymbol{\theta})}{\partial\theta_{j}}\,\boldsymbol{\Omega}(\boldsymbol{\theta})^{-1}\boldsymbol{\Omega}(\boldsymbol{\theta}_0)\Big)\,?$$

and then, since $\boldsymbol{\Omega}(\boldsymbol{\theta})$ is uniformly bounded, so is its inverse, and so is its first derivative (or is that not necessarily the case, so that uniform boundedness of the first derivative needs to be an assumption?), and therefore so is the whole trace on the right-hand side. Subsequently, if the right-hand-side trace is bounded uniformly for each $j$, then so is $\frac{\partial}{\partial\boldsymbol{\theta}'}\operatorname{trace}\big(\boldsymbol{\Omega}(\boldsymbol{\theta})^{-1}\boldsymbol{\Omega}(\boldsymbol{\theta}_0)\big)$.
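For what it's worth, the differentiation step rests on the identity $\frac{\partial}{\partial\theta_j}\boldsymbol{\Omega}(\boldsymbol{\theta})^{-1}=-\boldsymbol{\Omega}(\boldsymbol{\theta})^{-1}\frac{\partial\boldsymbol{\Omega}(\boldsymbol{\theta})}{\partial\theta_j}\boldsymbol{\Omega}(\boldsymbol{\theta})^{-1}$. A quick numerical sanity check of that ingredient, with an illustrative parametrization of my own choosing:

```python
# Check d(Omega^{-1})/d(theta) = -Omega^{-1} (dOmega/dtheta) Omega^{-1} numerically.
import numpy as np

def omega(theta):
    # symmetric and positive definite near theta = 0.3 (illustrative choice)
    a = np.array([[2.0, 0.5], [0.5, 1.0]])
    return a + theta * np.array([[1.0, 0.2], [0.2, 0.5]]) + np.eye(2) * theta**2

theta, h = 0.3, 1e-6
d_omega = (omega(theta + h) - omega(theta - h)) / (2 * h)            # central difference of Omega
d_inv_numeric = (np.linalg.inv(omega(theta + h)) - np.linalg.inv(omega(theta - h))) / (2 * h)
inv = np.linalg.inv(omega(theta))
d_inv_formula = -inv @ d_omega @ inv
print(np.max(np.abs(d_inv_numeric - d_inv_formula)))                 # tiny (round-off level): they agree
```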

integration – How would you find the original function from the second derivative, given two points of the original function?

Given the second derivative and two points of the original function, how would I find the original function? Most problems I have found online seem to give an $(x, y)$ pair for both the first derivative and the original function, but this question does not. How would you solve for the constant $C$ in the first derivative without that pair? Here's an example of one of the questions:

$f''(t) = 2e^t + 3\sin(t)$
$f(0) = -7, \quad f(\pi) = -8$
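The key point is that no condition on $f'$ is needed: integrating twice leaves two unknown constants, and the two function values give two equations for them. A sympy sketch for the example above (illustrative, not the only way to set it up):

```python
# Integrate f''(t) = 2*exp(t) + 3*sin(t) twice, keeping both constants of integration,
# then determine them from f(0) = -7 and f(pi) = -8.
import sympy as sp

t, C1, C2 = sp.symbols('t C1 C2')
fpp = 2 * sp.exp(t) + 3 * sp.sin(t)

fp = sp.integrate(fpp, t) + C1          # f'(t) = 2*exp(t) - 3*cos(t) + C1
f = sp.integrate(fp, t) + C2            # f(t)  = 2*exp(t) - 3*sin(t) + C1*t + C2

sol = sp.solve([sp.Eq(f.subs(t, 0), -7), sp.Eq(f.subs(t, sp.pi), -8)], [C1, C2])
print(f.subs(sol))                      # the original function with both constants filled in
```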

Finding the nth derivative of $\frac{1}{x^4+4}$

I am supposed to find the nth order derivative of:
$$\frac{1}{x^4+4}$$

I tried to resolve it into partial fractions, but it didn't work out for me. I am a beginner in this subject. Please help.
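Partial fractions do work here if the complex roots of $x^4+4=(x^2-2x+2)(x^2+2x+2)$ are allowed, since the $n$th derivative of $\frac{1}{x-r}$ is $\frac{(-1)^{n}\,n!}{(x-r)^{n+1}}$. A sympy sketch of that route, checked against direct differentiation for a small $n$ (my own verification, not a claimed closed form):

```python
# Split 1/(x^4 + 4) over its four simple complex poles, differentiate term by term,
# and compare with differentiating the original expression directly.
import sympy as sp

x = sp.symbols('x')
expr = 1 / (x**4 + 4)

print(sp.factor(x**4 + 4))                         # (x**2 - 2*x + 2)*(x**2 + 2*x + 2)
complex_pf = sp.apart(expr, x, full=True).doit()   # simple poles at x = 1+I, 1-I, -1+I, -1-I
print(complex_pf)

n = 3                                              # small illustrative order
via_poles = sp.diff(complex_pf, x, n)
print(sp.simplify(via_poles - sp.diff(expr, x, n)))  # expected to print 0
```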

ap.analysis of pdes – eliminating first derivative terms from second order elliptic partial differential equation

I have a partial differential equation
$$-a\frac{\partial^2 G}{\partial x_1 \partial x_2}-b\big(\cosh(x_1)+\cosh(x_2)\big)G(x_1,x_2)+\sinh(x_1)\frac{\partial G(x_1,x_2)}{\partial x_1}+\sinh(x_2)\frac{\partial G(x_1,x_2)}{\partial x_2}=\epsilon \, G(x_1,x_2)$$

I have to cast it in a form like
$$-\frac{1}{m_1}\frac{\partial^2 \psi}{\partial x^2}-\frac{1}{m_2}\frac{\partial^2 \psi}{\partial y^2}+U(x,y)\,\psi=\epsilon \,\psi$$

for some variable transformation $\{x_1,x_2\} \rightarrow \{x,y\}$ and also some transformation of $G(x,y)$. How can I do this?