Suppose I have a finite-horizon dynamic programming problem, i.e., I want to minimize the following expression:

$$E\Big(\sum_{t=0}^{T}g(X_t, U_t) + G(X_T)\,\Big|\,X_0 = x_0\Big)$$

The standard trick is to solve this by backward induction, using the fact that the optimal value function satisfies

$$ J_t(x_t) = \min_{u_t}\Big[g(x_t, u_t) + E\big(J_{t+1}(X_{t+1})\,\big|\,X_{t}=x_t\big)\Big]$$
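For concreteness, the backward induction above can be sketched on a toy finite Markov decision process. Everything below (the 2-state/2-control transition matrices `P`, stage costs `g`, terminal cost `G`, horizon `T = 3`) is a hypothetical instance I made up for illustration, not from the notes:

```python
import numpy as np

# Hypothetical toy instance: 2 states, 2 controls, horizon T = 3.
# P[u][i, j] = Prob(X_{t+1} = j | X_t = i, U_t = u); g[i, u] = stage cost.
T = 3
P = np.array([[[0.8, 0.2],
               [0.3, 0.7]],
              [[0.5, 0.5],
               [0.9, 0.1]]])
g = np.array([[1.0, 2.0],
              [4.0, 0.5]])
G = np.array([0.0, 3.0])  # terminal cost G(X_T)

# Backward induction: J_T = G, then for t = T-1, ..., 0:
# J_t(x) = min_u [ g(x, u) + E( J_{t+1}(X_{t+1}) | X_t = x, U_t = u ) ].
J = G.copy()
policy = []
for t in reversed(range(T)):
    Q = g + np.einsum('uij,j->iu', P, J)  # Q[x, u] = stage cost + expected cost-to-go
    policy.append(Q.argmin(axis=1))       # minimizing control in each state
    J = Q.min(axis=1)
policy.reverse()
print(J)       # optimal cost J_0(x) for each start state x
print(policy)  # optimal control per state at stages t = 0, ..., T-1
```

The additive structure is what makes this recursion work: the expectation distributes over the sum via the tower property, which is exactly what breaks down (naively) once the cost sits inside an exponential.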

I have seen an example of the following form:

$$E\Big(\exp\Big(\sum_{t=0}^{T}g(X_t, U_t) + G(X_T)\Big)\,\Big|\,X_0 = x_0\Big)$$

The notes say that one should introduce $V_t(x_t) = \ln(J_t(x_t))$.

For $t = T$ I then get $V_T(x_T)=\ln(J_T(x_T)) = G(x_T)$. What I don't understand is how they derive the following:

$$V_t(x_t)=\min_{u_t}\ln(J_t(x_t))=\min_{u_t}\ln E\big(\exp(g(x_t,u_t))\,J_{t+1}(X_{t+1})\,\big|\,X_{t}=x_t\big)$$

The second equality puzzles me. How do they derive this form?