I am trying to refocus images from a microlens-array light field using Ren Ng's Fourier slice photography theorem, found in chapter 5 of his thesis (equation 5.7), which is available at https://stanford.edu/class/ee367/reading/Ren%20Ng-thesis%20Lytro.pdf

In plain terms, the Fourier slice photography theorem says that a photograph is the inverse 2D Fourier transform of a dilated 2D slice of the 4D Fourier transform of the light field.
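For concreteness, here is my rough understanding of what that statement prescribes, as a sketch. The helper below is my own guess, not code from Ng's thesis: the nearest-neighbour slice sampling, the orientation of the slice (angular frequencies scaled by `1 - alpha`, spatial by `alpha`), and the omitted normalisation are all assumptions on my part.

```python
import numpy as np

def fourier_slice_refocus(lf, alpha):
    """Refocus a single-channel 4D light field lf[u, v, x, y].

    Hypothetical sketch of the Fourier slice photography theorem:
    alpha = 1 reproduces the original focal plane; Ng's normalisation
    factor is omitted and the slice is sampled nearest-neighbour.
    """
    U, V, X, Y = lf.shape
    # The "4D Fourier transform of the light field": ONE FFT over all
    # four axes, with the zero frequency moved to the centre.
    LF = np.fft.fftshift(np.fft.fftn(lf))
    # Centred integer frequency grid of the output photograph.
    kx = np.arange(X) - X // 2
    ky = np.arange(Y) - Y // 2
    KX, KY = np.meshgrid(kx, ky, indexing="ij")
    # Dilated 2D slice through the 4D spectrum, sampled at
    # (ku, kv, kx', ky') = ((1-alpha) kx, (1-alpha) ky, alpha kx, alpha ky).
    ku = np.round((1 - alpha) * KX).astype(int) + U // 2
    kv = np.round((1 - alpha) * KY).astype(int) + V // 2
    kxs = np.round(alpha * KX).astype(int) + X // 2
    kys = np.round(alpha * KY).astype(int) + Y // 2
    inside = ((ku >= 0) & (ku < U) & (kv >= 0) & (kv < V)
              & (kxs >= 0) & (kxs < X) & (kys >= 0) & (kys < Y))
    slice2d = np.zeros((X, Y), dtype=complex)
    slice2d[inside] = LF[ku[inside], kv[inside], kxs[inside], kys[inside]]
    # The photograph is the inverse 2D FFT of the slice.
    return np.real(np.fft.ifft2(np.fft.ifftshift(slice2d)))
```

Sanity check I used: for a light field that is constant across (u, v), refocusing with `alpha = 1` should give back the image summed over all sub-aperture views.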

Part of my confusion comes from the notation and how to apply it in Python to an acquired light field image. I understand what the Fourier transform is, but when Ng says "the 4D Fourier transform of the light field", I don't know how to interpret it. Currently, my light field is a NumPy array in which u = i, v = j index the different sub-aperture images, and each pixel of a sub-aperture image has an x and y coordinate, which gives me a 4D light field array.

```
import cv2
import numpy as np

# lf_path is the path to the raw light field image
lf_img = cv2.imread(lf_path, cv2.IMREAD_UNCHANGED)  # keep bit depth and channels as stored
row_lens = lf_img.shape[0] // 14  # 14 is pixels under a microlens in this direction
col_lens = lf_img.shape[1] // 14  # 14 is pixels under a microlens in this direction
vp_imgs = np.zeros((14, 14, row_lens, col_lens, 3))
for i in range(14):
    for j in range(14):
        vp_imgs[i, j, :, :, :] = lf_img[i::14, j::14, 0:3]  # U, V, X, Y, (RGB)
```
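My current guess is that the "4D Fourier transform" means a single FFT over all four axes of the decoded array at once, rather than separate 2D FFTs of individual images. Here is a small check of that interpretation (`vp` below is a random stand-in for one colour channel of `vp_imgs`):

```python
import numpy as np

# Stand-in for one colour channel of the decoded light field, axes (u, v, x, y).
vp = np.random.rand(14, 14, 32, 48)

# One 4D FFT over all four axes at once.
LF = np.fft.fftn(vp, axes=(0, 1, 2, 3))

# Because the DFT is separable, this equals a 2D FFT over the spatial axes
# followed by a 2D FFT over the angular axes. Note this is NOT the same as
# a grid of independent per-lenslet 2D FFTs of the raw sensor image.
LF2 = np.fft.fft2(np.fft.fft2(vp, axes=(2, 3)), axes=(0, 1))
print(np.allclose(LF, LF2))  # True
```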

Then I see figures like the ones at http://graphics.stanford.edu/papers/fourierphoto/ and in Ng's thesis, where it looks to me as if he simply took the 2D Fourier transform of each lenslet image and arranged the results in a grid. So I don't really know how to interpret this 4D Fourier transform or how the slicing is done. Thank you!

Example of the type of light field image I work with: