When I call gluLookAt(0.0, 500.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, -1.0), I am 500 units up the positive Y axis, looking at the origin, with the camera's up vector pointing down the negative Z axis. Mathematically speaking, my look (forward) unit vector should therefore be (0.0, -1.0, 0.0, 0.0), correct? Since I am looking down the negative Y axis, to get closer to the origin I should subtract from the camera's current Y position of 500 so that Y decreases, which moves the camera forward in the direction it is pointing.

So how is it that when I call glGetFloatv(GL_MODELVIEW_MATRIX, mycopy) to grab the current matrix right after the gluLookAt call, I end up with a look vector of (0.0, 1.0, 0.0, 0.0), i.e. a positive Y value? (mycopy(2), mycopy(6), and mycopy(10) are the components of the look vector.) I know I can just negate it to get the negative Y value back, but that feels like a cheap hack. Why does OpenGL store it this way in the modelview matrix when the actual math says the look vector should have a negative Y component? If anyone can help me understand this, I would greatly appreciate it. 🙂