I am a beginner in OpenGL, currently learning about textures. What I don’t understand is how many texture units a GPU has. I heard that you can find out by running the following code:

int total_units;
glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &total_units);
std::cout << total_units << '\n';  // the result is 192

Are there 192 texture units in my GPU? The documentation says that

params returns one value, the maximum supported texture image units that can be used to access texture maps from the vertex shader and the fragment processor combined. If both the vertex shader and the fragment processing stage access the same texture image unit, then that counts as using two texture image units against this limit. The value must be at least 48. See glActiveTexture.

So I wanted to know how many texture units can be used to access texture maps from the vertex and fragment shaders. I wrote and ran the following code:

int vertex_units, fragment_units;
glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS, &vertex_units);
std::cout << vertex_units << '\n';   // the result is 32
glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &fragment_units);
std::cout << fragment_units << '\n'; // the result is also 32

So 32 + 32 = 64. But then why does GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS report 192? I think I am missing something. What do I need to add up to get 192?

And also, why are there only the GL_TEXTURE0 to GL_TEXTURE31 macros in OpenGL? I think these macros apply per shader stage. Am I right?

opengl – 3D projection matrix: how does z influence x and y?

I’m reading about 3D projections and I’m trying to understand how the z coordinate can have any impact on the x and y parts of a vertex. See, for example,

w  0   0     0
0  h   0     0
0  0   Q     1
0  0  -Q*Z_n 0

How is it even possible to introduce perspective when, clearly, there is no z involved in computing x and y? (The above matrix works on column vectors.)

shaders – Need help getting an object's orientation / all my OpenGL rotations are reversed


I am bringing blender models into a home made renderer / game engine.
I parse a text file containing object descriptions to load models.


begin_object generic_object
generic_object_name lampost
generic_object_parent_name world
generic_object_position -10.0000 -10.0000 2.000000
generic_object_rotation 90.000000 0.000000 0.000000
generic_object_scale 1.000000 1.000000 1.000000

begin_object ...

The "generic_object_rotation 90.000000 0.000000 0.000000" line describes 3 values:

Rotation around Z ( XY ).
Rotation around X ( YZ ).
Rotation around Y ( XZ ).

After going through all the headaches of Euler angles, with their gimbal lock and singularities, I switched all my code to quaternions. (Highly recommended.)

I am told that a counter-clockwise rotation around the Z axis, looking down the Z axis toward the origin, uses this rotation matrix:

cos(theta)  -sin(theta)  0  0
sin(theta)  cos(theta)   0  0
0           0            1  0
0           0            0  1

I got this from the document rotgen.pdf on the Song-Ho website.

If I replace theta with +90.0 (just like my input file above), the result is:

  0.0 ,  -1.0 ,  0.0 ,  0.0  
  1.0 ,   0.0 ,  0.0 ,  0.0 
  0.0 ,   0.0 ,  1.0 ,  0.0 
-10.0 , -10.0 ,  2.0 ,  1.0 

So I make a quaternion for +90.0 degrees, turn it into a matrix, and then print out the matrix to see if it is the same. I get the same matrix:

  0.0 ,  -1.0 ,  0.0 ,  0.0  
  1.0 ,   0.0 ,  0.0 ,  0.0 
  0.0 ,   0.0 ,  1.0 ,  0.0 
-10.0 , -10.0 ,  2.0 ,  1.0 

All is well...

Then I send this matrix to the shader to draw this object, and it rotates my object CW instead.

In the shader is:

"gl_Position = projection_matrix * view_matrix * model_matrix * vec4( aPos , 1.0 );\n"

which seems correct.

So I drew a cube in Blender and attached different color textures to each side of the cube so I could verify that my input data was good. As long as the model_matrix is the identity, the object is oriented correctly in space. Any time I add rotations, the models rotate in the opposite direction. This happens on all 3 axes.

My current goal/project is the parenting system. I want to be able to extract orientation and position from the model matrix of any object. (That data is stored with the object.)

Specifically, right now I wanted to extract the forward vector from the model_matrix so I could attach a light source to this rotating object and set its light direction for the fragment shader. That is when I found this error.

What I am seeing: the rotation of the object is opposite what I command. When I rotate 0–360 over and over, the forward vector I read from the object's model_matrix diverges from the direction of the object until it reaches 180 degrees, where the face of the object and the forward vector are coincident again; then they diverge again until we reach 360 and they are again coincident.

What I expect (and this may be part of my issue): I want the rotation part of the model_matrix that rotates the object to BE the current orientation of the object. It LOOKS like it is, but the object does not render that way; it rotates in the opposite direction. (Which is preventing me from getting the correct light direction vector, i.e. the forward vector.)

Is this an OpenGL thing?

Is the orientation of an object the transpose of the 3x3 rotation section of the model_matrix?

c++ – How to select a single 3D object out of multiple objects with a mouse click and move it with a mouse drag in OpenGL

I am rendering multiple 3D objects on the screen, but I want to select one particular object by clicking on it and then move it with a mouse drag. I can't figure out how to do it, because all I get from a click on the screen is the 2D coordinates of the mouse pointer.

I am new to OpenGL. Please help.

opengl – Not clearing an FBO's texture gives errors in battery economy mode

When rendering into an FBO's texture, I don't use glClear() but instead overwrite each fragment; GL_BLEND is enabled.

This works just fine, but I just realized that when my laptop switches to economy mode (and, I guess, to the integrated GPU, an Intel 630), the texture is full of garbage data when drawn.

Here you can see first the working render and then the garbage one. They should be exactly the same; the only difference is that my laptop is not plugged into AC.


opengl – Screen space reflections shown at incorrect position

I have been trying to add an SSR post-processing effect to my engine for a while now, but it always fails in the same way: reflections are not properly positioned below the object; instead they are skewed, and they appear and disappear as the camera moves.

I’m using a g-buffer system which renders positions and normals in view space (I have also tried to reconstruct the position from the depth buffer, but it gives the same result). The shaders are listed below.

I suspect that my normals might be wrong, but I'm also using SSAO with the same buffers, and that works just fine.

I have read several tutorials on this topic and tried them, but it always fails with this problem.

Screenshot showing reflections which are skewed.


Screenshot 2, rotated ~270 degrees, no reflections

G-buffer vertex shader:

#version 450 core

layout (location = 0) in vec3 in_position;
layout (location = 1) in vec3 in_normal;
layout (location = 2) in vec2 in_uv;
layout (location = 3) in vec3 in_tangent;
layout (location = 4) in vec3 in_bitangent;

out vec3 worldPosition;
out vec3 viewNormal;
out vec3 viewPosition;
out vec2 texCoord;
out mat3 TBN;

uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;
uniform mat3 normal_matrix;

void CalculateTBN(mat4 modelViewMatrix, vec3 tangent, vec3 bitangent, vec3 normal) {
    TBN = mat3(
        normalize(vec3(modelViewMatrix * vec4(tangent, 0.0))),
        normalize(vec3(modelViewMatrix * vec4(bitangent, 0.0))),
        normalize(vec3(modelViewMatrix * vec4(normal, 0.0)))
    );
}

void main() {
    vec4 position = vec4(in_position, 1.0);
    vec3 normal = in_normal;

    mat4 modelViewMatrix = view * model;
    CalculateTBN(modelViewMatrix, in_tangent, in_bitangent, normal);

    worldPosition = vec3(model * position);
    viewNormal = vec3(normalize(modelViewMatrix * vec4(normal, 1.0)));

    viewPosition = vec3(view * vec4(worldPosition, 1.0));
    texCoord = in_uv;
    gl_Position = projection * view * model * position;
}

G-buffer fragment shader:

#version 450 core

layout (location = 0) out vec3 g_position;
layout (location = 1) out vec3 g_normal;
layout (location = 2) out vec4 g_albedo;
layout (location = 3) out vec3 g_metallness_roughness;
layout (location = 4) out vec4 g_emissive;
layout (location = 5) out float g_depth;

in vec3 worldPosition; // Position in world space
in vec3 viewNormal; // Normal in view space
in vec3 viewPosition; // Position in view space
in vec2 texCoord; 
in mat3 TBN; 

vec2 uv = texCoord;

struct Material {
    float shininess;
    vec3 diffuse_color;
    bool is_solid;

    bool has_specular;
    bool has_normal;
    bool has_emissive;
    bool has_ao;
    bool has_metallic;
    bool has_roughness;
};

uniform Material material;
uniform bool force_solid = false;
uniform vec3 force_color = vec3(0.);
uniform float emissive_pow = 1.0;
uniform bool flip_uv = false;
uniform float mesh_transparency = 1.0;
uniform vec3 tint = vec3(0.); 

layout (binding = 0) uniform sampler2D albedoMap;
layout (binding = 1) uniform sampler2D normalMap;
layout (binding = 2) uniform sampler2D metallicMap;
layout (binding = 3) uniform sampler2D roughnessMap;
layout (binding = 4) uniform sampler2D emissiveMap;

float get_metallic(vec2 uv) {
    if (material.has_metallic) return texture(metallicMap, uv).r;
    return 1.;
}

float get_roughness(vec2 uv) {
    if (material.has_roughness) return texture(roughnessMap, uv).r;
    return 1.;
}

vec3 get_emissive(vec2 uv) {
    if (material.has_emissive) return texture(emissiveMap, uv).rgb * emissive_pow;
    return vec3(0.);
}

vec2 get_uv() {
    if (flip_uv) return vec2(uv.x, 1. - uv.y);
    return uv;
}
void main() {
    vec3 viewNormal;
    bool use_sampler = material.has_normal;
    if (use_sampler) {
        viewNormal = texture(normalMap, texCoord).rgb;
        viewNormal = normalize(viewNormal * 2.0 - 1.0);
        viewNormal = normalize(TBN * viewNormal);
    }

    g_position = viewPosition;
    g_normal = viewNormal;
    g_albedo.rgb = texture(albedoMap, get_uv()).rgb;

    float spec = (g_albedo.r + g_albedo.g + g_albedo.b) / 3.0;
    g_albedo.a = spec;
    g_metallness_roughness.r = get_metallic(get_uv());
    g_metallness_roughness.g = get_roughness(get_uv());
    g_emissive.rgb = get_emissive(get_uv());
    g_emissive.a = mesh_transparency;

    g_depth = gl_FragCoord.z;
}

The normal buffer is declared as RGB32F, same for the position buffer.

And the SSR shader is declared like this (based on Imanol Fotia's SSR tutorial, as noted in the code):

#version 450 core

layout (location = 0) uniform sampler2D gAlbedo;
layout (location = 1) uniform sampler2D gPosition;
layout (location = 2) uniform sampler2D gNormal;
layout (location = 3) uniform sampler2D gMetallicRoughness;

out vec4 FragColor;

uniform mat4 invView;
uniform mat4 projection;
uniform mat4 invProjection;
uniform mat4 view;
uniform float near = 0.1;
uniform float far = 100.0;
uniform vec2 resolution = vec2(1440.0, 810.0);
uniform vec3 cameraPos;

float Near = near;
float Far = far;

in vec2 TexCoords;
vec2 TexCoord = TexCoords;
vec2 texCoord = TexCoords;

uniform int raymarch_iterations = 60;
uniform float raymarch_step_size = 0.25;
uniform float raymarch_min_steps = 0.1;
uniform int numBinarySearchSteps = 10;

uniform vec3 skyColor = vec3(0.0);
uniform int binarySearchCount = 20;
uniform float LLimiter = 0.9;

// SSR based on tutorial by Imanol Fotia
#define GetPosition(texCoord) texture(gPosition, texCoord).xyz

vec2 BinarySearch(inout vec3 dir, inout vec3 hitCoord, inout float dDepth) {
    float depth;

    vec4 projectedCoord;
    for (int i = 0; i < binarySearchCount; i++) {
        projectedCoord = projection * vec4(hitCoord, 1.0);
        projectedCoord.xy /= projectedCoord.w;
        projectedCoord.xy = projectedCoord.xy * 0.5 + 0.5;
        depth = GetPosition(projectedCoord.xy).z;
        dDepth = hitCoord.z - depth;

        dir *= 0.5;

        if (dDepth > 0.0) {
            hitCoord += dir;
        } else {
            hitCoord -= dir;
        }
    }

    projectedCoord = projection * vec4(hitCoord, 1.0);
    projectedCoord.xy /= projectedCoord.w;
    projectedCoord.xy = projectedCoord.xy * 0.5 + 0.5;
    return vec2(projectedCoord.xy);
}

vec2 RayCast(vec3 dir, inout vec3 hitCoord, out float dDepth) {
    dir *= raymarch_step_size;
    for (int i = 0; i < raymarch_iterations; i++) {
        hitCoord += dir;

        vec4 projectedCoord = projection * vec4(hitCoord, 1.0);
        projectedCoord.xy /= projectedCoord.w;
        projectedCoord.xy = projectedCoord.xy * 0.5 + 0.5;

        float depth = GetPosition(projectedCoord.xy).z;

        dDepth = hitCoord.z - depth;

        if ((dir.z - dDepth) < 1.2 && dDepth <= 0.0) {
            return BinarySearch(dir, hitCoord, dDepth);
        }
    }

    return vec2(-1.0);
}

#define Scale vec3(.8, .8, .8)
#define k 19.19

vec3 Hash(vec3 a) {
    a = fract(a * Scale);
    a += dot(a, a.yxz + k);
    return fract((a.xxy + a.yxx) * a.zyx);
}

// source:
#define fresnelExp 15.0

float Fresnel(vec3 direction, vec3 normal) {
    vec3 halfDirection = normalize(normal + direction);
    float cosine = dot(halfDirection, direction);
    float product = max(cosine, 0.0);
    float factor = 1.0 - pow(product, fresnelExp);
    return factor;
}

void main() {
    float reflectionStrength = 1. - texture(gMetallicRoughness, texCoord).r; // metallic in r component
    if (reflectionStrength == 0.0) {
        FragColor = vec4(0., 0., 0., 1.);
        return;
    }

    vec3 normal = texture(gNormal, texCoord).xyz;
    vec3 viewPos = GetPosition(texCoord);

    vec3 worldPos = vec3(vec4(viewPos, 1.0) * inverse(view));
    vec3 jitt = Hash(worldPos) * texture(gMetallicRoughness, texCoord).g; // roughness in g component

    vec3 reflected = normalize(reflect(normalize(viewPos), normalize(normal)));

    vec3 hitPos = viewPos;
    float dDepth;
    vec2 coords = RayCast(jitt + reflected * max(-viewPos.z, raymarch_min_steps), hitPos, dDepth);

    float L = length(GetPosition(coords) - viewPos);
    L = clamp(L * LLimiter, 0., 1.);
    float error = 1. - L;

    float fresnel = Fresnel(reflected, normal);
    vec3 color = texture(gAlbedo, coords.xy).rgb * error * fresnel;

    if (coords.xy != vec2(-1.0)) {
        vec3 res = mix(texture(gAlbedo, texCoord), vec4(color, 1.0), reflectionStrength).rgb;
        FragColor = vec4(res, 1.0);
    } else {
        vec3 rescol = mix(texture(gAlbedo, texCoord), vec4(skyColor, 1.0), reflectionStrength).rgb;
        FragColor = vec4(rescol, 1.0);
    }
}

If you have faced the same situation, or can lead me to some example that can give me more information, please let me know, since I have fought with this one for over two weeks now. All help is highly appreciated!

Thanks in advance!

opengl – Lag Spike When Creating Model

I am creating a game using OpenGL in C++. Whenever I create a new model while the game is running, such as when I fire a bullet, there is a huge lag spike. The function that creates the model is below.

    std::string jsonString;
    jsonString = file->load(type);
    json jf = json::parse(jsonString); // Might be causing the lag
    indicesSizeTexture = jf["textureIndices"].size();
    verticesSizeTexture = jf["textureVertices"].size();
    indicesSizeCollision = jf["collisionIndices"].size();
    verticesSizeCollision = jf["collisionVertices"].size();
    verticesTexture = new float[verticesSizeTexture * 8];
    verticesCollision = new float[verticesSizeCollision * 8];
    verticesCollisionUpdated = new float[verticesSizeCollision * 8];

    indicesTexture = new int[indicesSizeTexture];
    indicesCollision = new int[indicesSizeCollision];

    for (int i = 0; i < verticesSizeTexture; i++) { // responsible for just the texture vertices
        verticesTexture[i] = jf["textureVertices"][i];
    }
    for (int i = 0; i < indicesSizeTexture; i++) { // responsible for just the texture indices
        indicesTexture[i] = jf["textureIndices"][i];
    }
    for (int i = 0; i < verticesSizeCollision; i++) { // responsible for just the collision vertices
        verticesCollision[i] = jf["collisionVertices"][i];
        verticesCollisionUpdated[i] = verticesCollision[i];
    }
    for (int i = 0; i < indicesSizeCollision; i++) { // responsible for just the collision indices
        indicesCollision[i] = jf["collisionIndices"][i];
    }

    //binds id
    glGenBuffers(1, &VBO);
    glGenVertexArrays(1, &VAO);
    glGenBuffers(1, &EBO);
    glGenTextures(1, &texture);

    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glBufferData(GL_ARRAY_BUFFER, verticesSizeTexture * 8 * sizeof(float), verticesTexture, GL_STATIC_DRAW);
    // position attribute
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)0);
    glBindTexture(GL_TEXTURE_2D, texture);
    glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)(6 * sizeof(float)));

    unsigned char* data = stbi_load(texturePathString.c_str(), &width, &height, &nrChannels, 0);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);

    glBindBuffer(GL_ARRAY_BUFFER, 0);

I stripped out a lot of parts that I am almost certain aren’t causing the lag. There is a lot of stuff going on, but it is mostly simple mathematical operations. The only part that I think could be causing the lag is the json section used for loading the model data. The model data is stored in a string variable loaded from a file. I need the json section for data storage, though. What could be causing the lag? Should I find a different data storage type? What if I created a bullet offscreen on startup, then copied it whenever I needed it? The specific json library I am using is

opengl – Updating instanced model transforms in a VBO every frame

I am using OpenGL (via the LWJGL wrapper) to render a large number of models with instanced rendering. As far as I can tell I have implemented the instancing correctly, although, after profiling, I’ve come upon an issue.

The program is able to render a million cubes at 60 fps when their model (world) transformations are not changing. Once I make them all spin, though, the performance drops significantly. I deduced from the profiler that this is due to the way I write the matrix data to the VBO.

My current approach is to give each unique mesh its own VAO (so all instances of cubes come under one VAO), with one VBO for vertex positions, texture coordinates, and normals, and one instance array (VBO) for storing instance model matrices. The vertex attributes are interleaved.

In order to make the cubes spin, I need to update the instance VBO every frame. I do that by iterating through every instance and copying the matrix values into the VBO.

The code is something like this:

float[] matrices = new float[models_by_mesh.get(mesh).size() * 16];

for (int i = 0; i < models.size(); i++) {
    Model cube = models.get(i);
    float[] matrix = new float[16];
    cube.getModelMatrix(matrix);    // store model matrix into array
    System.arraycopy(matrix, 0, matrices, i * 16, 16);
}

glBindBuffer(GL_ARRAY_BUFFER, instance_buffers_by_mesh.get(mesh));
glBufferData(GL_ARRAY_BUFFER, matrices, GL_STATIC_DRAW);


I realise that I create new buffer storage and a new float array every frame by calling glBufferData instead of glBufferSubData, but when I write:

//outside loop soon after VBO creation
glBufferData(GL_ARRAY_BUFFER, null, GL_DYNAMIC_DRAW); //or stream

//when updating models
glBufferSubData(GL_ARRAY_BUFFER, 0, matrices)

nothing is displayed; I'm not sure why. Perhaps I'm misusing glBufferSubData, but that's another issue.

I have been looking at examples of particle simulators (in OpenGL), and most of them update the instance VBO the same way I do.

I'm not sure what the problem could be, and I can't think of a more efficient way of updating the VBO. I'm asking for suggestions / potential improvements to my code.

Many thanks 🙂

Which is more efficient in OpenGL: texelFetch() or imageLoad()?

If I just need to read data from a buffer object, I have two methods. First, load the data into a buffer texture and read it with texelFetch() in the shader. Second, load the data into a buffer texture, bind the buffer texture to an image unit, and read it with imageLoad(). I want to know which is more efficient when there is a great deal of data.

opengl – How to fit a texture to a curved plane with GL_CLAMP

I’m trying to fit a texture onto a curved plane made from some triangles using GL_CLAMP, because GL_CLAMP_TO_EDGE is not available in my OpenGL version, but the texture (512 x 512 pixels) appears very small when rendered:

glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP); // If the u coordinate leaves the range [0,1] it is clamped
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); // The magnification function ("linear" produces better results)
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // The minifying function



I load the texture like this:

glTexImage2D(GL_TEXTURE_2D, 0, 4, infoheader.biWidth, infoheader.biHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, l_texture);
gluBuild2DMipmaps(GL_TEXTURE_2D, 4, infoheader.biWidth, infoheader.biHeight, GL_RGBA, GL_UNSIGNED_BYTE, l_texture);