opengl – Is creating Windows windows faster / more efficient than creating GLFW windows?

It depends

GLFW is just a wrapper around Windows API calls, so whether you create a window using GLFW or create one using the Windows API directly, the same calls are ultimately made.

However, you can expect a wrapper like GLFW to be robust: it checks for errors, selects optimal pixel formats, and so on, which you might not do if you just wrote the code yourself. So your own code may create a window slightly faster, but it is also more likely to do the wrong thing.

At this point, you really need to read the OpenGL wiki start pages to understand the process of initializing OpenGL.

But ultimately, what does it matter? Creating a window takes a few milliseconds at the start of your application, and once the window is created, OpenGL takes over, so no matter how it was created – all else being equal (pixel formats, acceleration hardware, etc.), the performance of OpenGL in your program will be the same.
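To put that in perspective, the entire GLFW setup is a handful of calls that run once at startup; a minimal sketch (error handling trimmed):

#include <GLFW/glfw3.h>

int main() {
    if (!glfwInit())
        return -1;

    // One-time cost: window creation, pixel format selection, context creation.
    GLFWwindow* window = glfwCreateWindow(1280, 720, "Demo", nullptr, nullptr);
    if (!window) {
        glfwTerminate();
        return -1;
    }
    glfwMakeContextCurrent(window);

    // From here on, performance is determined by what you do with OpenGL,
    // not by how the window was created.
    while (!glfwWindowShouldClose(window)) {
        /* render */
        glfwSwapBuffers(window);
        glfwPollEvents();
    }

    glfwTerminate();
    return 0;
}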

opengl – Rendering to framebuffer object does not work

There is a problem when I try to render to the texture attached to my framebuffer object. I get the message that the framebuffer object is complete, but all I get is a black texture.

This is the code to create the FBO:

void CubeMap::createEmptyCubeMap(int size) {
    this->size = size;


    // create texture
    glGenTextures(1, &textureID);
    glBindTexture(GL_TEXTURE_CUBE_MAP, textureID);

    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);


    // Allocate space for each side of the cube map
    // RGBA color texturing
    for (GLuint i = 0; i < 6; i++)
    {
        glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGBA8, size,
            size, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    }

    // create the framebuffer object
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_CUBE_MAP_POSITIVE_X, textureID, 0);


    glGenRenderbuffers(1, &depthBuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, depthBuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, this->size, this->size);

    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthBuffer);

    verifyStatus();
    printFramebufferLimits();

    // attach color
    glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, textureID, 0);
    glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, depthBuffer, 0);

    glBindFramebuffer(GL_FRAMEBUFFER, 0);

}
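For reference, verifyStatus() is not shown above; a typical completeness check looks something like the sketch below (an assumption on my part, not necessarily the author's implementation):

#include <iostream>

void CubeMap::verifyStatus() {
    // Must be called while the FBO is bound to GL_FRAMEBUFFER.
    const GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if (status == GL_FRAMEBUFFER_COMPLETE) {
        std::cout << "Framebuffer is complete\n";
    } else {
        std::cerr << "Framebuffer incomplete, status: 0x"
                  << std::hex << status << std::endl;
    }
}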

And here is where I render to it:

void CubeMap::renderEnviromentMap(glm::vec3 center, CubeMap obj, Shader* shader) {

    CubeMapCamera camera = CubeMapCamera(center);

    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glViewport(0, 0, this->size, this->size);

    for (int i = 0; i < 6; i++) {
        camera.switchToFace(i);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, textureID, 0);


        obj.renderCubeMap(shader);  //render random object for testing

    }

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glBindTexture(GL_TEXTURE_CUBE_MAP,0);    
    glViewport(0, 0, screenWidth, screenHeight);    
}

Does anyone know what I am doing wrong here?

opengl – GLM conversion of euler angles to quaternion and back does not hold

I am trying to convert the orientation of an OpenVR controller, which I have stored as Euler angles in a glm::vec3, to a glm::fquat and back, but I get very different results and the in-game behavior is just wrong (difficult to explain, but the orientation of the object behaves normally over a small range of angles, then flips around strange axes).

Here is my conversion code:

// get `orientation` from OpenVR controller sensor data

const glm::vec3 eulerAnglesInDegrees{orientation[PITCH], orientation[YAW], orientation[ROLL]};
debugPrint(eulerAnglesInDegrees);

const glm::fquat quaternion{glm::radians(eulerAnglesInDegrees)};
const glm::vec3 result{glm::degrees(glm::eulerAngles(quaternion))};
debugPrint(result);

// `result` should represent the same orientation as `eulerAnglesInDegrees`

I would expect eulerAnglesInDegrees and result to be either the same representation, or equivalent representations of the same orientation, but this is apparently not the case. Here are some examples of the values I get printed:

39.3851 5.17816 3.29104 
39.3851 5.17816 3.29104 

32.7636 144.849 44.3845 
-147.236 35.1512 -135.616 

39.3851 5.17816 3.29104 
39.3851 5.17816 3.29104 

32.0103 137.415 45.1592 
-147.99 42.5846 -134.841 

As you can see above, for some orientation ranges, the conversion is correct, but for others it is completely different.

What am I doing wrong?

I looked at the existing questions and tried a few things, including trying all of the possible rotation orders listed here, conjugating the quaternion and other random things like reversing the pitch / yaw / roll. Nothing has given me the expected result.

How can I convert euler angles to quaternions and back, representing the original orientation, using glm?


Some other examples of discrepancies:

original:      4; 175;   26; 
computed:   -175;   4; -153; 
difference:  179; 171;  179; 

original:     -6; 173;   32; 
computed:    173;   6; -147; 
difference: -179; 167;  179; 

original:      9; 268;  -46; 
computed:   -170; -88;  133; 
difference:  179; 356; -179; 

original:    -27; -73;  266; 
computed:    -27; -73;  -93; 
difference:    0;   0;  359; 

original:    -33; 111;  205; 
computed:    146;  68;   25; 
difference: -179;  43;  180; 

I tried to find a pattern to fix the final computed results, but there does not seem to be an easy one to identify.
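For what it's worth, one way to check whether two of these triples encode the same rotation, independent of the angle representation, is to compare the quaternions they produce; a minimal sketch using glm (the 0.9999 tolerance is an arbitrary choice of mine):

#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// Two Euler-angle triples (in degrees) describe the same orientation iff the
// quaternions built from them are equal up to sign: |dot(qa, qb)| ~= 1.
bool sameOrientation(const glm::vec3& aDegrees, const glm::vec3& bDegrees) {
    const glm::fquat qa{glm::radians(aDegrees)};
    const glm::fquat qb{glm::radians(bDegrees)};
    return std::abs(glm::dot(qa, qb)) > 0.9999f;
}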


GIF + behavior video:

Video clip


Visual representation of my current intuition / understanding:

Visual diagram

  • The image above shows a sphere, and I'm in the center. When I point the gun towards the green half of the sphere, the orientation is correct. When I aim the gun at the red half of the sphere, it's incorrect – it seems like every axis is reversed, but I'm not 100% sure that it is.

opengl – Incorrect UV sphere mesh generation

I am using the same algorithm to generate the mesh of a UV sphere as described in this wiki: http://wiki.unity3d.com/index.php/ProceduralPrimitives#C.23_-_Sphere

My implementation is in C++. I don't know what's wrong with it. The indices seem incorrect. Does anyone know what I am doing wrong?

Rendering: Procedural UV sphere

Mesh generation code:

struct Sphere {
    float radius_; 
    math::Vec3f center_; 
}; 

struct Vertex {
    math::Vec3f position_;
    math::Vec3f normal_;
    math::Vec2f texture_coordinate_;
};

void MeshGenerator::Generate(const math::Sphere& sphere,
                             std::vector<Vertex>& vertices,
                             std::vector<uint32_t>& indices) {

    constexpr uint32_t latitude_count = 16;
    constexpr uint32_t longitude_count = 24;

    vertices.clear();
    indices.clear();

    const uint32_t vertex_count = (longitude_count + 1) * latitude_count + 2;
    vertices.resize(vertex_count);

    // Generate Vertices
    vertices[0].normal_ = math::Vec3f::Up();
    vertices[0].position_ = (vertices[0].normal_ * sphere.radius_) + sphere.center_;
    vertices[0].texture_coordinate_ = math::Vec2f(0.0F, 1.0F);
    for(uint32_t lat = 0; lat < latitude_count; ++lat) {
        float a1 = math::kPi * static_cast<float>(lat + 1) / (latitude_count + 1);
        float sin1 = math::Sin(a1);
        float cos1 = math::Cos(a1);
        for(uint32_t lon = 0; lon <= longitude_count; ++lon) {
            float a2 = math::kTwoPi * static_cast<float>(lon == longitude_count ? 0 : lon) / longitude_count;
            float sin2 = math::Sin(a2);
            float cos2 = math::Cos(a2);
            Vertex vertex{};
            vertex.normal_.x_ = sin1 * cos2;
            vertex.normal_.y_ = cos1;
            vertex.normal_.z_ = sin1 * sin2;
            vertex.position_ = (vertex.normal_ * sphere.radius_) + sphere.center_;
            vertex.texture_coordinate_.x_ = static_cast<float>(lon) / longitude_count;
            vertex.texture_coordinate_.y_ = static_cast<float>(lat) / latitude_count;
            vertices[lon + lat * (longitude_count + 1) + 1] = vertex;
        }
    }
    vertices[vertex_count - 1].normal_ = math::Vec3f::Down();
    vertices[vertex_count - 1].position_ = (vertices[vertex_count - 1].normal_ * sphere.radius_) + sphere.center_;
    vertices[vertex_count - 1].texture_coordinate_ = math::Vec2f::Zero();

    // Generate Indices
    // Top
    for (uint32_t lon = 0; lon < longitude_count; ++lon) {
        indices.push_back(lon + 2);
        indices.push_back(lon + 1);
        indices.push_back(0);
    }

    // Middle
    for(uint32_t lat = 0; lat < latitude_count - 1; ++lat) {
        for(uint32_t lon = 0; lon < longitude_count; ++lon) {
            const uint32_t current = lon + lat * (longitude_count + 1) + 1;
            const uint32_t next = current + longitude_count + 1;

            indices.push_back(current);
            indices.push_back(current + 1);
            indices.push_back(next + 1);

            indices.push_back(current);
            indices.push_back(next + 1);
            indices.push_back(next);
        }
    }

    // Bottom
    for (uint32_t lon = 0; lon < longitude_count; ++lon) {
        indices.push_back(vertex_count - 1);
        indices.push_back(vertex_count - (lon + 2) - 1);
        indices.push_back(vertex_count - (lon + 1) - 1);
    }
}

Here is the OpenGL mesh rendering code. I omitted the shader program and the configuration code.

... set view port ... 
... clear color/depth/stencil buffers ... 
... create/use shader program and set uniforms ... 
glGenVertexArrays(1, &vao_);
glGenBuffers(1, &vbo_);
glGenBuffers(1, &ebo_);

glBindVertexArray(vao_);

// Set vertex data
glBindBuffer(GL_ARRAY_BUFFER, vbo_);
glBufferData(GL_ARRAY_BUFFER,
             vertices_.size() * sizeof(decltype(vertices_)::value_type),
             vertices_.data(),
             GL_STATIC_DRAW);

// Set index data
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo_);
glBufferData(GL_ELEMENT_ARRAY_BUFFER,
             indices_.size() * sizeof(decltype(indices_)::value_type),
             indices_.data(),
             GL_STATIC_DRAW);

// Set vertex attribute pointers
// Position
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, position_));
// Normal
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, normal_));
// Texture Coordinate
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, texture_coordinate_));

glDrawArrays(GL_TRIANGLES, 0, indices_.size());
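For comparison, indexed geometry is normally drawn with glDrawElements, which reads indices from the bound GL_ELEMENT_ARRAY_BUFFER; a minimal sketch of such a draw call (not the code above):

glBindVertexArray(vao_);
glDrawElements(GL_TRIANGLES,
               static_cast<GLsizei>(indices_.size()),  // number of indices
               GL_UNSIGNED_INT,                        // index type stored in the EBO
               nullptr);                               // byte offset into the bound EBO
glBindVertexArray(0);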

sdl – Skia and OpenGL cannot create GrContext

I am currently trying to implement a 2D graphical interface in my game.

I use Skia. I have just built it, but when including it from C++, I get the error that the incomplete type GrContext is not allowed.

I include things like this:

#include 
#include 

#include 
#include 
#include 
#include 
#include 
#include 
#include 

#include 
#include 
#include 
#include 

I use SDL.
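For context, in Skia revisions from around that era GrContext is forward-declared in several headers and fully defined in GrContext.h, and creating one for an OpenGL context made current by SDL looked roughly like the sketch below. This is an assumption on my part: header paths and factory functions vary by Skia revision, and newer releases renamed GrContext to GrDirectContext.

// Assumed header locations for an older Skia checkout; adjust to your revision.
#include "include/gpu/GrContext.h"
#include "include/gpu/gl/GrGLInterface.h"

// The SDL OpenGL context must already be current on this thread.
sk_sp<const GrGLInterface> glInterface = GrGLMakeNativeInterface();
sk_sp<GrContext> grContext = GrContext::MakeGL(glInterface);
if (!grContext) {
    // Creation failed; check that a valid GL context is current.
}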

opengl – Geometric rendering problem

I am currently working on a 3D game engine with OpenGL 4 and C++. The problem is that, and I don't know why, my geometry is not rendered correctly, except for the primitives.

Example

On the right, you can see a cube; it is rendered as expected.
In the center, you can see a mesh that I created in Blender; it is not rendered as expected. On the left, you can see my mesh rendered as expected, BUT I had to adjust the mesh's Z scale (and I shouldn't have to).

So, in short: my meshes are not correctly proportioned.

I checked the coordinates of each vertex in Blender and in my project: they are the same.

I don't know if there is a problem with my matrices; the only thing I know is that this problem only appears on the Z axis (Z is up in my engine).

Everything makes me think that somewhere a number is being rounded, but I don't see where.

I leave some code here, which could be useful:

Mesh rendering code:

void RD_Mesh::render(RenderMode rndrMode) {
    if (rndrMode == RenderMode::Filled) {
        glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
    }
    else {
        glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
    }

    //m_shader->useShader();
    m_mat->BindMaterial();

    glm::mat4 mdl = glm::mat4(1.0f); //Declaring Model Matrix

    glm::mat4 translate = glm::mat4(1.0f);
    glm::mat4 scale = glm::mat4(1.0f);
    glm::mat4 rotation = glm::mat4(1.0f);

    //Position
    translate = glm::translate(translate, glm::vec3(m_position.getX(), m_position.getY(), m_position.getZ()));

    //Scale
    scale = glm::scale(scale, glm::vec3(m_scale.getX(), m_scale.getY(), m_scale.getZ()));

    //Rotation
    rotation = glm::rotate(rotation, glm::radians(m_rotation.getX()), glm::vec3(1.0f, 0.0f, 0.0f));
    rotation = glm::rotate(rotation, glm::radians(m_rotation.getY()), glm::vec3(0.0f, 1.0f, 0.0f));
    rotation = glm::rotate(rotation, glm::radians(m_rotation.getZ()), glm::vec3(0.0f, 0.0f, 1.0f));

    mdl = translate * rotation * scale;

    m_shader->SetMatrix("model", mdl);

    glBindVertexArray(VAO);
    glDrawElements(GL_TRIANGLES, RAWindices.size(), GL_UNSIGNED_INT, 0);
    glBindVertexArray(0);
}
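As a side note, one way to check what scale the composed matrix actually applies is to dump it: for a translate * rotation * scale matrix, the per-axis scale is the length of each column of the upper-left 3x3. A hypothetical debug helper (not part of the engine above):

#define GLM_ENABLE_EXPERIMENTAL  // required for gtx extensions in recent GLM
#include <iostream>
#include <glm/glm.hpp>
#include <glm/gtx/string_cast.hpp>  // glm::to_string

void debugDumpModel(const glm::mat4& mdl) {
    std::cout << glm::to_string(mdl) << "\n";
    std::cout << "scale x/y/z: "
              << glm::length(glm::vec3(mdl[0])) << " "
              << glm::length(glm::vec3(mdl[1])) << " "
              << glm::length(glm::vec3(mdl[2])) << std::endl;
}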

Camera code:

void RD_Camera::SetupCamera() {
    projection = glm::perspective(glm::radians(FOV), (float)m_rndr->getWindowWidth() / m_rndr->getWindowHeigh(), m_near, m_far); //Projection matrix

    view = glm::lookAt(glm::vec3(m_pos.getX(), m_pos.getY(), m_pos.getZ()), glm::vec3(m_subject.getX(), m_subject.getY(), m_subject.getZ()), glm::vec3(0.0f, 0.0f, 1.0f)); //View matrix

    m_rndr->GetCurrentShader()->SetMatrix("projection", projection);
    m_rndr->GetCurrentShader()->SetMatrix("view", view);
    m_rndr->GetCurrentShader()->SetVec3("CamPos", m_pos);
}

My Vertex Shader:

#version 450 core

layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;

out vec3 Normal;
out vec3 FragPos;

uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;

void main()
{
    gl_Position = projection * view * model * vec4(aPos, 1.0);

    Normal = normalize(mat3(transpose(inverse(model))) * aNormal);
    FragPos = vec3(model * vec4(aPos, 1.0));
}

Ps: I am sorry for my English, it is not my mother tongue.

opengl – I don't understand why my projection matrix works

My projection matrix was buggy. I'm not very good at math, but I checked it against the songho tutorial, and the broken one seems correct, yet changing nearplane to farplane seems to have fixed it. What am I missing? My nearplane and farplane values are positive; nearplane is small, about 0.01 (1.0f the last time I ran both versions); farplane is generally relatively large, around 1000.0f (500.0f the last time I ran both versions).

f32 l = left;
f32 r = right;
f32 t = top;
f32 b = bottom;
f32 n = nearplane;
f32 f = farplane;

    m4x4 Result =  //TODO why did changing n to f in 0 and 5 fix it? and make sure it is fixed
    {
#if 0 // works
        2*f/(r-l), 0, (r+l)/(r-l), 0,
        0, 2*f/(t-b), (t+b)/(t-b), 0,
        0, 0, -(f+n)/(f-n), -2*f*n/(f-n),
        0, 0, -1, 0,
#else // doesn't
        (2*n)/(r-l), 0, (r+l)/(r-l), 0,
        0, (2*n)/(t-b), (t+b)/(t-b), 0,
        0, 0, -(f+n)/(f-n), (-2*f*n)/(f-n),
        0, 0, -1, 0,
    };
#endif
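For reference, the standard OpenGL perspective (glFrustum-style) projection matrix, as derived in the songho article, is the n-based version; written row-major to match the literal above:

P = \begin{pmatrix}
\frac{2n}{r-l} & 0 & \frac{r+l}{r-l} & 0 \\
0 & \frac{2n}{t-b} & \frac{t+b}{t-b} & 0 \\
0 & 0 & -\frac{f+n}{f-n} & -\frac{2fn}{f-n} \\
0 & 0 & -1 & 0
\end{pmatrix}

So if the f-based version is the one that looks right on screen, it may be worth double-checking the l, r, t, b values being fed in, and whether the matrix gets transposed (row- vs column-major) before being handed to OpenGL.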

opengl – How to convert the screen to global coordinates while using gluLookAt / gluPerspective or similar matrix transformations?

I'm just starting an adventure looking under the hood of graphics for a game project I've been working on for a while, and I could use some tips.

I am using Python / Kivy (although that is not really part of the concern), and I am trying to use projection and modelview matrices to perform screen-to-world coordinate conversion. I use something similar to the gluLookAt and gluPerspective matrix transformations for these.

The problem I run into is that when I multiply the mv and p matrices together, invert the result, and then multiply by the screen coordinates in NDC, the resulting coordinates end up only a fraction of a pixel away from the world position look_at is currently centered on, or at most a few pixels +/- from that point.

I know I am missing something and I would like someone to help me understand. I wrote a standalone example and made a short YouTube video showing what problem I'm having.

https://youtu.be/UxbWQO9e0NE
https://gist.github.com/spinningD20/951e49cb836f08c434a0e9ab0e90c766

The code in question is the screen_to_world method in the gist, when using the camera_look_at_perspective method to create the MVP, which I will list here:

p = Matrix()
p.perspective(90., 16 / 9, 1, 1000)

self.canvas['projection_mat'] = p
self.canvas['modelview_mat'] = Matrix().look_at(w_x, w_y - 30, self.camera_scale * 350, w_x, w_y, 0, 0, 1, 0)

This is to create the matrices, and …

def screen_to_world(self, x, y):
    proj = self.canvas['projection_mat']
    model = self.canvas['modelview_mat']

    # get the inverse of the current matrices, MVP
    m = Matrix().multiply(proj).multiply(model)
    inverse = m.inverse()
    w, h = self.size

    # normalize pos in window
    norm_x = x / w * 2.0 - 1.0
    norm_y = y / h * 2.0 - 1.0

    p = inverse.transform_point(norm_x, norm_y, 0)
    # print('convert_from_screen_to_world', x, y, p)
    return p[:2]

This was originally written to convert coordinates when using my previous projection matrix, which was constructed using translation and a clip space (also included in the example).

Although the implementation seems specific to Kivy, it is just a modelview matrix and a projection matrix being used, and the Matrix.transform_point method used above is the same as multiplying a vector against the matrix in question. It can also take what appears to be the w component of a vec4, which I have also experimented with, with no apparent change.

Here is a screenshot of the standalone example, painting where I moved the mouse over the screen (red) and where the resulting world coordinate ends up (green). The goal is for the converted world coordinates to fall directly under the red.

incorrectly converted coordinates
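For reference, a complete unprojection divides by w after multiplying by the inverse of projection * modelview; a minimal sketch in glm terms (not Kivy-specific, names are illustrative):

#include <glm/glm.hpp>

// Map window coordinates plus a chosen NDC depth back to world space.
// The divide by w at the end is the step a plain inverse-matrix multiply skips.
glm::vec3 screenToWorld(float x, float y, float ndcZ,
                        float width, float height,
                        const glm::mat4& projection, const glm::mat4& modelview) {
    const glm::vec4 ndc(x / width * 2.0f - 1.0f,
                        y / height * 2.0f - 1.0f,
                        ndcZ,
                        1.0f);
    glm::vec4 world = glm::inverse(projection * modelview) * ndc;
    return glm::vec3(world) / world.w;  // perspective divide
}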

opengl – What are the options when floating-point textures are not precise enough?

I am currently experimenting with rendering a terrain on a planetary scale.

I generate the terrain on the GPU with noise, and to work around 32-bit floating-point accuracy issues, I generate the elevation maps that require the most precision (the deepest levels of the quadtree that I use for level of detail) by bicubically interpolating the elevation maps generated for their parents (which are generated with normal 32-bit floats, as they do not need as much precision).

Doing it this way limits the amount of detail the terrain can have, since the smallest details are generated with 32-bit floats, and if more detail is needed, bicubic interpolation is used to generate the intermediate height values.

The problem with this is that if I want extremely detailed terrain, the 32-bit floating-point textures (which is where I store the height maps) are not precise enough, which results in terraced terrain.
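For a sense of scale, the spacing between adjacent 32-bit floats grows with the magnitude of the stored value; for example, if a height were stored at planetary magnitude (an assumption on my part, since the maps may well store local offsets instead), the smallest representable step near an Earth-sized radius is already about half a metre. A quick check:

#include <cmath>
#include <cstdio>

int main() {
    const float radius = 6371000.0f;  // roughly Earth's radius in metres
    // Distance to the next representable float: the smallest height step
    // a 32-bit float can encode at this magnitude.
    const float step = std::nextafterf(radius, INFINITY) - radius;
    std::printf("float32 step near %.0f m: %g m\n", radius, step);
    return 0;
}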

Is there a way to store more distinct values inside a texture?

Thank you.

opengl – PBR lighting calculation returns strange results at (0, 0, 0) or lower positions

I am writing a physically based renderer using this tutorial: https://learnopengl.com/PBR/Lighting

The vertex shader they use can be found here: https://learnopengl.com/code_viewer_gh.php?code=src/6.pbr/1.2.lighting_textured/1.2.pbr.vs

and the fragment shader here: https://learnopengl.com/code_viewer_gh.php?code=src/6.pbr/1.2.lighting_textured/1.2.pbr.fs

To calculate the lighting of a fragment, four main functions are used in the lighting calculation in the tutorial above. The one I have problems with, and which seems to misbehave when the viewing position is at or below zero on one of the axes, is the following:

float GeometrySmith(vec3 N, vec3 V, vec3 L, float roughness)
{
    float NdotV = max(dot(N, V), 0.0);
    float NdotL = max(dot(N, L), 0.0);
    float ggx2  = GeometrySchlickGGX(NdotV, roughness);
    float ggx1  = GeometrySchlickGGX(NdotL, roughness);

    return ggx1 * ggx2;
}

Which calls the following method:

float GeometrySchlickGGX(float NdotV, float roughness)
{
    float r = (roughness + 1.0);
    float k = (r*r) / 8.0;

    float num   = NdotV;
    float denom = NdotV * (1.0 - k) + k;

    return num / denom;
}
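For reference, these two functions implement the Smith geometry term with the Schlick-GGX approximation from the tutorial; note that it depends only on the dot products N·V and N·L, not on absolute positions:

k_{\text{direct}} = \frac{(\text{roughness} + 1)^2}{8}, \qquad
G_{\text{SchlickGGX}}(x) = \frac{x}{x\,(1 - k) + k}, \qquad
G = G_{\text{SchlickGGX}}(N \cdot V)\; G_{\text{SchlickGGX}}(N \cdot L)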

Here N is the normal vector, V is the view direction, calculated by subtracting the fragment position from the camera's view position, and L is the light direction, calculated by normalizing the light position minus the fragment position.

To calculate the final illumination of a fragment from a point light, the following code is used:

vec3 CalcPointLight(PointLight light, vec3 normal, vec3 viewDir, vec3 fragPos, vec3 albedo, float roughness, float metallic, vec3 F0)
{
    // calculate per-light radiance
    vec3 L = normalize(light.position - fragPos);
    vec3 H = normalize(viewDir + L);
    float distance    = length(light.position - fragPos);
    float attenuation = 1.0 / (distance * distance);
    vec3 radiance     = light.diffuse;        

    // cook-torrance brdf
    float NDF = DistributionGGX(normal, H, roughness);        
    float G   = GeometrySmith(normal, viewDir, L, roughness);      
    vec3 F    = fresnelSchlick(clamp(dot(H, viewDir), 0.0, 1.0), F0);

    vec3 kS = F;
    vec3 kD = vec3(1.0) - kS;
    kD *= 1.0 - metallic;     

    vec3 numerator    = NDF * G * F;
    float denominator = 4.0 * max(dot(normal, viewDir), 0.0) * max(dot(normal, L), 0.0);
    vec3 specular     = numerator / max(denominator, 0.001);  

    // add to outgoing radiance Lo
    float NdotL = max(dot(normal, L), 0.0);                
    return (kD * albedo / PI + specular) * radiance * NdotL; 
}

The float G, calculated by calling GeometrySmith, behaves strangely when the light is positioned at (0, 0, 0) or lower on one or more of its axes, or when the camera is. It seems that this calculation fails unless both the camera and the light are positioned with a positive value on every axis.

Can anyone understand why this could be the case?