directx – Specifying a root signature in the HLSL code of a DXR shader

I’ve noticed that I cannot specify a root signature in the HLSL code of a DXR shader. For example, if I have a ray generation shader with the following declaration:

[rootsignature(
    "RootFlags(LOCAL_ROOT_SIGNATURE),"
    "DescriptorTable("
    "UAV(u0, numDescriptors = 1),"
    "SRV(t0, numDescriptors = 1))")]
[shader("raygeneration")]
void RayGen()
{}

CreateRootSignature yields the error message:

No root signature was found in the dxil library provided to CreateRootSignature. (STATE_CREATION ERROR #696: CREATE_ROOT_SIGNATURE_BLOB_NOT_FOUND).

I’ve noticed that even when I introduce a typo (for example, writing roosignature instead of rootsignature), the compiler doesn’t complain. So it seems the whole attribute declaration is simply ignored.

If I change the code to a simple rasterization shader, everything works as expected.

So, is the specification of a root signature in the HLSL code of a DXR shader not supported?
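
For comparison, a local root signature with the same layout can always be built on the API side and passed to CreateRootSignature directly. A minimal C++ sketch using the d3dx12.h helpers (device and rootSig are assumed to exist; error handling is reduced to the HR-style checks used elsewhere in this digest):

#include "d3dx12.h"
#include <wrl/client.h>

// u0 and t0 as a single descriptor table, matching the HLSL attribute above.
CD3DX12_DESCRIPTOR_RANGE ranges[2];
ranges[0].Init(D3D12_DESCRIPTOR_RANGE_TYPE_UAV, 1, 0); // UAV(u0, numDescriptors = 1)
ranges[1].Init(D3D12_DESCRIPTOR_RANGE_TYPE_SRV, 1, 0); // SRV(t0, numDescriptors = 1)

CD3DX12_ROOT_PARAMETER table;
table.InitAsDescriptorTable(_countof(ranges), ranges);

CD3DX12_ROOT_SIGNATURE_DESC desc(1, &table, 0, nullptr,
    D3D12_ROOT_SIGNATURE_FLAG_LOCAL_ROOT_SIGNATURE);

Microsoft::WRL::ComPtr<ID3DBlob> blob, error;
HR(D3D12SerializeRootSignature(&desc, D3D_ROOT_SIGNATURE_VERSION_1, &blob, &error));
HR(device->CreateRootSignature(0, blob->GetBufferPointer(), blob->GetBufferSize(),
                               IID_PPV_ARGS(&rootSig)));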

directx – Blue color instead of alpha using Alpha Blending

I am testing rendering with an alpha blending state, following this guide.
The aim is to add snow on top of the terrain's grass texture.
However, I get the wrong result: blue fills all the pixels where alpha = 0.

I have checked the instructions three times and can't find any mistakes.

Could someone explain what is wrong with my code?

VS and PS:

SnowPSIn SnowVSMain(SnowVSIn Input)
{
    SnowPSIn Output;
    float fY = g_txHeightMap.SampleLevel(g_samLinear, Input.vTexCoord, 0).a * g_fHeightScale;
    float4 vWorldPos = float4(Input.vPos + float3(0.0, fY, 0.0), 1.0);
    Output.vPos = mul(vWorldPos, g_mViewProj);
    Output.vTexCoord.xy = Input.vTexCoord;
    Output.vTexCoord.z = FogValue(length(vWorldPos.xyz - g_mInvCamView[3].xyz));
    Output.vTexCoord.w = length(vWorldPos.xyz - g_mInvCamView[3].xyz);
    //Output.vShadowPos = mul(vWorldPos, g_mLightViewProj);

    return Output;
}

float4 SnowPSMain(SnowPSIn Input) : SV_Target
{
    float4 vSnowColor = g_txTerrSnow.Sample(g_samLinear, Input.vTexCoord.xy * 64);
    return vSnowColor;
}

BlendState:

BlendState SrcAlphaBlendingAdd
{
    BlendEnable[0] = TRUE;
    SrcBlend = SRC_ALPHA;
    DestBlend = INV_SRC_ALPHA;
    BlendOp = ADD;
    SrcBlendAlpha = ZERO;
    DestBlendAlpha = ZERO;
    BlendOpAlpha = ADD;
    RenderTargetWriteMask[0] = 0x0F;
};
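
For reference (background, not from the guide itself), with these settings the output merger computes

    rgb_out = a_src * rgb_src + (1 - a_src) * rgb_dst,    a_out = 0

so any pixel where the snow texture samples alpha = 0 should leave the destination color untouched. If those pixels come out blue instead, the sampled alpha (or the texture's alpha channel itself) may not contain what is expected.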

Pass:

pass RenderShowPass
{
    SetVertexShader(CompileShader(vs_5_0, SnowVSMain()));
    SetGeometryShader(NULL);
    SetPixelShader(CompileShader(ps_4_0, SnowPSMain()));
    SetBlendState(SrcAlphaBlendingAdd, float4(0.0f, 0.0f, 0.0f, 0.0f), 0xFFFFFFFF);
}

all sources

Before: initial scene

After: final scene

Apply multiple Shaders to one texture with DirectX

I’m beginning with DirectX development and I’m quite confused by the documentation about how to do the following:

I have 1 image (Texture2D), and I’d like to apply 2 independent HLSL shaders to it, one after the other, and render the result.

For instance, one shader makes the texture semi-transparent, the other turns it to black and white.
Also, I’m computing everything off-screen, so I don’t have a SwapChain.

So far the output texture I get has the 1st shader applied but not the second. If I switch the order in which the shaders are applied, I see the 1st shader (in the new order) applied but not the second. In other words, both shaders work separately but not together.
I’m using SharpDX and C#.

Here is what I did:

Setup

  1. Create D3D11 device.
  2. Set Blend state
            BlendStateDescription blendStateDescription = new BlendStateDescription
            {
                AlphaToCoverageEnable = false,
            };
            blendStateDescription.RenderTarget[0].IsBlendEnabled = true;
            blendStateDescription.RenderTarget[0].SourceBlend = BlendOption.SourceAlpha;
            blendStateDescription.RenderTarget[0].DestinationBlend = BlendOption.InverseSourceAlpha;
            blendStateDescription.RenderTarget[0].BlendOperation = BlendOperation.Add;
            blendStateDescription.RenderTarget[0].SourceAlphaBlend = BlendOption.Zero;
            blendStateDescription.RenderTarget[0].DestinationAlphaBlend = BlendOption.Zero;
            blendStateDescription.RenderTarget[0].AlphaBlendOperation = BlendOperation.Add;
            blendStateDescription.RenderTarget[0].RenderTargetWriteMask = ColorWriteMaskFlags.All;

            device.ImmediateContext.OutputMerger.SetBlendState(new BlendState(device, blendStateDescription));
  3. Set Depth stencil state
            var depthStencilState = new DepthStencilState(device, DepthStencilStateDescription.Default());
            device.ImmediateContext.OutputMerger.SetDepthStencilState(depthStencilState);
  4. Create a RenderTargetView and a texture.
  5. Create a DepthStencilView and a texture.
  6. Set the RenderTargetView and DepthStencilView as targets of the output merger.

Run

  1. Load an image from the hard drive to a Texture2D
  2. Create a ShaderResourceView from this texture.
  3. For each effect I want to apply (see the sketch after this list):
    • Load the shader from the bytecode into an Effect.
    • Create the vertices, then create a buffer from them.
    • Set this buffer as a vertex buffer in the Input Assembler: deviceContext.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(myBuffer, Marshal.SizeOf(typeof(VertexPositionTexture)), 0));
    • Set the input layout.
    • Get a technique from the shader and, for each pass, call effectPass.Apply and then deviceContext.Draw.
  4. deviceContext.Flush();
  5. Save the texture associated to the RenderTargetView as a PNG.
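
If it helps to compare notes, the usual way to chain two full-screen effects is to ping-pong between two render targets, feeding the output texture of pass 1 into pass 2, rather than drawing both passes into the same target with the original image as input. A rough sketch in plain D3D11 C++ (SharpDX mirrors these calls one-to-one; DrawFullScreenQuad and all resource names here are illustrative):

// Pass 1: source image -> targetA, with the first effect.
ctx->OMSetRenderTargets(1, &rtvA, nullptr);
ctx->PSSetShaderResources(0, 1, &srvSource);   // the loaded image
DrawFullScreenQuad(ctx, effect1);              // hypothetical helper

// Unbind targetA as a render target before sampling from it.
ID3D11RenderTargetView* nullRTV = nullptr;
ID3D11ShaderResourceView* nullSRV = nullptr;
ctx->OMSetRenderTargets(1, &nullRTV, nullptr);
ctx->PSSetShaderResources(0, 1, &nullSRV);

// Pass 2: targetA -> final target, with the second effect.
ctx->OMSetRenderTargets(1, &rtvFinal, nullptr);
ctx->PSSetShaderResources(0, 1, &srvA);        // output of pass 1 as input
DrawFullScreenQuad(ctx, effect2);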

Question

Is my approach correct or am I doing something wrong in that method?

I wonder if the issue could be in the blend state or depth stencil.
Let me know if you’d like me to share more code to see my implementation details.

Thank you in advance for the help. 🙂

Correct way to set up camera buffer [DirectX]

I would like to experiment with two different particle system implementations in my project (OK, not actually mine, but I am working on it).
I copied the particle system over successfully; however, I ran into a camera buffer initialization problem. Here is the relevant code:

bool ParticleShader::Render(Direct3DManager* direct, ParticleSystem* particlesystem, Camera* camera)
{
    bool result;

    result = SetShaderParameters(direct->GetDeviceContext(), camera, direct->GetWorldMatrix(), camera->GetViewMatrix(), direct->GetProjectionMatrix(), particlesystem->GetTexture());
    if (!result)
        return false;

    RenderShader(direct->GetDeviceContext(), particlesystem->GetVertexCount(), particlesystem->GetInstaceCount(), particlesystem->GetIndexCount());
    return true;
}

If I just call SetShaderParameters with the same args, there is an obvious bug: the particle system sticks to the camera (originally, it looks like this).

I checked the params in debug mode and found out that the difference is in the world matrix. The DXUT CFirstPersonCamera is used in my project, and it changes the world matrix while moving around the scene, whereas in the original particle system project it is constant (the identity matrix). I even checked my assumption and hardcoded the identity matrix, but got another bug.

I understand that the DXUT camera and the original project's camera differ at a conceptual level. Nevertheless, I am a newbie in graphics, and it's difficult for me to work out how to change the code properly.
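
For what it's worth, the usual split keeps the world matrix per object instead of deriving it from the camera. A minimal DirectXMath sketch (the getter names are taken from the code above; the actual matrix types in a DXUT project may differ):

#include <DirectXMath.h>
using namespace DirectX;

// The particle system's world matrix stays identity (or its own placement);
// only the view matrix should change as the camera moves around the scene.
XMMATRIX world = XMMatrixIdentity();            // per-object transform
XMMATRIX view  = camera->GetViewMatrix();       // updates every frame
XMMATRIX proj  = direct->GetProjectionMatrix(); // usually fixed until resize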

Thank you in advance

directx11 – How to correctly initialize Direct2D with DirectX 11

I have a problem creating Direct2D alongside DirectX 11. I have tried two methods to initialize Direct2D.

In the first attempt, I created a surface pointer to the DX11 back buffer and passed it to CreateDxgiSurfaceRenderTarget. The function fails with the error "The parameter is incorrect."

In the second attempt, I did the same thing in a more involved way: I went through DXGI and used the newer Direct2D 1.1 interfaces, so I need ID2D1DeviceContext instead of ID2D1RenderTarget. But here direct2d.factory1->CreateDevice() fails with the same error: the parameter is incorrect.

struct Direct2D {

    ID2D1Device *device = NULL;
    ID2D1DeviceContext *device_context = NULL;
    ID2D1Factory *factory = NULL;
    ID2D1Factory1 *factory1 = NULL;
    ID2D1RenderTarget *render_target = NULL;
    ID2D1SolidColorBrush *gray_brush = NULL;
    ID2D1SolidColorBrush *blue_brush = NULL;

    void init();
    void draw();
};

struct Direct3D {

    Direct2D direct2d;
    ID3D11Device *device = NULL;
    ID3D11DeviceContext *device_context = NULL;
    IDXGISwapChain *swap_chain = NULL;
    
    ID3D11RenderTargetView *render_target_view = NULL;
    ID3D11DepthStencilView *depth_stencil_view = NULL;
    ID3D11Texture2D *depth_stencil_buffer = NULL;
    ID3D11Texture2D* back_buffer = NULL;
    IDXGISurface* back_buffer2 = NULL;
    
    UINT quality_levels;

    Matrix4 perspective_matrix;

    void init(const Win32_State *win32);
    void shutdown();
    void resize(const Win32_State *win32);
};

void Direct2D::init()
{

    HR(D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &factory1));
    HR(D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &factory));

    float dpi_x;
    float dpi_y;
    factory->GetDesktopDpi(&dpi_x, &dpi_y);

    D2D1_RENDER_TARGET_PROPERTIES rtDesc = D2D1::RenderTargetProperties(
        D2D1_RENDER_TARGET_TYPE_HARDWARE,
        D2D1::PixelFormat(DXGI_FORMAT_UNKNOWN, D2D1_ALPHA_MODE_PREMULTIPLIED), dpi_x, dpi_y
    );

    IDXGISurface *surface = NULL;
    HR(direct3d.swap_chain->GetBuffer(0, IID_PPV_ARGS(&surface)));

    //HR(factory->CreateDxgiSurfaceRenderTarget(surface, &rtDesc, &render_target));
    
    //HR(render_target->CreateSolidColorBrush(D2D1::ColorF(D2D1::ColorF::LightSlateGray),&gray_brush));
    //HR(render_target->CreateSolidColorBrush(D2D1::ColorF(D2D1::ColorF::CornflowerBlue),&blue_brush));
}

void Direct3D::init(const Win32_State *win32) 
{

    D3D_FEATURE_LEVEL feature_level;
    HRESULT hr = D3D11CreateDevice(0, D3D_DRIVER_TYPE_HARDWARE, 0, create_device_flag, 0, 0, D3D11_SDK_VERSION,&device, &feature_level, &device_context);
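    // Note: for the Direct2D 1.1 path below, create_device_flag must include
    // D3D11_CREATE_DEVICE_BGRA_SUPPORT; without it, ID2D1Factory1::CreateDevice
    // fails with E_INVALIDARG ("The parameter is incorrect").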

    if (FAILED(hr)) {
        MessageBox(0, "D3D11CreateDevice Failed.", 0, 0);
        return;
    }

    if (feature_level != D3D_FEATURE_LEVEL_11_0) {
        MessageBox(0, "Direct3D Feature Level 11 unsupported.", 0, 0);
        return;
    }


    HR(device->CheckMultisampleQualityLevels(
        DXGI_FORMAT_R8G8B8A8_UNORM, 4, &quality_levels));
    //assert(m4xMsaaQuality > 0);

    DXGI_SWAP_CHAIN_DESC sd;
    sd.BufferDesc.Width = win32->window_width;
    sd.BufferDesc.Height = win32->window_height;
    sd.BufferDesc.RefreshRate.Numerator = 60;
    sd.BufferDesc.RefreshRate.Denominator = 1;
    sd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    sd.BufferDesc.ScanlineOrdering = DXGI_MODE_SCANLINE_ORDER_UNSPECIFIED;
    sd.BufferDesc.Scaling = DXGI_MODE_SCALING_UNSPECIFIED;
    
    if (true) {
        sd.SampleDesc.Count = 4;
        sd.SampleDesc.Quality = quality_levels - 1;
    } else {
        sd.SampleDesc.Count = 1;
        sd.SampleDesc.Quality = 0;
    }

    sd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    sd.BufferCount = 1;
    sd.OutputWindow = win32->window;
    sd.Windowed = true;
    sd.SwapEffect = DXGI_SWAP_EFFECT_DISCARD;
    sd.Flags = 0;

    IDXGIDevice* dxgi_device = 0;
    HR(device->QueryInterface(__uuidof(IDXGIDevice), (void**)&dxgi_device));

    IDXGIAdapter* dxgi_adapter = 0;
    HR(dxgi_device->GetParent(__uuidof(IDXGIAdapter), (void**)&dxgi_adapter));

    IDXGIFactory* dxgi_factory = 0;
    HR(dxgi_adapter->GetParent(__uuidof(IDXGIFactory), (void**)&dxgi_factory));

    HR(dxgi_factory->CreateSwapChain(device, &sd, &swap_chain));
    

    // Init directx 2d
    HR(D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &direct2d.factory));
    HR(D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &direct2d.factory1));
    
    HR(direct2d.factory1->CreateDevice(dxgi_device, &direct2d.device));
    HR(direct2d.device->CreateDeviceContext(D2D1_DEVICE_CONTEXT_OPTIONS_NONE, &direct2d.device_context));
    
    IDXGISurface *surface = NULL;
    HR(swap_chain->GetBuffer(0, __uuidof(IDXGISurface), (void **)&surface)); // &surface: GetBuffer writes the pointer

    auto props = BitmapProperties1(D2D1_BITMAP_OPTIONS_TARGET | D2D1_BITMAP_OPTIONS_CANNOT_DRAW, PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_IGNORE));
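    // Note: this bitmap is declared B8G8R8A8_UNORM while the swap chain above
    // was created as R8G8B8A8_UNORM; these generally need to match. A 4x MSAA
    // swap chain is also a problem here, since Direct2D cannot target
    // multisampled buffers (SampleDesc.Count must be 1 for this path).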

    ID2D1Bitmap1 *bitmap = NULL;
    HR(direct2d.device_context->CreateBitmapFromDxgiSurface(surface, props, &bitmap));

    direct2d.device_context->SetTarget(bitmap);

    float32 dpi_x;
    float32 dpi_y;
    direct2d.factory->GetDesktopDpi(&dpi_x, &dpi_y);

    direct2d.device_context->SetDpi(dpi_x, dpi_y);


    RELEASE_COM(dxgi_device);
    RELEASE_COM(dxgi_adapter);
    RELEASE_COM(dxgi_factory);

    resize(win32);
}

void Direct3D::resize(const Win32_State *win32)
{
    assert(device);
    assert(device_context);
    assert(swap_chain);


    RELEASE_COM(render_target_view);
    RELEASE_COM(depth_stencil_view);
    RELEASE_COM(depth_stencil_buffer);


    // Resize the swap chain and recreate the render target view.

    HR(swap_chain->ResizeBuffers(1, win32->window_width, win32->window_height, DXGI_FORMAT_R8G8B8A8_UNORM, 0));

    ID3D11Texture2D* back_buffer = NULL;
    HR(swap_chain->GetBuffer(0, __uuidof(ID3D11Texture2D), reinterpret_cast<void**>(&back_buffer)));
    HR(device->CreateRenderTargetView(back_buffer, 0, &render_target_view));
    RELEASE_COM(back_buffer);

    // Create the depth/stencil buffer and view.

    D3D11_TEXTURE2D_DESC depth_stencil_desc;

    depth_stencil_desc.Width = win32->window_width;
    depth_stencil_desc.Height = win32->window_height;
    depth_stencil_desc.MipLevels = 1;
    depth_stencil_desc.ArraySize = 1;
    depth_stencil_desc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;

    // Use 4X MSAA? --must match swap chain MSAA values.
    if (true) {
        depth_stencil_desc.SampleDesc.Count = 4;
        depth_stencil_desc.SampleDesc.Quality = quality_levels - 1;
    } else {
        depth_stencil_desc.SampleDesc.Count = 1;
        depth_stencil_desc.SampleDesc.Quality = 0;
    }

    depth_stencil_desc.Usage = D3D11_USAGE_DEFAULT;
    depth_stencil_desc.BindFlags = D3D11_BIND_DEPTH_STENCIL;
    depth_stencil_desc.CPUAccessFlags = 0;
    depth_stencil_desc.MiscFlags = 0;

    HR(device->CreateTexture2D(&depth_stencil_desc, 0, &depth_stencil_buffer));
    HR(device->CreateDepthStencilView(depth_stencil_buffer, 0, &depth_stencil_view));


    // Bind the render target view and depth/stencil view to the pipeline.

    device_context->OMSetRenderTargets(1, &render_target_view, depth_stencil_view);


    // Set the viewport transform.

    D3D11_VIEWPORT mScreenViewport;
    mScreenViewport.TopLeftX = 0;
    mScreenViewport.TopLeftY = 0;
    mScreenViewport.Width = static_cast<float>(win32->window_width);
    mScreenViewport.Height = static_cast<float>(win32->window_height);
    mScreenViewport.MinDepth = 0.0f;
    mScreenViewport.MaxDepth = 1.0f;

    device_context->RSSetViewports(1, &mScreenViewport);

    perspective_matrix = get_perspective_matrix(win32->window_width, win32->window_height, 1.0f, 1000.0f);
}

directx – DX12 – how to update part of a buffer?

I’m just getting started with DX12 after a bit of time in Vulkan. I am trying to update part of a dynamically indexed buffer that I’m using to hold mesh transforms.

I’m using the Microsoft MiniEngine examples, which include wrappers for upload buffers, and I am not sure whether I can use them to update an existing buffer, or whether they’re supposed to be the main resource, used in place of the “GpuBuffer” in the same engine.

Any specific insight into the use of those classes would be really useful, but more generally, is it possible to update part of a buffer, or will I have to upload a copy of the whole thing?

Obviously a simple example would be brilliant if anyone has one.
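
Not MiniEngine-specific, but at the raw API level a partial update is possible: write the dirty bytes into an upload-heap buffer, then copy just that range into the default-heap buffer with ID3D12GraphicsCommandList::CopyBufferRegion. A rough C++ sketch (uploadBuffer, transformBuffer, cmdList, Transform, and dirtyIndex are illustrative; d3dx12.h supplies the barrier helper):

// Write the changed transform into the mapped upload buffer.
void* mapped = nullptr;
D3D12_RANGE noRead = { 0, 0 };              // we don't read from the upload heap
HR(uploadBuffer->Map(0, &noRead, &mapped));
memcpy(mapped, &newTransform, sizeof(Transform));
uploadBuffer->Unmap(0, nullptr);

// Transition the destination, copy only the dirty byte range, then restore.
auto toCopy = CD3DX12_RESOURCE_BARRIER::Transition(transformBuffer,
    D3D12_RESOURCE_STATE_GENERIC_READ, D3D12_RESOURCE_STATE_COPY_DEST);
cmdList->ResourceBarrier(1, &toCopy);

cmdList->CopyBufferRegion(transformBuffer, dirtyIndex * sizeof(Transform),
                          uploadBuffer, 0, sizeof(Transform));

auto toRead = CD3DX12_RESOURCE_BARRIER::Transition(transformBuffer,
    D3D12_RESOURCE_STATE_COPY_DEST, D3D12_RESOURCE_STATE_GENERIC_READ);
cmdList->ResourceBarrier(1, &toRead);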

directx – Rendering a ID3D11Texture2D into a SkImage (SkiaSharp/Avalonia)

I’m currently trying to create an interop layer to render my render target texture into a Skia SkImage. This is being done to facilitate rendering from my graphics API into Avalonia.

I’ve managed to piece together enough code to get everything running without any errors (at least, none that I can see), but when I draw the SkImage I see nothing but a black image.

Of course, these things are easier to describe with code:

private EglPlatformOpenGlInterface _platform;
private AngleWin32EglDisplay _angleDisplay;
private readonly int[] _glTexHandle = new int[1];

IDrawingContextImpl context // <-- From Avalonia

_platform = (EglPlatformOpenGlInterface)platform;
_angleDisplay = (AngleWin32EglDisplay)_platform.Display;

IntPtr d3dDevicePtr = _angleDisplay.GetDirect3DDevice();

// Device5 is from SharpDX.
_d3dDevice = new Device5(d3dDevicePtr);  

// Texture.GetSharedHandle() is the shared handle of my render target.
_eglTarget = _d3dDevice.OpenSharedResource<Texture2D>(_target.Texture.GetSharedHandle());

// WrapDirect3D11Texture calls eglCreatePbufferFromClientBuffer.
_glSurface = _angleDisplay.WrapDirect3D11Texture(_platform, _eglTarget.NativePointer);

using (_platform.PrimaryEglContext.MakeCurrent())
{                
   _platform.PrimaryEglContext.GlInterface.GenTextures(1, _glTexHandle);
}

var fbInfo = new GRGlTextureInfo(GlConsts.GL_TEXTURE_2D, (uint)_glTexHandle[0], GlConsts.GL_RGBA8);
_backendTarget = new GRBackendTexture(_target.Width, _target.Height, false, fbInfo);            

using (_platform.PrimaryEglContext.MakeCurrent())
{                
   // Here's where we bind the gl surface to our texture object, apparently.
   _platform.PrimaryEglContext.GlInterface.BindTexture(GlConsts.GL_TEXTURE_2D, _glTexHandle[0]);

   EglBindTexImage(_angleDisplay.Handle, _glSurface.DangerousGetHandle(), EglConsts.EGL_BACK_BUFFER);

   _platform.PrimaryEglContext.GlInterface.BindTexture(GlConsts.GL_TEXTURE_2D, 0);
}

// context is a GRContext
_skiaSurface = SKImage.FromTexture(context, _backendTarget, GRSurfaceOrigin.BottomLeft, SKColorType.Rgba8888, SKAlphaType.Premul);

// This clears my render target (obviously). I should be seeing this when I draw the image right?
_target.Clear(GorgonColor.CornFlowerBlue);

canvas.DrawImage(_skiaSurface, new SKPoint(320, 240));

So, as far as I can tell, this should be working. But as I said before, it’s only showing me a black image. It’s supposed to be cornflower blue. I’ve tried calling Flush on the ID3D11DeviceContext, but I’m still getting the black image.

Anyone have any idea what I could be doing wrong?

graphics – How can I abstract DirectX 9 objects into classes? Vertex buffer, index buffer

I am learning DirectX 9 and I want to create some code that can help me later. I tried to do it, but it did not work. Can anyone suggest a way to abstract things like vertex buffers, index buffers, shaders, and so on into classes? Thank you very much.

My specs: Windows 7 (x86)
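
Since no code was posted, here is one common shape for such a wrapper, as a minimal sketch (the class design, D3DPOOL_MANAGED, and the FVF path are just one set of choices):

#include <d3d9.h>
#include <cstring>

// Thin RAII wrapper over a D3D9 vertex buffer; index buffers and shaders
// can be wrapped the same way (CreateIndexBuffer, Create*Shader, etc.).
class VertexBuffer
{
public:
    bool Create(IDirect3DDevice9* device, const void* data, UINT size, DWORD fvf)
    {
        if (FAILED(device->CreateVertexBuffer(size, D3DUSAGE_WRITEONLY, fvf,
                                              D3DPOOL_MANAGED, &m_vb, NULL)))
            return false;
        void* dst = NULL;
        if (FAILED(m_vb->Lock(0, size, &dst, 0)))
            return false;
        memcpy(dst, data, size);   // copy the vertex data into the buffer
        m_vb->Unlock();
        return true;
    }

    void Bind(IDirect3DDevice9* device, UINT stride)
    {
        device->SetStreamSource(0, m_vb, 0, stride);
    }

    ~VertexBuffer() { if (m_vb) m_vb->Release(); }

private:
    IDirect3DVertexBuffer9* m_vb = NULL;
};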

directx – Problem with DirectXTex and DirectX 11

I am trying to render a textured 3D cube (a box texture) using DirectX 11 and the DirectXTex library, but for some reason I can see only a flat brown color.

This is the texture image: (image)

This is what I see: (image)

I'm using this input layout:

D3D11_INPUT_ELEMENT_DESC ied[] =
{
    {"POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0},
    { "COLOR", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 }, 
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 24, D3D11_INPUT_PER_VERTEX_DATA, 0 },  
};
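
For reference, that layout describes a 32-byte vertex. A struct matching it exactly (assumed, since the actual vertex type isn't shown) would look like the one below; a stride or offset mismatch between the layout and the real vertex data is a common way to end up sampling a single texel everywhere:

// Offsets must match the D3D11_INPUT_ELEMENT_DESC entries above:
// POSITION at byte 0, COLOR at byte 12, TEXCOORD at byte 24.
struct Vertex
{
    float px, py, pz;   // POSITION, DXGI_FORMAT_R32G32B32_FLOAT
    float r, g, b, a;   // COLOR,    DXGI_FORMAT_R32G32B32A32_FLOAT
    float u, v;         // TEXCOORD, DXGI_FORMAT_R32G32_FLOAT
};
static_assert(sizeof(Vertex) == 32, "stride passed to IASetVertexBuffers must be 32");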

And this is how I'm loading the texture:

DirectX::ScratchImage image_data;
HRESULT res = DirectX::LoadFromWICFile( t_szTextureFile, DirectX::WIC_FLAGS_NONE, nullptr, image_data );

if (SUCCEEDED(res))
{
    res = DirectX::CreateTexture(m_d3dDevice, image_data.GetImages(),
        image_data.GetImageCount(), image_data.GetMetadata(), &m_texture);

    D3D11_SHADER_RESOURCE_VIEW_DESC desc = {};
    desc.Format = image_data.GetMetadata().format;
    desc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
    desc.Texture2D.MipLevels = (UINT)image_data.GetMetadata().mipLevels;
    desc.Texture2D.MostDetailedMip = 0;

    m_d3dDevice->CreateShaderResourceView(m_texture, &desc,
        &m_shader_res_view);

    D3D11_SAMPLER_DESC sampDesc;
    ZeroMemory( &sampDesc, sizeof(sampDesc) );
    sampDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
    sampDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
    sampDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
    sampDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
    sampDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
    sampDesc.MinLOD = 0;
    sampDesc.MaxLOD = D3D11_FLOAT32_MAX;

    m_d3dDevice->CreateSamplerState( &sampDesc, &CubesTexSamplerState );

    m_d3dImmediateContext->PSSetSamplers( 0, 1, &CubesTexSamplerState );
    m_d3dImmediateContext->PSSetShaderResources( 0, 1, &m_shader_res_view );
}

Is something wrong with the texture loading? Has anyone had this kind of problem and can help me?

directx – Applying half-space fog

How do I render half-space fog? From my interpretation, I have a transparent plane in the scene, and then I use a function in my shader that returns a fog color.

Do I render the transparent plane by itself or together with the other objects in the scene?
Also, do I make the plane itself transparent?