How to get/set specific properties of a video texture in pixi.js?

I managed to get a video to play in Pixi using the following line:

    this._texture = PIXI.Texture.from(require("Images/video.mp4"));

The problem is that I can’t find any properties to do things such as pausing it, seeking forward/backward, adjusting the volume, adjusting the playback speed, etc.

Neither PIXI.Texture nor PIXI.Sprite seems to have any properties for this. Is this really all the control Pixi gives you, or am I missing something?
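For what it's worth, Pixi's video textures wrap a plain HTMLVideoElement, so the usual HTML5 media properties (`pause()`, `play()`, `currentTime`, `volume`, `playbackRate`) are available on the underlying element. Below is a minimal sketch of the idea; the property path shown matches Pixi v5 (older versions exposed the element as `texture.baseTexture.source`), and the mock objects stand in for a real texture so the helper can be exercised without a browser:

```javascript
// Dig the underlying HTMLVideoElement out of a Pixi texture.
// Property path as in Pixi v5; v4 used texture.baseTexture.source.
function getVideoElement(texture) {
    return texture.baseTexture.resource.source;
}

// Illustrative mock standing in for a real video texture.
const mockVideo = { paused: false, volume: 1.0, playbackRate: 1.0, currentTime: 0 };
const mockTexture = { baseTexture: { resource: { source: mockVideo } } };

const video = getVideoElement(mockTexture);
video.volume = 0.5;       // adjust volume
video.playbackRate = 2.0; // adjust playback speed
video.currentTime = 10;   // seek to 10 seconds
```

On a real texture the returned element is the actual `<video>` tag, so `video.pause()` and `video.play()` work as well.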

Unity: Exporting the game creates texture flaws

In the editor the textures look fine, but when I export the game, I constantly run into problems like these:

enter image description here

I mainly use the built-in Standard (Specular setup) shader, and the texture that shifts is always the detail texture. When I re-export the game several times, the texture glitches in exactly the same way in the same places each time.

So far, this has happened in almost all of the scenes.

enter image description here

How to apply the texture of the ring of Saturn in Unreal Engine 4?

I am working on a "solar system" model project. While trying to apply Saturn's ring texture, which is this:
enter image description here

it ended up looking like this:
enter image description here

I am new to UE4 and to this field in general, so I have no idea how to fix this. Any help would be appreciated.
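The usual cause of this look is that a ring texture is a radial strip: its colour varies with distance from the planet, so it has to be mapped with one UV axis running around the ring and the other running from the inner to the outer radius. A language-agnostic sketch of that mesh/UV layout follows (written in plain JavaScript purely as illustration; the parameter names and radii are made up; in UE4 you would typically build such a ring in Blender or a procedural mesh, or use a material that derives the radial coordinate from distance to the centre):

```javascript
// Build a flat ring (annulus) whose U runs around the ring and whose V
// runs from the inner to the outer edge, so a radial strip texture maps
// onto it correctly.
function buildRing(innerR, outerR, segments) {
    const positions = [];
    const uvs = [];
    for (let i = 0; i <= segments; i++) {
        const a = (i / segments) * 2 * Math.PI;
        const cos = Math.cos(a);
        const sin = Math.sin(a);
        positions.push([innerR * cos, 0, innerR * sin]); // inner edge vertex
        positions.push([outerR * cos, 0, outerR * sin]); // outer edge vertex
        uvs.push([i / segments, 0]); // V = 0 at the inner radius
        uvs.push([i / segments, 1]); // V = 1 at the outer radius
    }
    // adjacent inner/outer vertex pairs are then triangulated into quads
    return { positions, uvs };
}

// Saturn's main rings span roughly 1.2 to 2.3 planet radii (illustrative)
const ring = buildRing(1.2, 2.3, 64);
```

With UVs laid out this way, the strip texture wraps around the ring instead of being stretched flat across it.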

glsl – Bilinear texture lookup ignoring pixels with an alpha of 0

unity – Punch holes through a texture and have them regenerate efficiently

I'm working on a prototype where the user can use the mouse to punch holes in a texture, making that texture transparent in that area for a while. After this time, the hole "regenerates", simply increasing the alpha value until it is opaque again. I already have all of this working, but I fear that my current implementation is not great performance-wise, and that it is definitely not suitable for large textures. Here's what I'm working with right now, using small textures (the hair and the head are separate textures):

enter image description here

What I am doing is using a depth mask shader and drawing the holes (black circles) into a second texture on each mouse click. That works perfectly fine, but then I have to regenerate the holes, so what I'm doing (and I realize this is a really hacky approach) is using the red channel of each pixel's color to determine whether that pixel must return to opaque (back to an alpha value of 1). Like I said, this works for small textures, but as soon as I try it on a large one, it just can't handle that many pixels at once. I have tried to summarize the code of my implementation here:

using System.Collections.Generic;
using UnityEngine;

public class TextureHoles : MonoBehaviour
{
    public Texture2D sourceTexture;
    public Camera sceneCamera;
    public LayerMask targetLayerMask;

    public Color drawColor;
    public float circleRadiusSize;

    private Texture2D currentTexture;

    private Renderer rend;
    private MeshCollider meshCollider;

    private Color[] colors;
    private Color[] targetColors;

    private List<Hole> holes = new List<Hole>();
    private float redChannelValue = 0f;
    private const float HOLES_MAX_TIME = 3f;

    [System.Serializable]
    public class Hole
    {
        public Hole(Color _color)
        {
            color = _color;
            timer = 0f;
        }

        public void Run()
        {
            timer += Time.deltaTime;
        }

        public Color32 color;
        public float timer;
    }

    private void Start()
    {
        // cache component references
        rend = GetComponent<Renderer>();
        meshCollider = GetComponent<MeshCollider>();

        // create a writable copy of the source texture
        currentTexture = new Texture2D(sourceTexture.width, sourceTexture.height, TextureFormat.RGBA32, false);
        currentTexture.SetPixels(sourceTexture.GetPixels());
        currentTexture.Apply();

        rend.material.SetTexture("_Mask", currentTexture);

        targetColors = currentTexture.GetPixels();
    }

    private void Update()
    {
        if(Input.GetMouseButtonDown(0))
        {
            CutHair();
        }

        for(int i = 0; i < holes.Count; i++)
        {
            RecoverHair(holes[i]);
        }
    }

    public void CutHair()
    {
        RaycastHit hit;

        if(Physics.Raycast(Cursor.Instance.GetRayInitialPos(), Vector3.forward, out hit, Mathf.Infinity, targetLayerMask))
        {
            Vector2 uv;

            uv.x = (hit.point.x - hit.collider.bounds.min.x) / hit.collider.bounds.size.x;
            uv.y = (hit.point.y - hit.collider.bounds.min.y) / hit.collider.bounds.size.y;

            Vector2 coord = new Vector2((int)(uv.x * currentTexture.width), (int)(uv.y * currentTexture.height));

            int circleRadius = (int)(Cursor.Instance.GetCircleSize() * circleRadiusSize);

            // we use the red channel value to identify which color we should lerp back to opaque
            redChannelValue += .03f;
            Color holeColor = new Color(redChannelValue, 0f, 0f, 0f);

            currentTexture.DrawCircle(holeColor, (int)coord.x, (int)coord.y, circleRadius);
            currentTexture.Apply(true);

            holes.Add(new Hole(holeColor));
        }
    }

    private void RecoverHair(Hole hole)
    {   
        if(hole.timer >= HOLES_MAX_TIME) return;

        hole.Run();

        colors = currentTexture.GetPixels();

        // go through all colors in the texture
        for(int i = 0; i < colors.Length; i++)
        {
            if(colors[i].r == 1) continue;  // don't touch colors with full red channel value

            if(((Color32)colors[i]).r == hole.color.r)
            {
                // lerp alpha value of colors with matching red channel value
                float newAlpha = Mathf.Lerp(hole.color.a, targetColors[i].a, hole.timer / HOLES_MAX_TIME);
                colors[i] = colors[i].WithA(newAlpha);  // WithA: custom Color extension (not shown)

                int y = i / currentTexture.width;
                int x = i - (y * currentTexture.width);
                currentTexture.SetPixel(x, y, colors[i]);
            }
        }

        currentTexture.Apply();
    }
}

public static class TextureExtensions
{
    public static Texture2D DrawCircle(this Texture2D tex, Color32 color, int x, int y, int radius = 3)
    {
        float rSquared = radius * radius;

        for (int u = x - radius; u < x + radius + 1; u++)
            for (int v = y - radius; v < y + radius + 1; v++)
                if ((x - u) * (x - u) + (y - v) * (y - v) < rSquared)
                    tex.SetPixel(u, v, color);

        return tex;
    }
}

I know I use GetPixels and SetPixels a lot, and that's probably what drains all the CPU resources and gives me such poor performance on larger textures, but right now I'm a little lost as to how I should approach the optimization of this algorithm. And that's really my main question: what method can I use to regenerate (lerp the alpha value of) each individual hole at its own pace in an efficient way? Any suggestion or hint at a potential approach is appreciated, thank you.
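One way to avoid the full-texture scans is to record, at punch time, exactly which pixel indices each hole touched, so regeneration only revisits those pixels. The sketch below is not the code above; it is a language-agnostic illustration in plain JavaScript over a flat alpha array, with made-up names. In Unity, the same idea maps to caching each hole's pixel block and calling SetPixels32 on just that region:

```javascript
// When a hole is punched, remember the pixel indices it touched; each
// frame, lerp only those pixels' alpha instead of scanning everything.
const HOLE_MAX_TIME = 3.0; // seconds until a hole is fully regrown

function punchHole(alpha, width, height, cx, cy, radius) {
    const indices = [];
    const r2 = radius * radius;
    // only scan the hole's bounding box, clamped to the texture
    for (let y = Math.max(0, cy - radius); y <= Math.min(height - 1, cy + radius); y++) {
        for (let x = Math.max(0, cx - radius); x <= Math.min(width - 1, cx + radius); x++) {
            if ((x - cx) * (x - cx) + (y - cy) * (y - cy) < r2) {
                const i = y * width + x;
                alpha[i] = 0; // fully transparent
                indices.push(i);
            }
        }
    }
    return { indices, timer: 0 }; // per-hole state, one object per click
}

function recoverHole(alpha, targetAlpha, hole, dt) {
    hole.timer = Math.min(hole.timer + dt, HOLE_MAX_TIME);
    const t = hole.timer / HOLE_MAX_TIME;
    // lerp from 0 back to the pixel's original alpha
    for (const i of hole.indices) {
        alpha[i] = targetAlpha[i] * t;
    }
}
```

Overlapping holes need a policy (e.g. the newest hole owns a pixel). A further step would be to pass the holes' centres and ages to the shader and compute the fade on the GPU, avoiding CPU pixel writes entirely.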

blender – How to apply a texture to an object by face rather than to the whole object? (unreal 4)

I am trying to apply different textures to different faces of a model, rather than one texture to the whole object, but I can only add one material to the whole object. How can I texture different faces of my model separately? I imported my model from Blender. For reference, I have a detailed building with a roof, windows, etc., which is all one object, and I would like to texture different parts of the building with different Megascans separately. I would appreciate any advice.

graphics – Unity – The depth texture is too pixelated

I am trying to composite two cameras: a background camera and a foreground camera.

I created 3 cameras in my project: two for the background and the foreground, and one for the depth of the foreground.

I then created a simple shader that combines the first two cameras (they render to textures, along with the foreground depth). The problem I have is that because the depth buffer is too pixelated, the result looks odd and you can clearly see outlines around the foreground (the players, in my case).

I have created a depth camera with these properties:

Unity camera properties

Notice the Output Texture; I set the render texture to:

Depth rendering texture properties

Here is the depth texture of the result (zoomed so you can see the pixels):

enter image description here

Any idea how I can create this effect, perhaps using something other than the depth buffer? Or can I improve the quality of the depth? What can I do to achieve a good end result?

Online Resources – Old Texture Pack or Mod for Minecraft

Does anyone remember an old resource pack or mod for Minecraft (I think it was a resource pack, but I don't remember for sure) that included a ruined church or cathedral with werewolves or vampires?

How to remove the texture from a scanned photo?

This photo will be used at the funeral of a deceased family member. What would be the best way to remove the texture?

enter image description here

unity – UV mapping: blurred and overlapping texture (Unity3D)

In my voxel game, here is how I calculate the UV coordinates for the faces of my cubes:

public static Vector2[] faceUVs(Direction dir)
{

    Vector2[] sideUV = new Vector2[4]{
        new Vector2(0, 1),
        new Vector2(1f/3, 1),
        new Vector2(1f/3, 0),
        new Vector2(0, 0)
    };
    Vector2[] topUV = new Vector2[4]{
        new Vector2(1f/3, 0),
        new Vector2(1f/3, 1),
        new Vector2(2f/3, 1),
        new Vector2(2f/3, 0)
    };
    Vector2[] bottomUV = new Vector2[4]{
        new Vector2(2f/3, 0),
        new Vector2(2f/3, 1),
        new Vector2(1, 1),
        new Vector2(1, 0)
    };
    if ((int)dir == 4)
    {
        return topUV;
    }
    if ((int)dir == 5)
    {
        return bottomUV;
    }

    return sideUV;
}

Then, depending on the generated face, I send it the appropriate UV coordinates.

Here is the texture itself (pretty clean: each block is exactly 16 pixels wide and 16 pixels high, and the total width of the image is 48 pixels):

enter image description here

But the result is terrible (the faces are almost correctly aligned with the appropriate texture part, but the annoying keyword here is "almost"):

enter image description here

As you can see, the edges of the side faces contain a few pixels of what should have been the texture of the top face.

I have also set the filtering mode to point.

If I instead use a single 16×16 pixel texture, for example just the side texture, then I get a correct result:

enter image description here

I suspect that the problem here could be floating-point precision. If so, how can I resolve this error?
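Floating-point precision can contribute, but the usual culprit with texture atlases is sampling exactly on the boundary between tiles, where even point filtering can pick up the neighbouring tile. A common fix is to inset each face's UVs by half a texel so samples always land inside the intended tile. A sketch of the correction follows (plain JavaScript, UVs as `[u, v]` pairs; the function name is made up, and the atlas dimensions are the 48×16 from the question):

```javascript
// Inset each UV corner by half a texel toward the centre of the quad so
// samples never land exactly on the seam between atlas tiles.
function insetUVs(uvs, texWidth, texHeight) {
    const halfTexelU = 0.5 / texWidth;
    const halfTexelV = 0.5 / texHeight;
    // centre of the quad in UV space
    const cu = uvs.reduce((s, uv) => s + uv[0], 0) / uvs.length;
    const cv = uvs.reduce((s, uv) => s + uv[1], 0) / uvs.length;
    // nudge each corner inward on both axes
    return uvs.map(([u, v]) => [
        u + (u < cu ? halfTexelU : -halfTexelU),
        v + (v < cv ? halfTexelV : -halfTexelV),
    ]);
}

// Side-face UVs from the question: the first third of a 48x16 atlas.
const sideUV = [[0, 1], [1 / 3, 1], [1 / 3, 0], [0, 0]];
const safe = insetUVs(sideUV, 48, 16);
```

Another common fix is to add a few pixels of duplicated border ("bleed" padding) around each tile in the atlas, which also keeps mipmapping safe.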