visual studio – Unity Text Mesh Pro cannot be referenced in VSCode

I am using Text Mesh Pro in Unity 2019.3.4f. TMP works fine in Unity, but in my code editor, Visual Studio Code, the TMP-related symbols are marked with a red squiggly underline, which means VSCode cannot resolve the reference to TMP.

In my csproj file the TMP references exist; here are the relevant lines:

<ProjectReference Include="Unity.TextMeshPro.csproj">
  <Project>{e8ae9ed6-ed86-3de7-e777-7e6794b003b2}</Project>
  <Name>Unity.TextMeshPro</Name>
</ProjectReference>

<ProjectReference Include="Unity.TextMeshPro.Editor.csproj">
  <Project>{af8771c0-66d3-f01a-48b5-78951bce8a23}</Project>
  <Name>Unity.TextMeshPro.Editor</Name>
</ProjectReference>

My TMP works fine in every aspect, but VSCode shows the warning “could not find the namespace TMPro”. Does anyone know how to fix this?

unity – Create Mesh between gaps of the tiles in unity3d

I am loading two kinds of tiles (meshes in tile format) in Unity. Both are placed in the environment correctly, but there is a gap between the two tiles, like the picture below.

I want to fill this gap automatically, either through scripting or a shader. With scripting, here is the algorithm I am considering, but its first step takes too much time:

  1. Get each tile and extract the outer edges of its mesh (this takes too much time to calculate per tile, as there are thousands of tiles).
  2. Then raycast from the outer edges of the mesh and get the intersection points with the neighboring tiles.
  3. Take those vertices and draw the mesh.

I am stuck on the first step, as it takes so much time to calculate the outer edges of a tile. Maybe there is a solution in the form of a shader that works between the gaps of two meshes?
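For step 1, one standard observation may help: a boundary (outer) edge is an edge used by exactly one triangle, so counting edge occurrences over the index buffer finds the outline in a single linear pass rather than by pairwise geometric tests. A language-agnostic sketch in Python (the flat `triangles` list is a stand-in for Unity's `Mesh.triangles`):

```python
from collections import Counter

def boundary_edges(triangles):
    """Return edges used by exactly one triangle (the mesh outline).

    triangles: flat index list, three entries per triangle,
    like Unity's Mesh.triangles.
    """
    counts = Counter()
    for t in range(0, len(triangles), 3):
        a, b, c = triangles[t:t + 3]
        for e in ((a, b), (b, c), (c, a)):
            counts[tuple(sorted(e))] += 1  # undirected edge key
    return [e for e, n in counts.items() if n == 1]

# Two triangles sharing the diagonal (0, 2): the diagonal is interior.
quad = [0, 1, 2,  0, 2, 3]
print(sorted(boundary_edges(quad)))  # [(0, 1), (0, 3), (1, 2), (2, 3)]
```

This is O(number of triangles) per tile, so it should scale to thousands of tiles far better than comparing edges against each other.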

graphics – Change mesh density of Graphics3D object made of Triangles

I am new to mesh discretisation in Mathematica. I have a Graphics3D object made up of Triangles that I would like to convert into a MeshRegion object using DiscretizeGraphics (see https://reference.wolfram.com/language/ref/DiscretizeGraphics.html).

In particular, I would like to control the mesh density. The above link tells me to use the MaxCellMeasure option, but it doesn’t seem to make any difference to my graphics!

Thus,

Table[DiscretizeGraphics[g,
  MaxCellMeasure -> {"Area" -> m}], {m, {0.3, 0.01, 0.001}}]

gives:
(output images)

As you can see, the meshing is unchanged. It doesn’t matter if I replace “Area” with “Volume” or “Length”.

Can someone please tell me how to do this properly? Is this happening because my Graphics3D is already made up of triangles?

c# – How to create falloff intensity on a generated mesh in Unity?

I was able to generate a circle mesh in Unity that lets me see other characters when they are inside it, hide them when they are outside it, and partially hide or show a character who is partially inside it. Below is an image of what I was able to generate, but the edges are very sharp, and I would like some falloff intensity on the mesh, just like lights in Unity, which have a falloff at their edges.

Image of Generated Mesh

As you can see, the edges of the mesh are very sharp, and that’s not the kind of effect I’m after; I would like to add some adjustable falloff to it.

Here is the code that generates the mesh:

using UnityEngine;
using System.Collections;
using System.Collections.Generic;

public class FieldOfView : MonoBehaviour {

    public float fieldOfView = 360f;
    public int numberEdges = 360;
    public float initalAngle = 0;
    public float visionDistance = 8f;
    public LayerMask layerMask;

    private Mesh mesh;
    private Vector3 origin;

    private void Start()
    {
        mesh = new Mesh();
        GetComponent<MeshFilter>().mesh = mesh;
        origin = Vector3.zero;
    }

    private void LateUpdate()
    {
        GenerateUpdateMesh();
    }

    private void GenerateUpdateMesh()
    {
        float actualAngle = initalAngle;
        float incrementAngle = fieldOfView / numberEdges;

        Vector3[] vertices = new Vector3[numberEdges + 1];
        int[] triangles = new int[numberEdges * 3];

        vertices[0] = origin;

        int verticeIndex = 1;
        int triangleIndex = 0;

        for (int i = 0; i < numberEdges; i++)
        {
            Vector3 actualVertices;
            RaycastHit2D raycastHit2D = Physics2D.Raycast(origin, GetVectorFromAngle(actualAngle), visionDistance, layerMask);
            if (raycastHit2D.collider == null)
            {
                // No hit
                actualVertices = origin + GetVectorFromAngle(actualAngle) * visionDistance;
            }
            else
            {
                // Hit object
                actualVertices = raycastHit2D.point;
            }

            vertices[verticeIndex] = actualVertices;

            if (i > 0)
            {
                triangles[triangleIndex + 0] = 0;
                triangles[triangleIndex + 1] = verticeIndex - 1;
                triangles[triangleIndex + 2] = verticeIndex;

                triangleIndex += 3;
            }

            verticeIndex++;
            actualAngle -= incrementAngle;
        }

        // We form the last triangle
        triangles[triangleIndex + 0] = 0;
        triangles[triangleIndex + 1] = verticeIndex - 1;
        triangles[triangleIndex + 2] = 1;


        mesh.vertices = vertices;
        mesh.triangles = triangles;
    }

    Vector3 GetVectorFromAngle(float angle)
    {
        float angleRad = angle * (Mathf.PI / 180f);
        return new Vector3(Mathf.Cos(angleRad), Mathf.Sin(angleRad));
    }

    public void SetOrigin(Vector3 newOrigin)
    {
        origin = newOrigin;
    }
}

What do I do here to add some falloff intensity? Any help is really appreciated. Thank you in advance.
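One common approach (an assumption here, not the only way) is to add a second, inner ring of vertices pulled in by a falloff distance, and write per-vertex colors whose alpha ramps from opaque at the inner ring to transparent at the ray-cast rim; a transparent vertex-color material then produces the soft edge, much like a light's falloff. The alpha ramp itself is simple; a language-agnostic sketch in Python:

```python
def falloff_alpha(distance, radius, falloff):
    """Vertex alpha for a view circle with a soft edge.

    Fully opaque inside (radius - falloff), fading linearly
    to fully transparent at the rim.
    """
    if falloff <= 0:
        return 1.0 if distance <= radius else 0.0
    inner = radius - falloff
    if distance <= inner:
        return 1.0
    if distance >= radius:
        return 0.0
    return 1.0 - (distance - inner) / falloff

print(falloff_alpha(0.0, 8.0, 2.0))  # 1.0 (well inside)
print(falloff_alpha(7.0, 8.0, 2.0))  # 0.5 (halfway through the falloff band)
print(falloff_alpha(8.0, 8.0, 2.0))  # 0.0 (at the rim)
```

In the C# mesh above this would translate to assigning a `Color[]` to `mesh.colors` alongside `mesh.vertices` and using a shader that multiplies the vertex alpha into the output; the falloff width then becomes a public field you can tune in the inspector.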

unity – Creating the Vertices and Triangle Indices for Voxel Generated Mesh

I am running into a problem with a compute shader I am writing to generate the vertices and triangle indices for a voxel generated mesh.

Currently, I am creating an AppendStructuredBuffer of triangle structs, each holding the three vertices of a triangle, and reading the AppendStructuredBuffer back to the CPU. I then copy its contents into a RWStructuredBuffer on the GPU and run a compute shader kernel to parse the triangle buffer.

Obviously if I can do this all on the GPU I should since reading and writing between the CPU and GPU is expensive.

When trying to put it all in one kernel, however, I run into problems. Each voxel can form a variable number of triangles (0–5). Because of that, I can’t simply use the dispatch id to index into a RWStructuredBuffer (at least I don’t think so). That’s why an AppendStructuredBuffer seems natural: it allows for a variable number of triangles.
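One GPU-friendly alternative to the append buffer (a sketch of the general technique, not Unity-specific API) is a two-pass scheme: first have each voxel write its triangle count into a buffer indexed by dispatch id, then take an exclusive prefix sum (scan) of those counts so every voxel knows the offset at which to write its triangles into a flat RWStructuredBuffer. The scan is the only non-obvious part; run sequentially it is just:

```python
def compact_offsets(tri_counts):
    """Exclusive prefix sum: where each voxel writes its triangles.

    On the GPU this would be a parallel scan over a per-voxel counter
    buffer; here it is simulated sequentially for clarity.
    """
    offsets, total = [], 0
    for n in tri_counts:
        offsets.append(total)
        total += n
    return offsets, total

# Voxels producing 0-5 triangles each:
counts = [2, 0, 5, 1]
offsets, total = compact_offsets(counts)
print(offsets, total)  # [0, 2, 2, 7] 8
```

The running total also gives the exact triangle count for the draw call, so nothing variable-sized ever has to round-trip through the CPU.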

After getting the array of triangle vertices and the array of triangle vertex indices, I bring them back to the CPU, assign them to a mesh, and render it. In the future I want to use a geometry shader to render the mesh, since that is most likely more efficient, but I’m trying to take this one step at a time, and geometry shaders are a whole other beast I know nothing about.

unreal 4 – How do you import submeshes on a skeletal mesh?

I’m trying to help out a friend on an Unreal issue. He has a Unity scene he’s trying to convert to run on Unreal, and getting the models right has been a bit of a bumpy road.

Like many models, this has several sub-meshes on it. Some of them are mutually exclusive and should only have one out of a group of submeshes turned on at once, so that the same basic model can be repurposed as multiple different characters with different base geometry. In Unity, this “just works.” Import the FBX file and you get a prefab with all of the submeshes as sub-objects, and everyone’s happy.

In Unreal, it’s a bit more complicated. A bit of searching turned up this question from 4 years ago, which describes exactly the problem I’m having and says exactly how to deal with it: unselect the “Combine Meshes” option in the importer. Except that this model is rigged, and for whatever reason, when you check the “Skeletal Mesh” option in the importer, the Combine Meshes option vanishes and everything gets lumped together into one single blob of mutually exclusive geometry!

I find it difficult to believe that this would work perfectly right out of the box in Unity, while Unreal, which has been around almost a decade longer, has no support at all for such a fundamentally important operation. But at least at first glance, that appears to be the case. Are there any more experienced Unreal devs out there who know how to get sub-meshes to import correctly on a skeletal mesh?

3d meshes – Unity mesh only rendering one set of triangles

I’ve been using Unity3D to procedurally generate terrain with Perlin noise, and I’ve come across a problem where the mesh I’ve constructed only renders one set of triangles.

(screenshot of the generated terrain)

The following is my MeshGeneration code:

using System.Collections;
using System.Collections.Generic;
using System.Runtime.CompilerServices;
using NUnit.Framework.Internal.Execution;
using UnityEngine;

public static class MeshGenerator
{
    public static MeshData GenerateMesh(float[,] heightMap)
    {
        int height = heightMap.GetLength(0);
        int width = heightMap.GetLength(1);
        int vertexIndex = 0;
        
        MeshData meshData = new MeshData(width, height);

        for (int y = 0; y < height; y++)
        {
            for (int x = 0; x < width; x++)
            {
                meshData.vertices[vertexIndex] = new Vector3(x, heightMap[y, x], y);
                meshData.uvs[vertexIndex] = new Vector2(x / (float)width, y / (float)height);
                 
                // If we are not on the edge, then add two triangles to the mesh
                if ((x != width - 1) && (y != height - 1))
                {
                    meshData.AddTriangle(
                        vertexIndex,
                        vertexIndex + width,
                        vertexIndex + width + 1
                    );
                    meshData.AddTriangle(
                        vertexIndex,
                        vertexIndex + 1,
                        vertexIndex + width + 1
                    );
                }
                
                vertexIndex++;
            }
        }

        return meshData;
    }
}

public class MeshData
{
    public Vector3[] vertices;
    public Vector2[] uvs;
    public int[] triangles;

    public int triangleIndex;
    public MeshData(int meshWidth, int meshHeight)
    {
        vertices = new Vector3[meshWidth * meshHeight];
        uvs = new Vector2[meshWidth * meshHeight];
        triangles = new int[(meshWidth - 1) * (meshHeight - 1) * 6];
    }

    public void AddTriangle(int a, int b, int c)
    {
        triangles[triangleIndex] = a;
        triangles[triangleIndex + 1] = b;
        triangles[triangleIndex + 2] = c;
        triangleIndex += 3;
    }

    public Mesh CreateMesh()
    {
        Mesh mesh = new Mesh();
        mesh.vertices = this.vertices;
        mesh.uv = this.uvs;
        mesh.triangles = this.triangles;
        
        mesh.RecalculateNormals();
        return mesh;
    }
}

I’m then passing the mesh that I get from MeshData.CreateMesh() into the following function:

public void BuildMesh(MeshData meshData, Texture2D texture)
{
    meshFilter.sharedMesh = meshData.CreateMesh();
    meshRenderer.sharedMaterial.mainTexture = texture;
}

I’m following this tutorial: https://www.youtube.com/watch?v=4RpVBYW1r5M&list=PLFt_AvWsXl0eBW2EiBtl_sxmDtSgZBxB3&index=5

The mesh generation code works by creating arrays of vertices, UVs, and triangles, and then populating them by iterating over a float[,] heightMap that I created with Perlin noise.
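The grid-indexing pattern described above can be sketched language-agnostically. One detail worth checking against the code in question: Unity culls back faces, so both triangles of each quad must be listed with the same winding order, or one whole set of triangles disappears. A minimal Python sketch (a hypothetical helper, mirroring the tutorial's vertex layout):

```python
def grid_triangles(width, height):
    """Index buffer for a width-by-height vertex grid.

    Each cell becomes two triangles; both are listed with the same
    winding so neither set is backface-culled.
    """
    tris = []
    for y in range(height - 1):
        for x in range(width - 1):
            v = y * width + x
            tris += [v, v + width, v + width + 1]  # lower-left triangle
            tris += [v, v + width + 1, v + 1]      # upper-right triangle
    return tris

print(grid_triangles(2, 2))  # [0, 2, 3, 0, 3, 1]
```

A (width - 1) × (height - 1) cell grid should always yield exactly (width - 1) * (height - 1) * 6 indices, matching the array size allocated in MeshData's constructor.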

c++ – How to update indices for dynamic mesh in OpenGL?

So I am making a 3D batch renderer for my engine. The basic concept is that we make a VBO and IBO large enough to hold all the vertex data (positions, normals, UVs, etc.) and update the VBO using glMapBuffer or glBufferSubData every frame if we want to make any changes. In this case, though, the pattern of the IBO (index buffer) is predefined (i.e. we assume we have quads and fill it with 0 1 2 2 3 0 for the entirety of the IBO size).

When using 3D models this won’t be the case; the IBO’s data will be different. So how do I change the IBO data if I instantiate a new model, or if I am generating a UV sphere and want to change its subdivisions? I have no idea how to deal with dynamically changing (not necessarily frequent) index data when using VAOs, VBOs, and index buffers.

As far as I know, we cannot map the ELEMENT_ARRAY_BUFFER, and glDrawElements does accept a pointer to index data, but only when we are not using VAOs and VBOs; if we are using them, it takes the last argument as a pointer offset into the currently bound ELEMENT_ARRAY_BUFFER. So what’s the best way to deal with dynamic indices?
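For what it's worth, GL_ELEMENT_ARRAY_BUFFER can be updated the same way as a VBO: glBufferSubData and glMapBuffer both accept that target, as long as the VAO that records the element-buffer binding is bound (the binding is VAO state). The part that has to happen on the CPU is regenerating the index array whenever topology changes; the re-upload is then mechanical. A sketch of the regeneration step for the UV-sphere example, with the sphere laid out as a (stacks+1) × (slices+1) vertex grid with a duplicated seam column (an assumed layout, not the only one):

```python
def uv_sphere_indices(stacks, slices):
    """Triangle indices for a UV sphere stored as a
    (stacks+1) x (slices+1) vertex grid (seam column duplicated).

    Rebuilt whenever the subdivision counts change, then re-uploaded
    to the index buffer.
    """
    idx = []
    cols = slices + 1
    for s in range(stacks):
        for t in range(slices):
            v = s * cols + t
            idx += [v, v + cols, v + cols + 1,
                    v, v + cols + 1, v + 1]
    return idx

print(len(uv_sphere_indices(8, 16)))   # 768
print(len(uv_sphere_indices(16, 32)))  # 3072
```

On the GL side, a common pattern is to call glBufferData(GL_ELEMENT_ARRAY_BUFFER, …) when the new index array is larger than the current allocation (orphaning the old storage) and glBufferSubData when it fits, then pass the new index count to glDrawElements with an offset of 0.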

import – create a cubic mesh from a STL mesh file

I would like to create a cubic (3D grid) mesh with a constant cell size for a surface mesh representing a torus, saved in an STL file.

I import the vertices using:

pts = Import["Torus.stl", "VertexData"]
size = Length[pts]

xmin = Min[pts[[All, 1]]]; ymin = Min[pts[[All, 2]]]; zmin = Min[pts[[All, 3]]];
xmax = Max[pts[[All, 1]]]; ymax = Max[pts[[All, 2]]]; zmax = Max[pts[[All, 3]]];


(*positive points*)

pts[[All, 1]] = pts[[All, 1]] + Abs[xmin]
pts[[All, 2]] = pts[[All, 2]] + Abs[ymin]
pts[[All, 3]] = pts[[All, 3]] + Abs[zmin]

(*Create the points of the space; then I need to evaluate whether these points are near the vertices*)

xspace = Table[i, {i, xmin, xmax, 0.4}];
yspace = Table[i, {i, ymin, ymax, 0.4}];
zspace = Table[i, {i, zmin, zmax, 0.4}];


(*Use a For loop to determine whether the points of the background mesh are near the vertices, and save the ones that are*)
dataMesh = {};
For[k = 1, k < size + 1, k++,
  For[j = 1, j < size + 1, j++,
    For[i = 1, i < size + 1, i++,
      tol = 0.4;
      Posx = xspace[[i]]; Posy = yspace[[j]]; Posz = zspace[[k]];
      .
      .
      .
    ]]]

Evaluating this loop takes Mathematica several hours. Is there another way to generate this type of mesh from an STL file? Since it is a curved surface, it is to be expected that the meshing will not be perfect.
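The triple For loop tests every background-grid point against every vertex, which is cubic in cost. One way to invert the problem, assuming a cell counts as occupied whenever at least one vertex falls inside it, is to snap each vertex to its cell index directly, which is linear in the number of vertices. A sketch of the idea in Python (in Mathematica the analogue would be a single Floor over the scaled point list, but the idea is language-independent):

```python
import math

def occupied_cells(points, cell=0.4):
    """Grid cells containing at least one vertex.

    Snapping each point to a cell index is O(n) in the number of
    vertices, replacing the nested loops over the whole background grid.
    """
    return {tuple(math.floor(c / cell) for c in p) for p in points}

pts = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (1.0, 1.0, 0.0)]
print(sorted(occupied_cells(pts)))  # [(0, 0, 0), (2, 2, 0)]
```

Multiplying each cell index back by the cell size recovers the cell's corner coordinates, giving the cubic background mesh without ever scanning empty space.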

Torus.stl file

networking – Mesh Wifi network for large coverage

Let’s say I am living in a remote rural area (~10 km²) and am likely the only one with sufficient internet bandwidth to share. What is the best and most affordable way to do it?

I was thinking of implementing a mesh Wi-Fi network with a good signal amplifier at regular intervals to cover the whole area, while maintaining just one SSID.

So this will be an inter-Wi-Fi connection (wifi1 connects to my facility, wifi2 connects to wifi1, wifi3 connects to wifi2, etc.), where each Wi-Fi point acts as an extender.

Will that even work?

How about the maximum number of connected users and devices; is there a limit? Earlier I thought there was none, but after reading How many devices can be connected to my home WiFi connection? and other Google results with varying answers, I am even more confused.

https://i.imgur.com/WD34fWe.png (sorry, it said I can’t attach images to my post yet)

I am not knowledgeable in this area and simply want to share. Any input is really appreciated.