Can ImportMesh only handle meshes with a single element type?

I’m using ImportMesh from the FEMAddOns package. I can use it to import quadrilateral meshes from Abaqus, but I can’t import mixed quadrilateral/triangular meshes from Abaqus. Did I do something wrong?

The meshes are as follows:

(screenshots of the quad-only mesh and the mixed quad/triangle mesh in Abaqus)

3d – Fast self collision/intersection detection algorithm/library for tetrahedral meshes?

I want to play with deformation of a tetrahedral mesh (soft-body simulation), but I don’t want to implement the self-collision detection myself. Can anyone suggest a library for this problem? I found SOFA’s collision detection, but I’m not sure it handles self-intersection of tet meshes.

If there is no good library for this, can anyone suggest a good algorithm for self-collision detection? As far as I understand, something like a BVH over the tetrahedra could help, but it would be great if somebody with expertise could point me in the right direction; my current idea is sketched below.
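The broad phase I have in mind is an axis-aligned bounding box per tetrahedron and a median-split BVH over them; a minimal sketch in plain C# with System.Numerics (all types here are my own invention, not from any library, and the exact tet-tet narrow phase is left out):

using System;
using System.Collections.Generic;
using System.Numerics;

struct Aabb
{
    public Vector3 Min, Max;

    public static Aabb Union(Aabb a, Aabb b) => new Aabb
    {
        Min = Vector3.Min(a.Min, b.Min),
        Max = Vector3.Max(a.Max, b.Max)
    };

    public bool Overlaps(Aabb o) =>
        Min.X <= o.Max.X && Max.X >= o.Min.X &&
        Min.Y <= o.Max.Y && Max.Y >= o.Min.Y &&
        Min.Z <= o.Max.Z && Max.Z >= o.Min.Z;
}

class BvhNode
{
    public Aabb Bounds;
    public BvhNode Left, Right;
    public int TetIndex = -1; // >= 0 marks a leaf holding one tetrahedron
}

static class TetBvh
{
    // Median-split build over the tetrahedron indices in tets[lo..hi).
    // tetBounds[i] is the box over tet i's four vertices, recomputed
    // (or the tree refitted) every frame as the mesh deforms.
    public static BvhNode Build(int[] tets, int lo, int hi, Aabb[] tetBounds)
    {
        var node = new BvhNode { Bounds = tetBounds[tets[lo]] };
        for (int i = lo + 1; i < hi; i++)
            node.Bounds = Aabb.Union(node.Bounds, tetBounds[tets[i]]);

        if (hi - lo == 1) { node.TetIndex = tets[lo]; return node; }

        // split along the longest axis of the node's bounds
        Vector3 size = node.Bounds.Max - node.Bounds.Min;
        int axis = size.X >= size.Y && size.X >= size.Z ? 0 : (size.Y >= size.Z ? 1 : 2);
        Array.Sort(tets, lo, hi - lo, Comparer<int>.Create(
            (p, q) => Center(tetBounds[p], axis).CompareTo(Center(tetBounds[q], axis))));

        int mid = (lo + hi) / 2;
        node.Left = Build(tets, lo, mid, tetBounds);
        node.Right = Build(tets, mid, hi, tetBounds);
        return node;
    }

    static float Center(Aabb b, int axis) =>
        axis == 0 ? b.Min.X + b.Max.X :
        axis == 1 ? b.Min.Y + b.Max.Y : b.Min.Z + b.Max.Z;

    // Broad phase: every pair of leaves with overlapping boxes is a candidate
    // for the exact test. Query(root, root, pairs) finds self-collision
    // candidates; the index ordering below reports each pair only once.
    public static void Query(BvhNode a, BvhNode b, List<(int, int)> pairs)
    {
        if (!a.Bounds.Overlaps(b.Bounds)) return;
        if (a.TetIndex >= 0 && b.TetIndex >= 0)
        {
            if (a.TetIndex < b.TetIndex) pairs.Add((a.TetIndex, b.TetIndex));
            return;
        }
        if (a.TetIndex >= 0) { Query(a, b.Left, pairs); Query(a, b.Right, pairs); }
        else { Query(a.Left, b, pairs); Query(a.Right, b, pairs); }
    }
}

The candidate pairs would then go through an exact tet-tet intersection test, skipping pairs that share mesh vertices, since adjacent tets always touch.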

physics – 3D collision response advice for complex meshes

I am trying to use my own collision detection system in Unity to speed up my game. My problem is that when rigid bodies sit on top of static bodies, they balance unrealistically on one edge or a corner. I really just need a better way of fixing this, so I will explain further.

I have tried adding a force at the point of collision to “settle” the rigid body into place, and this sort of works; the problem is that once it has settled, it vibrates wildly while just sitting on top of the static mesh. I can get it either to float down slowly into a settled position, or to settle fast but vibrate wildly afterwards.
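For what it’s worth, the direction I’m considering instead of the corrective force is positional correction with a small slop (which is what I understand physics engines do to keep resting contacts still): cancel the velocity component going into the surface, and move the position directly by a fraction of the penetration, ignoring penetrations below a tolerance. A minimal sketch with placeholder types, not Unity’s API:

using System;
using System.Numerics;

// Placeholder for whatever state the custom integrator updates.
class BodyState
{
    public Vector3 Position;
    public Vector3 Velocity;
}

static class ContactSolver
{
    const float Slop = 0.01f;    // penetration depth tolerated without correcting
    const float Percent = 0.4f;  // fraction of the remaining penetration fixed per step

    // normal points from the static surface towards the rigid body;
    // penetration is how deep the body currently overlaps the surface.
    public static void Resolve(BodyState body, Vector3 normal, float penetration)
    {
        // cancel the velocity component pointing into the surface instead of
        // applying an opposing force, which overshoots and causes vibration
        float vn = Vector3.Dot(body.Velocity, normal);
        if (vn < 0f)
            body.Velocity -= vn * normal;

        // move the position out directly; the slop leaves shallow resting
        // contacts alone, so they stop generating corrections (and jitter)
        float depth = Math.Max(penetration - Slop, 0f);
        body.Position += normal * (depth * Percent);
    }
}

On top of that, putting bodies to sleep once their velocity stays under a small threshold for a few frames should remove whatever residual jitter is left.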

I would appreciate any ideas at all, even if they are bad ones. Thank you.

python – Plotting meshes for numerical applications

I work on 3D meshing algorithms for numerical applications in Python. Such meshes contain hexahedrons, 6-corner prisms, 5-corner pyramids, and tetrahedrons. For debugging I rely heavily on good visualizations.
I currently use two approaches to plot meshes: the gmsh library (FLTK based) and plotting individual edges as line plots with matplotlib. However, neither satisfies all my needs. I am now looking for a mesh plotting method that fulfills most of the needs listed below:
| need | gmsh | matplotlib |
|:-:|:-:|:-:|
|plot all stated element types |yes|yes|
|optional opaque faces|no|no|
|handle big meshes |yes|no|
|mark selected nodes or edges by color|no|yes|
|rotate, translate and zoom|yes|sort of|

Are you aware of any library that serves these needs, or would you rather advise me to fork gmsh, matplotlib, or a different library?

blender – Exported FBX to Monogame doesn’t translate all meshes

I’m taking a crack at a Minecraft clone in Monogame, and I am trying to get my model to render as it’s shown in Blender 2.9.1. When I look at the model in Blender, it looks great. When displaying it in Monogame, the head is distinct, but the body, arms, and legs are all mashed together. When I inspect the ParentBone for each ModelMesh, all except the head have the same ModelTransform and Transform (but ModelTransform != Transform).

This is what I made in Blender:
(screenshot)

But this is how it renders in Monogame:

(screenshot)

I then tried creating an armature and adding bones, but that did not seem to change anything, other than the model in Monogame having a lot more bones. I thought it had something to do with how I exported, but I tried both FBX All and Local All for scaling with no change. I’ve also tried exporting as DAE instead of FBX (I still need to use the FBX importer for this to work), and there was no change there either.

Where am I missing a step that is causing what looks to be a translation issue? Minimal reproducible code is below. I did find something on this site about storing the transforms for the bones, but when I tried to use it, all I could see was the top of the head, and I couldn’t rotate it. Or maybe I’m holding it wrong; I left that code commented out below.

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;

namespace test
{
    public class Game1 : Game
    {
        private GraphicsDeviceManager _graphics;
        private Model model;
        private Texture2D texture;
        private Vector3 position;
        private Matrix world = Matrix.CreateTranslation(new Vector3(0, 0, 0));
        private Matrix view = Matrix.CreateLookAt(new Vector3(0, 0, 10), new Vector3(0, 0, 0), Vector3.UnitY);
        private Matrix projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.ToRadians(45), 800/480f, 0.1f, 100f);

        public Game1()
        {
            _graphics = new GraphicsDeviceManager(this);
            Content.RootDirectory = "Content";
            IsMouseVisible = true;
        }

        protected override void Initialize()
        {
            // TODO: Add your initialization logic here
            base.Initialize();
        }

        protected override void LoadContent()
        {
            model = Content.Load<Model>("human");
            texture = Content.Load<Texture2D>("human_texture");
            position = new Vector3(0, 0, 0);
            world = Matrix.CreateRotationX(-0.5f) * Matrix.CreateRotationY(0.5f) * Matrix.CreateRotationZ(0.1f)  * Matrix.CreateTranslation(position);

            // TODO: use this.Content to load your game content here
        }

        protected override void Update(GameTime gameTime)
        {
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed || Keyboard.GetState().IsKeyDown(Keys.Escape))
                Exit();

            world = Matrix.CreateRotationX(-0.5f) * Matrix.CreateRotationY(0.5f) * Matrix.CreateRotationZ(0.1f) * Matrix.CreateTranslation(position);
            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);
            // pixelate textures
            GraphicsDevice.SamplerStates[0] = SamplerState.PointWrap;

            // lifted this from somewhere on the internet, but all I see is the top of the head and can't rotate when I use it?
            Matrix[] transforms = new Matrix[model.Bones.Count];
            model.CopyAbsoluteBoneTransformsTo(transforms);

            foreach (ModelMesh mesh in model.Meshes)
            {
                foreach (BasicEffect effect in mesh.Effects)
                {
                    effect.World = world; // transforms[mesh.ParentBone.Index];
                    effect.View = view;
                    effect.Projection = projection;
                    effect.Texture = texture;
                }
                mesh.Draw();
            }

            base.Draw(gameTime);
        }
    }
}

I’ve also uploaded my full .blend file, the generated FBX, and the texture here.
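For reference, the draw pattern I keep seeing suggested for models like this combines each mesh’s absolute bone transform with the shared world matrix, instead of replacing the world matrix outright; a minimal variant of my Draw loop above (untested beyond what I described):

Matrix[] transforms = new Matrix[model.Bones.Count];
model.CopyAbsoluteBoneTransformsTo(transforms);

foreach (ModelMesh mesh in model.Meshes)
{
    foreach (BasicEffect effect in mesh.Effects)
    {
        // bone transform first, then the shared world matrix
        effect.World = transforms[mesh.ParentBone.Index] * world;
        effect.View = view;
        effect.Projection = projection;
        effect.Texture = texture;
    }
    mesh.Draw();
}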

3d meshes – Unity mesh only rendering one set of triangles

I’ve been using Unity3D to procedurally generate terrain with Perlin Noise and I’ve come across a problem where the mesh that I’ve constructed only renders one set of triangles.

(screenshot: the generated terrain with only one set of triangles rendered)

The following is my MeshGeneration code:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public static class MeshGenerator
{
    public static MeshData GenerateMesh(float[,] heightMap)
    {
        int height = heightMap.GetLength(0);
        int width = heightMap.GetLength(1);
        int vertexIndex = 0;
        
        MeshData meshData = new MeshData(width, height);

        for (int y = 0; y < height; y++)
        {
            for (int x = 0; x < width; x++)
            {
                meshData.vertices[vertexIndex] = new Vector3(x, heightMap[y, x], y);
                meshData.uvs[vertexIndex] = new Vector2(x / (float)width, y / (float)height);
                 
                // If we are not on the edge, then add two triangles to the mesh
                if ((x != width - 1) && (y != height - 1))
                {
                    meshData.AddTriangle(
                        vertexIndex,
                        vertexIndex + width,
                        vertexIndex + width + 1
                    );
                    meshData.AddTriangle(
                        vertexIndex,
                        vertexIndex + 1,
                        vertexIndex + width + 1
                    );
                }
                
                vertexIndex++;
            }
        }

        return meshData;
    }
}

public class MeshData
{
    public Vector3[] vertices;
    public Vector2[] uvs;
    public int[] triangles;

    public int triangleIndex;
    public MeshData(int meshWidth, int meshHeight)
    {
        vertices = new Vector3[meshWidth * meshHeight];
        uvs = new Vector2[meshWidth * meshHeight];
        triangles = new int[(meshWidth - 1) * (meshHeight - 1) * 6];
    }

    public void AddTriangle(int a, int b, int c)
    {
        triangles[triangleIndex] = a;
        triangles[triangleIndex + 1] = b;
        triangles[triangleIndex + 2] = c;
        triangleIndex += 3;
    }

    public Mesh CreateMesh()
    {
        Mesh mesh = new Mesh();
        mesh.vertices = this.vertices;
        mesh.uv = this.uvs;
        mesh.triangles = this.triangles;
        
        mesh.RecalculateNormals();
        return mesh;
    }
}

I’m then passing the mesh that I get from MeshData.CreateMesh() into the following function.

public void BuildMesh(MeshData meshData, Texture2D texture)
{
    meshFilter.sharedMesh = meshData.CreateMesh();
    meshRenderer.sharedMaterial.mainTexture = texture;
}

I’m following this tutorial: https://www.youtube.com/watch?v=4RpVBYW1r5M&list=PLFt_AvWsXl0eBW2EiBtl_sxmDtSgZBxB3&index=5

The mesh generation code works by creating arrays of vertices, UVs, and triangles, and then populating them by iterating over a float[,] heightMap that I created with Perlin noise.
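One thing I noticed while re-reading the code: the two AddTriangle calls wind in opposite directions, and Unity culls back faces, which would explain exactly half of the triangles disappearing. If that is the problem, winding both triangles the same way (the ordering the tutorial uses) would be:

meshData.AddTriangle(vertexIndex, vertexIndex + width + 1, vertexIndex + width);
meshData.AddTriangle(vertexIndex + width + 1, vertexIndex, vertexIndex + 1);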

opengl – Knowing the size of a framebuffer when rendering transformed meshes to a texture

I have a couple of 2D meshes that make a hierarchical animated model.
I want to do some post-processing on it, so I decided to render this model to a texture, so that I could do the post-processing with a fragment shader while rendering it as a textured quad.

But I don’t suppose it would be very smart to make the render texture as large as the entire screen for every layer that I’d like to compose; it would be nicer if I could use a smaller render texture, just big enough to fit every element of my hierarchical model, right?

But how am I supposed to know the size of the render target before I actually render it?

Is there any way to figure out the bounding rectangle of a transformed mesh?
(Keep in mind that the model is hierarchical, so there might be multiple meshes translated/rotated/scaled to their proper positions during rendering to make the final result.)

I mean, sure, I could transform all the vertices of my meshes myself to get their world-space / screen-space coordinates and then take their minima/maxima in both directions to get the size of the image required. But isn’t that what vertex shaders were supposed to do, so that I wouldn’t have to calculate all that myself on the CPU? (If I have to transform everything myself anyway, what’s the point of having a vertex shader in the first place? :q )

It would be nice if I could just pass those meshes through the vertex shader first somehow, without rasterizing anything yet, just to let the vertex shader transform the vertices for me; then get their min/max extents, create a render texture of that particular size, and only after that rasterize into it. Is such a thing possible? If it isn’t, what would be a better way to do this? Is rendering at full screen size for each composition layer my only option?
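The cheapest compromise I can think of, short of reading vertex shader results back (OpenGL does have transform feedback, with rasterization turned off via GL_RASTERIZER_DISCARD, for capturing vertex shader output, but that still means a GPU-to-CPU readback and a stall), is to transform only the 8 corners of each mesh’s precomputed local-space bounding box on the CPU and union the results. The box of the transformed corners is conservative (slightly larger than the true bounds of the transformed mesh), which is fine for sizing a render target. A sketch with System.Numerics; all names are illustrative:

using System.Numerics;

static class BoundsUtil
{
    // Transforms the 8 corners of a local-space AABB by a full
    // model-view-projection matrix and returns their min/max in NDC.
    // Cost: 8 points per mesh, regardless of vertex count.
    public static (Vector2 min, Vector2 max) NdcBounds(
        Vector3 localMin, Vector3 localMax, Matrix4x4 mvp)
    {
        var min = new Vector2(float.MaxValue, float.MaxValue);
        var max = new Vector2(float.MinValue, float.MinValue);
        for (int i = 0; i < 8; i++)
        {
            var corner = new Vector3(
                (i & 1) == 0 ? localMin.X : localMax.X,
                (i & 2) == 0 ? localMin.Y : localMax.Y,
                (i & 4) == 0 ? localMin.Z : localMax.Z);
            Vector4 clip = Vector4.Transform(new Vector4(corner, 1f), mvp);
            var ndc = new Vector2(clip.X / clip.W, clip.Y / clip.W); // perspective divide
            min = Vector2.Min(min, ndc);
            max = Vector2.Max(max, ndc);
        }
        return (min, max);
    }
}

Running this once per mesh with its composed hierarchy matrix and taking the overall min/max gives the layer’s extent in NDC; multiplying by half the viewport size converts that to pixels for the render texture.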

unity – Shaders to texture Minecraft-like meshes

I’m a beginner in the shader area, and I would like to know if it is possible to do block texturing with shaders. I have a cubic world (like Minecraft) where I’m generating chunks. In those chunks, only visible vertices are generated.

Currently, that means a 2x2x1 chunk will generate 8 triangles for the top part (2 for each block). I’m trying to generate only two, following this method: https://0fps.net/2012/06/30/meshing-in-a-minecraft-game/

But before that, I would like to know if, thanks to shaders, I can generate something like this: https://imgur.com/fTLRINH

As you can see, it has 3 different tiles: grass, dirt-grass, and dirt. Currently, my shader does something like this: https://paste.ofcode.org/naMTAC3LaNqGS74ambD4n4

Is it possible to achieve what I want through shaders, or do I have to stick with sides made of multiple triangles and handle them individually? A sketch of what I mean is below.
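On the C# side, I imagine something like the sketch below: give each merged quad UVs measured in whole blocks, plus a tile index in a second UV channel, upload the tiles as a Texture2DArray, and let the shader do uv = frac(blockUV) and sample the array slice at that index. All names here are made up, and the shader half isn’t shown:

using System.Collections.Generic;
using UnityEngine;

public static class ChunkTexturing
{
    // grassPixels / dirtGrassPixels / dirtPixels are assumed 16x16 Color[] tiles.
    public static void Apply(Material material, Mesh mesh,
        Color[] grassPixels, Color[] dirtGrassPixels, Color[] dirtPixels)
    {
        var tiles = new Texture2DArray(16, 16, 3, TextureFormat.RGBA32, true);
        tiles.SetPixels(grassPixels, 0);      // slice 0: grass top
        tiles.SetPixels(dirtGrassPixels, 1);  // slice 1: grass/dirt side
        tiles.SetPixels(dirtPixels, 2);       // slice 2: dirt
        tiles.Apply();
        material.SetTexture("_BlockTiles", tiles); // sampler name is up to the shader

        // UVs in block units for one merged 2x1 quad; the shader tiles with frac(uv)
        mesh.SetUVs(0, new List<Vector2> {
            new Vector2(0, 0), new Vector2(2, 0), new Vector2(2, 1), new Vector2(0, 1)
        });
        // tile index for all four vertices of the quad, carried in UV channel 1
        mesh.SetUVs(1, new List<Vector2> {
            Vector2.zero, Vector2.zero, Vector2.zero, Vector2.zero
        });
    }
}

Since greedy meshing only merges faces of the same block type, one tile index per merged quad should be enough, and the face stays at two triangles while the fragment shader picks the right tile per pixel.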

Thanks!

unreal 4 – Can I “unmerge” merged meshes in UE4?

I set up my character to be modular by following the UE4 docs.

I used the code at the bottom of that page to implement mesh-merging functionality.
It works well; I can merge meshes at runtime.
I use this to add clothes to my characters.

Just wondering if there is a way to take clothes off again by unmerging things.
Something like:

  • store added clothes in a TArray or something
  • then call a function to remove the desired piece of clothing from the array

Of course, I would like to do it at runtime…

level of detail – How to set LOD specifications for all meshes in unreal at once?

I have a large model and want to avoid setting the LOD specifications separately for each component in the model. Is there a way to set LOD specifications for all meshes in Unreal at once, since I want the same specifications for each one (percentages, how many LODs, and at what screen distance they kick in, using Unreal’s auto-generated LODs)?