Unity: how to imitate the lighting of a mesh as it appears in the editor view

I have a camera that shows a "bird's eye view" of a mesh generated procedurally, like this:

camera looking down on the mesh

Currently, in the editor view (right), the shading is exactly as I would like it. However, in the game view (left), I have not managed to get it to look the way I want:

game view next to the scene view

I was using a directional light above the mesh, which caused the effect where the center of the mesh appears "shiny". I therefore looked at some articles and forum questions and decided to use ambient lighting instead, by changing the settings in Window > Rendering > Lighting Settings, but here is the result:

ambient lighting, mesh looks dark

This is the material I use (EDIT: I'm using a particle shader because I color my meshes by changing the mesh's colors, i.e. vertex colors):

material parameters

I tried to play with different types of materials, but I could not get better results. My question is: how can I configure my game to use the same lighting as used in the editor?
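If the even look in the Scene view comes from a flat ambient colour rather than from the directional light, one option is to force the same flat ambient colour from script. This is only a minimal sketch (the class name and colour values are placeholders); RenderSettings is simply the scripting counterpart of the Lighting Settings window mentioned above:

using UnityEngine;
using UnityEngine.Rendering;

// Minimal sketch: force a flat ambient colour so the mesh is lit evenly,
// without the "shiny" hotspot a directional light can produce.
public class FlatAmbient : MonoBehaviour
{
    void Start()
    {
        RenderSettings.ambientMode = AmbientMode.Flat;  // single colour, no skybox/gradient ambient
        RenderSettings.ambientLight = Color.white;      // tune to match the editor look
    }
}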

Thanks in advance

opengl – Optimizing mesh rendering performance

I'm working on a libgdx implementation of meshed terrain and I'm having some performance issues.

The terrain is represented by a number of mesh tiles, each mesh consisting of vertices arranged on a 2D plane.

The mesh implementation is done via libgdx Mesh, each of them being cached after its initial generation. I'm using GL3.0 and so the vertices are managed via VertexBufferObjectWithVAO which, if I understood correctly, should allow GPU caching.

Performance is slow. I tried increasing the mesh size, but strangely, performance got worse rather than better.

I think I'm missing something in the OpenGL pipeline; in my mind, the way it should work is that the meshes are sent to the GPU once, and the bigger each mesh, the fewer the draw calls, but that does not correspond to the results I get.

To give you an idea of the mesh size, the initial configuration was as follows: 7,200 vertices * 625 meshes in a single draw call. The FPS was worse when trying 100 meshes of about 45,000 vertices each.

Thank you

3D Mesh – Convert 2D Doodles to 3D Models in Real Time

I want to create an application that reconstructs a complete 3D model of an object from scanned images and displays it in real time inside the application.

The pictures will basically just be simple doodles, since I want this app to be accessible to children: children can draw squiggles and scan them. I then want Unity to detect the colors and patterns in the image and build a 3D model.

I've used the photogrammetry technique, but I do not see how it is suitable for my application, because I just need to generate 3D models from simple 2D doodles, and I do not need more than one plane (view) to build a 3D model.

Can someone therefore give advice on how to proceed? Thanks in advance!

pathfinding – How to make the AI jump on the navigation mesh system?

Hi :D I'm currently using Unreal to make a sandbox game for learning purposes.

Because of the nature of a sandbox game, I'm looking into how the AI can move naturally on a map that changes very often.

I found a way to place navigation links on the map dynamically, and it works that way. But I think that if I could make the AI jump, fall, or climb things like walls, it would be more convenient than placing lots of navigation links.

So I searched, and I found things like "how to get the navigation mesh in Unreal" and "how to re-plan the path", but I really cannot find anything about what I'm actually asking: "how to make the AI decide to jump when it needs to".

I attached a picture to make the question concrete. The stick man is the AI, the flag at the top right is the goal (the movement target location), and there are 2 obstacles (A and B) on the way to the goal.

Here are my questions:

  • The navigation mesh is not linked between the platform and the top of obstacle A, so how does the path-finder know that it can use "A" when the AI can jump?
  • I think obstacle B is a more difficult situation. Unlike situation A, "navmesh B" has the same (or overlapping) inner positions as "the platform navmesh", and the only noticeable difference is the height. How does the AI or the path-finder know that this is a valid path and that it has to jump?
  • Roughly the same question as B, I think (I forgot to draw it on the picture, though): if the AI determines the path and tries to make the jump to reach the goal, but an obstacle is then placed that blocks the jump, how does it know and re-plan the path?

I know these are very difficult questions, and difficult to explain, in particular questions 2 and 3. So even if I only get an answer to question 1, I would be very grateful.

Since English is not my first language, please excuse my poor English skills; I hope you don't mind :).

Thanks for reading, and I hope you have a wonderful day and stay happy 🙂

Unity – An appropriate algorithm for a character controller that automatically adapts to the mesh?

I'm trying to create an algorithm that automatically generates a CharacterController component (on a parent GameObject) that is resized to fit the current mesh.

To explain: a capsule object acts as a "parent"; this object has a CharacterController, and the CharacterController's radius/height must act as a bounding box around the current mesh/FBX I load, regardless of its shape or size.

Then, once the size is correct, I simply move the mesh to the center of the parent GameObject and parent it to it. All of this actually works, but when the mesh is bigger, things start to get weird.

In addition, I have various hats included in the FBX file that need to be shown/hidden, and I do not necessarily want to include them in the bounding box/radius (since the hats have a wide brim, if the radius fits them, then the collision will depend on the hat).

Another factor is that I need a proper skin width for the CharacterController to work correctly, but I do not want this skin to push the mesh into the ground, nor to be noticeable at all (for example against walls, I do not want the skin width to give the impression that there is an invisible border around the mesh).

Also, not all imported meshes will have clothes. In my 3D software, I named the body mesh "solid", and in the algorithm I check whether the "solid" mesh exists; if so, I use it for the radius, and if not, I use the whole mesh.

I am currently able to get a bounding box encapsulating all the child meshes; here is that function:

public Bounds LocalBounds(GameObject gb)
{
    // Temporarily reset rotation so the renderer bounds line up with the object
    Quaternion currentRotation = gb.transform.rotation;
    gb.transform.rotation = Quaternion.Euler(0f, 0f, 0f);

    Bounds bounds = new Bounds(gb.transform.position, Vector3.zero);
    foreach (Renderer renderer in gb.GetComponentsInChildren<Renderer>())
    {
        bounds.Encapsulate(renderer.bounds);
    }

    // Express the center relative to the object's position
    Vector3 localCenter = bounds.center - gb.transform.position;
    bounds.center = localCenter;
    // Debug.Log("The local bounds of this model are " + bounds);

    gb.transform.rotation = currentRotation;
    return bounds;
}

And here is the rest of my algorithm to try to calculate the radius / height of the character controller:

void MakeParentCapsule()
{
    parentGameObject = GameObject.CreatePrimitive(PrimitiveType.Capsule);

    var controller = parentGameObject.AddComponent<CharacterController>();
    var capsuleCollider = parentGameObject.GetComponent<CapsuleCollider>();

    controller.slopeLimit = 85;
    float skinWidth = 0.1f;
    float radius = 0.3f;

    controller.skinWidth = skinWidth;

    var center = controller.center;
    controller.center = center;

    capsuleCollider.radius = controller.radius;
    controller.radius -= skinWidth * 2;
    if (controller.radius <= 0)
    {
        controller.radius += radius;
    }

    Vector3 capsuleSize = controller.bounds.size;

    // "solid" is the body mesh named in my 3D software; FindDeepChild is a custom extension (not shown)
    GameObject radiusRef = null;
    var solidAttempt = gameObject.transform.FindDeepChild("solid");
    if (solidAttempt != null)
    {
        Debug.Log("found a solid!");
        radiusRef = solidAttempt.gameObject;
    }
    else
    {
        Debug.Log("NO solid found");
    }

    var body = LocalBounds(gameObject);
    var scaleFactor = body.size.y / capsuleSize.y;
    controller.skinWidth *= scaleFactor;

    var boundsToUse = radiusRef != null ? LocalBounds(radiusRef) : body;
    controller.radius = Mathf.Max(boundsToUse.extents.x, boundsToUse.extents.z);
    // controller.radius *= 0.5f;
    controller.radius -= controller.skinWidth * 2;

    parentGameObject.transform.localScale = parentGameObject.transform.localScale * scaleFactor;

    controller.height -= skinWidth * 4 * scaleFactor;
    capsuleCollider.height -= skinWidth;

    parentGameObject.transform.position = gameObject.transform.position;

    Vector3 tempPos = parentGameObject.transform.position;
    tempPos.y += capsuleSize.y / 2 * scaleFactor - skinWidth * 1.5f;
    parentGameObject.transform.position = tempPos;

    gameObject.transform.SetParent(parentGameObject.transform);
    // The generic type was lost in the original formatting; disabling the capsule's own collider is assumed here
    parentGameObject.GetComponent<CapsuleCollider>().enabled = false;
}

I have been trying a lot of tweaks to the radius to account for the skin width (and likewise for the height), and it works with a small basic model, but when I simply increase the size (in the 3D editor), things no longer line up.

SO: Does anyone know a simpler/working algorithm that can accurately create a CharacterController whose radius and height exactly match those of a mesh?
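For comparison, here is a minimal sketch (not the code above) of the "fit directly from the bounds" idea: it sizes an existing CharacterController from the renderer bounds returned by LocalBounds, without scaling a parent capsule, and deliberately ignores skin width and the hat/"solid" special cases:

// Hypothetical helper: size a CharacterController straight from renderer bounds.
void FitControllerToBounds(CharacterController controller, GameObject model)
{
    Bounds b = LocalBounds(model);                        // local-space bounds of all child renderers

    controller.center = b.center;                         // center the capsule on the visual bounds
    controller.height = b.size.y;
    controller.radius = Mathf.Max(b.extents.x, b.extents.z);

    // A capsule is only valid when height >= 2 * radius, so clamp to keep the collider sane.
    controller.height = Mathf.Max(controller.height, controller.radius * 2f);
}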

Sometimes you cannot use the rendered mesh as a collider in Unity?

Most of the time, I can use the MeshFilter's mesh as the MeshCollider's mesh.

But sometimes there is nothing. No collider. Why?

The tree does not have a collider

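For reference, a minimal sketch (assuming the usual MeshFilter/MeshCollider setup, not the asker's exact scene) of giving an object a collider that uses the mesh it renders; if no mesh is found, there is simply no collision shape:

using UnityEngine;

// Minimal sketch: give an object a MeshCollider that uses the mesh its MeshFilter renders.
public class UseRenderedMeshAsCollider : MonoBehaviour
{
    void Start()
    {
        MeshFilter meshFilter = GetComponent<MeshFilter>();

        // If there is no MeshFilter here (e.g. the visible mesh lives on a child object)
        // or its sharedMesh is null, there is nothing to collide against.
        if (meshFilter == null || meshFilter.sharedMesh == null)
            return;

        MeshCollider meshCollider = gameObject.AddComponent<MeshCollider>();
        meshCollider.sharedMesh = meshFilter.sharedMesh;
    }
}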

In Unity, convert a mesh into a navmesh

Hi everyone, I am new to Unity. I wonder if there is a way to convert a mesh that I have built in Blender into a navmesh in Unity, that is, using the mesh directly without baking.

The specific situation is that I am building a scene on a small planet, and baking always only gives me a result on the top half of the planet. What I want is a navmesh all around the little planet.

How could I go about this?
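One possible direction, as a minimal sketch only, is the runtime NavMesh building API (NavMeshBuildSource / NavMeshBuilder). Note that this still runs Unity's navmesh build step, just from script instead of the editor bake button, and each build has a single "up" direction (the rotation argument), so covering a whole planet would likely need several oriented pieces. The field name planetMeshFilter and the bounds size are assumptions:

using System.Collections.Generic;
using UnityEngine;
using UnityEngine.AI;

// Minimal sketch: build navmesh data at runtime from an arbitrary mesh (no editor bake).
public class RuntimeNavMeshFromMesh : MonoBehaviour
{
    public MeshFilter planetMeshFilter;   // assumed reference to the planet mesh

    void Start()
    {
        var source = new NavMeshBuildSource
        {
            shape = NavMeshBuildSourceShape.Mesh,
            sourceObject = planetMeshFilter.sharedMesh,
            transform = planetMeshFilter.transform.localToWorldMatrix,
            area = 0                                                    // default walkable area
        };

        var settings = NavMesh.GetSettingsByID(0);                      // default agent settings
        var bounds = new Bounds(Vector3.zero, Vector3.one * 1000f);     // build volume (assumed size)

        NavMeshData data = NavMeshBuilder.BuildNavMeshData(
            settings, new List<NavMeshBuildSource> { source },
            bounds, Vector3.zero, Quaternion.identity);                 // rotation defines this piece's "up"

        NavMesh.AddNavMeshData(data);
    }
}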

mesh – Unity inflates the vertex count but I do not have any lights?

OK, I have a problem with several meshes that have a low poly count in Blender; when I export them as FBX and put them in Unity, the poly/vertex count goes up. I read here

https://forum.unity.com/threads/why-does-light-add-verts-and-tris.291872/

that Unity recalculates verts and tris for real-time lights, but I have no lights in my scene, and lighting is enabled:

screenshot of the scene

Here are the mesh import settings:

mesh import settings

The poly count goes up significantly (by roughly 10×) and I need to remedy that. The mesh does not have any materials applied yet. What can I do?

python – Calculation of the Doppler delay on a mesh

Goal

Draw the contours of the iso-Doppler and iso-delay lines for a transmitter/receiver reflection off a specular plane.

Implementation

This Doppler shift can be expressed as follows:
$$ f_{D,0}(\vec{r}, t_0) = \left[ \vec{V_t} \cdot \vec{m}(\vec{r}, t_0) - \vec{V_r} \cdot \vec{n}(\vec{r}, t_0) \right] / \lambda $$

where, for a given time $t_0$, $\vec{m}$ is the reflected unit vector, $\vec{n}$ is the incident unit vector, $\vec{V_t}$ is the velocity of the transmitter, $\vec{V_r}$ is the velocity of the receiver, and $\lambda$ is the wavelength of the transmitted electromagnetic wave.

The time delay of the electromagnetic wave is simply the path travelled divided by the speed of light, assuming propagation in a vacuum.
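Referenced to the specular path through the origin, which is exactly what the time_delay function below computes, this delay is:

$$ \tau(\vec{r}) = \frac{\left( |\vec{r} - \vec{r_t}| + |\vec{r_r} - \vec{r}| \right) - \left( |\vec{r_t}| + |\vec{r_r}| \right)}{c} $$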

#!/usr/bin/env python

import scipy.integrate as integrate
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker

h_t = 20000e3  # meters
h_r = 500e3  # meters
elevation = 60 * np.pi / 180  # rad

# Coordinate frame as defined in Figure 2 of
# J. F. Marchan-Hernandez, A. Camps, N. Rodriguez-Alvarez, E. Valencia, X.
# Bosch-Lluis and I. Ramos-Perez, "An Efficient Algorithm for the Simulation of
# Delay-Doppler Maps of Reflected Global Navigation Satellite System Signals,"
# IEEE Transactions on Geoscience and Remote Sensing, vol. 47, no. 8,
# pp. 2733-2740, Aug. 2009.
r_t = np.array([0, h_t / np.tan(elevation), h_t])
r_r = np.array([0, -h_r / np.tan(elevation), h_r])

# Velocities
v_t = np.array([2121, 2121, 5])    # m/s
v_r = np.array([2210, 7299, 199])  # m/s

light_speed = 299792458  # m/s

# The GPS L1 carrier frequency is defined relative to a reference frequency
# f_0 = 10.23e6 Hz, so that f_carrier = 154 * f_0 = 1575.42e6 Hz,
# as explained in the "GPS signal description" section of Zavorotny
# and Voronovich 2000.
f_0 = 10.23e6  # Hz
f_carrier = 154 * f_0

delay_chip = 1 / 1.023e6  # s, C/A code chip duration (assumed; the original definition was lost in formatting)


def doppler_shift(r):
    '''
    Doppler shift as a contribution of the relative motion of transmitter and
    receiver as well as the reflection point.

    Implements equation 14 of
    V. U. Zavorotny and A. G. Voronovich, "Scattering of GPS signals from
    the ocean with wind remote sensing application," IEEE Transactions on
    Geoscience and Remote Sensing, vol. 38, no. 2, pp. 951-964, March 2000.
    '''
    wavelength = light_speed / f_carrier
    f_D_0 = (1 / wavelength) * (
        np.inner(v_t, incident_vector(r))
        - np.inner(v_r, reflection_vector(r))
    )
    #f_surface = scattering_vector(r) * v_surface(r) / 2 * pi
    f_surface = 0
    return f_D_0 + f_surface


def doppler_increment(r):
    return doppler_shift(r) - doppler_shift(np.array([0, 0, 0]))


def reflection_vector(r):
    reflection_vector = (r_r - r)
    reflection_vector_norm = np.linalg.norm(r_r - r)
    reflection_vector[0] /= reflection_vector_norm
    reflection_vector[1] /= reflection_vector_norm
    reflection_vector[2] /= reflection_vector_norm
    return reflection_vector


def incident_vector(r):
    incident_vector = (r - r_t)
    incident_vector_norm = np.linalg.norm(r - r_t)
    incident_vector[0] /= incident_vector_norm
    incident_vector[1] /= incident_vector_norm
    incident_vector[2] /= incident_vector_norm
    return incident_vector


def time_delay(r):
    path_r = np.linalg.norm(r - r_t) + np.linalg.norm(r_r - r)
    path_specular = np.linalg.norm(r_t) + np.linalg.norm(r_r)
    return (1 / light_speed) * (path_r - path_specular)

Plotting area

x_0 = -100e3  # meters
x_1 = 100e3  # meters
n_x = 500

y_0 = -100e3  # meters
y_1 = 100e3  # meters
n_y = 500

x_grid, y_grid = np.meshgrid(
    np.linspace(x_0, x_1, n_x),
    np.linspace(y_0, y_1, n_y)
)

r = [x_grid, y_grid, 0]
z_grid_delay = time_delay(r) / delay_chip
z_grid_doppler = doppler_increment(r)

delay_start = 0  # C/A chips
delay_increment = 0.5  # C/A chips
delay_end = 15  # C/A chips
iso_delay_values = list(np.arange(delay_start, delay_end, delay_increment))

doppler_start = -3000  # Hz
doppler_increment = 500  # Hz
doppler_end = 3000  # Hz
iso_doppler_values = list(np.arange(doppler_start, doppler_end, doppler_increment))

fig_lines, ax_lines = plt.subplots(1, figsize=(10, 4))
contour_delay = ax_lines.contour(x_grid, y_grid, z_grid_delay, iso_delay_values, cmap='winter')
fig_lines.colorbar(contour_delay, label='C/A chips')

contour_doppler = ax_lines.contour(x_grid, y_grid, z_grid_doppler, iso_doppler_values, cmap='winter')
fig_lines.colorbar(contour_doppler, label='Hz')

ticks_y = ticker.FuncFormatter(lambda y, pos: '{0:g}'.format(y / 1000))
ticks_x = ticker.FuncFormatter(lambda x, pos: '{0:g}'.format(x / 1000))
ax_lines.xaxis.set_major_formatter(ticks_x)
ax_lines.yaxis.set_major_formatter(ticks_y)
plt.xlabel('[km]')
plt.ylabel('[km]')

plt.show()

Which produces this supposedly correct output:

contour plot of the iso-delay and iso-Doppler lines

Do not hesitate to provide recommendations on implementation and style.

Questions

In order to calculate the incident vector from the point $\vec{r_t}$, I implemented the following code:

def incident_vector(r):
    incident_vector = (r - r_t)
    incident_vector_norm = np.linalg.norm(r - r_t)
    incident_vector[0] /= incident_vector_norm
    incident_vector[1] /= incident_vector_norm
    incident_vector[2] /= incident_vector_norm
    return incident_vector

It works perfectly fine, but I think there must be a cleaner way of writing that. I would like to write something like this:

def incident_vector(r):
    return (r - r_t) / np.linalg.norm(r - r_t)

But unfortunately, this does not work with the meshgrid, because NumPy does not know how to broadcast the length-3 vector against the (500, 500) grid of norms:

ValueError: operands could not be broadcast together with shapes (3,) (500,500)

3d – Is it possible to generate a 2D surface that approximates a given mesh, and then a specialized ray-tracing function for it?

For example, suppose we try to render a sphere. There are two options. The first is to build a mesh of triangles approximating the sphere and render that. Unfortunately, the more triangles we have, the more expensive the computation becomes, because we have to call rayTriangleIntersect() once for each triangle. For the specific case of a sphere, however, there is a better way: we can use a raySphereIntersect() function, derived by solving the ray equation together with the equation of a sphere, x^2 + y^2 + z^2 = R^2. This only requires one call and is therefore much cheaper.
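For reference, a minimal sketch of that analytic test (names are mine, not from any particular engine): substituting the ray p(t) = o + t·d into x^2 + y^2 + z^2 = R^2 gives a quadratic in t, for a sphere of radius R centred at the origin.

using UnityEngine; // only for Vector3 and Mathf

public static class RaySphere
{
    // Analytic ray-sphere test: ray p(t) = origin + t * dir, sphere of radius R at the origin.
    // Returns true and the nearest t >= 0 if the ray hits the sphere.
    public static bool RaySphereIntersect(Vector3 origin, Vector3 dir, float R, out float t)
    {
        // Substituting p(t) into x^2 + y^2 + z^2 = R^2 gives a*t^2 + b*t + c = 0.
        float a = Vector3.Dot(dir, dir);              // = 1 if dir is normalised
        float b = 2f * Vector3.Dot(origin, dir);
        float c = Vector3.Dot(origin, origin) - R * R;

        float discriminant = b * b - 4f * a * c;
        t = 0f;
        if (discriminant < 0f) return false;          // the ray misses the sphere

        float sqrtD = Mathf.Sqrt(discriminant);
        float t0 = (-b - sqrtD) / (2f * a);
        float t1 = (-b + sqrtD) / (2f * a);

        t = (t0 >= 0f) ? t0 : t1;                     // nearest hit in front of the ray origin
        return t >= 0f;
    }
}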

My question is: is it possible to apply the same optimization to other shapes? In other words, is there a technique to generate a surface that approximates a given mesh, giving us a rayIntersection() function that works for that surface?