## unity – How can I create a plexus effect with VFX Graph?


## unity – Texture2D GetPixels – searching for non-transparent pixels finds more than expected

I'm trying to make a level-design system where I use a low-resolution PNG file with a few pixels filled in. My code turns each filled pixel's location in the image into the placement of a platform in my game.

But I tested the code below with a PNG I made containing only one filled pixel (the rest is transparent), and it finds 4 every time. Could there be a difference between the PNG file from GIMP and the texture Unity sees? (In GIMP I used a pencil at 100% hardness, 1 px wide. There are also "resolution" settings for the image, which I left at their defaults.)

```csharp
public class LevelBuilder : MonoBehaviour
{
    public Texture2D sourceTex;
    Color32[] pixels;
    int pixelsFound = 0;

    private void Start()
    {
        pixels = sourceTex.GetPixels32();
        for (int i = 0; i < pixels.Length; i++)
        {
            if (pixels[i].a != 0)
            {
                Debug.Log("Found colored pixel. Total: " + ++pixelsFound);
            }
        }
    }
}
```
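A likely culprit is Unity's texture import pipeline rather than GIMP: compressed formats (DXT works on 4×4 blocks, which could smear a single opaque pixel into its neighbours), mipmaps, and filtering can all change the raw values that GetPixels32 returns. Setting the texture's import settings to an uncompressed RGBA32 format, with Read/Write enabled and mipmaps off, usually preserves the file exactly. As a diagnostic, here is a sketch that logs where the extra pixels actually are; the alpha threshold of 128 is an arbitrary assumption, not something from the question:

```csharp
using UnityEngine;

public class PixelDiagnostic : MonoBehaviour
{
    public Texture2D sourceTex;

    private void Start()
    {
        // GetPixels32 returns rows bottom-to-top; index = y * width + x.
        Color32[] pixels = sourceTex.GetPixels32();
        int found = 0;
        for (int i = 0; i < pixels.Length; i++)
        {
            // A threshold instead of != 0: compression and filtering can
            // leave tiny non-zero alpha values in neighbouring texels.
            if (pixels[i].a > 128)
            {
                int x = i % sourceTex.width;
                int y = i / sourceTex.width;
                Debug.Log("Opaque pixel #" + (++found) + " at (" + x + ", " + y + ")");
            }
        }
    }
}
```

If the logged coordinates cluster around your single painted pixel, the import settings are the problem; if they are scattered, the source file itself differs from what you expect.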


## unity – How to apply a force in the direction of mouse and/or touch input?

I have two objects. I can move object 1 (my player) with the mouse (or touch, for the smartphone version).

I want to apply a force to object 2 when object 1 hits it. The direction of the force must be the direction of the mouse vector, or the direction of the touch vector.

For mouse input, I use the OnMouseDown and OnMouseDrag events to determine the original position of the mouse.

But I think this is not the best approach; the behavior is not what I expect.

```csharp
void OnMouseDown()
{
    mouseDragStartPosition = Input.mousePosition;

    // translate the cube's world position into screen space
    screenSpace = Camera.main.WorldToScreenPoint(transform.position);

    // offset between the cube's world position and the mouse's
    // screen position converted to a world point
    offset = transform.position - Camera.main.ScreenToWorldPoint(
        new Vector3(Input.mousePosition.x, Input.mousePosition.y, screenSpace.z));
}

void OnMouseDrag()
{
    // keep track of the mouse position
    var curScreenSpace = new Vector3(Input.mousePosition.x, Input.mousePosition.y, screenSpace.z);

    // convert the mouse's screen position to a world point and adjust with the offset
    var curPosition = Camera.main.ScreenToWorldPoint(curScreenSpace) + offset;

    transform.position = new Vector3(curPosition.x, ClampY, curPosition.z);
}

void OnCollisionEnter(Collision collision)
{
    Debug.Log("OnCollisionEnter");
    if (collision.collider.tag == "Disc")
    {
        ApplyForce();
    }
}

void ApplyForce()
{
    Vector3 direction = mouseDragStartPosition - Input.mousePosition;
    direction = new Vector3(direction.x, ClampY, direction.z); // ClampY used to disallow ±Y
}
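One way to get more predictable behaviour is to let the physics engine apply an impulse in OnCollisionEnter, using the direction the dragged object is actually moving in world space (which already reflects the mouse or touch drag after ScreenToWorldPoint). This is only a sketch under assumptions taken from the question: the "Disc" tag, a Rigidbody on object 2, and a made-up `pushForce` parameter.

```csharp
using UnityEngine;

public class Pusher : MonoBehaviour
{
    public float pushForce = 10f;        // assumed tuning value
    private Vector3 dragWorldDirection;
    private Vector3 lastPosition;

    private void FixedUpdate()
    {
        // Track the direction this object moves in world space per physics step.
        Vector3 delta = transform.position - lastPosition;
        if (delta.sqrMagnitude > 0.0001f)
            dragWorldDirection = delta.normalized;
        lastPosition = transform.position;
    }

    private void OnCollisionEnter(Collision collision)
    {
        if (collision.collider.CompareTag("Disc"))
        {
            Rigidbody rb = collision.rigidbody;
            if (rb != null)
            {
                // Keep the push horizontal by zeroing the Y component.
                Vector3 dir = new Vector3(dragWorldDirection.x, 0f, dragWorldDirection.z);
                if (dir.sqrMagnitude > 0f)
                    rb.AddForce(dir.normalized * pushForce, ForceMode.Impulse);
            }
        }
    }
}
```

Using the current world-space motion avoids mixing screen-space vectors (mousePosition deltas) with world-space forces, which is one plausible reason the original approach behaves unexpectedly.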


## unity – How to make a UI material ignore scene lighting and have the same brightness as the albedo?

Oh. I had tried applying emission before and it did not work, because I thought I could not reuse the albedo texture for the emission.

But that is totally wrong.

Using the same texture for the emission map as for the albedo gives the material the same brightness it has in an image viewer (assuming all the other scene lights are ignored by the material).

EDIT: one wrinkle: if you ever adjust the emission multiplier, it is very hard to be sure you have returned it to the default setting.
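For reference, the same setup can be done from a script. This is a minimal sketch assuming the Standard shader (the component name is hypothetical); using `Color.white` leaves the emission multiplier at its neutral default, sidestepping the "hard to reset" slider problem:

```csharp
using UnityEngine;

// Sketch: make a material render at full albedo brightness by
// emitting its own albedo texture. Assumes the Standard shader.
public class UnlitLook : MonoBehaviour
{
    private void Start()
    {
        Material mat = GetComponent<Renderer>().material;
        mat.EnableKeyword("_EMISSION");
        mat.SetTexture("_EmissionMap", mat.GetTexture("_MainTex"));
        // White = neutral multiplier, i.e. the texture's own brightness.
        mat.SetColor("_EmissionColor", Color.white);
    }
}
```

Note that enabling shader keywords at runtime can be affected by shader-variant stripping in builds, so setting this up on the material in the editor is the safer option.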

## unity – Switching scenes while continuing to receive the Bluetooth update stream

I have a scene where I am connected to a BLE device and receive data as bytes. I want to switch away from the current scene and keep receiving the data without disconnecting. How should I do that? Here is my current code in Update:

```csharp
case States.Subscribe:
    HM10_Status.text = "Subscribed to HM10";

    float floatprint = EnterDataQueue(bytes);
    Debug.Log("floatprint: " + floatprint.ToString());
    HM10_Status.text = "Received serial: " + floatprint.ToString();
    //HM10_Status.text = "Received serial: " + Encoding.UTF8.GetString(bytes);
});
```


And the method outside of Update:

```csharp
static float EnterDataQueue(byte[] bytes)
{
    ArduinoHM10Test aTest = new ArduinoHM10Test();
    // load the whole byte array into the queue
    for (int i = 0; i < bytes.Length; i++)
    {
        aTest.myQueue.Enqueue(bytes[i]);
    }

    // dequeue and check whether 4 bytes have been dequeued
    if (aTest.myQueue.Count >= 4)
    {
        byte[] byteArray = new byte[4];
        for (int i = 0; i < byteArray.Length; i++)
        {
            byteArray[i] = aTest.myQueue.Dequeue();
            if (i == 3)
            {
                break;
            }
        }

        aTest.floatnum = BitConverter.ToSingle(byteArray, 0);
    }

    Debug.Log("floatnum: " + aTest.floatnum.ToString());
    return aTest.floatnum;
    // throw new NotImplementedException();
}
```
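The usual Unity answer to surviving a scene change is `DontDestroyOnLoad` on the GameObject that owns the BLE connection and the Update loop, so the connection object persists while scene content is replaced. A minimal sketch — `BleManager` is a hypothetical name for the component holding the connection from the question:

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

public class BleManager : MonoBehaviour
{
    private static BleManager instance;

    private void Awake()
    {
        if (instance != null)
        {
            // A copy already survives from the previous scene; drop this one.
            Destroy(gameObject);
            return;
        }
        instance = this;
        // Keep this GameObject (and its connection/Update loop) alive
        // across scene loads.
        DontDestroyOnLoad(gameObject);
    }

    public void LoadNextScene(string sceneName)
    {
        // The connection keeps running; only scene content is replaced.
        SceneManager.LoadScene(sceneName);
    }
}
```

One caveat: scene-bound references such as the `HM10_Status` text object are destroyed with the old scene, so they must be looked up again (or re-assigned) after each load.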


## unity – Facilitator user interface on the desktop screen for virtual reality

Hey, and thanks in advance for any help / pointers.

The solution is probably simple, but I'm still a little inexperienced (although I have created several VR Unity projects, mainly study-related, I've never really built anything standalone, so the editor has allowed me some shortcuts, see below).

I am currently developing a Windows Mixed Reality application with Unity. I cannot figure out how to run an application window (possibly a secondary one) on the desktop that can be used during facilitated VR experiments to change settings and generally provide the facilitator's input to the application, while the participant perceives only the virtual-reality world.

Everything works perfectly when using the Unity editor: simply creating a screen-space canvas keeps it out of the headset while it remains possible to interact with it. The problem is that the facilitators are a third party, and a simplified build would make the solution much easier for them to handle. Not to mention the performance gains from building and stripping down the project (?).

It will probably mean creating a secondary window that runs on the desktop, since a build runs only through the WMR portal. But I do not know how to do that, and I hoped someone had struggled with this before, but I have not found anything.

## How to make Unity recognize the Nintendo Joy-Con?

How can I make Unity recognize the Nintendo Joy-Con? It is connected and the computer recognizes it, but Unity does not recognize the Joy-Con.

## How to reset the array returned by GetJoystickNames in Unity?

The problem is that when I connect a controller, the array from GetJoystickNames gains an index, but when the controller disconnects the index is not removed and it keeps being counted as if it were still connected.
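Unity's documented behaviour is that a disconnected controller keeps its slot in `Input.GetJoystickNames()` (so the array length does not shrink), but its entry becomes an empty string. Counting only the non-empty entries therefore gives the real number of connected controllers. A sketch:

```csharp
using UnityEngine;

public class JoystickWatcher : MonoBehaviour
{
    private void Update()
    {
        // Disconnected controllers remain in the array as "".
        string[] names = Input.GetJoystickNames();
        int connected = 0;
        for (int i = 0; i < names.Length; i++)
        {
            if (!string.IsNullOrEmpty(names[i]))
                connected++;
        }
        Debug.Log("Connected controllers: " + connected);
    }
}
```

The same check also tells you *which* joystick index went dead, which matters because Unity's input axes (Joystick 1, Joystick 2, …) keep their numbering.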

## unity – LOD for large complex meshes (and Blender)

I have done quite extensive research and cannot find any information on what I am looking for. I really hope that does not mean it is impossible… so if anyone can give me some insight, or at least point me in the right direction, it would be much appreciated!

I have a Blender 3D model from a client, not one I designed myself, and I cannot disclose many details because of an NDA. Manually editing the Blender model will not help in the end, because the procedure must eventually be applied to several models of similar composition. What I am trying to do is apply level-of-detail groups to the model's sub-meshes, in order to improve the model's rendering performance in Unity.

I already have multiple LOD variants for each sub-mesh of the model, but when I import the model into Unity, it groups them all together rather than into separate sub-meshes. This rather defeats the purpose, because the entire model renders at LOD0 when you approach it, whereas I want only the closest parts of the model to be at LOD0, with everything not directly adjacent reduced.

Currently I export the Blender model as FBX, but that is not a requirement if there is a better alternative.

I cannot show the model in question because of the NDA, so I will try to give an example using monkeys. Again, I need a procedure that can be applied to multiple models, so changing everything manually is not the answer. However, to demonstrate what I am looking for, I have manually worked up an example.

The first image is what I want: each part has its own LOD, computed individually from the others. The second image is what I am getting now, where all the parts are computed as one LOD.

Desired result

Current result

If there is a setting, process, or script that can be applied in Blender or Unity to get me to the desired result, please help! I am a bit desperate, because none of my plans anticipated this being a problem, and I will have a major problem if it cannot be solved.
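One commonly suggested approach is to rebuild the LOD structure in Unity with a script, attaching one LODGroup per sub-mesh after import instead of relying on the importer's single group per model. This is only a sketch under assumptions about the hierarchy (each child of the root is a "part" whose own children are its LOD variants, ordered LOD0 first); the screen-height thresholds are placeholder values, not tuned ones:

```csharp
using UnityEngine;

public class PerPartLodBuilder : MonoBehaviour
{
    // Run from the component's context menu on the imported model's root.
    [ContextMenu("Build LOD Groups")]
    private void BuildLodGroups()
    {
        foreach (Transform part in transform)
        {
            // Assumes each direct child holds its LOD variants as children,
            // ordered LOD0, LOD1, ... One renderer per variant.
            Renderer[] renderers = part.GetComponentsInChildren<Renderer>();
            if (renderers.Length == 0)
                continue;

            LODGroup group = part.gameObject.AddComponent<LODGroup>();
            LOD[] lods = new LOD[renderers.Length];
            for (int i = 0; i < renderers.Length; i++)
            {
                // Placeholder thresholds: 0.5, 0.25, 0.125, ...
                float height = Mathf.Pow(0.5f, i + 1);
                lods[i] = new LOD(height, new[] { renderers[i] });
            }
            group.SetLODs(lods);
            group.RecalculateBounds();
        }
    }
}
```

Because this is a script, it can be applied to every model of similar composition, which matches the "no manual editing" constraint; in an editor script it could even run automatically in an AssetPostprocessor.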

## c# – How to apply a script to an imported OBJ file in Unity?

I am new to Unity.

I've imported an OBJ model and assigned materials to it.

I want to rotate it with the mouse in the game.

The script is:

```csharp
using UnityEngine;

public class MouseDragRotate : MonoBehaviour
{
    float rotationSpeed = 0.2f;

    void OnMouseDrag()
    {
        float XaxisRotation = Input.GetAxis("Mouse X") * rotationSpeed;
        float YaxisRotation = Input.GetAxis("Mouse Y") * rotationSpeed;
        // select the axes around which to rotate the GameObject
        transform.RotateAround(Vector3.down, XaxisRotation);
        transform.RotateAround(Vector3.right, YaxisRotation);
    }
}
```


So I dragged it into the Hierarchy panel.

It appeared in the scene.

Then I applied the materials again.

Then I added the rotation script using Add Component. But when running the game in Play mode, I cannot rotate it.

So I created a cube and a plane and applied the script to them.

The script works perfectly for the cube and the plane, but not for the imported object.
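The usual explanation for this symptom is that `OnMouseDrag` only fires on GameObjects that have a Collider: Unity's cube and plane primitives come with one automatically, while imported OBJ meshes do not, which matches the behaviour above. A sketch that adds MeshColliders at startup (the component name is made up):

```csharp
using UnityEngine;

public class ColliderSetup : MonoBehaviour
{
    private void Awake()
    {
        // Give every child with a mesh but no collider a MeshCollider,
        // so OnMouseDown/OnMouseDrag events can reach it.
        foreach (MeshFilter mf in GetComponentsInChildren<MeshFilter>())
        {
            if (mf.GetComponent<Collider>() == null)
                mf.gameObject.AddComponent<MeshCollider>();
        }
    }
}
```

Also note that the OnMouse* messages are sent to the GameObject that owns the collider, so `MouseDragRotate` must sit on that same object (often a child holding the MeshRenderer), not only on the imported model's root.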