quaternion – Calculate the orientation of a GameObject in a custom coordinate system in Unity

I'm trying to build a custom coordinate system. To be more precise, the user chooses three points on the screen to define the coordinate system: the first is the origin, the second is used to build the OX axis, the third the OZ axis, and OY is derived last. The system is properly constructed; the axes are perpendicular to each other.

Once this coordinate system is built, I need to convert the orientation and position of objects placed on the screen from world space into this custom space. To do this, I tried the solution proposed here [1]. It seems to work, but only for the position; the orientation does not change at all.

For example, if I place a cube at (0, 0, 0) without rotating it in world space, then in my own coordinate system it should also sit at (0, 0, 0) and it should not be rotated relative to my own coordinate system, which means the cube should have a rotation relative to world space. However, in reality my code places the cube at my own origin but does not rotate it accordingly.

In the image below, the first coordinate system (the smaller one) is my own system; the other is the world origin. You can see that the WorldObject cube is placed at (0, 0, 0) with rotation (0, 0, 0). OwnObject should be the representation of WorldObject in my system: its position has changed, but not its rotation.

My question is: how can I rotate the second cube accordingly? I tried setting the cube's rotation to the orientation of the structure proposed in that solution [1], but it did not rotate correctly. I also tried using Quaternion.LookRotation, but that didn't work either.

[image: the two coordinate systems, with WorldObject and OwnObject]
[1] How to create a custom set of coordinates based on two points in Unity?
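For what it's worth, the world-to-custom conversion the question describes can be sketched independently of Unity: build the frame's rotation from its basis axes and apply the inverse to both position and orientation. A minimal NumPy sketch (the helper name and example frame are mine, not from the linked answer); in Unity terms the missing rotation part would be `ownObject.rotation = Quaternion.Inverse(frameRotation) * worldObject.rotation`:

```python
import numpy as np

def world_to_custom(origin, x_axis, y_axis, z_axis, world_pos, world_rot):
    """Map a world-space pose into the custom frame.

    The columns of R are the frame's (orthonormal) axes expressed in world
    space, so R.T is its inverse.  world_rot is a 3x3 rotation matrix; with
    quaternions the same idea reads
    local_rot = inverse(frame_rotation) * world_rotation.
    """
    R = np.column_stack([x_axis, y_axis, z_axis])
    local_pos = R.T @ (np.asarray(world_pos, float) - np.asarray(origin, float))
    local_rot = R.T @ world_rot          # orientation relative to the frame
    return local_pos, local_rot

# A frame whose origin sits at (1, 0, 0), rotated 90 degrees about world Y:
pos, rot = world_to_custom([1.0, 0.0, 0.0],
                           [0, 0, -1], [0, 1, 0], [1, 0, 0],
                           [1.0, 0.0, 0.0], np.eye(3))
# The unrotated cube at the frame's origin lands at local (0, 0, 0), but its
# local orientation is the inverse frame rotation, not the identity.
```

This is exactly the symptom in the question: the position converts to (0, 0, 0), but the local orientation must pick up the inverse of the frame's rotation rather than stay identity.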

Unity – Rotate an object on itself, from one random Quaternion to another

I have a situation:
I have a 3D object in the world, say a sphere.
I have 2 random direction vectors, vector A and vector B:
[image: vectors A and B]

My question is: how do I rotate my object over time from A to B?
Vector A is important: I do not just want to turn the object's forward direction toward B.

I know I can use Vector3 C = Vector3.SmoothDamp(…) in Unity to lerp between my 2 vectors A and B.

But then, OK, I have Vector3 C; how do I apply my object's rotation toward C? I do not want to do:
gameObject.transform.forward = C;

I want something like:
gameObject.transform.rotation = SomeQuaternion(C, initialRotationA);
or something.

Thanks for the help!

PS: I do not want to parent/unparent the gameObject or anything like that; I want the math answer, using quaternions.
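In Unity terms, the asker's `SomeQuaternion(C, ...)` essentially exists: `Quaternion.FromToRotation(A, C) * initialRotation` applies the swing that carries A onto C on top of the starting orientation. The underlying math, as a sketch (NumPy; helper names are mine):

```python
import numpy as np

def from_to_quat(a, b):
    """Shortest-arc quaternion (x, y, z, w) rotating direction a onto b."""
    a = np.asarray(a, float)
    a = a / np.linalg.norm(a)
    b = np.asarray(b, float)
    b = b / np.linalg.norm(b)
    q = np.array([*np.cross(a, b), 1.0 + np.dot(a, b)])
    # Note: antiparallel a and b give q ~ 0 and need a special case.
    return q / np.linalg.norm(q)

def rotate(q, v):
    """Rotate vector v by unit quaternion q = (x, y, z, w)."""
    v = np.asarray(v, float)
    u, w = q[:3], q[3]
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

A = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 1.0, 0.0])
q = from_to_quat(A, B)
print(rotate(q, A))   # ≈ [0, 1, 0]
```

To animate over time, slerp from the start rotation toward the composed target, e.g. in Unity: `Quaternion.Slerp(startRot, Quaternion.FromToRotation(A, B) * startRot, t)`. The antiparallel case (A ≈ -B) is ambiguous and needs an explicit axis choice.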

Algebraic Geometry – Reference Request: Shimura Curves and Quaternion Algebras

Let $F$ be a totally real field, and let $D$ be a quaternion algebra over $F$ which is split at exactly one infinite place of $F$. Then we have the associated quaternionic Shimura curve $Sh$ over $F$, which is of abelian type and not of PEL type in general if $F \neq \mathbb{Q}$. People usually introduce/define Shimura curves in this way.

Is it true that every $1$-dimensional Shimura variety has the form above? In particular, is every $1$-dimensional Shimura variety of abelian type? I cannot find a good reference for a proof.

quaternion – Generate a random vector within some angle of the current vector in 3D space

I'm creating a simple ray tracer, and I think this is the best place to ask. I built my own engine in Go and have gotten quite far. I'm implementing diffuse surfaces, so I need to generate a random direction in which to send scattered rays to fake diffuse reflection.

I've looked at other questions here and found one similar to mine, but it was for 2D space, not 3D.

Let's say I have a direction a, and I want to generate a new direction that is random and within 45 degrees of a. Since I work in 3D space, I can't just find the perpendicular direction and lerp between them, so I'll have to do something else. For example, here's what I would do in 2D:

dir = Vector2(1, 1).Normal()

perpDir = dir.Perp() // gets the perpendicular direction of `dir`

randDir = dir.Lerp(perpDir, rand() - 0.5) // rand is (0, 1)

I do not know mathematics very well, so it has to be simple enough for me to understand. Is there a simple way to find a random direction based on a current direction, within a defined maximum rotation such as 90 degrees?
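One standard approach: sample uniformly on the spherical cap around +Z (uniform in cos θ, uniform in φ), then rotate +Z onto your direction. A sketch in Python (the function is my own; note that for Lambertian diffuse shading you would usually cosine-weight the samples instead, but uniform-in-cone answers the question as asked):

```python
import math
import random
import numpy as np

def random_in_cone(direction, max_angle_deg):
    """Uniformly sample a unit vector within max_angle_deg of `direction`.

    Samples the spherical cap around +Z, then rotates +Z onto `direction`
    with a shortest-arc (Rodrigues) rotation.
    """
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)

    # Uniform on the cap: cos(theta) uniform in [cos(max_angle), 1].
    cos_max = math.cos(math.radians(max_angle_deg))
    cos_t = random.uniform(cos_max, 1.0)
    sin_t = math.sqrt(1.0 - cos_t * cos_t)
    phi = random.uniform(0.0, 2.0 * math.pi)
    local = np.array([sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t])

    # Rotate +Z onto d; handle d parallel to +/-Z separately.
    z = np.array([0.0, 0.0, 1.0])
    axis = np.cross(z, d)
    s = np.linalg.norm(axis)          # sin of the angle between z and d
    if s < 1e-8:
        return local if d[2] > 0 else -local
    axis /= s
    c = np.dot(z, d)                  # cos of that angle
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    R = np.eye(3) + K * s + K @ K * (1.0 - c)   # Rodrigues' formula
    return R @ local
```

Every returned vector is unit length and its dot product with the input direction is at least cos(max_angle), i.e. it lies inside the cone.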

Adding vectors and applying a quaternion rotation matrix – help

I have a problem and am looking for help; see the attached picture. I have a number of points (x, y, z) that are treated as connected vectors (the first point is the first vector's base, the second point is the first vector's end and the second vector's base, etc.), and I need to find the minimum and maximum coordinates of the whole connected structure. I only know that a rotation matrix built from a quaternion should be applied in the process, but I do not have that experience. Any idea or link to a similar example would be appreciated.

   auto rx = _coord_x.data(); auto ry = _coord_y.data(); auto rz = _coord_z.data();

// set up dimensions of coordinates

auto ncoords = number_of_vectors();


for (int32_t i = 0; i < ncoords; i++)
{
    // normalization (data() returns a pointer, so index with [])
    float fact = 1.0f / std::sqrt(rx[i] * rx[i] + ry[i] * ry[i] + rz[i] * rz[i]);

    rx[i] *= fact; ry[i] *= fact; rz[i] *= fact;  // was "rz(i) = fact;" - an assignment bug

    // rotation matrix ??

}

[image: example]
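For the vector-chaining part at least, the task reduces to a cumulative sum of the (optionally rotated) displacement vectors, followed by a min/max over all the joint points. A NumPy sketch (names are mine; quaternion given as (x, y, z, w)):

```python
import numpy as np

def chain_bounds(vectors, q=None):
    """Axis-aligned bounds of the polyline built by chaining `vectors`.

    `vectors` is an (n, 3) array of displacements laid head to tail from the
    origin; if quaternion q = (x, y, z, w) is given, each displacement is
    rotated by it first (equivalent to applying q's rotation matrix).
    """
    v = np.asarray(vectors, float)
    if q is not None:
        u, w = np.asarray(q[:3], float), float(q[3])
        v = v + 2.0 * np.cross(u, np.cross(u, v) + w * v)
    pts = np.vstack([np.zeros(3), np.cumsum(v, axis=0)])   # every joint point
    return pts.min(axis=0), pts.max(axis=0)

lo, hi = chain_bounds([[1, 0, 0], [0, 2, 0], [-3, 0, 0]])
# joints: (0,0,0) -> (1,0,0) -> (1,2,0) -> (-2,2,0)
# lo ≈ (-2, 0, 0), hi ≈ (1, 2, 0)
```

Rotating first and summing afterwards is the same as rotating the whole assembled structure, since rotation is linear.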

quaternion – Storing transformations in game objects (GLM, C++)

I am writing a game engine from scratch for learning purposes. I have just begun implementing transformations. I know how they work in general, but I'm not sure how to implement them efficiently in a hierarchical structure.

Should I keep each transformation component separately or group them together in a transformation matrix? I have the impression that a single matrix would be much more efficient than recomputing the model transformation matrix every frame for every object in the component tree. On the other hand, I would like to access position, rotation and scale individually, as I plan to use them later for other optimizations.

I am aware of glm::decompose, but I'm not sure it's a good idea to call it every frame.

Should I keep both the matrix and the components, decompose the matrix every frame, or recompute the matrix every frame?
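One common middle ground is to make position/rotation/scale the source of truth and cache the composed matrix behind a dirty flag, so the matrix is recomputed only when a component actually changed, and glm::decompose is never needed. A language-agnostic sketch in Python (rotation kept as a 3x3 matrix for brevity; with GLM you would store a quaternion and compose via glm::mat4_cast):

```python
import numpy as np

class Transform:
    """Source of truth: position / rotation / scale; matrix cached lazily."""

    def __init__(self):
        self._position = np.zeros(3)
        self._rotation = np.eye(3)   # stand-in for a quaternion
        self._scale = np.ones(3)
        self._matrix = np.eye(4)
        self._dirty = False

    @property
    def position(self):
        return self._position

    @position.setter
    def position(self, value):
        self._position = np.asarray(value, float)
        self._dirty = True  # invalidate cache; rotation/scale setters do the same

    def matrix(self):
        # Recompose only if a component changed since the last call.
        if self._dirty:
            m = np.eye(4)
            m[:3, :3] = self._rotation * self._scale   # == R @ diag(scale)
            m[:3, 3] = self._position
            self._matrix = m
            self._dirty = False
        return self._matrix

t = Transform()
t.position = [1.0, 2.0, 3.0]
print(t.matrix()[:3, 3])   # [1. 2. 3.]
```

In a hierarchy, a child's cached world matrix also goes stale when any ancestor changes, so the dirty flag has to propagate down the tree.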

c# – Mathematics of Quaternion and Vector3 Transformation

Hi, I've made an AI for my enemies that works perfectly, but a particular code section is repetitive and I feel that it can be optimized.

void ChangeDirection()
    {
        if (movement == Vector3.left)
            transformModel.rotation = Quaternion.Euler(0, 270, 0);
        else if (movement == Vector3.right)
            transformModel.rotation = Quaternion.Euler(0, 90, 0);
        else if (movement == Vector3.back)
            transformModel.rotation = Quaternion.Euler(0, 180, 0);
        else if (movement == Vector3.forward)
            transformModel.rotation = Quaternion.Euler(0, 0, 0);
    }

The enemy will be in a narrow corridor where only the following movements are possible; that is, the enemy can only move along Vector3.right, left, back and forward, so the code works without any problem.
However, the code is repetitive, and I want the enemy to face the right way regardless of its movement vector.
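For what it's worth, the four branches are just a direction-to-yaw lookup, so in Unity they collapse to a single line, `transformModel.rotation = Quaternion.LookRotation(movement);`, valid for any non-zero horizontal movement vector. The math behind it is atan2 on the horizontal components; a quick sketch (Unity's convention: yaw 0 faces +Z, yaw 90 faces +X):

```python
import math

def yaw_degrees(x, z):
    """Yaw in degrees (0..360) facing the horizontal direction (x, z).

    Matches the table in the question: forward (0, 0, 1) -> 0,
    right (1, 0, 0) -> 90, back (0, 0, -1) -> 180, left (-1, 0, 0) -> 270.
    """
    return math.degrees(math.atan2(x, z)) % 360.0

print(round(yaw_degrees(-1, 0)))  # 270 (Vector3.left)
print(round(yaw_degrees(0, -1)))  # 180 (Vector3.back)
```

So ChangeDirection can be replaced by one LookRotation call (or Quaternion.Euler(0, yaw, 0) with the atan2 yaw), guarded against movement == Vector3.zero.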

Here is the rest of my code:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class EnemyMovement2 : MonoBehaviour
{
    public float speed;
    public Vector3 movement;
    public Animator animator;
    private GameObject gameManager;

    void Start()
    {

        movement = Vector3.back;
        gameManager = GameObject.FindWithTag("GameController");


    }
    void OnCollisionEnter(Collision collision)
    {
        if (collision.gameObject.tag == "Wall")
        {

            CorrectPosition(new Vector3(1, 1, 1));
            movement = Vector3.zero;

        }
    }

    public Transform transformModel;
    void ChangeDirection()
    {
        if (movement == Vector3.left)
            transformModel.rotation = Quaternion.Euler(0, 270, 0);
        else if (movement == Vector3.right)
            transformModel.rotation = Quaternion.Euler(0, 90, 0);
        else if (movement == Vector3.back)
            transformModel.rotation = Quaternion.Euler(0, 180, 0);
        else if (movement == Vector3.forward)
            transformModel.rotation = Quaternion.Euler(0, 0, 0);
    }
    void CorrectPosition(Vector3 CorrectPositionOF)
    {
        Vector3 posNow = transform.position;
        if (CorrectPositionOF.x == 1)
            posNow.x = Mathf.RoundToInt(gameObject.transform.position.x);
        if (CorrectPositionOF.y == 1)
            posNow.y = Mathf.RoundToInt(gameObject.transform.position.y);
        if (CorrectPositionOF.z == 1)
            posNow.z = Mathf.RoundToInt(gameObject.transform.position.z);
        transform.position = posNow;
    }
    public LayerMask whatIsWall;
    private Vector3[] direction = { Vector3.left, Vector3.forward, Vector3.back, Vector3.right };
    void FixedUpdate()
    {
        GetComponent<Rigidbody>().velocity = movement * speed;
        animator.SetFloat("runningSpeed", GetComponent<Rigidbody>().velocity.magnitude);
        for (int i = 0; i < direction.Length; i++)
        {
            RaycastHit hit;
            if (Physics.Raycast(transform.position, direction[i], out hit, 18, 0 << 11 | 1 << 8 | 1 << 9)
                && hit.collider.gameObject.tag == "Player"
                && movement == Vector3.zero)
            {
                movement = direction[i];
                ChangeDirection();
                return;
            }
        }


    }
}

glsl – Quaternion rotation is the opposite of what I expect

I'm trying to learn quaternions and decided to implement my own quaternion class.

To test it, I created two vertex shaders: one that gets a model matrix (calculated from the quaternion), and another that gets the rotation quaternion directly and rotates the vertices in the shader.

I then feed both of these a quaternion which rotates around the Y axis by t / 10000 radians:

const rotation = Quat.fromAxisAngle(new Vec3([0, 1, 0]), t / 10000)

The result surprises me: my model rotates counterclockwise, but I expected it to rotate clockwise as the angle increases. Rotation around the X or Z axis is also reversed.

I suspect my formulas are wrong, and I guess they assume +Z is forward. If I transpose my model matrix (or post-multiply with it), it rotates as expected (except around Z, which is then reversed).

Where are my formulas wrong, and how can I understand the math behind this?

Details

  • My matrices are column-major in memory (to match GLSL's mat4).
  • My coordinate system is +X right, +Y up, -Z forward.

My code

Quaternion from axis + angle

function fromAxisAngle(axis: Vec3, angle: number, dest = new Quat()): Quat {
  angle *= 0.5
  const sin = Math.sin(angle)

  dest.x = axis.x * sin
  dest.y = axis.y * sin
  dest.z = axis.z * sin
  dest.w = Math.cos(angle)

  return dest
}

Note: I've noticed that if I negate the x, y, and z of this quaternion, everything works as expected. Is this the root cause? I thought quaternions had no handedness!

Since that is really just inverting the quaternion (and thus its rotation), I'm afraid it merely papers over the root cause. I don't want to patch around the problem, I want to fix my math!

Quaternion to matrix

toMat4(dest = new Mat4()): Mat4 {
  const { x, y, z, w } = this

  const x2 = x + x
  const y2 = y + y
  const z2 = z + z

  const xx = x * x2
  const xy = x * y2
  const xz = x * z2
  const yy = y * y2
  const yz = y * z2
  const zz = z * z2
  const wx = w * x2
  const wy = w * y2
  const wz = w * z2

  dest.init([
    1 - (yy + zz), xy - wz,       xz + wy,       0,
    xy + wz,       1 - (xx + zz), yz - wx,       0,
    xz - wy,       yz + wx,       1 - (xx + yy), 0,
    0,             0,             0,             1,
  ])

  return dest
}
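One thing worth cross-checking in toMat4: those entries, written in reading order, are the standard row-major rotation matrix, so if the array is consumed column-major (as GLSL's mat4 expects), the shader effectively receives the transpose, i.e. the inverse rotation, which would reverse every axis exactly as described. A NumPy sketch of the check (assuming the same (x, y, z, w) layout as the class):

```python
import numpy as np

def quat_to_mat_rows(x, y, z, w):
    """The toMat4 entries, in the reading (row-major) order they are written."""
    x2, y2, z2 = x + x, y + y, z + z
    xx, xy, xz = x * x2, x * y2, x * z2
    yy, yz, zz = y * y2, y * z2, z * z2
    wx, wy, wz = w * x2, w * y2, w * z2
    return np.array([
        [1 - (yy + zz), xy - wz,       xz + wy      ],
        [xy + wz,       1 - (xx + zz), yz - wx      ],
        [xz - wy,       yz + wx,       1 - (xx + yy)],
    ])

# 90 degrees about +Y, quaternion layout (x, y, z, w):
half = np.pi / 4
R = quat_to_mat_rows(0.0, np.sin(half), 0.0, np.cos(half))

v = np.array([0.0, 0.0, -1.0])   # "forward" in a -Z-forward system
print(R @ v)     # rotation by +90 deg about Y: forward -> -X, i.e. ≈ [-1, 0, 0]
print(R.T @ v)   # the transpose is the inverse rotation:       ≈ [ 1, 0, 0]
```

For a rotation matrix the transpose equals the inverse, which is why transposing (or negating the quaternion's vector part) "fixes" the direction without addressing the layout mismatch.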

Matrix vertex shader

precision mediump float;

attribute vec3 aVertexPosition;
attribute vec2 aVertexUV;

uniform mat4 uModelMatrix;
uniform mat4 uViewMatrix;
uniform mat4 uProjectionMatrix;

varying vec2 vUV;

void main(void) {
  vUV = aVertexUV;

  gl_Position = uProjectionMatrix * uViewMatrix * uModelMatrix * vec4(aVertexPosition.xyz, 1.0);
}

Quaternion vertex shader

precision mediump float;

attribute vec3 aVertexPosition;
attribute vec2 aVertexUV;

struct Transform {
  float scale;
  vec3 translation;
  vec4 rotation;
};

uniform Transform uModel;
uniform Transform uView;
uniform mat4 uProjection;

varying vec2 vUV;

vec3 rotateVector(vec4 quat, vec3 vec) {
  return vec + 2.0 * cross(cross(vec, quat.xyz) + quat.w * vec, quat.xyz);
}

void main(void) {
  vUV = aVertexUV;

  vec3 world = rotateVector(uModel.rotation, aVertexPosition * uModel.scale) + uModel.translation;
  vec3 view = rotateVector(uView.rotation, world * uView.scale) + uView.translation;

  gl_Position = uProjection * vec4(view, 1.0);
}

opengl – Quaternion Rotation after performing previous rotations

I have a quaternion $Q$ which rotates 90 degrees about the $X$ axis.

$Q$ is now $(0.707106, 0.707106, 0, 0)$.

I want to rotate $Q$ another 90 degrees about a different pivot $P$, so I compute $Q * (-P) * Q^{-1}$, save the $x, y, z$ coordinates in $res(x, y, z)$, and then set $res = res + P$. This should give me the offset between the new coordinates and the old ones. However, in my case a large spurious translation appears along the $Y$ axis.

Example 1

Here I do both rotations around the same pivot, as opposed to rotating first around the center and then around the pivot.

Example 2

I do both rotations around the same point $C(-0.5, 0.5, 0.5)$. In case it matters: for the second rotation, I start by rotating the pivot by the current rotation to get the pivot's actual coordinates.

I do all the calculations in object space, where the center of the object is at $(0, 0, 0)$.
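For reference, the usual way to express "rotate about a pivot" is to translate into pivot space, rotate, and translate back, $p' = Q (p - P) Q^{-1} + P$, applied to the object's points (or composed into its translation), rather than conjugating the pivot itself. A NumPy sketch (the helper is mine; quaternion as (x, y, z, w)):

```python
import numpy as np

def rotate_about_pivot(q, point, pivot):
    """Rotate `point` by unit quaternion q = (x, y, z, w) about `pivot`:
    p' = q * (p - pivot) * q^-1 + pivot."""
    point = np.asarray(point, float)
    pivot = np.asarray(pivot, float)
    u, w = np.asarray(q[:3], float), float(q[3])
    v = point - pivot
    return pivot + v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

# 90 degrees about X, pivoting about (0, 1, 0):
s, c = np.sin(np.pi / 4), np.cos(np.pi / 4)
p = rotate_about_pivot((s, 0.0, 0.0, c), [0.0, 0.0, 0.0], [0.0, 1.0, 0.0])
# p ≈ (0, 1, -1): the origin swings around the pivot, distance preserved.
```

The net translation induced by the pivoted rotation is then p' - p, which stays bounded by the distance to the pivot; an unexpectedly large offset usually means the pivot was conjugated with the wrong sign or composition order.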

quaternion – How can I use slerp to prevent my orbital camera from "shortcutting" to its target position?

I'm working on a camera that moves around the player at a fixed distance: the standard move-with-the-right-stick type. The camera follows the player but stays at a fixed angle to (0, 0, 1) until it is moved by input, and the player is free to move in any direction without changing the camera's rotation relative to the world. Imagine the camera as a point on an invisible sphere with the player at the center.

However, this poses a problem: when I turn the camera around the player too quickly, the lerping I added to smooth out the movement "shortcuts" across the sphere instead of travelling gracefully around it when the desired position swings away to one side. The image below is a top-down view; the camera takes the red path, but I want it to take the green path while staying smooth.
[image: Bad Cam]

Currently I am:

  • Checking whether the look controls are inverted and setting a modifier value.
  • Updating the current angle to (0, 0, 1) using the X component of a 2-vector defined by keyboard input (cam left, cam right).
  • Updating the current angle on the Y axis using the Y component, defined by input (cam up, cam down).
  • Computing the desired cam position from the two angles and the position of the focus point (the player).
  • Lerping between the current cam position and the desired one, and updating the lookAt vector.

Here is the update code (it's in Go, which is not particularly common among game developers, but the syntax should be easy enough to follow):

// StepGameplayCam updates the game camera, focused on focusPos.
func StepGameplayCam(focusPos math32.Vector3, delta float64) {
	invertLookMod := float32(1)
	if settings.Control().InvertLook {
		invertLookMod = -1
	}

	// Get the up/down and left/right components and set the angles.
	currentAngleToZ = user.WrapAngleRad(currentAngleToZ + float32(movementVec.X*float32(delta)*settings.Control().CamSpeed*invertLookMod))

	if !((movementVec.Y < 0 && currentAngleToY <= math32.DegToRad(15)) || (movementVec.Y > 0 && currentAngleToY >= math32.DegToRad(165))) {
		currentAngleToY = user.WrapAngleRad(currentAngleToY + float32(movementVec.Y*float32(delta)*settings.Control().CamSpeed*invertLookMod))
	}

	distanceToFocus = maxDistanceToFocus // TODO camera collisions / don't clip through things

	// Set the desired cam position.
	targetPos.X = focusPos.X + distanceToFocus*math32.Cos(currentAngleToZ)*math32.Sin(currentAngleToY)
	targetPos.Z = focusPos.Z + distanceToFocus*math32.Sin(currentAngleToZ)*math32.Sin(currentAngleToY)
	targetPos.Y = focusPos.Y + distanceToFocus*math32.Cos(currentAngleToY)
	camPos.Lerp(targetPos, float32(delta)) // TODO the problem is here: we lerp between vectors, but it needs to be spherical.
	gameCam.SetPositionVec(camPos)

	gameCam.LookAt(&focusPos)

	if settings.Dev().DebugEnabled {
		updateCamDebug(focusPos.X, focusPos.Y, focusPos.Z)
	}
}

I have a strong feeling slerp is what I'm looking for here, but my understanding of 3D math is still developing, and I do not know how to use it in this context.

How can I prevent my camera from "shortcutting" through the imaginary sphere surrounding the player that it is bound to?
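Slerp is indeed the right tool: instead of camPos.Lerp(targetPos, t), keep the focus-to-camera offset on the sphere by slerping the unit offset and interpolating the radius separately. A sketch of what the TODO line could become (Python for brevity; the helper name is mine):

```python
import numpy as np

def slerp_around(focus, cam, target, t):
    """Spherically interpolate the camera position around `focus`.

    Both positions are treated as offsets from the focus point; the unit
    offset is slerped (constant angular speed) and the radius interpolated
    linearly, so the camera sweeps around the sphere instead of cutting
    through it.
    """
    focus = np.asarray(focus, float)
    a = np.asarray(cam, float) - focus
    b = np.asarray(target, float) - focus
    ra, rb = np.linalg.norm(a), np.linalg.norm(b)
    ua, ub = a / ra, b / rb
    theta = np.arccos(np.clip(np.dot(ua, ub), -1.0, 1.0))
    if theta < 1e-6:                 # (nearly) aligned: a plain lerp is fine
        return focus + a + (b - a) * t
    w1 = np.sin((1.0 - t) * theta) / np.sin(theta)   # standard slerp weights
    w2 = np.sin(t * theta) / np.sin(theta)
    radius = ra + (rb - ra) * t
    return focus + (w1 * ua + w2 * ub) * radius

# Halfway between two points 90 degrees apart on the unit sphere:
mid = slerp_around([0, 0, 0], [1.0, 0, 0], [0, 0, 1.0], 0.5)
# mid ≈ (0.707, 0, 0.707) and stays on the sphere, unlike a straight lerp.
```

Antipodal cam and target positions leave the arc ambiguous (sin θ approaches 0), so clamp or special-case θ near π; t would be derived from delta, as in the existing Lerp call.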