field of view – Geocoordinate calculation for aerial oblique image using camera and plane yaw, pitch, roll, and position data

I have a requirement to calculate the ground footprint for an aerial camera. The photos are TerraPhoto images. The TerraPhoto user guide provides the camera position and plane orientation in the .IML file. Additionally, I have the camera calibration file.

In TerraPhoto guide, the yaw, pitch, and roll of the aircraft are defined as follows:

  • yaw (heading): measured clockwise from North
  • roll: positive if the left wing is up
  • pitch: positive if the nose of the aircraft is up

The camera calibration details are as follows:

(TerraPhoto calibration)
Version=20050513
Description= Nikon D800E BW 50mm
TimeOffset= 0.0000
Exposure= 0.00000
LeverArm= 0.0000 0.0000 0.0000
AntennaToCameraOffset= 0.0000 0.0000 0.0000
AttitudeCorrections(HRP)= -0.4546 0.7553 -34.7538
PlateSize= 7630.00000000 4912.00000000
ImageSize= 7630 4912
Margin= 0
FiducialRadius= 40
FiducialMarks= 0
Orientation= BOTTOM
PrincipalPoint(XoYoZo)= -77.40000000 112.80000000 -10476.54389508
LensModel=Balanced
LensK0=0.000000E+000
LensK1=0.000000E+000
LensK2=0.000000E+000
LensP1=0.000000E+000
LensP2=0.000000E+000

Here I see that the AttitudeCorrections for the camera are given. Hence, I believe this is the orientation of the aerial camera relative to the local frame (i.e. the aircraft).

With respect to a given aerial photo, I have the following details, which I obtained from the .IML file (please check page 344 of the guide for more info).

Image=SLR2_443_20150326_144759_C_B_3489_DUBLIN_AREA_2KM2_FL_300_2232888
Time=402972.957799
Xyz=316440.819 234424.606 312.938
Hrp=-113.33234 2.03435 -1.87426
  • Image is the name of the image
  • Xyz is the camera easting, northing, and elevation
  • Hrp is the aircraft heading (yaw), roll, and pitch

With this information at hand, I am attempting to calculate the ground coordinates (footprint) of the image. I intend to use the horizontal FoV and vertical FoV.
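For reference, here is a minimal sketch of how I estimate the two FoV values from the calibration above, assuming PlateSize and the third PrincipalPoint component (the principal distance) are expressed in the same units:

import math

# Assumption: PlateSize and |Zo| from PrincipalPoint(XoYoZo) are in the same
# units, so each FoV is twice the half-angle subtended by half the plate.
plate_w, plate_h = 7630.0, 4912.0
principal_distance = 10476.54389508

FOVh = 2.0 * math.atan((plate_w / 2.0) / principal_distance)
FOVv = 2.0 * math.atan((plate_h / 2.0) / principal_distance)

print(math.degrees(FOVh), math.degrees(FOVv))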

I’ve been attempting this for some time, but I am still unable to estimate the geocoordinates properly. I tried the pinhole model as well. I obtain results around the area of interest, but they do not match the actual geolocations.

I intend to use either the pinhole model or the horizontal and vertical field of view (FoV) to calculate my geocoordinates.

A guide in the right direction is appreciated.

The code for the rotation and FoV calculation is provided below.

import math
import numpy as np

def createRollMatrix(yaw, pitch, roll):
  '''
     Builds a yaw-pitch-roll rotation matrix using the Eigen-formatted
     formulas, ported directly from the Eigen base code to Python.
     Angles are given in degrees.
  '''
  # convert degrees to radians
  yaw = np.radians(yaw)
  pitch = np.radians(pitch)
  roll = np.radians(roll)

  su = np.sin(roll)
  cu = np.cos(roll)
  sv = np.sin(pitch)
  cv = np.cos(pitch)
  sw = np.sin(yaw)
  cw = np.cos(yaw)

  rotation_matrix = np.zeros((3, 3))

  rotation_matrix[0][0] = cv*cw
  rotation_matrix[0][1] = su*sv*cw - cu*sw
  rotation_matrix[0][2] = su*sw + cu*sv*cw

  rotation_matrix[1][0] = cv*sw
  rotation_matrix[1][1] = cu*cw + su*sv*sw
  rotation_matrix[1][2] = cu*sv*sw - su*cw

  rotation_matrix[2][0] = -sv
  rotation_matrix[2][1] = su*cv
  rotation_matrix[2][2] = cu*cv

  return rotation_matrix

#### camera misalignment angles, from AttitudeCorrections(HRP)
yaw = -0.4546
pitch = -34.7538
roll = 0.7553

#### aircraft's yaw, pitch, roll, from the Hrp record
yaw1 = -113.33234
pitch1 = -1.87426
roll1 = 2.03435

R = createRollMatrix(yaw, pitch, roll)
R2 = createRollMatrix(yaw1, pitch1, roll1)

Corrected_R = R2.dot(R)

# extract yaw, pitch, roll (radians) from the combined rotation matrix;
# atan2 keeps the correct quadrant (plain atan loses it beyond +/-90 degrees)
yaw = math.atan2(Corrected_R[1][0], Corrected_R[0][0])

roll = math.atan2(Corrected_R[2][1], Corrected_R[2][2])

pitch = math.atan2(-Corrected_R[2][0],
                   math.sqrt(Corrected_R[2][1]**2 + Corrected_R[2][2]**2))
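
As a quick self-check, rebuilding the matrix from the extracted angles should reproduce Corrected_R (assuming the decomposition matches the yaw-pitch-roll order used in createRollMatrix):

# illustrative sanity check only
R_check = createRollMatrix(np.degrees(yaw), np.degrees(pitch), np.degrees(roll))
print(np.allclose(R_check, Corrected_R))  # expected: True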

Subsequently, I use the following code to calculate the geocoordinates.

import math
import numpy as np 

# pip install vector3d
from vector3d.vector import Vector


class CameraCalculator:
    """Porting of CameraCalculator.java
    This code is a 1to1 python porting of the java code:
        https://github.com/zelenmi6/thesis/blob/master/src/geometry/CameraCalculator.java
    referred in:
        https://stackoverflow.com/questions/38099915/calculating-coordinates-of-an-oblique-aerial-image
    The only parts not ported are those explicitly abandoned or not used at all
    by the main call to the getBoundingPolygon method.
    by: milan zelenka
    https://github.com/zelenmi6
    https://stackoverflow.com/users/6528363/milan-zelenka
    example:
        c=CameraCalculator()
        bbox=c.getBoundingPolygon(
            math.radians(62),
            math.radians(84),
            117.1, 
            math.radians(0),
            math.radians(33.6),
            math.radians(39.1))
        for i, p in enumerate(bbox):
            print("point:", i, '-', p.x, p.y, p.z)
    """

    def __init__(self):
        pass

    def __del__(self):
        pass

    @staticmethod
    def getBoundingPolygon(FOVh, FOVv, altitude, roll, pitch, heading):
        '''Get corners of the polygon captured by the camera on the ground. 
        The calculations are performed in the axes origin (0, 0, altitude)
        and the points are not yet translated to camera's X-Y coordinates.
        Parameters:
            FOVh (float): Horizontal field of view in radians
            FOVv (float): Vertical field of view in radians
            altitude (float): Altitude of the camera in meters
            heading (float): Heading of the camera (z axis) in radians
            roll (float): Roll of the camera (x axis) in radians
            pitch (float): Pitch of the camera (y axis) in radians
        Returns:
            vector3d.vector.Vector: Array with 4 points defining a polygon
        '''
        # import ipdb; ipdb.set_trace()
        ray11 = CameraCalculator.ray1(FOVh, FOVv)
        ray22 = CameraCalculator.ray2(FOVh, FOVv)
        ray33 = CameraCalculator.ray3(FOVh, FOVv)
        ray44 = CameraCalculator.ray4(FOVh, FOVv)

        rotatedVectors = CameraCalculator.rotateRays(
                ray11, ray22, ray33, ray44, roll, pitch, heading)
        
        # origin = Vector(0, 0, altitude)

        # FW ----- SLR1
        # origin = Vector(316645.779, 234643.179, altitude)

        # BW ----- SLR2
        origin = Vector(316440.819, 234424.606, altitude)
        # origin = Vector(316316, 234314, altitude)
        intersections = CameraCalculator.getRayGroundIntersections(rotatedVectors, origin)

        return intersections


    # Ray-vectors defining the camera's field of view. FOVh and FOVv are interchangeable
    # depending on the camera's orientation
    @staticmethod
    def ray1(FOVh, FOVv):
        '''
        Parameters:
            FOVh (float): Horizontal field of view in radians
            FOVv (float): Vertical field of view in radians
        Returns:
            vector3d.vector.Vector: normalised vector
        '''
        ray = Vector(math.tan(FOVv / 2), math.tan(FOVh/2), -1)
        return ray.normalize()

    @staticmethod
    def ray2(FOVh, FOVv):
        '''
        Parameters:
            FOVh (float): Horizontal field of view in radians
            FOVv (float): Vertical field of view in radians
        Returns:
            vector3d.vector.Vector: normalised vector
        '''
        ray = Vector(math.tan(FOVv/2), -math.tan(FOVh/2), -1)
        return ray.normalize()

    @staticmethod
    def ray3(FOVh, FOVv):
        '''
        Parameters:
            FOVh (float): Horizontal field of view in radians
            FOVv (float): Vertical field of view in radians
        Returns:
            vector3d.vector.Vector: normalised vector
        '''
        ray = Vector(-math.tan(FOVv/2), -math.tan(FOVh/2), -1)
        return ray.normalize()

    @staticmethod
    def ray4(FOVh, FOVv):
        '''
        Parameters:
            FOVh (float): Horizontal field of view in radians
            FOVv (float): Vertical field of view in radians
        Returns:
            vector3d.vector.Vector: normalised vector
        '''
        ray = Vector(-math.tan(FOVv/2), math.tan(FOVh/2), -1)
        return ray.normalize()

    @staticmethod
    def rotateRays(ray1, ray2, ray3, ray4, roll, pitch, yaw):
        """Rotates the four ray-vectors around all 3 axes
        Parameters:
            ray1 (vector3d.vector.Vector): First ray-vector
            ray2 (vector3d.vector.Vector): Second ray-vector
            ray3 (vector3d.vector.Vector): Third ray-vector
            ray4 (vector3d.vector.Vector): Fourth ray-vector
            roll float: Roll rotation
            pitch float: Pitch rotation
            yaw float: Yaw rotation
        Returns:
            Returns new rotated ray-vectors
        """
        sinAlpha = math.sin(yaw) #sw OK
        sinBeta = math.sin(pitch) #sv OK
        sinGamma = math.sin(roll) #su OK
        cosAlpha = math.cos(yaw) #cw OK
        cosBeta = math.cos(pitch) #cv OK
        cosGamma = math.cos(roll) #cu OK
        m00 = cosBeta * cosAlpha
        m01 = sinGamma * sinBeta * cosAlpha - cosGamma * sinAlpha
        m02 = sinGamma * sinAlpha + cosGamma * cosAlpha * sinBeta
        m10 = sinAlpha * cosBeta
        m11 = sinAlpha * sinBeta * sinGamma + cosAlpha * cosGamma
        m12 = sinAlpha * sinBeta * cosGamma - cosAlpha * sinGamma
        m20 = -sinBeta
        m21 = cosBeta * sinGamma
        m22 = cosBeta * cosGamma
        
        # Matrix rotationMatrix = new Matrix(new double[][]{{m00, m01, m02}, {m10, m11, m12}, {m20, m21, m22}})
        rotationMatrix = np.array(((m00, m01, m02), (m10, m11, m12), (m20, m21, m22)))

        # Matrix ray1Matrix = new Matrix(new double[][]{{ray1.x}, {ray1.y}, {ray1.z}})
        # Matrix ray2Matrix = new Matrix(new double[][]{{ray2.x}, {ray2.y}, {ray2.z}})
        # Matrix ray3Matrix = new Matrix(new double[][]{{ray3.x}, {ray3.y}, {ray3.z}})
        # Matrix ray4Matrix = new Matrix(new double[][]{{ray4.x}, {ray4.y}, {ray4.z}})
        ray1Matrix = np.array([ray1.x, ray1.y, ray1.z])
        ray2Matrix = np.array([ray2.x, ray2.y, ray2.z])
        ray3Matrix = np.array([ray3.x, ray3.y, ray3.z])
        ray4Matrix = np.array([ray4.x, ray4.y, ray4.z])
        
        res1 = rotationMatrix.dot(ray1Matrix)
        res2 = rotationMatrix.dot(ray2Matrix)
        res3 = rotationMatrix.dot(ray3Matrix)
        res4 = rotationMatrix.dot(ray4Matrix)
        
        rotatedRay1 = Vector(res1[0], res1[1], res1[2])
        rotatedRay2 = Vector(res2[0], res2[1], res2[2])
        rotatedRay3 = Vector(res3[0], res3[1], res3[2])
        rotatedRay4 = Vector(res4[0], res4[1], res4[2])
        rayArray = (rotatedRay1, rotatedRay2, rotatedRay3, rotatedRay4)
        
        return rayArray

    @staticmethod
    def getRayGroundIntersections(rays, origin):
        """
        Finds the intersections of the camera's ray-vectors 
        and the ground approximated by a horizontal plane
        Parameters:
            rays (vector3d.vector.Vector()): Array of 4 ray-vectors
            origin (vector3d.vector.Vector): Position of the camera. The computations were developed
                                            assuming the camera was at the axes origin (0, 0, altitude), and the
                                            results are translated by the camera's real position afterwards.
        Returns:
            vector3d.vector.Vector
        """
        # Vector3d [] intersections = new Vector3d[rays.length];
        # for (int i = 0; i < rays.length; i ++) {
        #     intersections[i] = CameraCalculator.findRayGroundIntersection(rays[i], origin);
        # }
        # return intersections

        # 1-to-1 translation without Python syntax optimisation
        intersections = []
        for i in range(len(rays)):
            intersections.append(CameraCalculator.findRayGroundIntersection(rays[i], origin))
        return intersections

    @staticmethod
    def findRayGroundIntersection(ray, origin):
        """
        Finds a ray-vector's intersection with the ground approximated by a plane
        Parameters:
            ray (vector3d.vector.Vector): Ray-vector
            origin (vector3d.vector.Vector): Camera's position
        Returns:
            vector3d.vector.Vector
        """
        # Parametric form of an equation
        # P = origin + vector * t
        x = Vector(origin.x,ray.x)
        y = Vector(origin.y,ray.y)
        z = Vector(origin.z,ray.z)
        
        # Equation of the horizontal plane (ground)
        # -z = 0
        
        # Calculate t by substituting z
        t = - (z.x / z.y)
        
        # Substitute t in the original parametric equations to get points of intersection
        return Vector(x.x + x.y * t, y.x + y.y * t, z.x + z.y * t)
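
For completeness, this is roughly how I try to tie the pieces together, using the FoV values estimated earlier and the yaw, pitch, and roll extracted from Corrected_R; it is a sketch under those assumptions, not a verified solution.

# Sketch only: FOVh, FOVv and the extracted yaw/pitch/roll are all in radians.

# NOTE: 'altitude' should be the camera height above the ground plane; the
# 312.938 value from the .IML record is an elevation, so a terrain height
# probably needs to be subtracted here.
altitude = 312.938

footprint = CameraCalculator.getBoundingPolygon(FOVh, FOVv, altitude, roll, pitch, yaw)
for i, p in enumerate(footprint):
    print("corner", i, ":", p.x, p.y, p.z)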

camera – Horizontal and vertical field of view calculation when principal point is not image center

I have a requirement to calculate the ground footprint for an aerial camera. I have the camera position and orientation. To calculate the footprint, I need the horizontal field of view (FoV) and the vertical FoV. I found formulas for calculating them, but the formulas assume that the principal point is at the image center (I believe).

What are the correct formulas to apply when the principal point is not the image center? In my case, the principal point deviates from the image center.

Does the principal point have an impact on calculating the horizontal and vertical FoV?
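
My current understanding (which may well be wrong, hence the question) is that an off-centre principal point splits the FoV into two unequal half-angles, something like the following sketch, where f, width, and cx are all in pixels:

import math

# Sketch: asymmetric horizontal FoV for an off-centre principal point,
# assuming a simple pinhole model (f, width, cx in pixels).
def horizontal_fov(f, width, cx):
    left = math.atan(cx / f)              # half-angle left of the optical axis
    right = math.atan((width - cx) / f)   # half-angle right of the optical axis
    return left + right                   # total horizontal FoV in radians

# With cx = width / 2 this reduces to the usual 2 * atan(width / (2 * f)).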

Setting precision of a result for the next calculation

Sorry for the simple question.

I would like to learn how I can change the precision of a value before using it in the next calculation.

Let’s say a = 9.96329×10^-13; this value is so small that it can be taken as zero. However, I don’t know how to make Mathematica treat it as zero by setting its precision.

I need to multiply a by b, which is equal to 10^15, and if I can’t adjust the precision of a, the multiplication will give a result of 10^12, which in reality I want to be zero.
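
For illustration only, this is the behaviour I am after, sketched in Python terms rather than Mathematica: anything below a chosen tolerance should be treated as exact zero before the multiplication.

# illustrative sketch, not Mathematica; the tolerance is arbitrary here
def chop(x, tol=1e-10):
    return 0.0 if abs(x) < tol else x

a = 9.96329e-13
b = 10**15
print(chop(a) * b)  # 0.0, because a is zeroed out first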

Could you help me with this issue?

Best Regards,

c# – Which design pattern to use for a calculation pipeline with lots of varying rules

I’m currently trying to solve a problem with some legacy code that makes some calculations in order to find the final value of a monetary benefit. The legacy code uses an imperative approach with lots of ifs and elses to handle the calculation rules for each kind of benefit, which I believe will be hard to maintain and reuse, since those rules can change drastically based on a change in law (not to mention that the calculation logic is heavily tangled with presentation logic).

So I’ve been trying to find a design pattern which could help me in this situation. My initial thought was to use the Strategy pattern to handle the different kinds of calculations and a factory to choose the correct strategy implementation, but I believe this won’t work out due to the number of different calculations (there are 15 currently, with more to be defined).

So after further research, I found out about the Rule pattern and the Specification pattern. They looked promising, so I tried to implement a solution by adapting them, but I’ve hit some roadblocks.

My implementation basically tries to select the appropriate calculation rules using the Specification pattern and then apply them in the order they’re defined. Each specification would have a pipeline of calculation rules attached to it, and if the specification is suitable, the pipeline of rules would be applied.

Here’s my implementation:

The specification has a set of rules, with a method IsSatisfiedBy defining whether the rule set should be applied, based on the benefit data and the calculation rule, which is registered by a domain expert.

public abstract class BenefitSpecification
{
    protected ICollection<IBenefitRule> Rules { get; set; } = new List<IBenefitRule>();

    public abstract bool IsSatisfiedBy(BenefitData benefitData, CalculationRule benefitRule);

    public abstract void CreateRuleSet(BenefitData benefitData);

    public decimal ApplyTo(BenefitData benefitData)
    {
        decimal total = 0M;
        foreach (var rule in Rules)
        {
            total = rule.ApplyRule(total);
        }
        return total;
    }
}

A hypothetical example of a concrete specification would be:

public class IntegralCalculation : BenefitSpecification
{
    public override bool IsSatisfiedBy(BenefitData data, CalculationRule rule) => rule.IsIntegral;

    public override void CreateRuleSet(BenefitData data)
    {
        Rules.Add(new SumContributions(data.Contributions));
        Rules.Add(new ApplyTax());
        Rules.Add(new MultiplyBy(2));
        Rules.Add(new LimitBy(data.BenefitLimit));
    }
}

The benefit rules would be simple mathematical operations:

public interface IBenefitRule
{
    decimal ApplyRule(decimal value);
}

I’m not sure if I’m overcomplicating things, but the example demonstrated here is a simplified version of the real rules, which have more logic inside them. The reason I’m trying to do it this way is that I want to reuse calculation logic in other specifications and sometimes change the order in which the rules are applied based on the benefit data.

The roadblock I’ve hit is that some necessary information is not available in the BenefitData alone, and to get it I would have to break the interface. I thought about registering the benefit rule in the DI container and accessing the database to get the data, but something about this approach doesn’t feel right.

An example of the problem would be:

public class ReajustContributions : IBenefitRule
{
    /** properties defined  here **/

    /**
     * the reajustIndexes are not available in the BenefitData, so I would
     * have to query the database somehow.
     */
    public ReajustContributions(
       IEnumerable<Contributions> contributions,
       IEnumerable<ReajustIndex> reajustIndexes
    )
    {
       _contributions = contributions;
       _reajustIndexes = reajustIndexes;
    }

    public decimal ApplyRule(decimal value)
    {
       return /** reajusted values **/
    }
}

So my question is: is there a better or simpler design pattern to solve this kind of problem (selection of calculation rules based on business rules)?

Need Suggestion for Numerology Calculation Software or Script

Hi,

I need to know whether, for numerology calculation, developing our own software is a good idea, or whether we should use an API.

In my view (I have seen the formulas and other details), a numerology calculation software or script is a bit expensive if we are thinking of creating our own, and it will also take quite a long time because the process is very long and complex. If anyone has developed one, please let me know your ideas or a time frame for roughly how many days it would take.

If I google for numerology software I get many websites; I see some good sites like astrologyapi, but I would like to look at some more. So if anyone can suggest more websites or software providers, it would be very helpful for me.

Looking for your suggestions on both.

Thanks

 

calculus and analysis – Definite integral gets stuck in the calculation

$f=\frac{\sinh ^{-1}\left(e^{-2 k t} \sinh (6 k)\right)}{2 k}-2$

for $k=20$ I have:

$\frac{df}{dt}=-\frac{e^{-20 t} \sinh (60)}{\sqrt{e^{-40 t} \sinh ^2(60)+1}}$

f = -2 + ArcSinh[E^(-2 k t) Sinh[6 k]]/(2 k)

Integrate[D[f, t], {t, 0, 10}]

And it just gets stuck and doesn’t move on. Then I tried to apply parallel computation, but got the error:

Needs("Parallel`Developer`")

f = -2 + ArcSinh(E^(-2 k t) Sinh(6 k))/(2 k)

Integrate(D(f, t), {t, 0, 10})

Parallelize::nopar1: ∫_0^10 ∂_t f dt cannot be parallelized; proceeding with sequential evaluation.

I would be grateful for help in finding out the reason.

Processor: AMD Ryzen 7 2700 Pro, 16 GB RAM. When calculating this task, the processor is only 8-10% loaded.
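
For reference, a numerical cross-check of the same integrand outside Mathematica finishes almost instantly; here is a sketch in Python (SciPy assumed), following the derivative as written above:

import numpy as np
from scipy.integrate import quad

s = np.sinh(60)

def df_dt(t):
    return -np.exp(-20 * t) * s / np.sqrt(np.exp(-40 * t) * s**2 + 1)

value, error = quad(df_dt, 0, 10)
print(value, error)  # roughly -3, i.e. about -ArcSinh[Sinh[60]]/20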

unity – ECS – Stats, damage-types & damage calculation

Prologue

I’m quite new to data-oriented programming and my goal is to implement a RuneScape-style stats & damage mechanic.

This is quite a complex topic: [Runescape-Mechanics][1], and I haven’t found any ECS-related sources on it yet.

In the following example we see a bunch of items which modify the wearer’s stats based on a few conditions.
This happens in two different variants: either for the damage calculation only, or as a buff.

Brine sabre and Brackish blade increase damage against crabs.
or
Silverlight and Darklight increase ability damage by a scaling of 25-124% against most demons. The exact damage bonus is based on your base Attack and Strength, and the monster's base Defence.

The problem

Such an RPG system is very complex. There are different damage types, different resistances and other stats.
Having a strong OOP background and no real experience in DOP, I can’t find a suitable architecture to fit those needs.

In my current approach every stat is a component. Items and Buffs are structs. An item “buffs” its owner and the buff modifies its stats. This works so far, but I have no idea how I could realise the damage calculation while keeping it as flexible as it is in RuneScape.
This little example would just be able to buff the stats… not deal, receive or modify the damage itself.


// The stats

public struct Health{
  float max;
  float value;
}

public struct MeeleDamage{
  float baseValue;  // "base" is a reserved keyword in C#
  float value;
}

public struct MeeleResistence{
  float baseValue;
  float value;
}

// Item & Buffs
public struct Item{
  string name;
  int amount;
  bool equipable;
  List<Buff> buffs;
}

public struct Buff{
  string name;
  float duration;
  Condition applyable;
  ToBuff stat;
  float value;
}

It’s also important that an entity can deal damage to multiple other entities in one frame.
How would you implement such a complex mechanic? Any examples are appreciated!

google sheets – Google Sheets Date and Time Calculation – Automation

I have a column that has 34 records of week day, month/day, and times. I am looking for two formulas that I can use in a table that will give me the count of weekdays and the time duration per day. Eventually, I would like to just copy and paste new dates into column A and have the table recalculate automatically. Here is my Google Sheets example. Is there a way to do this without creating helper columns? If not, no big deal. Anything to help automate the process will be helpful.

https://docs.google.com/spreadsheets/d/1C6N94QJyEgm-2yg2SEDOweIU2fk2h2DLydKb-nH-ObE/edit?usp=sharing

unity – Why does my in-game frame calculation not correlate with animation length?

I need to play an animation in such a way that each frame is for sure being played and not skipped.
To do that, I use a very slow animator speed like this:

_animator.speed = 0.01f;

The Unity editor says this about the animation:

(Inspector screenshot: the animation clip is 51 frames long)

To make sure that each frame is really shown / played, I have implemented my own counter.
My counter, however, returns 43 frames and not 51 frames as the Inspector shows.
No matter how often I repeat this process, it always returns 43 frames.

What am I missing?

Thank you!

private void LateUpdate()
{
    float f = _animator.GetCurrentAnimatorClipInfo(0)[0].clip.length
              * (_animator.GetCurrentAnimatorStateInfo(0).normalizedTime % 1)
              * _animator.GetCurrentAnimatorClipInfo(0)[0].clip.frameRate;
    int iCurrentFrameIDInAnimationFile = (int)f;

    bool bIsNewFrame = (iCurrentFrameIDInAnimationFile != _iLastFrameIDInAnimationFile);

    if (!bIsNewFrame)
    {
        return;
    }
    else if (bIsNewFrame)
    {
        _iFrameCount += 1;
    }

    _iLastFrameIDInAnimationFile = iCurrentFrameIDInAnimationFile;

    if (_animator.GetCurrentAnimatorStateInfo(0).normalizedTime > 1 && !_animator.IsInTransition(0))
    {
        //animation has finished playing
        Debug.Log("Frames: " + _iFrameCount); //this returns 43 frames. Why???
    }
}

Calculation of a series

It seems that we have:

$$\sum_{n\geq 1} \frac{2^n}{3^{2^{n-1}}+1}=1.$$

Please, how can one prove it?
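
As a quick numerical sanity check (not a proof), the partial sums can be computed, for example in Python:

# terms decay double-exponentially, so a few terms already get extremely close to 1
partial = sum(2**n / (3**(2**(n - 1)) + 1) for n in range(1, 10))
print(partial)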