Unity – Manage audio in an RTS – when to play sounds?

I'm building an RTS and have several hundred units on screen at the same time.

Whenever a unit attacks or collects something, I want to play a sound (if the camera is close enough).

My question is: how do most RTS games handle this? Do I play one sound per attack per unit, i.e. if 200 units fight, do I play 200 individual sounds (one each time a unit attacks)?

The same goes for gathering: if 20 workers are collecting wood, do I play a sound every time they hit a tree?

Doesn't that use a lot of resources? Is there a better way?
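For what it's worth, the usual technique (my sketch, not something stated in the question) is "voice limiting": cap how many instances of each sound may play at once and drop the rest, since a few overlapping copies of a clip are audibly indistinguishable from two hundred. An engine-agnostic Python sketch; the `SoundPool` name and all numbers are invented for illustration:

```python
class SoundPool:
    """Caps how many instances of one sound may play at once (voice limiting)."""

    def __init__(self, max_voices, clip_length):
        self.max_voices = max_voices    # hard cap on simultaneous instances
        self.clip_length = clip_length  # seconds a voice stays "busy"
        self.active = []                # start times of currently playing voices

    def try_play(self, now):
        # Free voices whose clips have finished playing.
        self.active = [t for t in self.active if now - t < self.clip_length]
        # Reject the request if the pool is already full.
        if len(self.active) >= self.max_voices:
            return False
        self.active.append(now)
        return True

# 200 units all "attack" in the same frame, but only a handful of sounds start.
pool = SoundPool(max_voices=8, clip_length=0.5)
played = sum(pool.try_play(now=0.0) for _ in range(200))
# played == 8: the other 192 requests are silently dropped
```

In Unity terms this would sit in front of the `AudioSource.PlayOneShot` calls, typically with one pool per sound type (attack, wood chop, etc.).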

2d – How to generate a random map with islands (like Earth), using a Voronoi diagram, on a tilemap in Unity?

I'm developing a game in Unity and I'm trying to build a map generator capable of creating random, Earth-like maps on a 2D tilemap. The map should contain 2 types of tiles (let's call them grass and soil): grass should be the "main" tile, with patches of soil on top of the grass.

This has already been asked here. My goal is to create something like the second picture in that question (that picture shows water and grass, but it doesn't really matter).

One of the answers suggested creating a Voronoi diagram, and this is what I have done so far:

  1. Choose random points on the tilemap and mark them as "center points" (each tile has a probability of CenterPointProbability = 0.15 of being a center point). Each center point is randomly assigned grass or soil (with CoinProbability = 0.5).
  2. For each tile on the tilemap which is not a center point, check whether its closest center point is grass or soil, and set its type accordingly.

I did this, but I ended up with maps that are not exactly islands. In addition, the "islands" are diagonal:
Example 1
Example 2
Example 3

I tried changing CenterPointProbability, but the results were no better. It should be noted that when implementing step 2 above, I look for the closest center point by searching the tilemap in an outward spiral from the cell in question, increasing the distance on each iteration, until a center point is found:
(image)

And finally, I also tried Perlin noise, as suggested in the question I mentioned, but it was even worse: no islands were produced, mostly single tiles.

The main question is therefore: how do I generate a random "water" map with land "islands" (like Earth) using a Voronoi diagram? Or at least, how can I improve my current algorithm?
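As one possible direction (an assumption on my part, not from the question): an outward ring-by-ring spiral search measures distance in grid rings rather than true straight-line distance, which can skew cell shapes diagonally, and coin-flip seeds will never cluster into islands. The sketch below keeps the Voronoi idea but uses squared Euclidean distance for cell assignment and biases seeds toward water near the map edges; it is engine-agnostic Python, and every name and constant in it is made up:

```python
import math
import random

def generate_map(width, height, n_seeds=60, seed=None):
    """Voronoi tilemap: seeds near the map edge tend to be water, seeds near
    the center tend to be land, so land clusters into central island shapes."""
    rng = random.Random(seed)
    cx, cy = width / 2.0, height / 2.0
    max_d = math.hypot(cx, cy)
    seeds = []
    for _ in range(n_seeds):
        x, y = rng.randrange(width), rng.randrange(height)
        d = math.hypot(x - cx, y - cy) / max_d  # 0 at center .. 1 at corner
        # Land probability falls off toward the edges instead of a pure coin flip.
        tile = "land" if rng.random() < 1.0 - 1.2 * d else "water"
        seeds.append((x, y, tile))
    # Assign each tile the type of its nearest seed, using squared Euclidean
    # distance rather than a ring-based spiral search.
    return [[min(seeds, key=lambda s: (s[0] - x) ** 2 + (s[1] - y) ** 2)[2]
             for x in range(width)]
            for y in range(height)]

tiles = generate_map(40, 30, seed=7)
```

The same idea maps directly onto the existing step 1/step 2 structure: only the seed-type rule and the distance metric change.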

Create holes in the mesh (Unity)

I'm making a game where field of vision plays a big role in the gameplay. I have created a field-of-view script that takes an empty game object and uses it to hide everything outside the FOV. Now I want to darken everything outside it. I was thinking of using a plane that is just transparent black, and reusing the code that already builds the FOV mesh to cut a hole in that plane, but I have no idea how to do this. Does anyone here have any input? The code I am using is:

using UnityEngine;
using System.Collections;
using System.Collections.Generic;

// Problem: the mesh renders underneath; the origin point doesn't line up.
public class FieldOfView : MonoBehaviour {

    public float viewRadius;
    public float viewAngle;

    public Transform spawnPos;
    public LayerMask targetMask;
    public LayerMask obstacleMask;

    public List<Transform> visibleTargets = new List<Transform>();

    public float meshResolution;
    public int edgeResolveIterations;
    public float edgeDstThreshold;

    public MeshFilter viewMeshFilter;
    Mesh viewMesh;

    void Start() {
        viewMesh = new Mesh ();
        viewMesh.name = "View Mesh";
        viewMeshFilter.mesh = viewMesh;

        StartCoroutine ("FindTargetsWithDelay", .2f);
    }

    IEnumerator FindTargetsWithDelay(float delay) {
        while (true) {
            yield return new WaitForSeconds (delay);
            FindVisibleTargets ();
        }
    }

    void LateUpdate() {
        DrawFieldOfView ();
    }

    void FindVisibleTargets() {
        visibleTargets.Clear ();
        Collider[] targetsInViewRadius = Physics.OverlapSphere (transform.position, viewRadius, targetMask);

        for (int i = 0; i < targetsInViewRadius.Length; i++) {
            Transform target = targetsInViewRadius[i].transform;
            Vector3 dirToTarget = (target.position - transform.position).normalized;
            if (Vector3.Angle (transform.forward, dirToTarget) < viewAngle / 2) {
                float dstToTarget = Vector3.Distance (transform.position, target.position);
                if (!Physics.Raycast (transform.position, dirToTarget, dstToTarget, obstacleMask)) {
                    visibleTargets.Add (target);
                }
            }
        }
    }

    void DrawFieldOfView() {
        int stepCount = Mathf.RoundToInt(viewAngle * meshResolution);
        float stepAngleSize = viewAngle / stepCount;
        List<Vector3> viewPoints = new List<Vector3> ();
        ViewCastInfo oldViewCast = new ViewCastInfo ();
        for (int i = 0; i <= stepCount; i++) {
            float angle = transform.eulerAngles.y - viewAngle / 2 + stepAngleSize * i;
            ViewCastInfo newViewCast = ViewCast (angle);

            if (i > 0) {
                bool edgeDstThresholdExceeded = Mathf.Abs (oldViewCast.dst - newViewCast.dst) > edgeDstThreshold;
                if (oldViewCast.hit != newViewCast.hit || (oldViewCast.hit && newViewCast.hit && edgeDstThresholdExceeded)) {
                    EdgeInfo edge = FindEdge (oldViewCast, newViewCast);
                    if (edge.pointA != Vector3.zero) {
                        viewPoints.Add (edge.pointA);
                    }
                    if (edge.pointB != Vector3.zero) {
                        viewPoints.Add (edge.pointB);
                    }
                }
            }

            viewPoints.Add (newViewCast.point);
            oldViewCast = newViewCast;
        }

        int vertexCount = viewPoints.Count + 1;
        Vector3[] vertices = new Vector3[vertexCount];
        int[] triangles = new int[(vertexCount - 2) * 3];

        vertices[0] = Vector3.zero;
        for (int i = 0; i < vertexCount - 1; i++) {
            vertices[i + 1] = transform.InverseTransformPoint(viewPoints[i]);

            if (i < vertexCount - 2) {
                triangles[i * 3] = 0;
                triangles[i * 3 + 1] = i + 1;
                triangles[i * 3 + 2] = i + 2;
            }
        }

        viewMesh.Clear ();

        viewMesh.vertices = vertices;
        viewMesh.triangles = triangles;
        viewMesh.RecalculateNormals ();
    }

    EdgeInfo FindEdge(ViewCastInfo minViewCast, ViewCastInfo maxViewCast) {
        float minAngle = minViewCast.angle;
        float maxAngle = maxViewCast.angle;
        Vector3 minPoint = Vector3.zero;
        Vector3 maxPoint = Vector3.zero;

        for (int i = 0; i < edgeResolveIterations; i++) {
            float angle = (minAngle + maxAngle) / 2;
            ViewCastInfo newViewCast = ViewCast (angle);

            bool edgeDstThresholdExceeded = Mathf.Abs (minViewCast.dst - newViewCast.dst) > edgeDstThreshold;
            if (newViewCast.hit == minViewCast.hit && !edgeDstThresholdExceeded) {
                minAngle = angle;
                minPoint = newViewCast.point;
            } else {
                maxAngle = angle;
                maxPoint = newViewCast.point;
            }
        }

        return new EdgeInfo (minPoint, maxPoint);
    }

    ViewCastInfo ViewCast(float globalAngle) {
        Vector3 dir = DirFromAngle (globalAngle, true);
        RaycastHit hit;

        if (Physics.Raycast (transform.position, dir, out hit, viewRadius, obstacleMask)) {
            return new ViewCastInfo (true, hit.point, hit.distance, globalAngle);
        } else {
            return new ViewCastInfo (false, transform.position + dir * viewRadius, viewRadius, globalAngle);
        }
    }

    public Vector3 DirFromAngle(float angleInDegrees, bool angleIsGlobal) {
        if (!angleIsGlobal) {
            angleInDegrees += transform.eulerAngles.y;
        }
        return new Vector3(Mathf.Sin(angleInDegrees * Mathf.Deg2Rad), 0, Mathf.Cos(angleInDegrees * Mathf.Deg2Rad));
    }

    public struct ViewCastInfo {
        public bool hit;
        public Vector3 point;
        public float dst;
        public float angle;

        public ViewCastInfo(bool _hit, Vector3 _point, float _dst, float _angle) {
            hit = _hit;
            point = _point;
            dst = _dst;
            angle = _angle;
        }
    }

    public struct EdgeInfo {
        public Vector3 pointA;
        public Vector3 pointB;

        public EdgeInfo(Vector3 _pointA, Vector3 _pointB) {
            pointA = _pointA;
            pointB = _pointB;
        }
    }
}

Thank you.

How to use Unity with C++?

The Visual Studio Tools for Unity are mainly intended for writing C# scripts. They don't expose a C++ script API.

The reason they appear when you work with C++ in Visual Studio is that they come with VS itself, not with C++ specifically. VS lets you develop in several different languages from a single installation, so it doesn't rule out that you may want to use C++ for some projects and Unity C# for others.

You can use C++ to create a native plugin to call from Unity scripts, although you generally only need to do this to access platform-specific functionality that the engine does not expose to the C# API in a platform-independent manner.

unity – How to find the central position of two or more objects?

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class MoveCameraBehind : MonoBehaviour
{
    public GameObject camera;
    public List<GameObject> targets = new List<GameObject>();
    public float cameraDistance = 10.0f;
    public bool behindMultipleTargets = false;
    public string cameraWarningMsgs = "";
    public string targetsWarningMsgs = "";

    // Use this for initialization
    void Start()
    {
        if (camera == null)
        {
            var cam = GetComponent<Camera>();
            if (cam != null)
            {
                cameraWarningMsgs = "Getting camera component.";
                camera = transform.gameObject;
            }
            else
            {
                cameraWarningMsgs = "Creating a new camera component.";
                GameObject NewCam = Instantiate(new GameObject(), transform);
                NewCam.name = "New Camera";
                camera = NewCam;
            }
        }

        if (targets.Count == 0)
            targetsWarningMsgs = "No targets found.";
    }

    void FixedUpdate()
    {
        if (targets.Count > 0)
            MoveCameraToPosition();
    }

    public void MoveCameraToPosition()
    {
        if (targets.Count > 1 && behindMultipleTargets == true)
        {
            var center = CalculateCenter();
            transform.position = new Vector3(center.x, center.y + 2, center.z + cameraDistance);
        }

        if (behindMultipleTargets == false)
        {
            Vector3 center = targets[0].transform.position - targets[0].transform.forward * cameraDistance;
            transform.position = new Vector3(center.x, center.y + 2, center.z);
        }
    }

    private Vector3 CalculateCenter()
    {
        Vector3 center = new Vector3();

        var totalX = 0f;
        var totalY = 0f;
        foreach (var target in targets)
        {
            totalX += target.transform.position.x;
            totalY += target.transform.position.y;
        }
        var centerX = totalX / targets.Count;
        var centerY = totalY / targets.Count;

        center = new Vector3(centerX, centerY);

        return center;
    }
}
The CalculateCenter function causes the targets (objects) to shift position and disappear into the distance, even when there is only one target.

What I want to do is: if there is one object, for example a 3D cube, position the camera behind the cube. And if there are more cubes, for example two or ten, and the camera is elsewhere, calculate the central position behind the targets and place the camera in the middle behind them.

To show what I mean: in this example, the view (like a camera) is behind the two soldiers, in the middle position between them, from behind.

But what if there are 5 soldiers: how can I find the middle position and then place the camera behind them, as in the screenshot example?

See the example
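For the "middle position" itself, the standard answer is the centroid: average the positions on all three axes (the snippet above averages only x and y, which zeroes the z component), then step back along the group's facing direction. An engine-agnostic Python sketch; the helper names and the example numbers are mine, not from the question:

```python
def centroid(points):
    """Average of 3D positions on all three axes (the original code dropped z)."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def camera_position(targets, forward, distance=10.0, height=2.0):
    """Place the camera 'distance' behind the group's centroid, opposite the
    group's forward vector, raised by 'height' (mirrors the single-target
    'position - forward * cameraDistance' logic in the question)."""
    cx, cy, cz = centroid(targets)
    fx, fy, fz = forward
    return (cx - fx * distance, cy - fy * distance + height, cz - fz * distance)

# Two soldiers side by side, both facing +z: the camera ends up centered
# between them and behind them.
soldiers = [(0.0, 0.0, 5.0), (4.0, 0.0, 5.0)]
pos = camera_position(soldiers, forward=(0.0, 0.0, 1.0))
# pos == (2.0, 2.0, -5.0)
```

The same math works for 2, 5, or 50 soldiers; only the list of positions changes.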

graphics – Unity – The depth texture is too pixelated

I am trying to combine two cameras: a background camera and a foreground camera.

I created 3 cameras in my project: two for the background and the foreground, and one for the depth of the foreground.

I then created a simple shader which combines the first two cameras (they render to textures, as does the foreground depth). The problem I have is that because the depth buffer is too pixelated, the result looks odd and you can clearly see lines around the foreground (players, in my case).

I have created a depth camera with these properties:

Unity camera properties

Notice the Output Texture; I set the render texture to:

Depth rendering texture properties

Here is the resulting depth texture (zoomed so you can see the pixels):

(image)

Any idea how I can create this effect using something other than the depth buffer? Or can I improve the quality of the depth? What can I do to achieve a good end result?
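One common mitigation (my suggestion, not something the question states): the hard outlines come from a binary per-pixel foreground/background decision in the combine shader, and blending over a small depth range instead, the way "soft particles" do, hides the quantization. The math, sketched engine-agnostically in Python (the function names and the 0.02 softness value are invented; in practice this would live in the shader):

```python
def smoothstep(edge0, edge1, x):
    """Clamped Hermite interpolation, same shape as HLSL's smoothstep()."""
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def blend_factor(fg_depth, bg_depth, softness=0.02):
    """0 = show background, 1 = show foreground, with a soft transition over
    a small depth range instead of a hard per-pixel cut."""
    return smoothstep(0.0, softness, bg_depth - fg_depth)

# Foreground clearly in front of the background: fully foreground.
a = blend_factor(0.30, 0.40)   # 1.0
# Depths nearly equal (the jagged silhouette region): partial blend.
b = blend_factor(0.399, 0.40)  # between 0 and 1
```

Separately, raising the depth render texture's resolution and depth precision (e.g. a 24-bit depth format) also reduces the blockiness at its source.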

rotation – Unity Particle System: how to rotate the particle effect?

(image)

My particle system creates a "puff" of smoke.
When I rotate the particle system, it doesn't rotate the effect, just the way the particle is drawn.
(image)

What I want is that when I shoot a wall, the puff spreads horizontally rather than vertically, and vertically when I hit the ground. I want it to follow the normal of the surface I raycast against.

My problem is that the texture does not rotate with its parent's rotation.
(image)

Do I have to write my own custom simulation space to be able to rotate the effect?
Assigning the parent transform does nothing.

Prefab structure:
(image)

Particle parameters:
(image)

(image)

(image)

Unity – Problem with instantiating buttons and variables in Unity3D

I'm currently trying to create a system where an inventory of buttons is created and, at runtime, the buttons are placed in a GUI component (a Panel). I use a list of buttons that is built at startup, creating X buttons for the X spells found for that player, and for some reason the created button does not register any changes made to it:

    public void createButton(int index)
    {
        var panel = GameObject.Find("ButtonPanel");
        Spell spell = GlobalControl.Instance.spells[index];
        GameObject button1 = (GameObject)Instantiate(ButtonPrefab);
        button1.GetComponent<Text>().text = spell.spellname;
        // get the previous button position
        float yindex = 130.0f - spellinv.Count * 35.0f;
        button1.GetComponent<RectTransform>().SetInsetAndSizeFromParentEdge(RectTransform.Edge.Left, yindex + 0, 10);
        button1.layer = 5;
    }

It runs without errors, but the RectTransform does not change position; the button stays right at (0,0,0), and the text is always the prefab's default "Button". It seems that nothing I apply to the button sticks, except for the layer setting.

How can I position the pivot point of a "GameObject" at the center of text added by "TextMeshPro" in Unity?

How can I position the pivot point of a GameObject at the center of a text added by TextMeshPro in Unity?

(image)

Oh, I just checked: when the text changes, the yellow rectangle is not updated. What I need is for the pivot point to be at the vertical and horizontal center of the text added by TextMeshPro.

(image)
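If it helps, a RectTransform pivot is a normalized 0..1 coordinate within the rect, so "pivot at the text center" reduces to mapping the rendered text bounds center (e.g. TextMeshPro's textBounds) into that normalized space. The conversion, sketched in Python with invented names (in Unity the inputs would come from the RectTransform's rect and the TMP text bounds, both in the same local space):

```python
def pivot_for_center(rect_min, rect_size, bounds_center):
    """Convert a point in the rect's local space into a normalized pivot (0..1).

    rect_min:      bottom-left corner of the RectTransform's rect
    rect_size:     (width, height) of the rect
    bounds_center: center of the rendered text bounds, same local space
    """
    return tuple((bounds_center[i] - rect_min[i]) / rect_size[i] for i in range(2))

# Rect from (-50, -25) sized 100x50; rendered text bounds centered at (10, 0).
pivot = pivot_for_center((-50.0, -25.0), (100.0, 50.0), (10.0, 0.0))
# pivot == (0.6, 0.5)
```

One caveat: changing the pivot shifts the rect on screen unless you also offset the anchored position by the pivot delta multiplied by the rect size.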

c# – Do I really need Unity for a 2D, top-down, grid- and turn-based strategy game?

I'm sorry if this is a fairly general question, but after many hours of working with Unity, watching tutorials, checking Google, etc., I don't know where to turn. Although I have found many topics on this subject on various sites, most are either very old, for a very different type of game, unanswered, etc.

As the title suggests, I want to create a 2D, top-down, grid- and turn-based strategy game. I tried to use Unity because I thought it would let me avoid reinventing the wheel, take advantage of various Unity optimizations, etc. That said, it looks like I don't need about 90% of what Unity offers (physics, 3D, etc.).

The main problem is that I hate working with Unity: I find it extremely counter-intuitive and frustrating. Meanwhile, working with C# code can of course be difficult, but I usually find it mentally stimulating and interesting, while with Unity I feel like I'm banging my head against the wall.

At first I thought the steep learning curve with Unity would pay off later once I understood it, but at this point I'm not so sure. Would it be a mistake to abandon Unity and focus on working in pure C#? It's supposed to be a fun hobby project!

Thanks in advance for any words of wisdom on this topic, or for links to other useful discussion threads …