unity – Issues assigning a shader material at runtime

I’m facing a problem when assigning a shader material in an Android build.

IEnumerator LoadMat()
{
    // Load the outline material asynchronously from a Resources folder
    ResourceRequest resourceRequest = Resources.LoadAsync<Material>("Solid_frame_inside_under");
    while (!resourceRequest.isDone)
    {
        yield return null;
    }

    Material outlineMaterial = resourceRequest.asset as Material;
    rend.material = outlineMaterial;
    Debug.Log("Material Loaded");
}

I’m using this hopefully correct function to assign an outline shader material to 10 game objects instantiated in the scene.

This coroutine gets called in Awake, but when it changes the material to that one, Unity freezes, and then it continues the game with the outline shader material assigned and working. Am I doing something wrong in the async function?

This function is in a script that gets attached to the game objects as soon as they are instantiated. That is done by the following function, which lives on another game object.

for (int i = 0; i < selectedArray.GetLength(0); i++)
{
    spawnedGameObjects[i] = new GameObject();
    spawnedGameObjects[i].name = "puzzlePieces" + i;
    spawnedGameObjects[i].tag = "puzzle";
    spawnedGameObjects[i].AddComponent<ClickOn>();
}

The behavior is as follows: I run the game on my mobile (Android), the scene starts, the debug logs come back, my timer is stuck at 5 for a couple of seconds and the game is not playable, and then, once the outlines are applied, it starts counting down.

My runtime is too long when removing elements from my array in JavaScript

I have two arrays, arr_of_frequencies and float_array. Each has the same number of values, around 500–800k depending on the inputs. I have to get that down to somewhere between 200k and 300k values per array. Below is the code I’ve been using so far. It works, but it takes around 15 seconds to run, which is about 14 seconds too long.

function get_freq_for_each_sweep_value() {
    if (frequency_span > 3000000000) {
        while (float_array.length > 300000) {
            // Keep one element, then splice out the next three (each splice is O(n))
            for (var a = 0; a < float_array.length; a++) {
                float_array.splice(a, 3);
                arr_of_frequencies.splice(a, 3);
            }
        }
    } else {
        while (float_array.length > 300000) {
            // Starting at index 2, splice out two elements at a time
            for (var a = 3 - 1; a < float_array.length; a += 3) {
                float_array.splice(a, 2);
                arr_of_frequencies.splice(a, 2);
            }
        }
    }

    return [arr_of_frequencies, float_array];
}

What I’m trying to do is take my original array and then cut out a certain number of values. If there are 500k values, then I want to cut 3 out of every 4, so that the array [1,2,3,4,5,6,7,8,9] becomes just [1,5,9].

These arrays have to be passed along to a graphing API in Python that will plot the points, but will only plot up to 300k points at most. The input (frequency_span) determines how many values there are: the higher the span, the more values there will be. The code I have above is working right now, but it is not working well. Any way to improve the time, or a better approach to my problem, would be appreciated.
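For reference, the thinning itself can be done in one linear pass rather than with repeated splice calls (each splice shifts everything after the removal point, so the loops above end up quadratic). A sketch, where decimate and stride are names made up for illustration:

```javascript
// Keep every `stride`-th element of an array in a single O(n) pass,
// instead of repeatedly calling splice (which is O(n) per call).
function decimate(arr, stride) {
    return arr.filter(function (_, i) {
        return i % stride === 0;
    });
}

var sample = [1, 2, 3, 4, 5, 6, 7, 8, 9];
decimate(sample, 4); // → [1, 5, 9]
```

Applying the same decimate call with the same stride to both float_array and arr_of_frequencies keeps the two arrays aligned.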

Recommendation for Cloud Service (Linux + Python + Python packages, Calculation once a day, Runtime 2 hours)

I am in search of a cloud service for my software project. The prerequisites are the following:

  • It runs on Linux in a Conda environment with Python 3.7 and several Python packages; install time is about 20 minutes
  • The program has to run once a day for about 3 hours
  • It requires 4 CPUs, 16 GB RAM, and 100 GB of storage

Can you recommend a cloud architecture that is suitable for this? Docker is an option for me.

c# – Is there a way to create an animation from an array of 2D sprites at runtime that works in a build in Unity

Hi, so I have an array of sprites that is created at runtime, and I would like to create an animation from it, so the specific animation would also exist at runtime only; it would act as a preview animation.
I tried something like this:

public void CreateAnimation()
{
    Sprite[] sprites = createSprite();

    AnimationClip animClip = new AnimationClip();

    AnimationClipSettings newSettings = new AnimationClipSettings();
    newSettings.loopTime = true;
    AnimationUtility.SetAnimationClipSettings(animClip, newSettings);

    EditorCurveBinding spriteBinding = EditorCurveBinding.PPtrCurve("", typeof(SpriteRenderer), "m_Sprite");

    float interval = 1 / 4f;
    ObjectReferenceKeyframe[] spriteKeyFrames = new ObjectReferenceKeyframe[sprites.Length + 1];

    for (int i = 0; i < spriteKeyFrames.Length; i++)
    {
        if (i == spriteKeyFrames.Length - 1)
        {
            // The last keyframe loops back to the first sprite
            spriteKeyFrames[i].time = i * interval;
            spriteKeyFrames[i].value = sprites[0];
        }
        else
        {
            spriteKeyFrames[i].time = i * interval;
            spriteKeyFrames[i].value = sprites[i];
        }
    }

    AnimationUtility.SetObjectReferenceCurve(animClip, spriteBinding, spriteKeyFrames);

    AnimatorOverrideController animatorOverrideController = new AnimatorOverrideController();
    animatorOverrideController.runtimeAnimatorController = animC.runtimeAnimatorController;
    animatorOverrideController["TestAnim"] = animClip;
    animC.runtimeAnimatorController = animatorOverrideController;
}

inspired by this video: https://www.youtube.com/watch?v=1HEAUZa-ZBM

and this post: https://answers.unity.com/questions/1080430/create-animation-clip-from-sprites-programmaticall.html

I managed to create the animation clip, but I can’t play the animation, and even if I could, from what I understand from other comments and posts this method would not work in a build because it uses the UnityEditor namespace.
So now I am not sure if there is any way of creating this kind of animation at runtime that also works in a build.

Any help would be appreciated thanks 🙂

Assign schema/table name at runtime using dapper.contrib

I’m doing a PoC to check the feasibility of having multiple tenants in the same SQL DB. Each tenant will have their own schema, e.g. tenant1.company, tenant2.company, etc.

I’m using Dapper.Contrib, so I’ve tried the following approach using “SqlMapperExtensions.TableNameMapper” to set the table name by reading the tenant name from the request. But the problem is that the table mapper is static, so even if I’m able to determine the schema/table name at runtime, I cannot have a separate mapper for each request.

Code snippet — model class:

[MyDapperTableName("Table1")]
public class Table1Model

Table mapper:

public static class MyDapperTableNameMapper
{
    public static SqlMapperExtensions.TableNameMapperDelegate TableNameMapper()
    {
        return (type) =>
        {
            var dapperTableAttribute = Attribute.GetCustomAttribute(type, typeof(MyDapperTableNameAttribute)) as MyDapperTableNameAttribute;
            var tablename = (dapperTableAttribute != null) ? dapperTableAttribute.TableName : type.Name;

            return $"{schema}.{tablename}"; // here I somehow have to fetch the schema from the request
        };
    }
}

At most, this solution would allow me to set the schema once.
So the problem with this approach is that, since the table mapper is static, multiple simultaneous requests from different tenants would not work. How do I get instance-specific mapping of the schema/table?

Any suggestions?

sorting – How to detect at runtime when to use radix sort

I’m implementing a stable integer sorting algorithm, and I’ve chosen radix sort. I’ve tested LSD vs MSD implementations and wrote an MSD/LSD hybrid implementation to reduce bandwidth pressure. The repo is here: https://github.com/KirillLykov/int-sort-bmk

Now, some figures to discuss.

First of all, the radix sorts significantly outperform stable_sort on shuffled unique values in the range 0..N, where N is the number of elements in the array:

Sorting shuffled unique values in the range 0..N

On an input of randomly generated (uniform) numbers in the range 0..1e9, LSD outperforms the other sorting algorithms, but MSD doesn’t:

Sorting uniformly distributed values in the range 0..1e9

I don’t have a good explanation for the last observation.
But it hinted at developing a combined radix sort implementation which relies on MSD first and, when the size of the range drops below 2^14, switches to LSD.

I’ve benchmarked these implementations on different distributions:

Performance of different sorting algorithms on various distributions

The observation is that although radix sort wins on random data, it performs poorly in some specific cases.

I think a good approach is to define a criterion for when (and when not) to use radix sort, depending on the input data.
This approach is implemented in boost::spreadsort — knowing the size of the input and the min/max values, it decides whether to apply bucket sort one more time or to fall back on another sorting algorithm. See, for example, the boost::spreadsort implementation.
It is probably possible to adapt the same analysis to radix sort, but I’m not sure, because I couldn’t reverse-engineer the math behind these checks (I’ve read the report).

So the question is: do you have any ideas on how to develop a criterion for choosing between radix sort and other sorting algorithms, based on some (cheap to compute) observations about the input data?
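To make “cheap to compute” concrete, here is the kind of check I have in mind, sketched in JavaScript for brevity (preferRadix and digitBits are made-up names): compare the number of radix passes implied by the observed key range against the log factor of a comparison sort.

```javascript
// Cheap heuristic: LSD radix sort with b-bit digits does ceil(bits/b) passes,
// each O(n); a comparison sort does O(n log2 n) work. Prefer radix when the
// pass count is below log2(n). min/max are assumed known (or sampled).
function preferRadix(n, min, max, digitBits) {
    if (n < 2) return false;
    var bits = Math.ceil(Math.log2(max - min + 1)); // bits needed for the key range
    var radixPasses = Math.ceil(bits / digitBits);
    return radixPasses < Math.log2(n);
}

preferRadix(1 << 20, 0, (1 << 20) - 1, 8); // 3 passes vs log2(n) = 20 → true
```

This ignores cache and bandwidth effects, which the benchmarks above show matter, so it is only a starting point for the criterion, not the criterion itself.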

dependency injection – Is there a proper way of implementing runtime control of dependencies using DI? Is factory pattern okay?

I’m currently brushing up on and learning about a bunch of techniques that I hope to begin using in my own workflow, one of which is IoC (and DI in particular).

I’m hoping someone could clear up the confusion I have after reading two articles on the subject:

In this post, the author seems to demonstrate that you can use the factory pattern alongside DI, with the goal of enabling runtime control of which implementation of the dependency is used.

In this Microsoft doc, they seem to recommend avoiding this approach (or rather, to avoid mixing it, or any service locator pattern, with DI). I’m not sure if this means there’s always a better alternative, or rather that it should simply be avoided in most scenarios, with some exceptions where there’s merit (e.g. runtime control).


I guess another way to view the question could be: when using DI, should runtime control of dependencies be avoided just as much as mixing in the service locator pattern, so as to reduce the need for things like a service locator in the first place?

I’m writing this with pretty much no experience using DI yet, so apologies if I’m somehow missing the big picture.
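For concreteness, the arrangement from the first article — injecting a factory that picks the implementation at runtime — might look roughly like this (all names here are invented for illustration):

```javascript
// Two interchangeable implementations of a "shipping" dependency.
function groundShipping(order) { return "ground:" + order; }
function airShipping(order) { return "air:" + order; }

// The factory owns the runtime decision; it is itself injected as a dependency.
function shippingFactory(urgent) {
    return urgent ? airShipping : groundShipping;
}

// The consumer receives the factory via constructor injection and stays
// unaware of which concrete implementation it will get.
function OrderService(factory) {
    this.ship = function (order, urgent) {
        var shipper = factory(urgent); // implementation chosen at runtime
        return shipper(order);
    };
}

var service = new OrderService(shippingFactory);
service.ship("A1", true);  // returns "air:A1"
service.ship("A1", false); // returns "ground:A1"
```

The point of contention, as I understand it, is whether this factory is a legitimate dependency or already drifting towards a service locator.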

c++17 – Safe runtime numeric casts

The rationale behind this code is to implement runtime-safe number conversions for situations where precision loss is possible but not expected. Example: passing a size_t value from 64-bit code to a library (filesystem, database, etc.) which uses a 32-bit type for sizes, assuming you will never pass more than 4 GB of data. Safety here means the cast result has exactly the same numeric (not binary) value (i.e. any rounding, value wrapping, sign re-interpretation, etc. is treated as a casting failure). At the same time, a simple implicit cast for maximum performance is highly desired. This is especially useful for template classes, which are usually assumed to apply no special treatment to the types they operate on. Since it would be used in many places in my code, I’m wondering if I’ve overlooked something.

Here’s the code (note that the “to” template argument goes before the “from” argument, for automatic argument deduction in real-world usage):

#include <limits>
#include <type_traits>
#include <stdexcept>
#include <typeinfo>

class SafeNumericCast {
    protected:
        enum class NumberClass {
            UNSIGNED_INTEGER, SIGNED_INTEGER, IEEE754
        };

    protected:
        template <typename T> static constexpr NumberClass resolveNumberClass() {
            static_assert(std::numeric_limits<T>::radix == 2, "Safe numeric casts can only be performed on binary number formats!");
            if constexpr (std::numeric_limits<T>::is_integer) {
                if constexpr (!std::is_same<T, bool>::value) { // NOTE Boolean is conceptually not a number (while it is technically backed by one)
                    return std::numeric_limits<T>::is_signed ? NumberClass::SIGNED_INTEGER : NumberClass::UNSIGNED_INTEGER;
                }
            } else if constexpr (std::numeric_limits<T>::is_iec559) {
                return NumberClass::IEEE754;
            }
            throw std::logic_error("SafeNumericCast > Unsupported numeric type!");
        }

    public:
        template <typename TTo, typename TFrom> static constexpr bool isSafelyCastable() {
            if constexpr (!std::is_same<TTo, TFrom>::value) {
                const NumberClass toNumberClass = resolveNumberClass<TTo>();
                const NumberClass fromNumberClass = resolveNumberClass<TFrom>();
                if constexpr (toNumberClass == NumberClass::UNSIGNED_INTEGER) {
                    if constexpr (fromNumberClass == NumberClass::UNSIGNED_INTEGER) {
                        return std::numeric_limits<TTo>::digits >= std::numeric_limits<TFrom>::digits;
                    }
                } else if constexpr (toNumberClass == NumberClass::SIGNED_INTEGER) {
                    if constexpr ((fromNumberClass == NumberClass::UNSIGNED_INTEGER) || (fromNumberClass == NumberClass::SIGNED_INTEGER)) {
                        return std::numeric_limits<TTo>::digits >= std::numeric_limits<TFrom>::digits;
                    }
                } else if constexpr (toNumberClass == NumberClass::IEEE754) {
                    if constexpr ((fromNumberClass == NumberClass::UNSIGNED_INTEGER) || (fromNumberClass == NumberClass::SIGNED_INTEGER) || (fromNumberClass == NumberClass::IEEE754)) {
                        return std::numeric_limits<TTo>::digits >= std::numeric_limits<TFrom>::digits;
                    }
                }
                return false;
            }
            return true;
        }
        template <typename TTo, typename TFrom> static constexpr TTo cast(TFrom value) {
            static_assert(isSafelyCastable<TTo, TFrom>());
            return value;
        }
};

class SafeRuntimeNumericCast : public SafeNumericCast {
    private:
        template <typename TTo, typename TFrom> static constexpr bool isRuntimeCastable(TFrom value, TTo casted) {
            static_assert(!SafeNumericCast::isSafelyCastable<TTo, TFrom>());
            const NumberClass toNumberClass = resolveNumberClass<TTo>();
            const NumberClass fromNumberClass = resolveNumberClass<TFrom>();
            if constexpr (toNumberClass == NumberClass::UNSIGNED_INTEGER) {
                if constexpr (fromNumberClass == NumberClass::UNSIGNED_INTEGER) {
                    return value <= std::numeric_limits<TTo>::max();
                } else if constexpr (fromNumberClass == NumberClass::SIGNED_INTEGER) {
                    if (value >= 0) { // negative values can never fit into an unsigned type
                        return value <= std::numeric_limits<TTo>::max();
                    }
                } else if constexpr (fromNumberClass == NumberClass::IEEE754) {
                    return casted == value;
                }
            } else if constexpr (toNumberClass == NumberClass::SIGNED_INTEGER) {
                if constexpr (fromNumberClass == NumberClass::UNSIGNED_INTEGER) {
                    return value <= std::numeric_limits<TTo>::max();
                } else if constexpr (fromNumberClass == NumberClass::SIGNED_INTEGER) {
                    return ((value >= std::numeric_limits<TTo>::min()) && (value <= std::numeric_limits<TTo>::max()));
                } else if constexpr (fromNumberClass == NumberClass::IEEE754) {
                    return casted == value;
                }
            } else if constexpr (toNumberClass == NumberClass::IEEE754) {
                if constexpr (fromNumberClass == NumberClass::UNSIGNED_INTEGER) {
                    return value <= (1ULL << std::numeric_limits<TTo>::digits); // NOTE Can't do "casted == value" check because of int -> float promotion
                } else if constexpr (fromNumberClass == NumberClass::SIGNED_INTEGER) {
                    return static_cast<TFrom>(casted) == value; // NOTE Presumably faster than doing abs(value)
                } else if constexpr (fromNumberClass == NumberClass::IEEE754) {
                    return (casted == value) || (value != value);
                }
            }
            return false;
        }
    public:
        using SafeNumericCast::isSafelyCastable;
        template <typename TTo, typename TFrom> static constexpr bool isSafelyCastable(TFrom value) {
            if constexpr (!SafeNumericCast::isSafelyCastable<TTo, TFrom>()) {
                return isRuntimeCastable<TTo>(value, static_cast<TTo>(value));
            }
            return true;
        }
        template <typename TTo, typename TFrom> static constexpr TTo cast(TFrom value) {
            if constexpr (!SafeNumericCast::isSafelyCastable<TTo, TFrom>()) {
                TTo casted = static_cast<TTo>(value);
                if (isRuntimeCastable<TTo>(value, casted)) {
                    return casted;
                }
                throw std::bad_cast();
            }
            return value;
        }
};

The usage is simple:

SafeNumericCast::cast<uint64_t>(42); // Statically check for possible precision loss
SafeRuntimeNumericCast::cast<float>(1ULL); // Dynamically check for precision loss

SafeNumericCast::isSafelyCastable<uint64_t, uint32_t>(); // Non-throwing static check
SafeRuntimeNumericCast::isSafelyCastable<float>(1ULL); // Non-throwing dynamic check

Here are the assumptions the code is based on:

  • The code is working only on 10 built-in binary numeric types – this is intended for now
  • Any unsigned integer can be exactly represented by another signed or unsigned integer as long as it has enough digit capacity, otherwise a runtime value check against max value is required
  • Any signed integer can be exactly represented by another signed integer as long as it has enough digit capacity, otherwise a runtime value check against min and max values is required
  • A signed integer can’t in general be represented by an unsigned integer, so a runtime value check is required. We can’t simply compare by value due to sign re-interpretation during signed/unsigned promotion, so we have to check separately for a negative sign and check a non-negative value against the possible max value
  • Integers can be represented by an IEEE 754 float as long as it has enough digit capacity, otherwise a runtime value check is required. We can’t simply compare by value due to possible rounding during integer/float promotion, so we have to manually check against maximum representable integer.
  • IEEE 754 floats can’t be generally represented by an integer, so we have to check at runtime by simply comparing original and casted values. This should also cover NaN/Infinity/etc cases.
  • Any IEEE 754 float can be exactly represented by another IEEE 754 float as long as it has the same or bigger size (that is, double simply has more capacity for both mantissa and exponent, thus any float is exactly representable by a double). Otherwise, a simple runtime value comparison is required. The only corner case is NaN; std::isnan() is, unfortunately, not constexpr, but we can work around that by checking value != value.

sql server – Should a database ‘administrator’ have the ability to deep dive into runtime query performance issues?

I acknowledge that everyone’s experience and abilities are different. Having said that, I want to avoid setting expectations too high (or too low) for a DBA who, through their actions, appears to be an ‘administrator’.

Given:

  • I am a developer who is deep-diving into SQL Server troubleshooting and performance issues. At night I’m watching Brent Ozar videos with a bucket of popcorn.
  • A company with several divisions, this division having around 100ish team members
  • Many customers with databases having millions of rows with ETL
  • A small DBA team that is stretched to handle these same customers: handling HAG issues, backups, and restores, and creating new deployments for new or upgrading customers.

I am NOT attempting to justify my own opinion. I wish to adjust either my opinion or others in the company.

Question:

Given the above, should the expectation be that a DBA is simply an ‘administrator’? Is this something you’ve typically seen?


I went into this soap opera (see my other recent questions) with the expectation that a DBA is ‘expected’ to deep dive. I have come to believe that I misunderstood, and I am leaning towards a DBA being an ‘administrator’.

I welcome others’ experiences, and perhaps advice on refining this question.

jinja2 – Ansible: how to select hosts based on certain attributes and use their IP addresses to build a list at runtime

I have a specific question about data manipulation in Ansible.

In my inventory file, I have a group called postgresql as below:

[postgresql]
host1 ansible_host=1.1.1.1 postgresql_cluster_port=5432 postgresql_harole=master
host2 ansible_host=2.2.2.2 postgresql_cluster_port=5432 postgresql_harole=slave postgresql_master_ip=1.1.1.1
host3 ansible_host=3.3.3.3 postgresql_cluster_port=5432 postgresql_harole=slave postgresql_master_ip=1.1.1.1
host4 ansible_host=4.4.4.4 postgresql_cluster_port=5432 postgresql_harole=slave postgresql_master_ip=1.1.1.1

Somewhere in my playbook I need to manipulate the inventory and use filters to build a list of the IP addresses of all hosts where postgresql_harole=slave, as below:

- hosts: postgresql
  gather_facts: True
  remote_user: root
  tasks:
    - set_fact:
        slave_ip_list: "{{ expressions }}"

I am pulling my hair out trying to get the expression right… any help is highly appreciated!
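For reference, one plausible way to build that expression (a sketch I have not verified against this exact inventory) is to extract each group member’s hostvars and filter on postgresql_harole:

```yaml
- hosts: postgresql
  gather_facts: true
  remote_user: root
  tasks:
    - set_fact:
        # Turn each hostname in the group into its hostvars dict, keep only
        # the hosts marked as slaves, and collect their ansible_host values.
        slave_ip_list: >-
          {{ groups['postgresql']
             | map('extract', hostvars)
             | selectattr('postgresql_harole', 'defined')
             | selectattr('postgresql_harole', 'equalto', 'slave')
             | map(attribute='ansible_host')
             | list }}
```

The selectattr('postgresql_harole', 'defined') step guards against hosts in the group that don’t set the variable at all.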