postgresql – Best field type for cryptocurrency big numbers?

I’m using PostgreSQL for my cryptocurrency exchange database. For saving currency amounts with their full precision (like 323232323232323.45454545 ~ 23 digits + 1 dot: 15 digits before the dot and 8 digits after it), should I use varchar(24), double precision, or numeric(15,8)?

Note: It seems that the double precision type can’t properly save big numbers like the example above; it gets rounded to 323232323232323!

Which one has better performance (speed) and needs fewer resources?

macOS Big Sur and bash

In Catalina, Apple made zsh the default shell – however, it retained the (now deprecated) bash. Despite my attempts to unearth something related to this, I couldn’t find out whether Big Sur will keep bash in fresh installations or remove it altogether.

Is there any official word out there regarding this? At the very least (…and FWIW), any anecdotal experience coming from beta installations? Thank you.

macos – Is there a way to use accelerated 3D graphics from the NVIDIA GeForce GT 750M in macOS Big Sur?

Since macOS High Sierra, owners of the Late 2013 MacBook Pro have complained that they cannot use their NVIDIA GeForce GT 750M dedicated graphics card to accelerate 3D graphics or drive a second display. So I was wondering whether Apple, NVIDIA, or some hacker has done something to re-enable this old-but-gold graphics card in macOS Big Sur.

I know Big Sur may be the last macOS officially released for 2013 MacBook Pros, but that doesn’t mean they are too outdated to be used these days. They’re still very nice pieces of hardware, which is why I’m asking.


bigsur – I have a problem with Backup and Sync on macOS Big Sur

I get an error when I open Backup and Sync on macOS Big Sur.
I’ve tried uninstalling it, rebooting the computer, and installing it again, but the error comes back.
I’ve also tried running sudo chmod a+wx ~/Library/”Application Support”/Google in Terminal, but the error still appears.

How can I solve it?


hashcode – Iter 2: Reusable, robust c++ std::hash for GMP’s big integer type

This is the 2nd iteration of a code review. The 1st iteration (completed) is at
Iter 1: Reusable, robust c++ std::hash<mpz_class> for GMP’s big integer type

1. Goal

My intention is to provide a fast hashing algorithm to hash GMP’s big integer type mpz_class and mpz_t so I can use these types as keys for an unordered_map. The code shall be reusable for others.

2. Current Approach

string_view is used to wrap the data of the big integer’s absolute value. Since C++17, the standard library provides the specialization hash<string_view>, which is used to produce the initial hash value. If the big integer is negative, the initial hash value is scrambled once so that positive and negative big integers of the same magnitude produce different hash values.

3. Code

File hash_mpz.h:

#ifndef HASH_MPZ_H_
#define HASH_MPZ_H_

#include <gmpxx.h>

namespace std {

template<> struct hash<mpz_srcptr> {
    size_t operator()(const mpz_srcptr x) const;
};

template<> struct hash<mpz_t> {
    size_t operator()(const mpz_t x) const;
};

template<> struct hash<mpz_class> {
    size_t operator()(const mpz_class &x) const;
};

}

#endif /* HASH_MPZ_H_ */

File hash_mpz.cpp:

#include "hash_mpz.h"
#include <cstddef>
#include <cstdlib>      // std::abs
#include <stdexcept>    // std::logic_error
#include <string_view>

constexpr size_t pi_size_t() {
    if (sizeof(size_t) == 4) {
        return 0xc90fdaa2; // floor(pi/4 * 2^32)
    } else if (sizeof(size_t) == 8) {
        return 0xc90fdaa22168c234; // floor(pi/4 * 2^64)
    } else {
        throw std::logic_error(
                "current sizeof(size_t) not supported. only 32 or 64 bits are supported.");
    }
}

inline size_t scramble(size_t v) {
    return v ^ (pi_size_t() + (v << 6) + (v >> 2));
}

namespace std {

size_t hash<mpz_srcptr>::operator()(const mpz_srcptr x) const {
    // view the limb array of |x| as a sequence of raw bytes
    string_view view { reinterpret_cast<const char*>(x->_mp_d),
            abs(x->_mp_size) * sizeof(mp_limb_t) };
    size_t result = hash<string_view> { }(view);

    // produce different hashes for negative x
    if (x->_mp_size < 0) {
        result = scramble(result);
    }

    return result;
}

size_t hash<mpz_t>::operator()(const mpz_t x) const {
    return hash<mpz_srcptr> { }(static_cast<mpz_srcptr>(x));
}

size_t hash<mpz_class>::operator()(const mpz_class &x) const {
    return hash<mpz_srcptr> { }(x.get_mpz_t());
}

}

File main.cpp:

#include <iostream>
#include <gmpxx.h>
#include <unordered_map>

#include "hash_mpz.h"

using namespace std;

int main() {
    mpz_class a;

    mpz_ui_pow_ui(a.get_mpz_t(), 168, 16);

    cout << "a      : " << a << endl;
    cout << "hash( a): " << (hash<mpz_class> { }(a)) << endl;
    cout << "hash(-a): " << (hash<mpz_class> { }(-a)) << endl;

    unordered_map<mpz_class, int> map;
    map[a] = 2;
    cout << "map[a] : " << map[a] << endl;

    return 0;
}

4. Question

Is there anything which can benefit from further improvement?

2d – How big should my sprites be in Unity?

I’m making a 2D retro-style pixel game and I’d like to know how big my sprites should be.

I’m not talking about the size of my image files (I’ve decided on a 32×32-based art style), but their actual size once they are in Unity.

Should I keep a 1:1 scale and import all of my (usually 32×32) sprites into Unity as they are? Is there a special scale-up or scale-down process I need to apply before or after importing them?

I’m asking because my HUD elements are drastically big compared to my tiny sprites, so I’m worried that this could affect things like Rigidbody behaviour or collisions and that I should indeed scale up my sprites, or whether this is normal and I’m just being paranoid.

I’ll append an image showing my concern.

BTW, the game so far looks the way I wanted it to and is working well, but right now it’s more of a prototype, so I’d like to resolve this doubt before continuing. Also, I usually test at a 1920×1080 resolution, so maybe that’s why the HUD looks so big.

Thanks!

[gif: proportions in game]

import – Opening a JSON file (NDJSON, newline-delimited), big file

I have been trying for many hours to open a JSON file (“newline delimited”) that has this structure:

{"Id":"5a864241ec2f2018","yearOfBirth":"1942","gender":"male","creationDate":"2009-01","numberOfNeutralVue":"0"}
{"Id":"8a75dacbe1c0d991","yearOfBirth":"1947","gender":"male","creationDate":"2004-01","numberOfNeutralVue":"0"}

(this is what I see when I open it in Notepad as text), or when I use

Import("....json","CSV")

Whereas when I use Import("...","JSON"), I get an error message, specifically:

Import::jsonexpendofinput: Unexpected character found while looking for the end of input.
Import::jsonhintposandchar: An error occurred near character '"', at line 2:3

I will be grateful for any help on how to proceed! (Moreover, some of my files are rather big, more than 1 GB; I know it is a separate question, but if you can also give advice on how to deal with such a file in chunks, I will appreciate it a lot!)

❓ASK – Do you think it would come to a time where every big company will want to own a crypto coin? | NewProxyLists

Facebook hasn’t launched Libra, since it has been hindered by a multitude of problems.

Nah, it isn’t just Google; every government in the world will try to regulate it, since crypto deals with international money transfers.

Back to the topic: some big online companies, like Amazon, have decided not to make their own crypto, so it depends on each company’s policies.

How to block the internal MacBook keyboard on macOS Big Sur?

On Catalina, I would use Karabiner-Elements, but that app is disabled in Big Sur.

Also, the below command no longer works:

sudo kextunload /System/Library/Extensions/AppleUSBTopCase.kext/Contents/PlugIns/AppleUSBTCKeyboard.kext/

Does anybody have a workaround for this?

Many thanks.