security – Doesn’t storing the “recovery seed phrase” for a wallet defy all logic?

I’ve heard many times that you are supposed to write down a “recovery seed phrase” (a series of English dictionary words) on a piece of paper and store that securely so that you can recover your coins when you inevitably lose access to your wallet.dat.

But doesn’t this defy all logic? If this phrase alone can recover the wallet.dat (which feels like magic to me, but I’ll accept it as the truth), what happens if:

  1. There is a fire and it burns the paper, and your coins with it?
  2. A burglar steals the paper and takes your coins?
  3. The government seizes the paper and takes your coins?

Even if you say “I’ll buy one of those fireproof metal plates where you put in the seed phrase with the little metal letters”, that only protects against the first hazard.

And even if you put the piece of paper (or metal plate) inside a fireproof safe, that only stops (at best) numbers 1 and 2. The government will not be scared to force open the safe or make you do it at gunpoint, and there’s your recovery phrase.

And if you put your coins on a hardware wallet and secure that, but still store your seed phrase on your computer, then somebody could grab it remotely through one of the numerous security holes that computers/OSes/software have today, restore the wallet, and take your coins before you ever know.

It seems that storing the recovery phrase ONLY protects against your own clumsiness and hardware dying, but not against all the other serious threats.

I’ve spent so long thinking about every possible method to keep my Bitcoins safe, but I just can’t find a single method that I can’t quickly poke huge holes in myself…

postgresql – Storing URL paths as unique identifiers in Postgres

I need to find a performant way to store and look up friendly paths for a multi-tenant SaaS, e.g. app.com/client. I also need to retrieve the paths from the database to generate human-readable links to entities for notifications and sharing.

Is storing the paths in plaintext as a unique varchar(maxlength) the only way to approach this, or is there room for optimization?

Also, unless I introduce an intermediary step of requesting the client’s PK first, all API requests would use the path as the ID (and I could cache the real PK in Redis for back-end usage).
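
For reference, a minimal sketch of the plaintext approach (table and column names are made up). In PostgreSQL there is no performance difference between varchar(n) and text, and a unique constraint provides the index that path lookups would use anyway:

CREATE TABLE clients (
    id   bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    path text NOT NULL UNIQUE  -- friendly path, e.g. 'client' in app.com/client
);

-- Lookups by path go through the unique index:
SELECT id FROM clients WHERE path = 'client';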

Storing a secret on the client

I have an application which requires a secret known to the client, but unknown to the server. When logging in, the user inputs the secret, then the application uses this secret during normal operations.

How can I implement a “remember me” feature for such an application?

  • The client is a web browser.
  • The secret is a private key used to sign sensitive messages.
  • The secret should be valid for as long as possible to make it convenient to use.
  • The secret can be invalidated by the server at any time.

I feel very uncomfortable storing the secret in plain text on the client.

On the other hand, if I encrypt it with a password, then the password would be required, defeating the purpose of the “remember me” functionality.

I can’t store it hashed either, because I need the value of the secret.

I can’t think of any way in which this can be done securely.

postgresql – Storing data on separate storage volumes

I’m just looking for some design options around PostgreSQL.

I’m looking at using PostgreSQL as the backend for a SCADA system that I’d like to develop. This essentially means that it would have a combination of both OLTP load (for the instantaneous state/config of the system) and quite a lot of OLAP involving time-series data.

One particular item that I’d be worried about is performance, in particular disk I/O performance.

I see that PostgreSQL allows tablespaces to be used to put tables on different storage volumes; is there any way to do this with particular columns of data as well?
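
As far as I know, tablespaces operate at the granularity of tables and indexes, not columns, so a column only gets its own volume by living in its own table. A minimal sketch with a made-up path and names:

CREATE TABLESPACE fast_ssd LOCATION '/mnt/ssd1/pgdata';

-- Whole tables and indexes can be placed on the volume:
CREATE TABLE example (id bigint) TABLESPACE fast_ssd;
CREATE INDEX example_idx ON example (id) TABLESPACE fast_ssd;
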
My current schema approach would entail the following (exposed) tables:

  • Objects
  • Historical Data
  • Historical Events

For the Historical Data and Historical Events I’d ideally like to be able to partition these across a number of different storage volumes, but not in time order; instead perhaps by a hash partition or similar. I think this wouldn’t be a problem, although it appears that it would require the use of inherited tables (i.e. the Historical Data / Events being hidden parent tables, with real tables inheriting from these along some hash of the ObjectId FK or similar).
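
One note: since PostgreSQL 10 there is declarative partitioning (hash partitioning arrived in 11), which avoids the inheritance machinery. A sketch with assumed column names and hypothetical tablespaces:

CREATE TABLE historical_data (
    object_id bigint      NOT NULL,
    ts        timestamptz NOT NULL,
    payload   jsonb
) PARTITION BY HASH (object_id);

-- Each partition can live on its own storage volume:
CREATE TABLE historical_data_p0 PARTITION OF historical_data
    FOR VALUES WITH (MODULUS 4, REMAINDER 0) TABLESPACE vol_a;
CREATE TABLE historical_data_p1 PARTITION OF historical_data
    FOR VALUES WITH (MODULUS 4, REMAINDER 1) TABLESPACE vol_b;
-- ...and so on for remainders 2 and 3.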

It’s the Objects table that I’m most unsure about. Each row would consist of something like:

  • ObjectId (Unique, Primary Key)
  • Security (ACL etc)
  • Data (JSON(B) format)
  • Config (JSON(B) format)

What I’d really like to be able to do is split out the Data column so that it’s stored elsewhere. It will be write-heavy, and I would prefer not to have that disk I/O impacting the Object Config.

Is there any nice way to do this in PostgreSQL? (Or should I be normalising the schema, moving Data into a separate table with an FK against ObjectId?)
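
If normalising is on the table, a sketch of that split, with column types assumed from the description; the write-heavy Data rows then carry their I/O to their own volume:

CREATE TABLE objects (
    object_id bigint PRIMARY KEY,
    security  jsonb,  -- ACL etc.
    config    jsonb
);

CREATE TABLE object_data (
    object_id bigint PRIMARY KEY REFERENCES objects (object_id),
    data      jsonb
) TABLESPACE write_heavy_vol;  -- hypothetical tablespace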

private key – Why is storing bitcoin in a paper wallet so complex for me?

I created a paper Bitcoin wallet by using https://www.bitaddress.org.

So, I have the private key and public key.

I want to buy some bitcoin with my credit card and send it to my paper wallet using the public address.

So, I am searching for a website for this purpose. All I should have to do is give my public address, the amount of money for buying bitcoin, and my credit card info.

However, the sites that I have found want some ID, registration, their own wallets, etc.

How can I find the simplest site for this purpose?

postgresql – Storing sparse and volatile data in an analytical database

I want to perform analytics on my company’s deliverables: financial metrics, production volumes, etc.
For that, I’m creating a database on Postgres.

The main problem is that I’m not sure which parameters (metrics) I will ultimately need; I would add parameters on demand. Therefore, creating an ER model seems difficult for me now: I don’t know how many columns there would be or which ones, nor whether I should therefore use different tables for different groups of information.

I’ve come up with the following EAV model:

PARAMETERS
|ID| title        |
| 1| Revenue      |
| 2| Expenditures |

ITEMS
|ID| parameter_id| title          |
| 1| 1           | online shopping|
| 2| 2           | new computers  |

DATA
|ID| item_id| date       | value|
| 1| 1      | 2020-01-01 | 1000 |

I did some research, and it is said that this model would be difficult for analytics: different aggregation functions, data changing over time intervals, etc.

QUESTION

  1. Is this model acceptable in my case?
  2. What performance issues might I face, and how can I overcome them?
  3. Should I move to an ER model later (create a separate table for each parameter with a separate column for each item)? Or is it also appropriate to use VIEWs that pivot the data, as in the sketch below?
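
As an illustration of the pivoting-view idea in question 3, something like this works in PostgreSQL against the tables above (aggregates with FILTER; the view name is made up):

CREATE VIEW pivoted_data AS
SELECT d.date,
       SUM(d.value) FILTER (WHERE p.title = 'Revenue')      AS revenue,
       SUM(d.value) FILTER (WHERE p.title = 'Expenditures') AS expenditures
FROM data d
JOIN items i      ON i.id = d.item_id
JOIN parameters p ON p.id = i.parameter_id
GROUP BY d.date;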

sql – Relational or non-relational database for storing and querying log data?

I am planning to extend an application with logging of user events (client application via REST API to Symfony). Now I am facing the question of which type of database would be more suitable.
Currently the log data is stored in a relational database along with all the rest of the data. Now that click events are to be stored as well, I suspect that a relational database would no longer be suitable for storing data sets in the range of a million records.

Would it make more sense to use a non-relational database, or should I continue to work with a relational database and perhaps use options for optimizing the queries (e.g. Elasticsearch)?
Since a relational DB seems to be slow when querying a lot of data, would it also be possible to have the data prepared in advance via a cron job and stored in auxiliary tables?

The goal is to evaluate the log data for the user interface and to display different data in several charts.
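
On the cron-job idea: if the relational database is PostgreSQL, materialized views are essentially that pattern built in. A sketch, assuming a hypothetical click_events table:

CREATE MATERIALIZED VIEW daily_event_counts AS
SELECT user_id,
       event_type,
       date_trunc('day', created_at) AS day,
       count(*) AS events
FROM click_events
GROUP BY user_id, event_type, date_trunc('day', created_at);

-- Refresh periodically from a cron job; the charts then read the small table:
REFRESH MATERIALIZED VIEW daily_event_counts;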

Database Design: Storing price date wise, or date range wise?

I have to store prices for various items on multiple dates. The table schema would look like this:

CREATE TABLE date_wise_price (
    item_code varchar,
    date date,
    price numeric(19,4)
)

An alternate table schema for this could be to store prices date-range wise, which will result in fewer records being stored and loaded in memory when, let’s say, the price is the same for 100 days.

CREATE TABLE date_range_price (
    item_code varchar,
    date_range daterange,
    price numeric(19,4)
)

The problem with the second approach is ensuring a unique price entry for each date. And I sense this table may start bloating a lot, unless we split the previous conflicting date range every time we enter new data.
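
For what it’s worth, if this is PostgreSQL, an exclusion constraint on the range column can enforce non-overlapping ranges per item, so conflicting entries are rejected instead of silently accumulating (this needs the btree_gist extension for the equality operator):

CREATE EXTENSION IF NOT EXISTS btree_gist;

ALTER TABLE date_range_price
    ADD EXCLUDE USING gist (item_code WITH =, date_range WITH &&);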

I need suggestions on which approach I should take.

  • The first approach gives me a unique price entry per date; I can put a unique constraint on that.
  • The second approach gives me fewer records to load, and thus less stress on memory while processing data for multiple items over, let’s say, one year. We’re looking at creating 1 entity object vs. 300 entity objects.

I was even wondering: is there a way to club the same price on continuous days into a date range while loading data from the first table? That would give me the best of both approaches.
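
There is: the classic gaps-and-islands technique collapses runs of equal prices into ranges when reading from the first table. A sketch in PostgreSQL syntax:

SELECT item_code,
       price,
       MIN(date) AS from_date,
       MAX(date) AS to_date
FROM (
    -- Consecutive dates with the same price share one grp value:
    SELECT item_code, date, price,
           date - (ROW_NUMBER() OVER (PARTITION BY item_code, price
                                      ORDER BY date))::int AS grp
    FROM date_wise_price
) runs
GROUP BY item_code, price, grp
ORDER BY item_code, from_date;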

NOTE: The actual table storing prices will have more columns.

dnd 5e – Can my familiar use a Ring of Spell Storing?

The section on attunement is quite long, so I won’t reproduce it here, but it refers exclusively to a “creature”. It doesn’t say anything about a player, a player character, a humanoid, or anything else that would exclude familiars from being able to attune to magic items.

The ring of spell storing itself likewise has no restriction beyond requiring attunement, so yes, a familiar can attune to it. I feel compelled to point out that some DMs might not allow a pseudodragon to use a ring because it doesn’t have fingers, though.

As for concentration, the DMG says this about items that cast spells (on page 141):

The spell uses its normal casting time, range, and duration, and the user of the item must concentrate if the spell requires concentration.

So if you cast a spell into the ring that requires concentration, when your familiar uses the ring to cast it, your familiar will have to maintain concentration, not you.

Storing timeseries data with dynamic number of columns and rows to a suitable database

I have a timeseries pandas dataframe which dynamically adds columns every minute as well as a new row:

Initial:

timestamp                100     200     300
2020-11-01 12:00:00       4       3       5

Next minute:

timestamp                100     200     300   500
2020-11-01 12:00:00       4       3       5     0
2020-11-01 12:01:00       4       3       5     25

The dataframe has these updated values and so on every minute.

So ideally, I want to design a database solution that supports such a dynamic column structure. The number of columns could grow to over 20-30k+, and since it’s one-minute timeseries data, it will have 500k+ rows per year.

I’ve read that relational DBs have a limit on the number of columns, so that might not work here. Also, since I am setting the data for new columns and assigning a default value (0) to previous timestamps, I lose out on the DEFAULT parameter that MySQL offers.

Eventually, I will be querying data for a day or a month at a time to get the columns and their values.

Please suggest a suitable database solution for this type of dynamic row and column data.
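
For comparison, the usual relational answer here is to store the data long/narrow rather than wide, so a new “column” is just a new series_id value and the schema never changes. A sketch (names assumed; the 100/200/300 headers become series IDs; PostgreSQL syntax):

CREATE TABLE measurements (
    ts        timestamptz NOT NULL,
    series_id integer     NOT NULL,  -- e.g. 100, 200, 300, 500
    value     numeric     NOT NULL DEFAULT 0,
    PRIMARY KEY (ts, series_id)
);

-- Querying one day for selected series:
SELECT ts, series_id, value
FROM measurements
WHERE ts >= '2020-11-01' AND ts < '2020-11-02'
  AND series_id IN (100, 500);

The wide shape for display can then be rebuilt in pandas with pivot_table.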