9000 topics and growing! | Forum Promotion

Bizdustry – Business & Economics

Bizdustry is a make-money forum where members can talk about business, economics, finance, budgeting, investing, and anything crypto-related. Members earn $0.01 for every message they publish and $1 for each member they refer to the forum. We strive to provide the best quality content and…

PostgreSQL database with 9000 tables continuously grows in memory usage

I have a PostgreSQL database that I use to store time-series (finance) data. Each table has the same schema but a different name based on the market pair and timeframe.

For example, I have tables called candles_btc_usdt_one_minute, candles_eth_usdt_one_hour, candles_eth_usdt_one_week, etc.

There are around 9,000 of these tables in total.

Note that I know about TimescaleDB and InfluxDB; I have already tested both and will explain why I'm not using them at the end of this post.

Since this is time-series data, I'm almost exclusively doing INSERT write operations, with only the occasional SELECT to retrieve some data.

My issue is that the database's memory usage seems to grow without bound until I get an OOM crash. I configured my postgresql.conf using tools such as PGTune for a system with 1 GB of RAM, 6 cores, and 120 connections, and I limited my Docker container to 4 GB, yet I still got an OOM after around one day with the system running.

I also tried other configurations, such as tuning for 4 GB of RAM with an 8 GB container limit, but PostgreSQL never respects the limit implied by the config and keeps using more and more RAM.

Is this the expected behavior? Maybe PostgreSQL has some other obscure setting I can use to limit memory usage when there is a huge number of tables? I'm not sure.
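For context, the settings PGTune produces only bound specific allocations; as far as I know there is no single knob that caps a backend's total memory. A sketch of a typical 4 GB tune, with illustrative values only:

# postgresql.conf — illustrative values for a 4 GB, 6-core box
shared_buffers = 1GB           # shared page cache, allocated once at startup
work_mem = 8MB                 # per sort/hash node, per backend
maintenance_work_mem = 256MB   # per maintenance operation (VACUUM, CREATE INDEX)
max_connections = 120
# Note: none of these limits the relation/catalog caches that each
# backend builds up as it touches tables; those grow unbounded.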

The reason I suspect this issue is related to the high number of tables is that the connections opened by my connection pool grow in memory usage quickly during the first hours after the system starts, and then the growth slows down (but never stops).

That behavior matches the intervals at which my INSERTs hit the tables.

For example, a table with a five_minutes timeframe gets a new row inserted every five minutes, which means that after startup these tables are each accessed for the first time sooner than tables with longer timeframes such as one_hour, etc.

And monitoring the memory growth, it seems that a connection's process grows a little each time it accesses a table for the first time.

So, assuming this is right, after some months every connection would have accessed every table at least once and the memory growth would stop. The problem is that I don't know how much memory that would use in the end, and it isn't ideal either, since it makes trying to limit memory via postgresql.conf meaningless.
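If that guess is right, the growth is most likely PostgreSQL's relation and catalog caches: each backend caches metadata for every table it has ever touched, and those caches are never evicted. One common workaround is to recycle pooled connections so no backend lives long enough to touch all 9,000 tables. A minimal sketch with PgBouncer in front of the database (the values are assumptions to tune, not recommendations):

; pgbouncer.ini — illustrative sketch
[databases]
data_db_prod = host=127.0.0.1 port=5432 dbname=data_db_prod

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
pool_mode = transaction    ; many clients share a few server connections
max_client_conn = 120
default_pool_size = 10     ; fewer backends means less duplicated cache
server_lifetime = 600      ; close and replace each server connection after
                           ; 10 minutes, discarding its accumulated caches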

Here is the schema for one of the tables (as I said before, all tables have the same columns, indexes, etc.):

data_db_prod=# \d+ candles_one_minute_btc_usdt
                                   Table "public.candles_one_minute_btc_usdt"
  Column   |            Type             | Collation | Nullable | Default | Storage | Stats target | Description 
-----------+-----------------------------+-----------+----------+---------+---------+--------------+-------------
 timestamp | timestamp without time zone |           | not null |         | plain   |              | 
 open      | numeric                     |           | not null |         | main    |              | 
 close     | numeric                     |           | not null |         | main    |              | 
 high      | numeric                     |           | not null |         | main    |              | 
 low       | numeric                     |           | not null |         | main    |              | 
 volume    | numeric                     |           | not null |         | main    |              | 
Indexes:
    "candles_one_minute_btc_usdt_timestamp_desc_index" btree ("timestamp" DESC)
    "candles_one_minute_btc_usdt_timestamp_index" btree ("timestamp")
Access method: heap
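Incidentally, the two indexes may be redundant: as far as I know a btree index can be scanned in both directions, so the plain timestamp index alone should also serve ORDER BY "timestamp" DESC. If that holds for your query plans, one of them could be dropped in every table, for example:

-- roughly halves index storage and INSERT overhead per table
DROP INDEX candles_one_minute_btc_usdt_timestamp_desc_index;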

About other solutions

As I said before, I already tried TimescaleDB and InfluxDB.

With TimescaleDB I would be able to use a single candles table and partition it on the market pair and the timeframe, fixing the high number of tables and probably the RAM issue I'm having. But I can't use it, because TimescaleDB uses too much storage, so I would need its compression feature, and a compressed hypertable doesn't allow write operations. That means that to do a backfill (which I do often) I would basically have to decompress the whole database each time.
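For comparison, plain PostgreSQL with no extension could also collapse the 9,000 tables into a single relation by making the pair and timeframe ordinary columns; a minimal sketch (the table and column names are mine, modeled on the naming scheme above):

-- One relation instead of 9,000: pair and timeframe become columns,
-- and the composite primary key replaces the per-table timestamp indexes.
CREATE TABLE candles (
    pair        text                        NOT NULL,  -- e.g. 'btc_usdt'
    timeframe   text                        NOT NULL,  -- e.g. 'one_minute'
    "timestamp" timestamp without time zone NOT NULL,
    open        numeric                     NOT NULL,
    close       numeric                     NOT NULL,
    high        numeric                     NOT NULL,
    low         numeric                     NOT NULL,
    volume      numeric                     NOT NULL,
    PRIMARY KEY (pair, timeframe, "timestamp")
);

-- Typical access patterns:
INSERT INTO candles VALUES
    ('btc_usdt', 'one_minute', '2020-07-14 00:00:00', 9200, 9210, 9215, 9195, 12.5);
SELECT * FROM candles
WHERE pair = 'btc_usdt' AND timeframe = 'one_minute'
ORDER BY "timestamp" DESC LIMIT 100;

Every backend would then cache metadata for one table instead of thousands, which should also stop the per-connection memory growth.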

With InfluxDB the issue is simply that it doesn't support any numeric/decimal type, and I can't afford to lose precision by using double.

Feel free to suggest some other alternative I’m not aware of that would fit nicely into my use case if there is one.

nginx – Set up SSL on the public URL and admin dashboard of an app running on port 9000

I have a problem configuring SSL for an app on a subdomain, and I don't know how to fix it:

  1. I have a web app running on http://mysub.domain.com:9000
  2. This URL is the admin dashboard of the app
  3. The app sends emails with public-facing URLs like http://mysub.domain.com:9000/xxx/xxx/xxx (which lead to an action within my app)
  4. mysub.domain.com is SSL-enabled and loads over SSL, but the moment the port (9000) is added it returns an error page, and I only have access to it via http
  5. The problem: 1 – the port is visible to the public, and 2 – the URL shows as insecure (when clicked by users it opens with the port)
  6. The solution required: 1 – the port removed without the URL's action being impacted, and 2 – the same URL served over SSL (see the sketch below)

I have Linux 18 / Postgres 12 with Apache (web server) + nginx (reverse proxy) running on my server.

Any help is appreciated. Please also tell me the name of the file to be edited.
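A minimal sketch of the usual approach, assuming nginx is the public-facing proxy: terminate SSL on port 443 and proxy to the app on 9000, then have the app generate its links as https://mysub.domain.com/xxx/xxx/xxx with no port. The certificate paths and the file name are assumptions; on many distributions the site file lives under /etc/nginx/sites-available/ (here, a hypothetical mysub.domain.com.conf):

# /etc/nginx/sites-available/mysub.domain.com.conf (hypothetical path)
server {
    listen 443 ssl;
    server_name mysub.domain.com;

    # Placeholder certificate paths — use your real ones
    ssl_certificate     /etc/letsencrypt/live/mysub.domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mysub.domain.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:9000;          # the app keeps listening on 9000
        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}

server {
    listen 80;
    server_name mysub.domain.com;
    return 301 https://$host$request_uri;          # redirect plain HTTP to HTTPS
}

With this in place, port 9000 can be firewalled from the outside so only nginx reaches the app directly.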

1000 WEB 2.0 PBNs With 9000+ SEO Backlinks From 11 Different Platforms – HQ Links to Rank Your Website for $250


Force your keywords to rank on Google's first page with our very safe link pyramid service:

100% safe from all Google updates (Penguin, Panda, and the latest Hummingbird)

!! Exclusive offer for 2020 !!

> Buy 2 quantities and get 1 free

> Buy 4 quantities and get 3 free

> Buy 6 quantities and get 4 free

> Re-seller offer: Buy 8 get 6 free

So do hurry!! The offer may be closed at any time.

How does our SEO work?

MY SERVICE:

1. 200+ Profile Backlinks PR 1-9 DA 10- 90+

2. 100+ Wiki Backlinks PR 1-9 DA 20-70+

3. 50+ Web 2.0 Blog PR 3-9 DA 40- 90+

4. 15+ Social Bookmarking HQ PR 4-9 DA 40- 80+

5. 70+ Social Bookmarking PR 1-9 DA 20- 90+

6. 70+ Forum Profiles PR 3-9 DA 30- 80+

7. 300+ Web 2.0 Profiles PR 1-9 DA 20- 90+

8. 5+ PDF Upload DA 40- 95+

9. 10+ EDU Sites PR 5- 9 DA 50- 90+

10. 70+ Authority Links PR 4- 9 DA 40- 90+

Your website will get a strong mix of high-authority links, and that should get you the results you expect.

This diagram consists of only 4 Web 2.0 link groups, and each Web 2.0 link group is interlinked up to 5 tiers. That means that on each blog, SEO AP will create 5 posts. The first post will contain a link to the money site, and the remaining 4 posts will each link to the previous tier. (Woman's head…lol). All the other tiering link groups support each and every Web 2.0 (lion's body) in order to increase the authority of the Web 2.0s, transfer a tremendous amount of juice, and finally boost the money site URL(s) on the SERPs.

>> Websites in all languages are accepted for backlinks

>> Escort, adult, porn, nude, etc. sites are not accepted

Requirements:

** Website URL

** Keywords: (1-5)


email – Importing 9000 .eml files into Thunderbird 78

I have a folder containing 9,069 .eml files saved by Windows Live Mail.

I need to add them to Thunderbird. I created a sub-folder in Thunderbird's Archived items, selected all the files in Windows Explorer, and dragged them into this folder.

Thunderbird shows an icon with the number of files and freezes for about 30 seconds.
After that, nothing happens.
I tried several times, from and to different folders, but the problem persists.
Importing a small number of .eml files works.

How do I import 9000 .eml files into Thunderbird?

I'm using 32-bit Thunderbird 78.1.0 on the latest 64-bit Windows 10.

The ImportExportTools add-on carries a notice that it works only with Thunderbird 14.0–60.

I also asked in Thunderbird support at https://support.mozilla.org/en-US/questions/1298295
but haven't got an answer.
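Since importing a small number of .eml files works, one workaround I could try is splitting the folder into smaller batches and dragging each batch in separately. A sketch of that split in Python (the source path is a placeholder, and the batch size is a guess):

# Move 9,069 .eml files into numbered sub-folders of 500 each,
# so each batch can be dragged into Thunderbird separately.
import shutil
from pathlib import Path

src = Path(r"C:\mail\exported")   # placeholder: folder holding the .eml files
batch_size = 500                  # guess: small enough for Thunderbird to accept

for i, f in enumerate(sorted(src.glob("*.eml"))):
    dest = src / f"batch_{i // batch_size:02d}"
    dest.mkdir(exist_ok=True)
    shutil.move(str(f), str(dest / f.name))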


Altcoins might sparkle but only if Bitcoin stays above $9,000 – Cryptocurrencies Corner


Platinum Crypto Academy


Select altcoins are showing momentum; hence, traders can explore trading them as long as Bitcoin sustains above $9,000.




#Altcoin #Bitcoin #Cryptocurrencies #Exchange #Cryptonaireweekly


https://www.platinumcryptoacademy.com/past-weekly/14th-july-2020/

Is it possible to sync up LMKs in two Thales PayShield 9000 instances?

Basically, when we execute a generate-key command such as A0, we receive a key-under-LMK for future use. What if we have multiple HSMs in a high-availability configuration? How would we make sure that all keys-under-LMK mean the same thing to all HSM instances?

The documentation I have doesn’t cover this and I didn’t find anything online about that particular model.

What kind of server should HAProxy run on for approximately 9,000 simultaneous online users?

Hello

I would like your recommendations on the specifications of an HAProxy server that I plan to put online for about 9,000 simultaneous users.

I would prefer a dedicated server.

I would also like your opinion on how to scale a MariaDB database of around 80 GB with 80% reads and 20% writes.

Should I use MaxScale or ProxySQL? Should I keep all servers synchronized at the same time, or keep one server for writes and the rest for reads? And how do I scale the writes as well?
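For the read/write split specifically, here is a minimal sketch in the ProxySQL style of configuration: one writer hostgroup, one reader hostgroup, and query rules that send SELECTs to the readers. The hostnames and IDs are placeholders:

-- Run against the ProxySQL admin interface (port 6032 by default).
-- Hostgroup 10 = writer, hostgroup 20 = readers (placeholder names).
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (10, 'db-writer', 3306);
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (20, 'db-reader-1', 3306);
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (20, 'db-reader-2', 3306);

-- SELECT ... FOR UPDATE must go to the writer; other SELECTs go to readers.
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
    VALUES (1, 1, '^SELECT .* FOR UPDATE', 10, 1);
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
    VALUES (2, 1, '^SELECT', 20, 1);

LOAD MYSQL SERVERS TO RUNTIME;  SAVE MYSQL SERVERS TO DISK;
LOAD MYSQL QUERY RULES TO RUNTIME;  SAVE MYSQL QUERY RULES TO DISK;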

I find sharding a bit complex for now.

Thank you

My journey to earning $9,000 a month

Hi guys,

Thanks to everyone on the DP forums.
I've learned so much here about affiliate marketing and have been working at it for a few years:
– I joined an affiliate network recommended here
– I created a simple page listing special offers for IMO, finance, and investment offers
– I used promotional tips from here; what works best for me is social media groups

The first year, I earned just between $100 and $1,000 a month.
After a few years my income went up, and as of February I'm earning $9,000 a month…


Cisco PfR on ASR 1000 / 9000

I need recommendations on a router model and IOS version for the ASR 1000 and 9000 that support PfR.
I know how wonderful Noction is, and I know Expereo (formerly Border 6), but their current pricing doesn't suit me.

Memory needs to be sized for full BGP routing tables from GTT, NTT, Cogent, and Hurricane Electric.
I need at least six 10G SFP+ ports and a couple of 1G SFPs.
I would prefer a not-so-old version, 2015 or later.

My budget is limited in the sense that I would need 2 units in order to have a spare.
I'm also open to running 2 edge routers with PfR if, for example, the memory on one unit isn't sufficient but the model is otherwise good.

Thank you in advance.