tables – For displaying audit logs, is it better to show the date and time on separate lines?

I have to design a page with a table for an audit log, and one of the columns is the Timestamp.
I am torn on how to handle this, however, and I find examples of both approaches on the web.

A] Without line break

2020-12-20  01:59:20 GMT+8


B] With line break

2020-12-20
01:59:20 GMT+8

What would be the best practice in this case?
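For reference, both layouts render the same underlying value; a quick Python sketch of the two options (the format strings are my assumptions, matching the example above, not a recommendation):

```python
from datetime import datetime

ts = datetime(2020, 12, 20, 1, 59, 20)  # the example timestamp from option A
tz_label = "GMT+8"

# A] single line
single_line = f"{ts.strftime('%Y-%m-%d  %H:%M:%S')} {tz_label}"

# B] date and time on separate lines (e.g. two spans stacked in a table cell)
two_lines = f"{ts.strftime('%Y-%m-%d')}\n{ts.strftime('%H:%M:%S')} {tz_label}"

print(single_line)  # 2020-12-20  01:59:20 GMT+8
print(two_lines)
```

Either way, keeping a single sortable ISO-style date component makes the column easy to scan and sort.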

data tables – Displaying a list with a mixed hierarchy


A landlord has a list of tenants in a given building.

They want to look at this list for the following reasons:

  1. To know which tenants are in their building
  2. To see other information at a glance, like the Suite Number or Square Footage occupied by each tenant
  3. To see the subtenants in the building and which tenants they are subleasing from*

This is the issue I’m running into: reason #3 suggests that I should show subtenants as nested within tenants, but reason #1 suggests that subtenants should be as easy to access in the list as tenants, i.e., placed at the same level of hierarchy.


Can anyone recommend a good UX/IA solution for a list that allows for a mixed-hierarchy? Here are a couple of ideas I had:

(two mockup images omitted)

*context on tenants/subtenants: the landlord has a legal relationship with the tenant. The subtenant does not have a legal relationship with the landlord, but rather with the tenant. In a subtenant situation, the subtenant pays rent to the tenant, who then pays the landlord. Thus, the subtenant is not as “important” to the landlord, but the landlord still likes to know who is physically occupying space in the building.
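On the data side, one way to support both views is a single flat list in which each row optionally points at a parent tenant; the UI can then render it flat (reason #1) or indented under the parent (reason #3) from the same source. A minimal sketch, with illustrative names:

```python
# Each row is (name, parent); subtenants reference the tenant they sublease from.
rows = [
    ("Acme Corp", None),
    ("Beta LLC", "Acme Corp"),  # subtenant of Acme Corp
    ("Gamma Inc", None),
]

# Flat view: every occupant at the same level (reason #1)
flat = [name for name, _ in rows]

# Nested view: tenants first, subtenants indented beneath their tenant (reason #3)
nested = []
for name, parent in rows:
    if parent is None:
        nested.append(name)
        nested.extend(f"  {sub}" for sub, p in rows if p == name)

print(flat)    # ['Acme Corp', 'Beta LLC', 'Gamma Inc']
print(nested)  # ['Acme Corp', '  Beta LLC', 'Gamma Inc']
```

The same flat-with-parent-reference structure also makes a "group by tenant" toggle cheap to implement.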

upgrade – Dropped tables but space not reclaimed in Postgres 12

I upgraded PostgreSQL 9.5 to PostgreSQL 12.4 a few days ago using the pg_upgrade utility with the link (-k) option.

So I basically have two data directories: the old one (v9.5) and the current one in a running state (v12.4).

Yesterday I dropped two tables of size 700 MB and 300 MB.

After connecting to Postgres with the psql utility, I can see that the size of the database whose tables were dropped has decreased (with \l+), but what worries me is that only a few MB have been freed on the storage partition.

I have checked at the OS level with lsof whether any deleted files are still held open, but there are none. I have run vacuumdb on that database only, but no luck.

Looking for a solution.
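One thing worth checking, given that pg_upgrade -k creates hard links between the old and new data directories: the filesystem only frees a file's blocks when the *last* link to it is removed, so a dropped table's data file can survive via the old cluster's link. A small Python demo of that semantics (paths are throwaway):

```python
import os
import tempfile

d = tempfile.mkdtemp()
rel = os.path.join(d, "16384")        # stand-in for a relation data file
with open(rel, "wb") as f:
    f.write(b"\0" * 1024 * 1024)      # 1 MB of "table data"

old_link = os.path.join(d, "old_cluster_16384")
os.link(rel, old_link)                # what pg_upgrade -k does

os.remove(rel)                        # what DROP TABLE does in the new cluster
# The 1 MB is still allocated, because old_cluster_16384 still references it:
print(os.path.exists(old_link), os.stat(old_link).st_size)  # True 1048576
```

If the old 9.5 directory is no longer needed, removing it (pg_upgrade generates a delete_old_cluster.sh script for exactly this) should release the space.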

sql server 2017 – Clustered columnstore index on small tables

Clustered columnstore indexes are in general useful for large tables, ideally with millions of rows, and for queries that select only a subset of the available columns in such tables.

What happens if we break these two “rules”/best practices?

  1. Having a clustered columnstore table that will only ever store a few thousand, or at most a few hundred thousand, rows.
  2. Running queries against such clustered columnstore tables where all the columns are needed.

My tests don’t reveal any performance degradation compared to a row-store clustered index table, which is great in our case.

Are there any long-term effects of breaking these two rules, or any hidden pitfalls that just haven’t shown up yet?

Context on why this is needed: I designed a database model that will be used for many instances of different vendor databases. The schema remains the same in every database, but different vendors have different amounts of data, so a few small vendors may end up with small amounts of data (<1,000,000 rows) in their tables. I cannot afford to maintain two different database models, one row-store and one column-store.
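For concreteness, the setup in question is simply a table whose clustered index is a columnstore; a T-SQL sketch with placeholder names:

```sql
-- Placeholder table; in the real model this is one of the vendor tables.
CREATE TABLE dbo.VendorData
(
    Id     int           NOT NULL,
    Vendor varchar(50)   NOT NULL,
    Amount decimal(18,2) NOT NULL
);

-- The whole table is stored column-wise, regardless of row count.
CREATE CLUSTERED COLUMNSTORE INDEX CCI_VendorData ON dbo.VendorData;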

PostgreSQL database with 9000 tables continuously grows in memory usage

I have a PostgreSQL database that I use to store time-series (finance) data. Each table has the same schema but a different name based on the market pair and timeframe.

For example, I have tables called candles_btc_usdt_one_minute, candles_eth_usdt_one_hour, candles_eth_usdt_one_week, etc.

These add up to around 9,000 tables in total.

Note that I know about TimescaleDB and InfluxDB; I have already tested both, and I explain why I’m not using them at the end of this post.

So, since this is time-series data, I’m only doing INSERT write operations, and only very rarely a SELECT to retrieve some data.

My issue is that the database’s memory usage seems to grow indefinitely until I get an OOM crash. I configured my postgresql.conf using tools such as PGTune for a system with 1 GB of RAM, 6 cores, and 120 connections, and I limited my Docker container to 4 GB, yet I still got an OOM after around one day with the system running.

I also tried other configs, such as 4 GB of RAM in the config and 8 GB for the container, but PostgreSQL never respects the limit stipulated by the config and keeps using more and more RAM.

Is this the expected behavior? Maybe PostgreSQL has some other obscure setting I can use to limit memory usage in cases where there is a huge number of tables? I’m not sure.

The reason I’m guessing this issue has something to do with my high number of tables is that the connections opened by my connection pool grow in memory usage faster during the first hours after the system starts, and then the growth slows down (but never stops).

That behavior reflects my INSERT intervals when hitting the tables.

For example, a table with a five_minutes timeframe means that every five minutes I insert a new row into it, so when the system starts I hit those tables for the first time sooner than tables with longer timeframes such as one_hour, etc.

And monitoring the memory growth, it seems that a connection’s process grows a little the first time it accesses a new table.

So, assuming this is right, after some months every connection would have accessed every table at least once and memory growth would stop. The problem is that I don’t know how much memory this would use in the end, and it’s not ideal, since trying to limit memory via postgresql.conf becomes meaningless.

Here is the schema of one of the tables (as I said before, all tables have the same columns, indexes, etc.):

data_db_prod=# \d+ candles_one_minute_btc_usdt
                                   Table "public.candles_one_minute_btc_usdt"
  Column   |            Type             | Collation | Nullable | Default | Storage | Stats target | Description 
-----------+-----------------------------+-----------+----------+---------+---------+--------------+-------------
 timestamp | timestamp without time zone |           | not null |         | plain   |              | 
 open      | numeric                     |           | not null |         | main    |              | 
 close     | numeric                     |           | not null |         | main    |              | 
 high      | numeric                     |           | not null |         | main    |              | 
 low       | numeric                     |           | not null |         | main    |              | 
 volume    | numeric                     |           | not null |         | main    |              | 
Indexes:
    "candles_one_minute_btc_usdt_timestamp_desc_index" btree ("timestamp" DESC)
    "candles_one_minute_btc_usdt_timestamp_index" btree ("timestamp")
Access method: heap

About other solutions

As I said before, I already tried TimescaleDB and InfluxDB.

For TimescaleDB, I could use a single candles table and partition on market pair and timeframe, fixing the high number of tables and probably the RAM issue I’m having. But I cannot use it, because TimescaleDB uses too much storage, so I would need its compression feature, and a compressed hypertable doesn’t allow write operations. That means that to do a backfill (which I do often) I would basically need to decompress the whole database each time.

For InfluxDB, the issue is simply that it doesn’t support any numeric/decimal type, and I cannot lose precision by using double.

Feel free to suggest some other alternative I’m not aware of that would fit nicely into my use case if there is one.
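For reference, the single-table shape described above for TimescaleDB also works in plain PostgreSQL: moving pair and timeframe out of the table name and into columns collapses the 9,000 tables into one. A sketch, not tested against this workload:

```sql
CREATE TABLE candles (
    pair        text      NOT NULL,  -- e.g. 'btc_usdt'
    timeframe   text      NOT NULL,  -- e.g. 'one_minute'
    "timestamp" timestamp NOT NULL,
    open        numeric   NOT NULL,
    close       numeric   NOT NULL,
    high        numeric   NOT NULL,
    low         numeric   NOT NULL,
    volume      numeric   NOT NULL,
    PRIMARY KEY (pair, timeframe, "timestamp")
);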

postgresql – How do you clone the tables and data of a source postgres database to another postgres database on a remote server?

My question is exactly like the one asked here, except that I want the duplicated database to be created on a remote server.

For example, I connect to the source database like this:

psql postgresql://srcdbusername:srcdbpassword@srcIPaddress:5432/srcdbname

Similarly, I connect to the remote (or destination) database like this:

psql postgresql://destdbusername:destdbpassword@destIPaddress:5432/destdbname

The difference between the two is that destdbname is empty, whereas srcdbname has all the tables and data I need cloned.

In this post they mention using the following command:

CREATE DATABASE my_new_database TEMPLATE my_old_database;

However, I cannot use this because the destination database is on a remote server far away from the source database.

How do I get around this?
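A common workaround when the databases live on different servers (assuming pg_dump and psql are installed and can reach both hosts) is to stream a logical dump of the source straight into the destination; both tools accept the same connection URIs used above. A sketch that just assembles the command:

```python
import shlex

src = "postgresql://srcdbusername:srcdbpassword@srcIPaddress:5432/srcdbname"
dest = "postgresql://destdbusername:destdbpassword@destIPaddress:5432/destdbname"

# Dump schema + data from the source and replay it into the (empty) destination.
cmd = f"pg_dump {shlex.quote(src)} | psql {shlex.quote(dest)}"
print(cmd)
```

Run the printed command from a shell; pg_dump options such as --no-owner can be added if the destination uses different roles.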

Better way to query from multiple tables using Django ORM

I have the following models:

class Client(BaseModel):
    # other fields
    engineers = models.ManyToManyField(Engineer, null=True, blank=True, related_name='clients')

class OrganizationMember(MPTTModel):
    designation = models.CharField(max_length=100, blank=True)
    user = models.OneToOneField(User, on_delete=models.CASCADE)
    parent = TreeForeignKey('self', null=True, blank=True, related_name='children', on_delete=models.SET_NULL)

Now I would like to get all engineers of a client instance and check whether each engineer has an OrganizationMember instance.
If they belong to an organization, I should get the ancestors of that OrganizationMember and then their users.
If they do not have an OrganizationMember, I should just get the user of that engineer instance.

For this I have written a method in Client model as below:

class Client(BaseModel):

    def get_users_from_engineers(self):
        users_in_organization = []
        users_outside_organization = []

        for engineer in self.engineers.all():
            user = engineer.user
            if hasattr(user, "organizationmember"):
                users_in_organization += [
                    orgmember.user
                    for orgmember in user.organizationmember.get_ancestors(include_self=True)
                ]
            else:
                users_outside_organization.append(user)
        return users_in_organization, users_outside_organization

Now this code does what I want, but I am sure I could use Django filters here instead of the explicit loop and optimize it. I just do not know how.

For example, some queryset filter can be used to get all the OrganizationMembers of the engineers of that particular client instance. But I need the users of those OrganizationMembers.
Also, if I use get_ancestors on multiple engineers, more than one engineer might have the same ancestor, adding the same person more than once to the users_in_organization list. How can I rectify this if I use Django filters?
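On the duplicate-ancestor point specifically: whatever query produces the users, the repeats can be stripped afterwards while keeping first-seen order, since dict keys preserve insertion order in Python 3.7+. A small sketch:

```python
def unique_preserving_order(items):
    """Drop duplicates but keep the order in which items first appeared."""
    return list(dict.fromkeys(items))

# e.g. two engineers sharing the ancestor "alice":
users = ["alice", "bob", "alice", "carol"]
print(unique_preserving_order(users))  # ['alice', 'bob', 'carol']
```

On the ORM side, django-mptt's TreeManager also provides get_queryset_ancestors() to fetch ancestors for a whole queryset at once, which could replace the per-engineer loop; whether it fits here depends on the django-mptt version in use.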

Note: This is my first question on Code Review. I do not know whether this question should be asked here or on Stack Overflow, since I already have working code. In any case, I have asked the same question on Stack Overflow as well; please feel free to correct me regarding this.

sql server – Replicate data from multiple tables with same structure to a single destination table with additional columns?

I have a table with, say, the following structure. This table exists in multiple databases.

Name varchar(50),
Age int

I want to replicate the data from this table in multiple databases into a single table. The target table has an extra column, TabCode, which I want to populate with a value depending on where the data is coming from.

I do not have the option of adding any new column to the source tables.

Name varchar(50),
Age int,
TabCode int

Can this be done in Microsoft SQL Server?
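Outside of the built-in replication feature, one straightforward way to get this shape (database and table names below are placeholders) is a periodic load that tags each source with a constant:

```sql
INSERT INTO TargetDb.dbo.Persons (Name, Age, TabCode)
SELECT Name, Age, 1 FROM SourceDb1.dbo.Persons
UNION ALL
SELECT Name, Age, 2 FROM SourceDb2.dbo.Persons;
```

Transactional replication also allows customizing the subscriber-side insert procedures, which is another place such a constant could be injected.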

accessibility – What would be the most efficient way to search through database tables and pick columns?

I am tasked with designing a modal screen where data scientists and business analysts can select different tables from a database and then, if necessary, narrow down the selection by optionally choosing specific table columns.

There can be up to a hundred columns per table. This is why I’m considering splitting the view into two boxes, where the first one is for tables and the other one dynamically loads columns and has a search capability, instead of Version B, where everything is together.

My question is whether you know of any other possible solution (one friendly to a modal window with limited space), or whether you have any design feedback on this.

I also intend to do some usability sessions, but since I am very limited on time, I wanted to seek comments from this community as well.

(mockup image omitted)