OOP applications and Databases

I am having such a hard time with OOP and databases, both in C++ with SQL and even more so with Node.js and MongoDB. For this question I will use Node/Mongo.

Let’s say we have a user object. Over time, user objects can get pretty big.

class User {
   constructor(){
      this.name = '';
      this.id = '';
      this.country = '';
      this.email = '';
      this.password = '';
      this.level = 1;
      // ... and on
   }

   isAdmin(){
      return this.level === 0;
   }
}

In my app the User object can be constructed to check for authorization among other things. But 90% of its purpose is to just hold data. So when a user is entered into the database, all properties in the class are entered as well. That is where the simplicity ends. Once I need to retrieve a user, I end up with a plain old object that looks exactly like a User object but without any functions. So what do I do? I can

  1. Pass the retrieved document into a User constructor that accepts all 30-ish properties via an object
  2. Call 30 setters, such as user.setName(doc.name)
  3. Write some intermediary function in the main app, like App.setUserFromDatabase(user, doc)
  4. Just use the object directly, i.e. validatePassword(password, doc.password)
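
For what it’s worth, option 1 doesn’t have to mean a 30-parameter signature. A single hydration step can copy the plain document onto a class instance (a sketch only; `fromDoc` is a hypothetical helper, and only a few fields are shown):

```javascript
class User {
   constructor(){
      this.name = '';
      this.id = '';
      this.level = 1;
      // ... and on
   }

   isAdmin(){
      return this.level === 0;
   }

   // Hypothetical helper: hydrate a User from a plain MongoDB document
   static fromDoc(doc){
      return Object.assign(new User(), doc);
   }
}

const doc = { name: 'alice', id: '42', level: 0 }; // as returned by the driver
const user = User.fromDoc(doc);
console.log(user.isAdmin()); // true
```

Note that Object.assign copies any extra fields in the document too (like Mongo’s _id), which may or may not be what you want.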

Converting my plain MongoDB object to a User class object is such a waste of time, yet I feel so dirty about using the MongoDB object directly since it has no definition like the class object. But using the class object seems so pointless just to call one or two functions.

So I am just stuck here trying to understand the right way to use database objects and actual application objects when they are basically the same, but only one (the application object) has a very clearly defined structure.

encryption – Are there security vulnerabilities in apps/webpages/software while running, when data is loaded from encrypted databases into memory for use?

Looking for an “in” into learning a bit more about the security of software, apps and web pages from the standpoint of security while in operation. I don’t really know enough of the terminology to search properly for previous posts or other research, so I’m hoping to be pointed in the right direction here :]

What I mean by “security while in operation” is this. When my app, software, or webpage is running, i.e. open, being used and interacted with, what are the security risks, if any? My context is this: I’m going to be working with encrypted databases to store my users’ data, Hive databases, and SQL with other encryption methods as well. So it seems that for storage of data I’ve got it licked; it seems pretty simple in that direction. BUT what about when the software is running? Say I architect the app to unload a specific pack of data from an encrypted Hive box, and it gets loaded into a standard List so it can be worked with. What are the security risks here, if any? Would not considering this be a major security vulnerability?

Also, in your answer, can you please address the above as it pertains to each development platform, i.e. web, mobile app, and standard computer software? That is, if there is any difference, of course.

Further context:

I’m not designing anything top secret, and encryption may even be overkill, but I sincerely respect my users’ privacy and data. Therefore I want to make sure I cover all bases and design with this in mind.

Thanks for any input :]

performance – Are unstructured databases faster using arrays or many binary variables?

Is it faster (or otherwise better) to structure unstructured data like this:

{
  "label": "many_variables_example",
  "a": 0,
  "b": 1,
  "c": 1,
  "d": 1,
  "e": 0,
  "f": 1,
  "g": 1,
  "h": 0
}

or this:

{
  "label": "array_example",
  "namedata": ["a", "b", "c", "d", "e", "f", "g", "h"],
  "binarydata": [0, 1, 1, 1, 0, 1, 1, 0]
}

Hybrid REST and Graph Databases architecture with microservices

I’ve come across Graph Databases recently and I am wondering how much sense it makes to use one in my scenario; I’d like some more expert advice on it.
I am building microservices for an application and I am currently using Postgres as my data source.
My simplest scenario is as follows:

  • Users can purchase Items from a Venue.
  • When the total amount spent on Items equals X (e.g. $5), the User receives a Bonus Point
  • A Venue can give away a Reward in exchange for N Bonus Points
  • A Venue has to keep track of the number of purchases and the number of Rewards given out
  • The User has to track how many Bonus Points they have earned from a specific Venue

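As a concrete sketch of the second bullet above (names and document shapes are hypothetical, just to illustrate how much is derived on the fly from raw transactions):

```javascript
// Hypothetical: derive a User's Bonus Points at a Venue from raw transactions
function bonusPoints(transactions, userId, venueId, threshold = 5) {
  const spent = transactions
    .filter((t) => t.userId === userId && t.venueId === venueId)
    .reduce((sum, t) => sum + t.amount, 0);
  return Math.floor(spent / threshold); // one Bonus Point per $5 spent
}

const txs = [
  { userId: 'u1', venueId: 'v1', amount: 3 },
  { userId: 'u1', venueId: 'v1', amount: 4 },
  { userId: 'u1', venueId: 'v2', amount: 9 },
];
console.log(bonusPoints(txs, 'u1', 'v1')); // 1 ($7 spent → one point)
```
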
At the moment, as mentioned earlier, I am using Postgres. Each of the components here is a microservice, so Users have a microservice and a database, Venues have another, and Transactions have another one. Here is a diagram to visualize it better.

[diagram of the current microservice setup]

I hope this makes it clear that I would have to query and assemble a lot of the data on the fly, depending on the request. For instance, to get how many Bonus Points a User has for a specific Venue, I would need to create a table in my User Service storing how many more purchases a User has to make at a particular Venue to get a Bonus Point, how many Bonus Points a User has to spend at a particular Venue, and how many Rewards a User has redeemed from a Venue. So, yeah, it gets ugly very quickly, especially considering that I need the reverse as well, from the Venue’s point of view, for metrics.

Now it looks to me like a Graph database would be a great fit for this implementation, as every node has a direct relationship to the next node and therefore would be more efficient and less complex to query the data.

So I would have my REST endpoints which use Postgres to retrieve simple user data (such as name, email etc…) or venues data (such as venue name, co-ordinates etc…) and then use the user/venue id to query the graph database for the rest of the information. I could even remove the transaction microservice at this point. Something like this

[diagram of the proposed REST + graph database setup]

What would be the actual Pros and Cons of this type of architecture? Does it actually make sense?

Thank you for your kind responses

backup and restore app for databases

I’m looking for an open source app to back up SQL Server / MariaDB / Postgres.
Do you know of an app that can do that?
I want to schedule backups with the app.

Thank you guys.

Is there a way to turn Google Sheets files into static data values, or export them to cloud databases?

When making a master sheet that sums values across hundreds of Google Sheets with IMPORTRANGE, or queries data across hundreds of sheet IDs, it takes a long time to load…

Is there a way to have a buffer-like database in Google that continuously updates the master sheet, so that whenever I open it the data has already been updated instead of waiting for the functions to execute? In other words, as if the master sheet file were continuously open on my desktop, so that whenever I need to look at it, it is already open and the IMPORTRANGE function has finished its job. That would make dynamic data calculation and synchronization much faster, especially when looking up multiple filtered values.
Perhaps working with a cloud database, and keeping the sheets that have IMPORTRANGE data and calculations in the database, would solve the issue.
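
One way to approximate this buffer behavior without a separate database is a time-driven Google Apps Script that snapshots the formula results into a static sheet, so the sheet I open never waits on IMPORTRANGE. This is only a sketch: the sheet names ("Live", "Master") and the 30-minute schedule are assumptions, and I have not confirmed it scales to hundreds of source sheets.

```javascript
// Hypothetical Apps Script: copy computed values from a "Live" sheet
// (which holds the IMPORTRANGE/QUERY formulas) into a static "Master" sheet.
function snapshotMaster() {
  const ss = SpreadsheetApp.getActive();
  const live = ss.getSheetByName('Live');     // assumed: sheet with the formulas
  const master = ss.getSheetByName('Master'); // assumed: static copy that is actually opened
  const values = live.getDataRange().getValues();
  master.clearContents();
  master.getRange(1, 1, values.length, values[0].length).setValues(values);
}

// Run snapshotMaster on a timer, e.g. every 30 minutes
function installTrigger() {
  ScriptApp.newTrigger('snapshotMaster').timeBased().everyMinutes(30).create();
}
```

This only runs inside Google Apps Script, where SpreadsheetApp and ScriptApp are provided globals.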

NOTE:

I found sites that link a Google Sheets trigger to another application, like https://integromat.com/ or https://zapier.com, but they don’t really solve my issue. I found a similar question where something called SQL works as a cloud database for Google, but I’m very new to it so far.

migration – What extra work is required to restore a SQL Server instance and its databases included in a Windows Server snapshot backup?

A backup of the entire server will be inclusive of the SQL Server instance and the database files as a snapshot in time at that moment. This assumes that the database files live on the server you’re taking a backup of, as it is possible to set up the databases such that their files live on remote shares.

You can verify the location of your databases’ files in SQL Server Management Studio by right-clicking a specific database, clicking Properties, navigating to the Files page, and looking at the Path and File Name properties.

For example:
[screenshot: database file properties]

In the above, my Test database’s files are stored in the default SQL instance folder. Please ensure the drives that your database files live on are part of the server snapshot being taken, as even mapped drives can have drive letters yet point to remote shares.

There shouldn’t be any additional work on the destination server, but you may want to make sure the appropriate SQL services (in the Windows Services) are started and running just the same as on the source server.

For example:
[screenshot: SQL services in the Windows Services list]

The above is my test instance, which probably has a lot more components of SQL Server installed than yours will, so don’t take this as a definitive list; rather, it’s an example. Compare your source server against the destination server.

The only other thing I can think of is to ensure all the same Windows permissions are set up for the correct accounts on your destination server as on its respective source server. For example, if you’re using the SQL Agent, or a feature like Replication, those might leverage Windows accounts that currently have special permissions set up on your source server, which need to be replicated on your destination server. (This is a little outside the scope of DBA.StackExchange, though, and might be more of a ServerFault question if you need help with it.)

replication – Is there a solution to replicate PostgreSQL 9.2 databases, including DDL operations?

Is there a solution to replicate PostgreSQL 9.2 databases, including DDL operations?

I found the solutions Slony and Bucardo, but they don’t support DDL.

The native Postgres streaming replication (hot standby) is not useful to me, because I need to replicate specific databases from other PostgreSQL database servers.

Regards

mac os x – MySQL with symlinked databases on an external drive on MacOS cannot read ibd files

I have had this setup working for years, but all of a sudden today, MySQL crashed, and now I’m having trouble with every database on my external drive on my Mac.

It is currently running MySQL 8.0.21, and that update was done 3 months ago. Prior to that is the period I mean by “working for years,” going back to version 5.6/5.7. I haven’t updated the OS recently either; it has been running Catalina for a while.

Initially after the crash, the databases could not be recovered automatically by InnoDB. When it started up, it had lines for all of the tables in every one of those DBs that mentioned they could not be recovered because they were “in an unprotected location.”

Eventually, I figured out how to change the permissions for the users on the external drive to match those expected by MySQL, added this extra location to the innodb_directories config setting, and now it starts up OK.

However, now, when trying to select data, it tells me Tablespace is missing for table ... and there is a new error in the error log:

Operating system error number 1 in a file operation.
Error number 1 means 'Operation not permitted'
Cannot open datafile for read-only: ... OS error: 101

Not sure why it’s giving me both error 1 and 101, but according to perror, 1 is indeed operation not permitted, and 101 is STREAM ioctl timeout.

As for “not permitted”: the permissions were actually more lenient on these folders, drwxr-xr-x vs. drwxr-x--- (so, world readable and executable), and the ibd files within them are similarly more lenient: -rw-r--r-- vs. -rw-r-----.

I don’t even know where to start with error 101, except to try to clear error 1 first, and see if it goes away. The drive is absolutely accessible, even by mysql, which could read it to start up, as best I can tell (since the errors regarding them being unprotected went away).

I decided to chmod o-r and chmod o-x to make those folders and tables match exactly the permissions on the ones that worked. I restarted mysql to see if that made a difference in its ability to read those tables in the databases on the external drive, but it gives the same errors.

I moved a database folder that was working from the main drive to the external drive. It continued to work, until I restarted MySQL. Then it started up as normal, but when selecting data again it gave the same error as before. I moved a database from where it didn’t work on the external drive, to the main MySQL folder, and restarted MySQL, and it works fine. Move it back, and we get the same errors.

MySQL runs under the same user account that I am logged in as, so I don’t see why it would have an issue with any kind of permissions related to the user account.

Finally, I tried adding mysqld and mysqld_safe to Full Disk Access in MacOS and restarting MySQL, in case MacOS was blocking it at the program level, again with no luck.

I am out of ideas. Anyone have anything else I can try?

amazon web services – RDS Postgres could not be upgraded because one or more databases do not allow connections

I’m using AWS RDS to run a Postgres server, and I’m trying to do a major upgrade from 9.6.11 to 10.15. I’m using aws-cli to request the upgrade, like this:

aws rds modify-db-instance \
  --db-instance-identifier my-database \
  --engine-version 10.15 \
  --db-parameter-group-name my-pg10 \
  --option-group-name my-pg10 \
  --allow-major-version-upgrade \
  --apply-immediately

The database changes from “Available” into “Upgrading” state, then 2-3 minutes later it goes back to “Available” and the version is unchanged. A new file called error/pg_upgrade_precheck.log.1613891689533 appears in the logs tab, and the content looks like this:

------------------------------------------------------------------
Upgrade could not be run on Sun Feb 21 07:14:49 2021
------------------------------------------------------------------
The instance could not be upgraded from 9.6.11.R1 to 10.15.R1 because of following reasons.
Please take appropriate action on databases that have usages incompatible with requested
major engine version upgrade and try again.
- The instance could not be upgraded because one or more databases do not allow connections. 
Please ensure all non-template0 databases allow connections and try again.
----------------------- END OF LOG ----------------------

I tried doing a major upgrade to a different version (10.7) and that failed the same way. I tried doing a minor upgrade to 9.6.20 and that worked just fine.

I’m stumped. How do I make progress upgrading this database?