How I create, store, and manage my passwords

What method do you use for storing and remembering your numerous passwords?
Here is what I do.

I use LastPass, a third-party password manager that is cross-platform and cross-browser, unlike the built-in browser password managers, which are limited to the browser they ship with.

For instance, I use Google Chrome on my laptop, but I can still access my passwords in any browser, on any platform.

My password manager generates each password and lets me control how long it is and which types of characters it contains.

This is how I create, store, and manage my passwords.


How can I manage permissions on newly created tables in PostgreSQL?

I want to define default permissions on a schema and have them automatically apply to newly created tables. For this purpose, I created a role and assigned a user to it. Afterward, I created the default permissions with the following statements. But the newly created tables have no default permissions. How can I manage users and roles on newly created tables?

CREATE SCHEMA ${db_schema};

-- create a read-only role
CREATE ROLE readonly;
GRANT CONNECT ON DATABASE ${db_name} TO readonly;
GRANT USAGE ON SCHEMA ${db_schema} TO readonly;

-- create a read-write role
CREATE ROLE readwrite;
GRANT CONNECT ON DATABASE ${db_name} TO readwrite;
GRANT USAGE, CREATE ON SCHEMA ${db_schema} TO readwrite;
GRANT readwrite TO ${db_user_name};

-- create a user and assign the read-write role
CREATE USER appuser WITH PASSWORD 'secret_password';
GRANT readwrite TO appuser;

-- create a user and assign the read-only role
CREATE USER reportuser WITH PASSWORD 'secret_password';
GRANT readonly TO reportuser;

-- create a table as appuser, who has the read-write role
CREATE TABLE usertestscehema.accounts (username text);
INSERT INTO accounts (username) VALUES ('test');

-- select from the table as reportuser, who has the read-only role
SELECT * FROM accounts;

Finally, I got the following error:

SQL Error (42501): ERROR: permission denied for table accounts

Afterward, I examined the permissions with the following statement, and I didn’t see any grant related to reportuser, the user that holds my readonly role.

SELECT grantee, privilege_type
FROM information_schema.role_table_grants
WHERE table_name = 'accounts';

What am I missing here?
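For comparison, the statement that normally governs privileges on future tables in PostgreSQL is ALTER DEFAULT PRIVILEGES. This is only a sketch reusing the role and schema names from the post, not necessarily the exact statements that were run:

```sql
-- Default privileges affect only tables created *after* this statement runs,
-- and only tables created by the role named in FOR ROLE.
ALTER DEFAULT PRIVILEGES FOR ROLE appuser IN SCHEMA usertestscehema
    GRANT SELECT ON TABLES TO readonly;

-- Tables that already exist still need an explicit grant:
GRANT SELECT ON ALL TABLES IN SCHEMA usertestscehema TO readonly;
```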

rest – How to manage primary keys in CQRS

I’m building a backend following the CQRS pattern, and I don’t know how to properly manage the primary keys (surrogate keys) between the command and query databases.

For example: I have a model stored in two different tables, one in each database. When I want to update the model in its different tables, each row doesn’t have the same ID (this is not my real schema):

players (cmd db)
| id | name | wins |   email    |
|  1 | Mark |  200 | mk@kogames |

players (query db)
| id | name | wins | role_power | role_name |   email    |
| 12 | Mark |  200 |       2120 | melee     | mk@kogames |

As you can see, Mark has two different IDs: 1 and 12. What are the recommended strategies for serving CRUD operations and referencing both records properly? I was wondering whether storing both keys in some kind of mapping like the following would be well suited:

|   email    | cmd_id | query_id |
| mk@kogames |      1 |       12 |

But it doesn’t seem to be a well-designed solution.
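One commonly recommended strategy, sketched below with hypothetical names and in-memory stand-ins for the two databases, is to generate a single surrogate key (e.g. a UUID) on the command side and reuse it verbatim as the primary key on the query side, so no ID translation is ever needed:

```typescript
import { randomUUID } from "node:crypto";

// Hypothetical model: the command side assigns the ID once,
// and the projection reuses it verbatim in the query store.
interface PlayerCommandRow {
  id: string; // UUID assigned at creation time
  name: string;
  wins: number;
  email: string;
}

interface PlayerQueryRow extends PlayerCommandRow {
  rolePower: number;
  roleName: string;
}

const commandDb = new Map<string, PlayerCommandRow>();
const queryDb = new Map<string, PlayerQueryRow>();

function createPlayer(name: string, email: string): string {
  const id = randomUUID(); // one key, shared by both sides
  commandDb.set(id, { id, name, wins: 0, email });
  return id;
}

// Projection: build the query-side row from the command-side row,
// keeping the same primary key.
function project(id: string, rolePower: number, roleName: string): void {
  const row = commandDb.get(id);
  if (!row) throw new Error(`unknown player ${id}`);
  queryDb.set(id, { ...row, rolePower, roleName });
}
```

With a shared key, updates address both stores with the same ID, and the email-to-IDs mapping table becomes unnecessary.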

How do you manage and organize business data?

I am looking for ways to organize all of my data in one place. Also, which tool do you prefer for making business presentations?

design – How to run and manage multiple instances of an application (different start params) on multiple servers?

Our main (C#) application:

  • takes in parameters and starts working (batch processing, takes anywhere from minutes to hours)
  • up to x instances of said application per server
  • instances are started by users from our front end by making a request to a “master” on a server (really bad load balancing is just one of the problems).

The application itself is really simple. The instances themselves are “stateless”, meaning it doesn’t really matter if they fail, as long as they are restarted with the same parameters. The application is also pretty lightweight.


  • deployment of the main application is done manually, by literally uploading it to servers and changing the application path the “master” uses to launch the instances
  • if an instance errors for whatever reason, health checks fail and the database entry for the instance is deleted -> users need to restart the application manually, since our backend isn’t capable of restarting the instance on another server.

A dream would be having a single endpoint onto which we pass the start parameters, and a framework (or whatever) takes care of the actual provisioning and error handling, maybe restarting an instance automatically on another server if one decides to spontaneously fail.
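Since the instances are stateless and only need to be restarted with the same parameters, the error-handling half of that dream boils down to a supervisor loop. A minimal sketch with hypothetical names follows; in practice an orchestrator such as Kubernetes or Nomad would do this for you:

```typescript
// Minimal sketch of a supervisor that re-runs a stateless job with the
// same parameters until it succeeds or a retry budget is exhausted.
type Job<T> = (params: T) => Promise<void>;

async function supervise<T>(
  job: Job<T>,
  params: T,
  maxAttempts = 3,
): Promise<number> {
  let attempt = 0;
  for (;;) {
    attempt++;
    try {
      await job(params); // run the batch with its original parameters
      return attempt;    // number of attempts it took
    } catch (err) {
      if (attempt >= maxAttempts) throw err;
      // In a real system this is where the job would be rescheduled on
      // another server instead of simply retried locally.
    }
  }
}
```

The key property this relies on is exactly what the post states: a failed instance can be safely re-run with the same parameters.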

Docker came to mind, but we thought it would only really help us with deployment.

A push in the right direction would be wonderful!

key management – How can I store and manage my GPG key pair securely?

I’ve taken measures and given thought to how to securely store and manage my key pair. In the process, a few questions arose that I’m not yet able to answer. My key pair will be used to encrypt passwords and documents from banks, insurers, invoices, photos, and the like. None of this data is publicly available. It is stored in a cloud with password-restricted access; I’m currently evaluating which one fits best.

This is how I set up my key pair:

# Generated a key pair in the past, following general tutorials
gpg> list
sec  rsa2048/9AB628FC04C23871
     created: 2019-02-29  expires: 2022-02-29  usage: SC
     trust: ultimate      validity: ultimate
ssb  rsa2048/17832C40CF826BA9
     created: 2019-02-29  expires: 2022-02-29  usage: E
[ultimate] (1). Thomas Kelly <>

> gpg --list-keys --with-fingerprint
pub   rsa2048 2019-02-29 [SC] [expires: 2022-02-29]
      B69A 8371 FC28 402C C204  82CF 7138 A96B B8F4 C87A
uid           [ultimate] Thomas Kelly <>
sub   rsa2048 2019-02-29 [E] [expires: 2022-02-29]

> fdisk /dev/sdb # n, 2048, +2G, w
> cryptsetup open --type plain -d /dev/urandom /dev/sdb1 data
> dd if=/dev/zero of=/dev/mapper/data status=progress bs=1M
> cryptsetup close data
> cryptsetup luksFormat /dev/sdb1 # pw ...
> sudo cryptsetup open /dev/sdb1 data
> mkfs.ext4 /dev/mapper/data

Then I went on and exported my keys to this device I’d created. After getting used to the fact that exported private keys always differ slightly from one another, and that you can’t export a subkey’s public part on its own, the following questions remained:

  1. Do both of the following commands return the ssb key (17832C40CF826BA9)?
gpg --export-secret-keys 17832C40CF826BA9
gpg --export-secret-subkeys 9AB628FC04C23871
  2. Is it fine to remove the key 9AB628FC04C23871 from my system after backing it up on the drive created above?

  3. Should I save a revocation certificate with it?

  4. This key pair expired once and I changed the expiry date. I can’t remember exactly, but I’ve found two additional certificates lying around that seem to be the old, expired certificates. I’ve read that changing the expiry date creates new certificates. Can you confirm this?

  5. I want to have two certificate stores like this in different locations. I’d renew the key on a yearly basis. Should I use paperkey or the same digital method as above?

sharepoint online – SPO CSOM Powershell manage folders with # and % in the name

I’m writing a CSOM PS script to create folders in a doclib, and set permissions on them.

Everything works fine except when the folder name contains a %.

I am able to create the folder using ResourcePath, FolderCollectionAddParameters and AddUsingPath following this article

But I can’t access the folder using the GetFolderByServerRelativeUrl method.

The code is the following :

$folderurl = "/sites/MySite/MydocLib/MyFolder %1"
$rpFolder = [Microsoft.SharePoint.Client.ResourcePath]::FromDecodedUrl($folderurl)
$CurrentFolder = $ctx.web.GetFolderByServerRelativeUrl($rpFolder)

And I get a “file not found” error.

Any help will be appreciated

AdsBridge – manage all campaigns in one place

Hello everyone 👋

Today we want to introduce our tracker and get acquainted with you!

AdsBridge is an advanced tracker platform for analytics and traffic distribution.

It is an essential tool for affiliate marketers, media agencies, ad networks, and in general anyone involved in internet marketing.

AdsBridge is a cloud-based solution with data centers all over the world.

The main advantages of the tracker:

  • Precise targeting
  • Traffic splitting according to 18 distribution rules
  • Split testing
  • Auto-optimization of campaigns
  • Built-in landing page editor (visual and HTML)
  • Link-cloaking functionality
  • Multi-user mode
  • Manual bot filter
  • Redirect domains
  • Affordable and flexible pricing

You can watch a detailed setup guide in this playlist.

There is a 14-day free trial for testing all of the functions we offer.

Have a nice day!

AdsBridge team


How to manage your child’s Google Family Link settings using a web browser

You’re a parent. You use Google Family Link to help provide parental controls on your child’s Android devices.

(As Google admits, no parental-control software is perfect. Therefore, you also keep all your child’s devices in your own bedroom every night, until the child is old enough to need overnight smartphone access.)

It’s true that you can use the “Family Link for Parents” app to change your child’s Family Link rules. But perhaps you’d rather not use any app.

How can you change your child’s Family Link rules using a web browser?

how to manage global variables, cookies, localStorage, sessionStorage, etc. across a large organization [javascript, mostly]

I think specific details on my current situation would be best. This is mostly focused on JavaScript. The one grey area is httpOnly cookies: we should probably include httpOnly cookies at the end of the day, but not at first.

Our company uses sessionStorage all over the place. These are basically all globals. I guess each git repo should have its own namespace, but really we should just have one monorepo in the first place.

We had a bug that called for going deep down a rabbit hole: for some reason, sessionStorage.apitoken was undefined/null. Side note: at a previous, very large company there was an eerily similar issue, except instead of sessionStorage it was a cookie. In personal projects, and even in new, isolated projects, you can simply read and write sessionStorage and document.cookie directly without repercussions. Sure, the document.cookie API is so rough that any sane developer would use a wrapper library (like js-cookie), but they might also write their own or copy-paste something from somewhere, so that they aren’t directly reading the raw document.cookie or directly assigning to it with document.cookie = 'foo...'.

These storage bits are “super global” in the sense that they are global across multiple page loads. Normal globals are difficult, but not quite as difficult as data stored client-side, which tends to get written to and read from more often.

At least having a wrapper would be a good start. This was an interesting read: How do I deal with global variables in existing legacy code (or, what’s better, global hell or pattern hell)?
It seems like a registry would be a good idea.

One idea I just had was a tiny node module/git repo that holds TypeScript defs for these globals. All the sessionStorage keys anyone could use would be enumerated (in some way) in the TypeScript definitions. If you want to introduce a new global, you need to make a tiny PR to this TypeScript def.

This provides a central location for managing globals. The tiny typescript def repo should probably also include wrapper libraries for these.

I think before you can read/write any globals, you’ll need to create an “instance” of the registry by passing in your name, so the registry knows who you are; that makes it easier to debug and to track down who should change their code. It could be useful to actually create a new instance of the registry to keep things 100% isolated, but the whole point here is to help manage globals. Maybe it should include some utilities/methods that allow you to read/write from your own “private” namespace, but that feels like a bit of a “v2” feature at this point (or maybe something to add just before stamping it as “v1”, if it were an open-source library).
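A minimal sketch of that registry idea, with entirely hypothetical key names and an in-memory Map standing in for sessionStorage:

```typescript
// Hypothetical central list of allowed storage keys; introducing a new
// key means making a PR against this union type.
type StorageKey = "apitoken" | "preferredStore";

interface Registry {
  get(key: StorageKey): string | null;
  set(key: StorageKey, value: string): void;
}

// The caller identifies itself by name so reads/writes can be traced
// back to the code that performed them.
function createRegistry(
  callerName: string,
  backing: Map<string, string>,
): Registry {
  return {
    get(key) {
      console.debug(`[registry] ${callerName} read ${key}`);
      return backing.get(key) ?? null;
    },
    set(key, value) {
      console.debug(`[registry] ${callerName} wrote ${key}`);
      backing.set(key, value);
    },
  };
}
```

In the browser, the backing store would be sessionStorage itself; the Map here just keeps the sketch self-contained.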

I think js-cookie is the best library for managing cookies. But we also need a wrapper for sessionStorage + localStorage. It feels like the wrapper’s API should be asynchronous, but it’s hard to say; some projects may have no issues with storage limits for sessionStorage/localStorage.

I think you also need a mechanism to “listen” for new values, based on my experience at a previous company (for example, the “preferred store” location for a retail company).
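That listening mechanism could be a small publish/subscribe layer inside the wrapper. A sketch with hypothetical names follows; in a browser, the window “storage” event could additionally feed cross-tab changes into the same listeners:

```typescript
type Listener = (value: string) => void;

// Wrapper that notifies subscribers whenever a key is written.
class ObservableStore {
  private data = new Map<string, string>();
  private listeners = new Map<string, Listener[]>();

  subscribe(key: string, fn: Listener): void {
    const fns = this.listeners.get(key) ?? [];
    fns.push(fn);
    this.listeners.set(key, fns);
  }

  set(key: string, value: string): void {
    this.data.set(key, value);
    for (const fn of this.listeners.get(key) ?? []) fn(value);
  }

  get(key: string): string | null {
    return this.data.get(key) ?? null;
  }
}
```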

I think the “global manager” library should probably be the only code allowed to read/write client-side storage (cookies/sessionStorage/localStorage). Crucially, review of this code will have to consider how previously stored client-side data is migrated when the keys or storage format change. I’m not sure exactly what the exported functions of this library would be, but the smaller the surface, the better, I think. It might be useful to consider how the code would handle going over the cookie storage limit; reviewers would also need to think about client-side storage limits generally. This might be where httpOnly cookies come into consideration, since I’m not sure exactly how they count toward the cookie storage limit. In some way, this “global manager” repo needs checks for possibly exceeding the storage limit, and every key/value pair (especially cookies) would need to include some definition of how many bytes will be stored under that key.
Options for this check:

  • Easiest would be a normal runtime check; it would also be the most accurate, but it has the worst performance. The runtime checks could be made to run only if NODE_ENV !== 'production', i.e. only in development or only in testing.
  • A compile-time check would have the best performance, but I’m not sure how it would be implemented; it might require some special tooling. Theoretically, a compile-time check could be as accurate as the runtime check when it counts: if a storage-limit error occurs, a flag would somehow be set (in the URL, I suppose), and that flag would cause the runtime checking code to be loaded for the next X days until there are no more storage errors. In addition, when an error occurs, the current state of the storage mechanism could be cached: if writing to storage fails, the whole state is cached, so we know the state before and after the storage exception.
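The runtime variant of that check might look like the following sketch; the per-key byte budgets are hypothetical, and a Map stands in for the real storage backend:

```typescript
// Per-key byte budgets, declared alongside the key definitions.
const byteBudgets: Record<string, number> = {
  apitoken: 512,
  preferredStore: 128,
};

function checkedWrite(
  store: Map<string, string>,
  key: string,
  value: string,
): void {
  // Only pay for the check outside production builds.
  if (process.env.NODE_ENV !== "production") {
    const bytes = new TextEncoder().encode(value).length;
    const budget = byteBudgets[key];
    if (budget !== undefined && bytes > budget) {
      throw new Error(`${key}: ${bytes} bytes exceeds budget of ${budget}`);
    }
  }
  store.set(key, value);
}
```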

Some cookie defs could include code to recreate their value if it is deleted, along with some indication that the value is recoverable, and possibly other values needed to recover it (although that seems a bit overboard; at that point it’s probably time to fix the problem upstream).

  • On second thought, if a cookie/sessionStorage/localStorage value is recoverable, it might be better to just consider it a cached value. In this sense, the “global storage manager” could have code that lets the cache be cleared when the storage limit is being hit, or that simply switches to a different storage mechanism, i.e. moves a key/value from being stored as a cookie to being stored in sessionStorage (which has a higher storage limit).

Pardon the fact that this is not well written at this point. There’s just a lot of complexity and ideas and only so much time in the day.