Configuration import constantly showing the configurations to import

Recently, after performing a configuration import via drush (drush cim sync), the configuration files do not seem to import correctly: the same files are listed again after a supposedly successful import.
(And running drush cex sync lists configurations to export, even though nothing has changed in the backend.)

The only thing I have done recently is to import a copy of a database from one of our live test servers, to get the content.

Is there a UUID I need to change or something?
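(For future readers: a common cause of this symptom is a site-UUID mismatch after importing another environment's database. A hedged sketch of the usual fix, assuming the sync directory is config/sync:)

```shell
# Read the site UUID stored in the exported configuration
grep uuid config/sync/system.site.yml

# Make the freshly imported database use that UUID
drush config:set system.site uuid "<uuid-from-sync>"

# Re-run the import; it should now report no pending changes
drush cim sync
```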

Bitcoin core – Does restoring wallet.dat require the same paths and configurations on the servers?

I have a main Bitcoin server (Server 1) that works well. Now I am testing backing up and restoring its wallet on a new server (Server 2).
Imagine that Server 1 has this configuration:

blocksdir=/btc/blocks
datadir=/btc/data # the wallet.dat file is here, in the wallets directory

Now I want to move the backup file (wallet.dat) to the new server whose default paths are like this:

~/.bitcoin/wallet.dat
~/.bitcoin/blocks
  1. Do I have to use the same data and blocks paths on Server 2 as on Server 1, or can I move the backup file to the default wallet.dat path on Server 2?
  2. Should I also copy the downloaded blockchain from Server 1 over to Server 2?
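(Not an authoritative answer, but for context: wallet.dat does not record the datadir/blocksdir paths it was created under, so a restore on Server 2's default layout might look like the following sketch; paths are assumed from the question.)

```shell
# Stop bitcoind on Server 2 before touching wallet files
bitcoin-cli stop

# Place the backup in Server 2's default wallet location
cp /backup/wallet.dat ~/.bitcoin/wallet.dat

# Restart; the node uses its own datadir and blocks paths
bitcoind -daemon
```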

bitcoind – Bitcoin Core JSON-RPC only accepts requests with 0.0.0.0 in the configuration

I have launched a Bitcoin Core server and am trying to connect to it via JSON-RPC.
Here are my configuration settings:

server=1  
rpcuser=admin  
rpcpassword=password
rpcport=1234
rpcallowip=94.183.32.151

But every cURL connection to this server via IP 94.183.32.151 fails with the same error:

cURL error 7: Failed to connect to 94.183.32.151 port 1234: Connection refused

I also tried adding this option, but it did not solve the problem:
rpcbind=94.183.32.151

Only when I set 0.0.0.0 as the RPC bind IP does Bitcoin Core return a real answer. I have checked many pages, but I have found no other suitable way to allow specific IP addresses to reach bitcoind. Can you help me, please?

Note: the IP, port number, user name and password have been changed from their actual values.
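(A note on the two options, as a sketch rather than a verified fix; addresses below are placeholders: rpcbind sets the local interface bitcoind listens on and must be an address actually assigned to the machine itself, e.g. the private address if the server sits behind NAT, while rpcallowip names the remote client addresses that are allowed to connect.)

```conf
server=1
rpcport=1234
rpcbind=<server-local-interface-ip>   # must be an address of the machine itself
rpcallowip=<client-ip>                # the address the cURL requests come from
```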

Why did PostgreSQL, MongoDB, and possibly other database software allow such dangerous configurations?

A few years ago now, still well into the 2000s, I was very naive, especially when it came to IT security.

To make a long and painful story short (one I don't even remember too well), the bottom line is that I installed a FreeBSD server at home with PostgreSQL. Being the naive fool that I was, I had no idea that there was even such a thing as an "SSH tunnel". So I assumed that the only way to connect to my database was to allow remote connections directly to it.

I did not have a LAN with internal IP addresses; instead, I had several "real" (external) IP addresses, one for my normal PC and one for the server. That made the problem even more serious.

When setting up the file called "pg_hba.conf", which controls how one can connect to the PostgreSQL database (separately from user accounts or "roles"), I did not read or understand the manual and the comments in the file correctly. I interpreted "trust" mode to mean "trust, assuming they give the correct username and password". In reality, it meant "trust this username with ANY PASSWORD, OR NO PASSWORD AT ALL".
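(For readers who have not seen the file, the distinction looks roughly like this; a sketch with placeholder addresses:)

```conf
# pg_hba.conf
# TYPE  DATABASE  USER  ADDRESS      METHOD
host    all       all   0.0.0.0/0    trust   # accepts the connection with no password check at all
host    all       all   0.0.0.0/0    md5     # actually requires the role's password
```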

Since I also allowed "all IP addresses" (because, even though they were "real" IP addresses, they were not static and sometimes changed), this meant that for six months my "secure" server (as I imagined it in my stupid head), holding very private and sensitive data, sat there so that the whole world could connect freely, as long as they could guess my very easily guessable PostgreSQL user name, from any IP address, with or without a password.

It was only after months and months (again, six months seems about right) that I reviewed this file after getting cold feet. It was basically just a "feeling", and it could easily have gone on like that for years and years. To this day, I don't know whether anyone logged in, stole all of the data, and is now sitting on it for future blackmail opportunities.

Yes, I was a complete idiot for not reading / understanding. I get it. I even agree. But still, why would such a configuration option even exist? Who would ever want to "trust" someone merely for providing the user name / role, ignoring the password even when one has been set? It makes no sense to me. In my defense, it never occurred to me that anyone would design a system in such a stupid way. Yes, I blame the database software designers to some extent, even though it was not the default configuration and I actively changed it; but why make it possible at all? The manual didn't exactly carry a big warning about it, and no message was printed when restarting the database to warn me of it.

To this day, it still baffles me that such a configuration was (and probably still is) possible. You don't set a password only for it to be bypassed like this. I'm still almost incredulous about it.

Also, even though I have never used it myself, in recent years I have heard horror stories about MongoDB databases that let the whole world connect freely by default! That goes even further than PostgreSQL, and it makes my skin crawl just thinking about it. I really feel for those poor fools who trust that database and configure it believing, as I did with PG, that it is secure and sane.

Why do they do this? If it is to give database administrators some "job security", well, that's a really cruel way to do it. Even though it was largely / mostly my fault, I continue to hold this against the PostgreSQL developers and will never mentally "drop" it. In the case of MongoDB, it looks like they really did it on purpose, since it was the default. I don't understand how they can endanger their users like that, especially without the user even changing the configuration.

Sharing the nginx cache between two server configurations

I am trying to share a single Nginx cache zone between two server blocks on the same machine.

Is this safe and supported by Nginx?

The configuration works, but I'm not sure about cache consistency.

Nothing about this is written in the documentation.

proxy_cache_path /home/mycache levels=1:2 keys_zone=mycache:90m max_size=200G inactive=15d;

server {
   server_name server1;
   ...
   location / {
          proxy_temp_path /home/temp;
          proxy_cache mycache;
          proxy_cache_key $uri; # only URI
          expires 50d;
          proxy_pass        http://blabla;
   }
}

server {
   server_name server2;
   ...
   location / {
          proxy_temp_path /home/temp;
          proxy_cache mycache;
          proxy_cache_key $uri; # only URI
          expires 50d;
          proxy_pass        http://blabla;
   }
}
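(One observation about the snippet above, offered as a sketch rather than a verdict: because the key is $uri alone, both vhosts read and write the same cache entries for identical URIs. If the two sites can serve different content for the same path, including the scheme and host in the key, closer to Nginx's default of $scheme$proxy_host$request_uri, keeps their entries separate while still sharing the zone:)

```nginx
proxy_cache_key $scheme$host$uri;   # host-qualified key avoids cross-vhost collisions
```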

architecture – How to manage different system configurations for multiple clients?

Background:
I work in retail. We have more than 15 different applications in our system (desktop, web and background microservices), and each application has its own properties. That is, each application is written so that its features are driven by configuration: a feature can be enabled or disabled depending on what the customer wants. For example, a feature such as "Create Gift Lists at the point of sale" can be enabled or disabled through its configuration. Similarly, some properties are in the database and differ for each client.
Most properties live in a .properties file, similar to Spring properties.
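(One direction, only a sketch and not a recommendation of any particular tool; all names and keys below are hypothetical: keep one set of global defaults and a small per-client override file, and merge them when onboarding or deploying a client.)

```python
# Sketch of layered configuration: global defaults merged with per-client overrides.

def merge_config(defaults: dict, overrides: dict) -> dict:
    """Client-specific overrides win over global defaults."""
    merged = dict(defaults)
    merged.update(overrides)
    return merged

# Global defaults shared by all clients (hypothetical keys)
defaults = {
    "pos.gift_lists.enabled": False,  # e.g. "Create Gift Lists at the point of sale"
    "pos.loyalty.enabled": True,
}

# Override file for one client: only the properties that differ
client_a = {"pos.gift_lists.enabled": True}

client_a_config = merge_config(defaults, client_a)
```

A configurator UI could then edit only the override layer, so onboarding a client means writing a handful of overrides rather than every property.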

Statement of the problem:
Now suppose we have more than 100 clients, each with their own requirements, and we need to set every property for each of them. It is very difficult to onboard a new client and define every property by hand. There should be a way to manage all these properties, such as a configurator that we simply ask to enable or disable a feature and that does all the work for us. This is a real problem, and I am trying to solve it.

Can you give me some suggestions on this?

Drupal Console: How to get individual items in multi-part configurations using debug:config

For example, in drush I can run

drush cget search_api.server.iges_solr backend_config.connector_config.port

and get a useful answer.

('search_api.server.iges_solr:backend_config.connector_config.port': 8988)

In Drupal Console, I tried the same formatting as in the drush command, but with a colon between the main config name and the sub-item:

(drupal dc search_api.server.iges_solr:backend_config.connector_config.port)

I have tried quite a few other things and have read the documentation, in vain. Is it possible to do this without grepping?

Browser configurations to stay safe from malicious software and unwanted items

I have to set up a browser for surfing the Internet while staying as far away from malware as possible (I already know that there is no way to be 100% safe).

My idea is to use Firefox with these extensions: Adblock Plus, uBlock Origin, HTTPS Everywhere and, especially, NoScript Security Suite. I have also thought about clearing the cache when Firefox is closed (https://superuser.com/questions/461574/does-clearing-the-browser-cache-provide-real-security-benefits).

But as I am not an expert, I searched the Internet for more information and read this: https://security.stackexchange.com/a/27957, which says:

Disabling JS should not be considered a miracle solution for browser security.

and

Take into consideration that NoScript will also increase the attack surface.

Before reading this, I was pretty sure that NoScript alone would be enough to make the browser very secure.
But now I wonder whether there are safer ways to secure the browser, and I have these questions:

Is my idea good? If yes, what can I improve?

Should I use Chrome instead of Firefox? (I read this https://security.stackexchange.com/a/113, hence the question.)

Are the extensions I mentioned above good? (I know that Adblock Plus and uBlock Origin block more or less the same ads, but I prefer to keep both.) Browser performance is not a problem.

Is there another extension that I should install?

Is there another browser setting that I should turn on / off (like the option to clear the cache when Firefox is closed)?

I already know the basic rules, such as keeping the browser and the operating system up to date, not opening suspicious links, and so on. I would like to know the advanced tips.
I know it also depends on the operating system and other elements, but in this topic I would like to focus on the browser.

PS: I know that instead of NoScript I could just disable scripts globally in the browser settings, but I like being able to allow scripts per site, because some sites cannot run without specific scripts.

PPS: Sorry for my bad English.

formal languages – Maximum number of configurations of a Turing machine after $n$ moves

I came across the following question:

What is the maximum number of Turing machine configurations after $n$ moves?

The answer given was:

$k^n$, where $k$ is a branching factor.

And this "branching factor" left me confused. Here is how I thought about it: taking $Q$ to be the set of states, $\Gamma$ the tape alphabet, and $\{L, R\}$ the two head movements, for each transition function we have $2^{|Q| \times |\Gamma| \times 2}$ possible transitions at each of the $n$ moves. So $k$ must be $2^{|Q| \times |\Gamma| \times 2}$, and the total number of configurations of the Turing machine after $n$ moves must be $(2^{|Q| \times |\Gamma| \times 2})^n$. Am I correct about this?

design – Angular: Is this a good way to manage similar but different configurations?

I want to hear some thoughts on a pattern I use to keep track of different instances of table configurations. These can be the same table with slightly different configurations on different pages. So it is sometimes necessary to add a column to all tables, and sometimes a column is specific to the table on one particular page.

We currently encapsulate the common behavior in a component, but we are having problems modifying the inputs and updating the table. It also seems difficult to keep these details inside the component.

Here is a StackBlitz with the same pattern, but not the same code as below. Below is a more condensed version of the pattern.

table-state.service.ts

import { BehaviorSubject } from 'rxjs'

export class TableStateService {
  state = new BehaviorSubject(new Configuration())

  set(options: Configuration): void {
    // Merge the new options over the current state
    this.state.next(new Configuration({ ...this.state.value, ...options }))
  }
}

table-one-state.service.ts

export class TableOneStateService extends TableStateService {
  constructor() {
    super()

    this.set(new Configuration(...))
  }
}

wrapper-state.service.ts

import { Observable } from 'rxjs'
import { throttleTime } from 'rxjs/operators'

export class WrapperStateService {
  service: TableStateService
  $state: Observable<Configuration>

  setService(service: TableStateService): void {
    this.service = service
    this.$state = service.state.asObservable().pipe(throttleTime(50))
  }
}

table.component.ts

import { Component, Input } from '@angular/core'
import { SubSink } from 'subsink'

@Component({
  ...,
  providers: [WrapperStateService],
})
export class TableComponent {
  private sink = new SubSink()
  @Input() service: TableStateService

  constructor(
    private wrapper: WrapperStateService,
  ) { }

  ngOnInit() {
    this.wrapper.setService(this.service)

    this.sink.add(
      this.wrapper.$state.subscribe(state => ...)
    )
  }

  ngOnDestroy() {
    this.sink.unsubscribe();
  }
}

various-table-child.component.ts

export class VariousTableChildComponent {
  private sink = new SubSink()

  constructor(
    private wrapper: WrapperStateService,
  ) { }

  ngOnInit() {
    this.sink.add(
      this.wrapper.$state.subscribe(state => ...)
    )
  }

  ngOnDestroy() {
    this.sink.unsubscribe();
  }
}

page.component.html