Confirm Remote Synchronization – Firebase Realtime Database with ReactJS + Redux + Saga

I have a ReactJS / Redux / Saga application that currently sends data to and reads data from a Firebase Realtime Database. As data is sent and received, a global loading value in the Redux state switches between true and false, from the moment data is sent until it is confirmed to be in Firebase. loading defaults to false for this case.

When a user updates their data, the flow is currently:

  • Redux reducer SEND_TO_FIREBASE

return { ...state, loading: true };

  • This reducer triggers a Saga function, syncToFirebaseSaga():
     function* syncToFirebaseSaga({ payload: userData }) {
        try {
            var uid = firebase.auth().currentUser.uid;
            const database = (path, payload) => {
                firebase
                    .database()
                    .ref(path)
                    .set(payload);
            };
            yield call(database, "users/" + uid + "/userData", userData);
            yield console.log("successfully written to database");
        } catch (error) {
            alert(error);
        }
    }
  • So at this point loading:true (confirmed that it works)
  • Then, as part of componentDidMount of one of my root components, I have a listener for changes to the Firebase database:
    var props = this.props;

    function updateStateData(payload, props) {
        props.syncFirebaseToState(payload);
    }

    function syncWithFirebase(uid, props) {
        var syncStateWithFirebaseListener = firebase.database().ref("users/" + uid + "/userData");
        syncStateWithFirebaseListener.on("value", function(snapshot) {
            var localState = snapshot.val();
            updateStateData(localState, props);
        });
    }

  • and this.props.syncFirebaseToState(payload) is a Redux action with this reducer:

return { ...state, data: action.payload, loading: false };

  • which then confirms that the data was written to the Firebase Realtime Database and removes the loading screen, indicating to the user that their update has been saved.

In most cases, this flow works well. However, I encounter problems when the user has a bad Internet connection or if I refresh the page too quickly. For example:

  • The user loads the application.
  • Disconnects from the Internet.
  • Submits data.
  • The full loop completes immediately and loading becomes false (the Firebase Realtime Database has written the data in "offline mode" and is waiting to reconnect to the Internet).
  • The user reconnects online.
  • Once online, the user immediately refreshes the page (reloading the React application)
  • The Firebase Realtime Database did not have time to synchronize the queued updates with the remote database, so after the page refresh the changes are lost.

Sometimes the user does not even have to lose their Internet connection: if they submit a change (the page instantly reports a successful write) and then refresh before the remote server has saved it, the data is lost after the refresh.

Anyway, as you can see, this is a very bad user experience. I really need a way to confirm that the data has actually been written to Firebase before removing the loading screen. I have the impression that I am doing something wrong here and getting a success callback when the write has not actually completed.
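For what it is worth, here is a minimal sketch of the direction I am considering (not verified, and assuming the same firebase instance used in the snippets above): as I understand it, set() returns a promise that only resolves once the server acknowledges the write, so returning that promise from the wrapper should keep the saga suspended, and loading true, until the data is really remote. The SYNC_CONFIRMED / SYNC_FAILED action types are placeholders I made up.

    import { call, put } from "redux-saga/effects";

    // Returning the promise from set() is the key difference from my current saga:
    // the promise resolves only after the server acknowledges the write.
    const writeToDatabase = (path, data) => firebase.database().ref(path).set(data);

    function* syncToFirebaseSagaSketch({ payload: userData }) {
        try {
            var uid = firebase.auth().currentUser.uid;
            // Suspends here until the remote write is confirmed (stays pending while offline).
            yield call(writeToDatabase, "users/" + uid + "/userData", userData);
            // Placeholder action, dispatched only once the write is confirmed.
            yield put({ type: "SYNC_CONFIRMED" });
        } catch (error) {
            yield put({ type: "SYNC_FAILED", error });
        }
    }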

This is the first time I have used React / Redux / Saga / Firebase, so I appreciate your patience and your help!

Oracle DMU (Database Migration Assistant for Unicode)

I am new to this Oracle tool and am currently reading its documentation. When creating the tablespace for the DMU repository, the documentation says to use the result of the query below for the initial size. However, I could not execute this query, even as the SYSTEM account. Below is the query from the Oracle documentation (https://docs.oracle.com/cd/E51608_01/doc.20/e48475/ch4scenarios.htm#DUMAG247):

SELECT CEIL((t.cnt * 300 + c.cnt * 1000) / 1048576) || ' MB'
       "Initial Size"
  FROM (SELECT COUNT(*) cnt FROM sys.tab$) t,
       (SELECT COUNT(*) cnt
          FROM sys.col$
         WHERE obj# IN (SELECT obj# FROM sys.tab$)
           AND BITAND(property, 65536) = 0
           AND type# IN (1, 8, 58, 96, 112)
           AND charsetform = 1) c;

please help!

Thank you

Object space usage for a schema in an Oracle database

I was looking for a query to get details about database objects such as tables, LOBs, MVs, IOTs, etc. in Oracle. For example,
if the username is JOHN, my goal is to get each object's name, its total allocated size in GB, its used space
in GB, and its available space in GB (or free percentage). I found the query below on Google, which should provide the required output.
Unfortunately, it does not work in my environment. My database version is 12.1.0.2 and the OS is AIX 7.1.
Is it possible to obtain the desired result? I need to include it in a shell script and send an e-mail notification for our production database.
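To make the goal concrete, this is the kind of per-object summary I am after, shown here for allocated space only (a minimal sketch; as far as I understand, dba_segments alone does not report used versus free space, which is why the block below calls dbms_space, and JOHN is just an example schema):

SELECT segment_name,
       segment_type,
       ROUND(SUM(bytes) / 1024 / 1024 / 1024, 2) AS allocated_gb
  FROM dba_segments
 WHERE owner = 'JOHN'
 GROUP BY segment_name, segment_type
 ORDER BY allocated_gb DESC;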

I found the following query online to get the full desired result (including used and free space), but it does not work for me:

Query:

SET SERVEROUTPUT on;
DECLARE
  input_owner         NVARCHAR2(128) := 'APP USER';
  segment_size_blocks NUMBER;
  segment_size_bytes  NUMBER;
  used_blocks         NUMBER;
  used_bytes          NUMBER;
  expired_blocks      NUMBER;
  expired_bytes       NUMBER;
  unexpired_blocks    NUMBER;
  unexpired_bytes     NUMBER;
  total_blocks        NUMBER;
  total_bytes         NUMBER;
  unused_blocks       NUMBER;
  unused_bytes        NUMBER;
  last_ext_file_id    NUMBER;
  last_ext_blk_id     NUMBER;
  last_used_blk       NUMBER;
  result_table        NVARCHAR2(128);
  result_segment_type NVARCHAR2(128);
  result_used_mb      NUMBER;
  result_unused_mb    NUMBER;
  result_total_mb     NUMBER;
  CURSOR cur
  IS
    SELECT
      s.segment_name   AS segment_name,
      s.owner          AS segment_owner,
      s.partition_name AS partition_name,
      s.segment_type   AS segment_type,
      CASE WHEN s.segment_type IN ('TABLE', 'TABLE PARTITION', 'TABLE SUBPARTITION')
        THEN s.segment_name
      WHEN s.segment_type IN ('INDEX', 'INDEX PARTITION', 'INDEX SUBPARTITION')
        THEN (SELECT i.table_name
              FROM dba_indexes i
              WHERE s.segment_name = i.index_name AND s.owner = i.owner)
      WHEN s.segment_type IN ('LOBSEGMENT', 'LOB PARTITION')
        THEN (SELECT l.table_name
              FROM dba_lobs l
              WHERE s.segment_name = l.segment_name AND s.owner = l.owner)
      WHEN s.segment_type IN ('LOBINDEX')
        THEN (SELECT l.table_name
              FROM dba_lobs l
              WHERE s.segment_name = l.index_name AND s.owner = l.owner)
      ELSE 'Unknown'
      END              AS table_name,
      s.bytes          AS segment_bytes
    FROM dba_segments s
    WHERE owner = input_owner
    ORDER BY table_name, segment_type;
BEGIN
  dbms_output.put_line('table                         ; segment type        ;   used (mb)     ; unused (mb)     ;  total (mb)');

  FOR ro IN cur
  LOOP

    result_table := ro.table_name;
    result_segment_type := ro.segment_type;

    IF ro.segment_type IN ('TABLE', 'INDEX')
    THEN
      dbms_space.unused_space(
          segment_owner             => ro.segment_owner,
          segment_name              => ro.segment_name,
          segment_type              => ro.segment_type,
          total_blocks              => total_blocks,
          total_bytes               => total_bytes,
          unused_blocks             => unused_blocks,
          unused_bytes              => unused_bytes,
          last_used_extent_file_id  => last_ext_file_id,
          last_used_extent_block_id => last_ext_blk_id,
          last_used_block           => last_used_blk);

      result_used_mb := (total_bytes - unused_bytes) / 1024 / 1024;
      result_unused_mb := unused_bytes / 1024 / 1024;
      result_total_mb := total_bytes / 1024 / 1024;

    ELSIF ro.segment_type IN ('LOBSEGMENT')
    THEN
      dbms_space.space_usage(
          segment_owner           => ro.segment_owner,
          segment_name            => ro.segment_name,
          segment_type            => 'LOB',
          partition_name          => ro.partition_name,
          segment_size_blocks     => segment_size_blocks,
          segment_size_bytes      => segment_size_bytes,
          used_blocks             => used_blocks,
          used_bytes              => used_bytes,
          expired_blocks          => expired_blocks,
          expired_bytes           => expired_bytes,
          unexpired_blocks        => unexpired_blocks,
          unexpired_bytes         => unexpired_bytes
      );
      result_used_mb := used_bytes / 1024 / 1024;
      result_unused_mb := (segment_size_bytes - used_bytes) / 1024 / 1024;
      result_total_mb := segment_size_bytes / 1024 / 1024;
    ELSE
      -- TODO ??
      result_used_mb := ro.segment_bytes / 1024 / 1024;
      result_unused_mb := 0;
      result_total_mb := result_used_mb + result_unused_mb;
    END IF;

    -- result_used_mb, result_unused_mb and result_total_mb are already in MB,
    -- so they are not divided again here.
    dbms_output.put_line(
        RPAD(result_table, 30) || '; ' ||
        RPAD(result_segment_type, 20) || '; ' ||
        TO_CHAR(result_used_mb, '999999999990D00') || '; ' ||
        TO_CHAR(result_unused_mb, '999999999990D00') || '; ' ||
        TO_CHAR(result_total_mb, '999999999990D00'));

  END LOOP;
END;
/

Please indicate what I need to change according to my environment.

Relationships – Best Practices in Database Modeling and Design

I'm looking for advice on the best way to model my data.

A bit of background: I have location information that I'm trying to model. There are two categories of locations (corporate and non-corporate locations), so not all locations have the same data elements.

The two tables below represent my current model (without listing all the columns).

My thoughts on the current design:

  1. If I combine the two tables below, I will have to put null values in company_code, because not all locations have a company_code. I do not want to add null values, given the large number of data elements that differ between the two types of locations.
  2. location_company and location are two important concepts in the business. I would like the database to have tables that represent both of these concepts.

[Image: diagram of the current two-table model (location and location_company)]
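In case the image does not come through, here is a rough, purely illustrative sketch of the current two-table layout (all column names other than company_code are my own placeholders, not the real columns):

CREATE TABLE location (
    location_id    NUMBER PRIMARY KEY,   -- placeholder key
    location_name  VARCHAR2(100)
    -- ...data elements shared by the non-corporate locations...
);

CREATE TABLE location_company (
    location_id    NUMBER PRIMARY KEY,   -- placeholder key
    company_code   VARCHAR2(20),         -- present only for corporate locations
    location_name  VARCHAR2(100)
    -- ...data elements specific to corporate locations...
);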

I do not know if that's the best approach. Any idea about the design would be greatly appreciated. Any additional thought is also welcome.

If my problem is not clear, please ask and I will gladly add clarifications. Thank you in advance.

architecture – Middleware to optimize speed between Web server and database server

We have a web service that receives a maximum throughput of 50,000 messages per second. However, our database can process a maximum of 30,000 messages per second. What kind of middleware architecture should I use to balance the load? It must be built from scratch. Should I opt for an event-driven architecture?
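To make the question concrete, this is the kind of buffering I imagine an event-driven, queue-based middleware providing between the web tier and the database (a minimal Python sketch, purely illustrative; the queue size, worker count, and write_to_database call are assumptions):

import queue
import threading

BUFFER = queue.Queue(maxsize=100_000)   # bounded buffer; size is an assumption

def handle_incoming(message):
    """Called by the web tier for each message (up to ~50,000/second)."""
    BUFFER.put(message)                  # blocks (back-pressure) when the buffer is full

def write_to_database(message):
    """Placeholder for the real database write (~30,000/second in total)."""
    pass

def db_writer():
    """Worker that drains the buffer at the rate the database can sustain."""
    while True:
        message = BUFFER.get()
        write_to_database(message)
        BUFFER.task_done()

# Start a pool of writer threads sized to the database's capacity.
for _ in range(8):
    threading.Thread(target=db_writer, daemon=True).start()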

Database API and Security – Drupal Answers

I'm working on a controller that queries a table based on the value of a route parameter

Example route path:

path: '/path/{parameter}'

Controller:

  public function myFunction($parameter) {
    // $parameter is the value captured from {parameter} in the route path.
    $connection = Drupal::database();
    $query = $connection->select('my_custom_table', 'mct');
    $query->condition('mct.col', $parameter, '=');
    ...
    $result = $query->execute();
    $records = $result->fetchAll();

Should sanitization be done in this case, where the database API supports it?

postgresql – change the host and port and add a user and password to a local Postgres database on Ubuntu

I am new to Postgres.

Currently I want to replicate, on my local system, the way I connect to a PostgreSQL database in the cloud.
I am using Ubuntu 18.04 LTS and PostgreSQL 11.

What I want is to change the default PostgreSQL options:

host, port, database, username, password

Once that is set up, I want to grant the user access to a specific server / database / table only.

All of this should be done with Python later.

What I did:

Following tutorials on the web, I installed PostgreSQL; then, to create the database, I did:

1. sudo su postgres

2. psql
3. create database test_db;
4. create user test_user with password 'test_password';
5. alter user test_user with superuser;

All of this was done by me as a test; from Python I am able to connect to this default setup.
I now want to change the host and port, and the privileges of a particular user so they can access only a particular database / table.
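For reference, this is roughly how I connect from Python today (a minimal sketch using psycopg2; the host, port, and credentials shown are the local defaults from the steps above, i.e. exactly the values I want to be able to change):

import psycopg2

# Connection parameters are the local defaults created in the steps above.
conn = psycopg2.connect(
    host="localhost",
    port=5432,
    dbname="test_db",
    user="test_user",
    password="test_password",
)

with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone())

conn.close()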

Thanks in advance

php – Get the variables from the API response to write to the database

I've solved my earlier problem with the API call that was failing, but the data now comes back like this:

["cabecalho"]=>
object(stdClass)#5 (10) {
  ["bloqueado"]=>
  string(1) "N"
  ["codigo_cliente"]=>
  int(537229882)
  ["codigo_parcela"]=>
  string(3) "999"
  ["codigo_pedido"]=>
  int(538529896)
  ["codigo_pedido_integracao"]=>
  string(4) "1152"
  ["data_previsao"]=>
  string(10) "15/02/2017"
  ["etapa"]=>
  string(2) "10"
  ["numero_pedido"]=>
  string(4) "1096"
  ["origem_pedido"]=>
  string(3) "API"
  ["quantidade_itens"]=>
  int(11)
}
["det"]=>

How can I get the values inside the ["produto"] attribute so that I can work with them?

PHP code

header('Content-type: application/json');

require_once("../OmieAppAuth.php"); 
require('PedidoVendaProdutoJsonClient.php');


  $sc = new PedidoVendaProdutoJsonClient(); 

  $chave = new pvpConsultarRequest();  
  $chave->numero_pedido = "1096"; 
  //$chave->pedido_venda_produto = "1096"; 

       //   var_dump($chave);  
  $ret = $sc->ConsultarPedido($chave); 

  var_dump($ret);
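For context, this is the kind of access I am trying to write (a rough sketch; I am assuming $ret is the stdClass shown in the dump above, and that the item data lives in a det list whose entries each carry a produto object; those inner property names are a guess on my part):

  // Header values shown in the var_dump above can be read as object properties.
  $numeroPedido  = $ret->cabecalho->numero_pedido;   // "1096"
  $codigoCliente = $ret->cabecalho->codigo_cliente;  // 537229882

  // Assumption: det is a list of items, each carrying a produto object.
  if (isset($ret->det) && is_array($ret->det)) {
      foreach ($ret->det as $item) {
          var_dump($item->produto);
      }
  }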

SQL Server – Accidentally named all tables in my database starting with a number

I accidentally named all my tables (100+) in my SQL database with the 2019_ prefix

I did not know that it was not a good idea until now (I have just returned to SQL after 2 years)

Is it possible to bulk rename them so that the prefix becomes a suffix, or even to drop all the tables? I cannot even open them, and I really do not want to rename them all manually; something like the sketch below is what I am imagining.
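A minimal sketch of what I mean (it only prints the sp_rename commands for review rather than executing them, and it assumes every affected table name really starts with 2019_):

-- Generate one sp_rename per table with the 2019_ prefix, moving the prefix to the end,
-- e.g. 2019_Sales -> Sales_2019. Review the output before running it.
SELECT 'EXEC sp_rename '
       + QUOTENAME(SCHEMA_NAME(schema_id) + '.' + name, '''')
       + ', ' + QUOTENAME(STUFF(name, 1, 5, '') + '_2019', '''') + ';'
FROM sys.tables
WHERE name LIKE '2019[_]%';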

Apologies if this is a silly question; I may be googling the wrong terms.

Thank you!