html – Firebase error: the database is not defined

Real-time Firebase database update

I have a method that should check the database every time a child node's field changes (in this case, the "status" field); when it changes, I want the text on the Android screen to be updated. I am able to make the text change; however, when I leave and return to the screen, the message takes a while to appear (even when the status in the database has changed). Code:

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_principal);

    circleImageViewUser = findViewById(R.id.circleImageViewUser);
    txtOcorrencias = findViewById(R.id.txtOcorrencias);
    txtStatus = findViewById(R.id.txtStatus);
    txtPendencias = findViewById(R.id.txtPendencias);
    txtSaudation = findViewById(R.id.txtSaudation);

    Toolbar toolbar = findViewById(R.id.toolbarPrincipal);
    setSupportActionBar(toolbar);

    retrieveUser();
    recoverUserPassword();

    verificaStatusOcorrence();
}


public void verificaStatusOcorrence() {
    String idUsuario = UserFirebase.getDadosUsuarioLog().getIdUsuario();
    userRef = firebaseRef.child("occurrences").child(idUsuario);

    userRef.addValueEventListener(new ValueEventListener() {
        @Override
        public void onDataChange(@NonNull DataSnapshot dataSnapshot) {
            for (DataSnapshot snapshot : dataSnapshot.getChildren()) {
                Occurrence occurrence = snapshot.getValue(Occurrence.class);

                // Compare string contents with equals(), not ==
                if ("1".equals(occurrence.getStatus())) {
                    txtStatus.setText(Occurrence.A_CAMINHO);
                    txtStatus.setTextColor(Color.MAGENTA);
                    txtStatus.setTextSize(16);
                } else if ("0".equals(occurrence.getStatus())) {
                    txtStatus.setText(Occurrence.AGUARDING);
                    txtStatus.setTextColor(Color.RED);
                    txtStatus.setTextSize(16);
                } else if ("2".equals(occurrence.getStatus())) {
                    txtStatus.setText(Occurrence.FINALIZADO); // hypothetical name; the original constant was garbled to "OK event"
                    txtStatus.setTextColor(Color.GREEN);
                    txtStatus.setTextSize(17);
                }
            }
        }

        @Override
        public void onCancelled(@NonNull DatabaseError databaseError) {

        }
    });
}

Database:

[image: database structure]

Design Review: Primary Keys Generated Randomly in the Database

In my web application, entities have unique identifiers and, for certain entity types (e.g., user, order), the identifiers are visible to users via URLs. MySQL is used to store the entities.

Using an auto-increment primary key is simple and convenient, but if it is exposed in a URL, users can estimate the number of entities or their growth rate by observing the IDs (the German tank problem), and we may want to avoid that.
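To make the concern concrete, here is a minimal sketch of the "German tank problem" estimate mentioned above: from k IDs sampled out of a sequential series, the classic frequentist estimate of the highest ID issued is m + m/k - 1, where m is the largest ID observed. The function name and sample values are illustrative.

```python
# Estimate the total number of sequential IDs issued, given a sample of
# observed IDs (the "German tank problem" frequentist estimator).
def estimate_total(observed_ids):
    k = len(observed_ids)   # sample size
    m = max(observed_ids)   # largest ID observed
    return m + m / k - 1

# An observer who saw these five IDs in URLs can estimate the volume:
print(estimate_total([19, 40, 42, 60, 70]))  # 83.0
```

This is exactly the kind of inference that random (non-sequential) identifiers are meant to prevent.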

Having read many articles on the subject, I know of several approaches to limit the information disclosed, for example: generating random identifiers in the application, using UUIDs, encoding/hashing the sequential identifier, etc.

But in the end I came up with my own approach.

First, generate a list of unique integers and preload them into the database.

Since I'm not going to have billions of records, I could build a big sequential list and shuffle it in memory.
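The "build a sequential list and shuffle it in memory" step can be sketched as follows (a minimal illustration; the function name, the optional seed, and the count of 1000 are made up):

```python
import random

# Build the full sequential list of candidate IDs, shuffle it in memory,
# and use the result as the entity_id values preloaded into entity1_id
# (row order in the table corresponds to counter order).
def generate_shuffled_ids(n, seed=None):
    ids = list(range(1, n + 1))
    random.Random(seed).shuffle(ids)
    return ids

entity_ids = generate_shuffled_ids(1000)
# Each ID in [1, 1000] appears exactly once, just in an unpredictable order.
print(len(set(entity_ids)) == 1000)  # True
```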

The table to store the integers would be:

CREATE TABLE IF NOT EXISTS entity1_id (
id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
entity_id INT UNSIGNED NOT NULL UNIQUE
)

The entity_id column holds those pre-generated integer values.

Second, prepare the entity table and a table of counters:

CREATE TABLE IF NOT EXISTS entity1 (
id INT UNSIGNED PRIMARY KEY,
create_time DATETIME DEFAULT CURRENT_TIMESTAMP
) ENGINE = InnoDB;

CREATE TABLE IF NOT EXISTS id_counter (
entity_name CHAR(30) PRIMARY KEY,
counter INT UNSIGNED NOT NULL DEFAULT 1
);

INSERT INTO id_counter (entity_name) VALUES ('entity1');

Finally, use a "BEFORE INSERT" trigger to retrieve an ID from the entity1_id table and increment the ID counter.

The counter keeps track of which IDs from the entity1_id table have been used. Think of entity1_id as a map: entity1_id[counter] gives the next entity ID.

Here is the trigger:

DELIMITER |

CREATE TRIGGER entity1_idgen BEFORE INSERT ON entity1
FOR EACH ROW BEGIN
DECLARE cnt, eid INT UNSIGNED;

IF (NEW.id IS NULL) THEN
SET cnt = (SELECT counter FROM id_counter WHERE entity_name = 'entity1' FOR UPDATE);

SET eid = (SELECT entity_id FROM entity1_id WHERE id = cnt);
IF (eid IS NULL) THEN
SET @errMsg = CONCAT("the table `entity1_id` does not have a row where id = ", cnt);
SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = @errMsg;
END IF;

SET NEW.id = eid;
UPDATE id_counter SET counter = counter + 1 WHERE entity_name = 'entity1';
END IF;
END |

DELIMITER ;

I've briefly tested this approach. When inserting into the entity1 table: if no id is specified, an identifier is taken from the entity1_id table; if an identifier is specified, it is used and entity1_id and id_counter are not touched; if no id is specified and there are no IDs left in entity1_id, an error is raised.

I think this approach is good for me:

  1. I receive randomly generated identifiers
  2. I can decide which identifiers to use
  3. This is not slow

From my point of view, this approach has 3 disadvantages:

  1. it's not a simple solution
  2. the risk of running out of IDs
  3. additional space for pre-generated identifiers
  • Am I doing something wrong?
  • Are there other risks that I am not aware of?

PostgreSQL vacuum and replication – Database Administrators Stack Exchange

I have a cascaded replication architecture with 3 PostgreSQL databases.

[diagram of the replication architecture]

The master is getting too big (~170 GB), so I wanted to run a "cleanup" job over the weekend that would perform several DELETE operations on millions of rows per batch, with a VACUUM on the table just after.
Unfortunately, my cleanup script failed because the db2 disk filled up (pg_xlogs?):

2019-06-21 17:41:08.770 UTC [1136] FATAL: could not extend file "base/34163166/44033600.20": No space left on device
2019-06-21 17:41:08.770 UTC [1136] HINT: Check free disk space.
2019-06-21 17:41:08.770 UTC [1136] CONTEXT: xlog redo at 662/6A087C30 for Heap/INSERT+INIT: off 1
2019-06-21 17:41:09.188 UTC [13036] FATAL: could not write to file "pg_xlog/xlogtemp.13036": No space left on device

So I still have to run my script, but I wonder how I should do it so that my db2 database does not implode (so replication does not stop and need to be reconfigured). Also, I'm not sure why the db2 disk filled up :/ — do you have any idea?

Thank you for your help, bravo
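The batch-delete loop described above can be sketched roughly as follows, using sqlite3 as a stand-in for PostgreSQL (the table, threshold, and batch size are made up). The point of committing per batch is that each transaction, and the WAL it generates, stays bounded, instead of one huge transaction pinning the log:

```python
import sqlite3

# Stand-in dataset: 10,000 rows, half of which match the cleanup condition.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (id INTEGER PRIMARY KEY, created INTEGER)")
conn.executemany("INSERT INTO logs (created) VALUES (?)",
                 [(i % 10,) for i in range(10000)])
conn.commit()

BATCH = 500
while True:
    # Delete at most BATCH matching rows per transaction.
    cur = conn.execute(
        "DELETE FROM logs WHERE id IN "
        "(SELECT id FROM logs WHERE created < 5 LIMIT ?)", (BATCH,))
    conn.commit()  # commit per batch so the log can be recycled between batches
    if cur.rowcount == 0:
        break

remaining = conn.execute("SELECT COUNT(*) FROM logs").fetchone()[0]
print(remaining)  # 5000 rows (created >= 5) survive
```

On PostgreSQL you would additionally pause between batches to let the standbys catch up and checkpoints run, which is what keeps the downstream pg_xlog directory from growing without bound.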

database – Changed the computer name – SQL Server

I changed the computer name and ran these commands in SQL Server Management Studio (SSMS):

sp_dropserver <old_name>;
sp_addserver <new_name>, local;

Then I ran this query:

select * from sys.servers 

and this is the result.

Is it correct that the location, provider string, and catalog are NULL?

How can I tell whether everything has been changed to the new server name?
Do I have to reinstall SSMS?

How to update a NoSQL database and publish an event atomically in an event-driven microservice architecture?

In an event-driven microservice architecture, services typically need to update their domain state and publish an integration event to a service bus atomically (either both operations complete or neither does). With a relational database, this is usually done with the outbox pattern: an entry indicating that event X must be published is saved in the database as part of the same transaction that contains the domain state changes. A background process then queries these entries and publishes the events. This means the event will eventually be published, making the system eventually consistent.
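The outbox pattern described above can be sketched as follows (sqlite3 used for brevity; the table names and the "OrderPlaced" event are illustrative):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
conn.execute("CREATE TABLE outbox (id INTEGER PRIMARY KEY AUTOINCREMENT, "
             "payload TEXT, published INTEGER DEFAULT 0)")

def place_order(order_id):
    with conn:  # one transaction: both rows commit, or neither does
        conn.execute("INSERT INTO orders (id, status) VALUES (?, 'placed')",
                     (order_id,))
        conn.execute("INSERT INTO outbox (payload) VALUES (?)",
                     (json.dumps({"event": "OrderPlaced",
                                  "order_id": order_id}),))

def publish_pending(bus):
    # Background publisher: read unpublished entries, push them to the bus,
    # then mark them. A crash between the two steps causes a re-publish,
    # which is why consumers must tolerate duplicates.
    rows = conn.execute(
        "SELECT id, payload FROM outbox WHERE published = 0").fetchall()
    for row_id, payload in rows:
        bus.append(json.loads(payload))
        conn.execute("UPDATE outbox SET published = 1 WHERE id = ?", (row_id,))
    conn.commit()

bus = []
place_order(1)
publish_pending(bus)
print(bus)  # [{'event': 'OrderPlaced', 'order_id': 1}]
```

The key property is that the domain write and the pending-event write share one transaction; publishing is decoupled and merely at-least-once.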

However, many NoSQL databases do not support updating multiple documents in a single transaction, or support it only through significant workarounds. Below is a list of potential solutions (some uglier than others):

1. Variation of the outbox pattern:

The outbox pattern, but instead of keeping a separate collection of documents for pending events, the events are saved as part of the domain entity. Each domain entity encapsulates a collection of events that remain to be published, and a background process queries these entities and publishes the events.

Drawbacks:

  1. If the background process publishes the event but fails to delete it from the domain entity, it will republish it. This should not really be a problem if the updates are idempotent or if the event handler can identify duplicate events.
  2. Domain entities are cluttered with integration events.
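The duplicate-event mitigation mentioned in drawback 1 can be sketched like this (class and field names are illustrative; in production the "seen" set would be a persistent store checked in the same transaction as the update):

```python
# A consumer that remembers which event IDs it has already processed,
# so a re-published (duplicate) event is detected and skipped.
class IdempotentHandler:
    def __init__(self):
        self.seen = set()   # processed event IDs
        self.applied = []   # events actually applied

    def handle(self, event):
        if event["id"] in self.seen:
            return False  # duplicate delivery: ignore
        self.seen.add(event["id"])
        self.applied.append(event)
        return True

handler = IdempotentHandler()
event = {"id": "evt-1", "type": "StatusChanged"}
print(handler.handle(event), handler.handle(event))  # True False
```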

2. Event sourcing:

Event sourcing eliminates this problem, but it is very complex to implement and represents a significant cost for small microservices.

Drawbacks:

  1. Complex, may require a complete overhaul of how services work with data.

3. Listen to your own events:

The service only publishes the event (it does not update its state as part of the same operation) and is itself subscribed to that event. When the service bus delivers the event back for processing, the service updates its domain entity.

Drawbacks:

  1. Other microservices may handle the event before the originating microservice does. This can cause problems if they assume the state change has already happened when it has not.

Are there other solutions to this problem? What is the best?

mysql – my latest attempt to update the customers table from database1

I've tried the approach below, but I seem to be missing something.
Any advice would be helpful.
Thanks in advance.

UPDATE database2.customers_T t2
INNER JOIN database1.customers_T t1
        ON t1.id = t2.id
SET t2.searchkey = t1.searchkey,
    t2.name      = t1.name,
    t2.card      = t1.card,
    t2.address   = t1.address,
    t2.address2  = t1.address2,
    t2.postal    = t1.postal,
    t2.city      = t1.city,
    t2.region    = t1.region,
    t2.country   = t1.country,
    t2.firstname = t1.firstname,
    t2.email     = t1.email,
    t2.phone     = t1.phone,
    t2.phone2    = t1.phone2,
    t2.fax       = t1.fax,
    t2.visible   = t1.visible,
    t2.curdate   = t1.curdate,
    t2.image     = t1.image,
    t2.memodate  = t1.memodate;

php – Database interface and PDO adapter

I am writing my own PHP framework and want to respect the SOLID principles.

I made this interface:

/**
 * Insert records into a table
 * @param string $table Name of the table
 * @param array $data Fields and values to insert - Ex: ['name' => 'test']
 */
public function insertRecords(string $table, array $data);

/**
 * Update records in a table
 * @param string $table Name of the table
 * @param array $changes Fields and values to set - Ex: ['name' => 'test']
 * @param array $conditions Conditions for the update - Ex: ['id' => 1]
 */
public function updateRecords(string $table, array $changes, array $conditions);

/**
 * Delete records from a table
 * @param string $table Name of the table
 * @param string $conditions Conditions for the delete - Ex: "id = :id"
 * @param array $params Parameters to substitute into the conditions
 * @return int Affected rows
 */
public function deleteRecords(string $table, string $conditions, array $params = []): int;

/**
 * Returns the last inserted ID
 * @return int ID
 */
public function lastInsertId(): int;

/**
 * Close the connection
 */
public function closeConnection();

}
?>

Implemented by this class:

$connectionString = 'mysql:host=' . $config->get('base_server') . ';port=' . $config->get('base_port') . ';dbname=' . $config->get('base_name');

try {
    $this->pdo = new PDO(
        $connectionString,
        $config->get('db_username'),
        $config->get('db_password')
    );
    # We can now catch all exceptions in case of fatal errors.
    $this->pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    # Disable emulation of prepared statements, use REAL prepared statements instead.
    $this->pdo->setAttribute(PDO::ATTR_EMULATE_PREPARES, false);

    $this->connected = true;
    return true;

// Error handling
} catch (PDOException $e) {
    throw new Exception("Failed to connect to database: " . $e->getMessage(), 1);
}
}

public function prepare(string $sql, array $params = [])
{
    $this->stmt = $this->pdo->prepare($sql);
    if (!empty($params))
    {
        $this->bindParams($params);
    }
}

/**
 * Bind a parameter value to the prepared SQL query
 * @param string $param
 * @param $value
 * @param $type
 */
private function bind(string $param, $value, $type = null)
{
    if (is_null($type))
    {
        switch (true) {
            case is_int($value):
                $type = PDO::PARAM_INT;
                break;
            case is_bool($value):
                $type = PDO::PARAM_BOOL;
                break;
            case is_null($value):
                $type = PDO::PARAM_NULL;
                break;
            default:
                $type = PDO::PARAM_STR;
        }
    }
    // bindValue must run for explicit types too, so it sits outside the if
    $this->stmt->bindValue(':' . $param, $value, $type);
}

/**
 * Bind a group of parameters
 * @param array $params Array of parameters and values - Ex: ['name' => 'test']
 * @param string $prefix Prefix to add to the parameter name
 */
private function bindParams(array $params, string $prefix = '')
{
    foreach ($params as $key => $value) {
        $this->bind($prefix . $key, $value);
    }
}

/**
 * Execute the prepared query
 */
public function execute() {
    return $this->stmt->execute();
}

public function results()
{
    $mode = PDO::FETCH_ASSOC;
    $this->execute();
    return $this->stmt->fetchAll($mode);
}

public function single()
{
    $mode = PDO::FETCH_ASSOC;
    $this->execute();
    return $this->stmt->fetch($mode);
}

public function rowCount(): int
{
    return $this->stmt->rowCount();
}

/**
 * Delete records from the database. Ex: (users, "id = :id", ['id' => 1])
 * @param string $table Table name
 * @param string $conditions Fields and condition
 * @param array $params Condition values
 * @return int Affected rows
 */
public function deleteRecords(string $table, string $conditions, array $params = []): int
{
    $delete = "DELETE FROM {$table} WHERE {$conditions}";

    $this->prepare($delete);
    if (!empty($params))
    {
        $this->bindParams($params);
    }
    $this->execute();

    return $this->rowCount();
}

/**
 * Update a database record
 * @param string $table
 * @param array $changes Changes as [field => value]
 * @param array $conditions Conditions as [id => 1]
 */
public function updateRecords(string $table, array $changes, array $conditions)
{
    $changesStr = '';
    $whereStr = '';
    $cond_array = [];

    foreach ($changes as $field => $value) {
        $changesStr .= "{$field} = :param_{$field},";
    }
    // remove the trailing comma
    $changesStr = substr($changesStr, 0, -1);

    foreach ($conditions as $condition => $value) {
        $cond_array[] = "{$condition} = :where_{$condition}";
    }
    $whereStr = implode(' AND ', $cond_array);

    $this->prepare("UPDATE {$table} SET {$changesStr} WHERE {$whereStr}");

    // use prefixes to avoid collisions between parameters and conditions
    $this->bindParams($changes, 'param_');
    $this->bindParams($conditions, 'where_');

    $this->execute();
}

/**
 * Insert a record into the database
 * @param string $table Table name
 * @param array $data Data to insert as field => value
 * @return bool
 */
public function insertRecords($table, $data)
{
    $fieldsStr = '';
    $valuesStr = '';

    // build the query
    foreach ($data as $f => $v) {
        $fieldsStr .= "{$f},";
        $valuesStr .= ":{$f},";
    }

    // remove the trailing commas
    $fieldsStr = substr($fieldsStr, 0, -1);
    $valuesStr = substr($valuesStr, 0, -1);

    $this->prepare("INSERT INTO {$table} ({$fieldsStr}) VALUES ({$valuesStr})");
    $this->bindParams($data);
    $this->execute();
    return true;
}

// The magic __clone method is empty to avoid duplicating the connection
private function __clone() {
    return false;
}
private function __wakeup() {
    return false;
}

public function lastInsertId(): int {
    return $this->pdo->lastInsertId();
}

public function closeConnection() {
    $this->pdo = null;
}

// Get the connection
public function getConnection() {
    return $this->pdo;
}
}

?>
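The parameter-prefix idea used in updateRecords above can be illustrated in a runnable, language-neutral way (Python here, since the PHP class is only an excerpt; the ":param_" and ":where_" prefixes mirror the ones in the code). Prefixing keeps a column that appears both in SET and in WHERE, such as "id", from producing two bindings with the same placeholder name:

```python
# Build an UPDATE statement with prefixed named placeholders so SET values
# and WHERE values never collide, plus the matching parameter map.
def build_update(table, changes, conditions):
    set_sql = ", ".join(f"{col} = :param_{col}" for col in changes)
    where_sql = " AND ".join(f"{col} = :where_{col}" for col in conditions)
    params = {f"param_{c}": v for c, v in changes.items()}
    params.update({f"where_{c}": v for c, v in conditions.items()})
    return f"UPDATE {table} SET {set_sql} WHERE {where_sql}", params

sql, params = build_update("users", {"name": "test", "id": 2}, {"id": 1})
print(sql)
# UPDATE users SET name = :param_name, id = :param_id WHERE id = :where_id
```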

Does this respect the SOLID principles? Should the insertRecords, updateRecords, and deleteRecords methods live here, or would it be better to move them into another class, such as a DataMapper?

Separate and combined database and web server

I've always had a separate database server and web server. However, it was recently recommended that I merge the 2 servers onto a faster machine with more RAM than the other 2 machines combined, and with SSDs. Combining the 2 servers seems strange to me. I am a web developer, not a sysadmin. Also, I do not keep sensitive information on my websites: no credit card information, just basic content and maybe a few user addresses (which users are not obliged to enter).

– Would this configuration be faster than 2 separate servers?
– Are there problems with running the database and web servers together?
– Are there other advantages to combining them, e.g. no network latency?

database – Fix slow query wp_term_relationships in get_posts

I've used the Query Monitor plugin to look for a slow query on my WordPress site. The WP_Query->get_posts() call below takes about 0.3446 s of the total database query time of 0.3976 s.

SELECT wp_posts.ID
FROM wp_posts
LEFT JOIN wp_term_relationships
ON (wp_posts.ID = wp_term_relationships.object_id)
WHERE 1 = 1
AND wp_posts.ID NOT IN (203598)
AND (wp_term_relationships.term_taxonomy_id IN (17)
OR wp_term_relationships.term_taxonomy_id IN (11652,20693,21952,23971,24907,24908,25928))
AND wp_posts.post_type = 'post'
AND ((wp_posts.post_status = 'publish'))
GROUP BY wp_posts.ID
ORDER BY wp_posts.post_date DESC
LIMIT 0, 6

I guess it has something to do with the 20,000 or more tags used in the posts on my site. Is that right? If so, how do you suggest fixing this slow query? Or how do I remove, site-wide, all tags that are used in fewer than 5 posts?

Please help.