mysql – MariaDB crashed: Unknown/unsupported storage engine: InnoDB

I have a Debian GNU/Linux 9 droplet (4 GB RAM, 2 CPUs) on DigitalOcean. Tonight, without any changes on my part, my database (MariaDB) crashed with the errors below. I run a WordPress site with InnoDB and MyISAM tables:

2020-10-17  0:51:18 140430430813568 [Note] InnoDB: Using mutexes to ref count buffer pool pages
2020-10-17  0:51:18 140430430813568 [Note] InnoDB: The InnoDB memory heap is disabled
2020-10-17  0:51:18 140430430813568 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2020-10-17  0:51:18 140430430813568 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
2020-10-17  0:51:18 140430430813568 [Note] InnoDB: Compressed tables use zlib 1.2.8
2020-10-17  0:51:18 140430430813568 [Note] InnoDB: Using Linux native AIO
2020-10-17  0:51:18 140430430813568 [Note] InnoDB: Using SSE crc32 instructions
2020-10-17  0:51:18 140430430813568 [Note] InnoDB: Initializing buffer pool, size = 500.0M
InnoDB: mmap(549126144 bytes) failed; errno 12
2020-10-17  0:51:18 140430430813568 [ERROR] InnoDB: Cannot allocate memory for the buffer pool
2020-10-17  0:51:18 140430430813568 [ERROR] Plugin 'InnoDB' init function returned error.
2020-10-17  0:51:18 140430430813568 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
2020-10-17  0:51:18 140430430813568 [Note] Plugin 'FEEDBACK' is disabled.
2020-10-17  0:51:18 140430430813568 [ERROR] Unknown/unsupported storage engine: InnoDB
2020-10-17  0:51:18 140430430813568 [ERROR] Aborting
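For context on the failure above (my reading, not part of the original question): errno 12 is ENOMEM, meaning the mmap() for the buffer pool could not be satisfied. A quick sanity check of the numbers, as a sketch:

```shell
# errno 12 = ENOMEM: the kernel refused the allocation.
# The log shows InnoDB asking for 549126144 bytes to back a 500M buffer
# pool (the extra ~24M is InnoDB's per-page bookkeeping overhead).
mmap_bytes=549126144
echo "InnoDB requested $((mmap_bytes / 1024 / 1024)) MiB in one mmap() call"

# On the affected droplet one would compare this against available memory,
# e.g. (shown for illustration only):
#   free -m                # how much RAM/swap is actually free
#   dmesg | grep -i oom    # was something killed by the OOM killer tonight?
```

On a 4 GB droplet, a common outcome of this check is that other processes (PHP-FPM, the web server) have consumed the RAM, so adding swap or lowering innodb_buffer_pool_size are the usual mitigations.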


My full DB conf:

#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#

# this is read by the standalone daemon and embedded servers
[server]

# this is only for the mysqld standalone daemon
[mysqld]

#
# * Basic Settings
#
user        = mysql
pid-file    = /var/run/mysqld/mysqld.pid
socket      = /var/run/mysqld/mysqld.sock
port        = 3306
basedir     = /usr
datadir     = /var/lib/mysql
tmpdir      = /tmp
lc-messages-dir = /usr/share/mysql
skip-external-locking

# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
bind-address        = 127.0.0.1

#
# * Fine Tuning
#
key_buffer_size     = 16M
max_allowed_packet  = 16M
thread_stack        = 192K
thread_cache_size       = 8
# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
myisam_recover_options  = BACKUP
#max_connections        = 100
#table_cache            = 64
#thread_concurrency     = 10

innodb_buffer_pool_instances = 1
innodb_buffer_pool_size = 500M
max_heap_table_size     = 25M
tmp_table_size          = 25M
#log_slow_queries        = /var/log/mysql/mysql-slow.log
#long_query_time = 2
#log-queries-not-using-indexes

#
# * Query Cache Configuration
#
query_cache_limit   = 2M
query_cache_size        = 50M

#
# * Logging and Replication
#
# Both location gets rotated by the cronjob.
# Be aware that this log type is a performance killer.
# As of 5.1 you can enable the log at runtime!
#general_log_file        = /var/log/mysql/mysql.log
#general_log             = 1
#
# Error log - should be very few entries.
#
log_error = /var/log/mysql/error.log
#
# Enable the slow query log to see queries with especially long duration
#slow_query_log_file    = /var/log/mysql/mariadb-slow.log
#long_query_time = 10
#log_slow_rate_limit    = 1000
#log_slow_verbosity = query_plan
#log-queries-not-using-indexes
#
# The following can be used as easy to replay backup logs or for replication.
# note: if you are setting up a replication slave, see README.Debian about
#       other settings you may need to change.
#server-id      = 1
#log_bin            = /var/log/mysql/mysql-bin.log
expire_logs_days    = 10
max_binlog_size   = 100M
#binlog_do_db       = include_database_name
#binlog_ignore_db   = exclude_database_name

#
# * InnoDB
#
# InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
# Read the manual for more InnoDB related options. There are many!

#
# * Security Features
#
# Read the manual, too, if you want chroot!
# chroot = /var/lib/mysql/
#
# For generating SSL certificates you can use for example the GUI tool "tinyca".
#
# ssl-ca=/etc/mysql/cacert.pem
# ssl-cert=/etc/mysql/server-cert.pem
# ssl-key=/etc/mysql/server-key.pem
#
# Accept only connections using the latest and most secure TLS protocol version.
# ..when MariaDB is compiled with OpenSSL:
# ssl-cipher=TLSv1.2
# ..when MariaDB is compiled with YaSSL (default in Debian):
# ssl=on

#
# * Character sets
#
# MySQL/MariaDB default is Latin1, but in Debian we rather default to the full
# utf8 4-byte character set. See also client.cnf
#
character-set-server  = utf8mb4
collation-server      = utf8mb4_general_ci

#
# * Unix socket authentication plugin is built-in since 10.0.22-6
#
# Needed so the root database user can authenticate without a password but
# only when running as the unix root user.
#
# Also available for other users if required.
# See https://mariadb.com/kb/en/unix_socket-authentication-plugin/

# this is only for embedded server
[embedded]

# This group is only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]

# This group is only read by MariaDB-10.1 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mariadb-10.1]

Here is the first page of htop, and the htop result after a few minutes (screenshots; I could not export/copy the htop output as text).

I would be happy if you could help me out!!

Thanks a lot
M

innodb – MYSQL Slow Query WARNING

I am running a FiveM (Grand Theft Auto V multiplayer modification) server, which uses MySQL as its database. When a lot of data is stored in the database, queries start executing very slowly and a Slow Query warning appears in my console. Can someone help me? How can I fix/improve this, or raise the limit, and make query execution faster? Whenever I get these slow query warnings, everyone on my server experiences a delay (for example, opening a menu that reads from the database takes one to two minutes). With a brand-new database there is no problem; it only happens once the database grows beyond about 40 MB, which I don't think should be an issue. I'd be glad if someone could help.

Slow query warnings:

 [esx_billing] [4825ms] INSERT INTO billing (identifier, sender, target_type, target, label, amount) VALUES (?, ?, ?, ?, ?, ?) : ("steam:110000142fd3d53","steam:110000142fd3d53","society","society_police","Speedcamera (80KM/H) - Your speed: 148 KM/H - ",1300)

 [esx_inventoryhud_trunk] [1456ms] SELECT * FROM trunk_inventory WHERE plate = ? : ("EIK 160 ")

 [esplugin_mysql] [2232ms] UPDATE users SET `money`=?, `bank`=? WHERE `identifier`=? : (6066,337190,"steam:110000136560c03")

 [gcphone] [3332ms] UPDATE phone_messages SET phone_messages.isRead = 1 WHERE phone_messages.receiver = ? AND phone_messages.transmitter = ? : ("391-2698","774-8865")

Is it something in MySQL's configuration? Do I need to change some values/settings in the .ini to avoid these slow queries? They also only happen when more than 40 players are connected to my server, so I assume more players = more queries.
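Not part of the original question, but as an illustration of where one might start: the logged statements all filter on a single column, and whether those columns are indexed is the first thing to check. The statements below are a sketch only; the table and column names are taken from the logged queries, and whether suitable indexes already exist would need verifying with SHOW INDEX first.

```shell
# Print candidate diagnostics/DDL; run them against the server manually.
cat <<'SQL'
-- Does trunk_inventory have an index on plate? (SELECT ... WHERE plate = ?)
SHOW INDEX FROM trunk_inventory;
-- If not, a candidate index (sketch; verify the column type first):
CREATE INDEX idx_trunk_inventory_plate ON trunk_inventory (plate);

-- Same question for the other slow statements:
SHOW INDEX FROM users;           -- UPDATE ... WHERE identifier = ?
SHOW INDEX FROM phone_messages;  -- WHERE receiver = ? AND transmitter = ?
SQL
```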

innodb – What is the best mysql configuration for mysql instance with a lot of databases and lot of tables inside?

I have a MySQL instance with more than 3000 databases. Each database contains more than 200 tables, and the total data across all of them comes to around 100 GB. I am using Windows Server 2012 R2 with 4 GB of RAM. The server's RAM utilization is always very high, so I tried to restart the system, but the restart is not working: it shows "restarting" for a long time and never completes. When I checked the logs I saw there is a memory issue. What is the best configuration for MySQL with the above architecture, and what do I need to do to make this work without failure in the future?

[Warning] InnoDB: Difficult to find free blocks in the buffer pool (1486 search iterations)! 1486 failed attempts to flush a page! Consider increasing the buffer pool size. It is also possible that in your Unix version fsync is very slow, or completely frozen inside the OS kernel. Then upgrading to a newer version of your operating system may help. Look at the number of fsyncs in diagnostic info below. Pending flushes (fsync) log: 0; buffer pool: 0. 26099 OS file reads, 1 OS file writes, 1 OS fsyncs. Starting InnoDB Monitor to print further diagnostics to the standard output.
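As a back-of-the-envelope check (the numbers and rule of thumb are mine, not from the question): with only 4 GB of RAM, the buffer pool plus per-connection buffers must fit well under physical memory, and 3000 databases with 200 tables each also implies a very large table_open_cache / table_definition_cache footprint. A rough sizing sketch:

```shell
ram_mb=4096
# A common (hedged) starting point is 50-70% of RAM for
# innodb_buffer_pool_size; on a 4 GB Windows server that also has to run
# the OS, staying near the low end is safer.
pool_mb=$((ram_mb * 50 / 100))
echo "candidate innodb_buffer_pool_size: ${pool_mb}M"

tables=$((3000 * 200))
echo "total tables: ${tables}"   # far beyond what any table cache can hold
```

With 600,000 tables, no cache setting makes every table's metadata resident; the more realistic options are more RAM or splitting the databases across servers.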

linux – InnoDB Failing to start / MySQL not loading plugins

I installed MariaDB on my system (Debian 9); however, upon running sudo service mysql start, I get these errors:

Sep 22 12:32:37 bremea mariadbd[30151]: 2020-09-22 12:32:37 0 [Note] /usr/sbin/mariadbd (mysqld 10.5.5-MariaDB-1:10.5.5+maria~stretch) starting as process 30151 ...
Sep 22 12:32:37 bremea mariadbd[30151]: 2020-09-22 12:32:37 0 [Warning] Could not increase number of max_open_files to more than 16384 (request: 32184)
Sep 22 12:32:37 bremea mariadbd[30151]: 2020-09-22 12:32:37 0 [Note] InnoDB: Using Linux native AIO
Sep 22 12:32:37 bremea mariadbd[30151]: 2020-09-22 12:32:37 0 [Note] InnoDB: Uses event mutexes
Sep 22 12:32:37 bremea mariadbd[30151]: 2020-09-22 12:32:37 0 [Note] InnoDB: Compressed tables use zlib 1.2.8
Sep 22 12:32:37 bremea mariadbd[30151]: 2020-09-22 12:32:37 0 [Note] InnoDB: Number of pools: 1
Sep 22 12:32:37 bremea mariadbd[30151]: 2020-09-22 12:32:37 0 [Note] InnoDB: Using SSE4.2 crc32 instructions
Sep 22 12:32:37 bremea mariadbd[30151]: 2020-09-22 12:32:37 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
Sep 22 12:32:37 bremea mariadbd[30151]: 2020-09-22 12:32:37 0 [Note] InnoDB: Initializing buffer pool, total size = 134217728, chunk size = 134217728
Sep 22 12:32:37 bremea mariadbd[30151]: 2020-09-22 12:32:37 0 [Note] InnoDB: Completed initialization of buffer pool
Sep 22 12:32:37 bremea mariadbd[30151]: 2020-09-22 12:32:37 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
Sep 22 12:32:37 bremea mariadbd[30151]: 2020-09-22 12:32:37 0 [ERROR] InnoDB: Invalid flags 0x4800 in ./ibdata1
Sep 22 12:32:37 bremea mariadbd[30151]: 2020-09-22 12:32:37 0 [ERROR] InnoDB: Plugin initialization aborted with error Data structure corruption
Sep 22 12:32:37 bremea mariadbd[30151]: 2020-09-22 12:32:37 0 [Note] InnoDB: Starting shutdown...
Sep 22 12:32:37 bremea mariadbd[30151]: 2020-09-22 12:32:37 0 [ERROR] Plugin 'InnoDB' init function returned error.
Sep 22 12:32:37 bremea mariadbd[30151]: 2020-09-22 12:32:37 0 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
Sep 22 12:32:37 bremea mariadbd[30151]: 2020-09-22 12:32:37 0 [Note] Plugin 'FEEDBACK' is disabled.
Sep 22 12:32:37 bremea mariadbd[30151]: 2020-09-22 12:32:37 0 [ERROR] Could not open mysql.plugin table: "Table 'mysql.plugin' doesn't exist". Some plugins may be not loaded
Sep 22 12:32:37 bremea mariadbd[30151]: 2020-09-22 12:32:37 0 [ERROR] Unknown/unsupported storage engine: InnoDB
Sep 22 12:32:37 bremea mariadbd[30151]: 2020-09-22 12:32:37 0 [ERROR] Aborting
Sep 22 12:32:37 bremea systemd[1]: mariadb.service: Main process exited, code=exited, status=1/FAILURE
Sep 22 12:32:37 bremea systemd[1]: Failed to start MariaDB 10.5.5 database server.

I assume this is an error with InnoDB, but I may be wrong. I tried uninstalling/reinstalling mysql but that didn’t work either. Can anyone steer me in the right direction?
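For what it's worth (my reading, not a confirmed diagnosis): "Invalid flags 0x4800 in ./ibdata1" together with "Table 'mysql.plugin' doesn't exist" usually points at a data directory that is corrupted or left over from an incompatible installation, rather than at the new binaries. A hedged recovery sketch, applicable only if there is no data worth preserving in the datadir:

```shell
# Decode the reported flags value, just to see the raw number:
printf 'flags 0x4800 = %d decimal\n' 0x4800

# If the datadir holds nothing worth keeping, a clean re-initialization is
# the usual route (DESTRUCTIVE - shown commented out on purpose):
#   sudo systemctl stop mariadb
#   sudo mv /var/lib/mysql /var/lib/mysql.broken
#   sudo mkdir /var/lib/mysql && sudo chown mysql:mysql /var/lib/mysql
#   sudo mariadb-install-db --user=mysql --datadir=/var/lib/mysql
#   sudo systemctl start mariadb
```

If there is data to recover, the safer path is to keep the broken datadir aside untouched and work on a copy.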

innodb – MySQL performance degraded after database migration?

I migrated my MySQL database from GCP to Azure (both 5.7), but it seems to have affected performance.

Server before migration: 2 VCPUS with 7.5GB memory
Server after migration: 2 VCPUS with 8GB memory

Both servers run / ran version 5.7 of the MySQL server. My database is currently around 6GB in size, growing 100MB+ a day. It only consists of 32 tables, although a few of those tables run into the millions of rows.

I read up on innodb_buffer_pool_size, GCP apparently sets it to around 80% of the memory, which would make it 6GB. I have set the innodb_buffer_pool_size on the new server to the same value.

Before updating this value (when I first noticed decreased performance), innodb_buffer_pool_size was set to 0.1 GB on the new server. I then updated it to the value the GCP server was set at, hoping it would help.

Following this documentation I was able to update the buffer pool size.

How did I check the innodb_buffer_pool_size initially?

-- returned 0.111...
SELECT @@innodb_buffer_pool_size/1024/1024/1024;

How did I update innodb_buffer_pool_size?

SET GLOBAL innodb_buffer_pool_size=6442450944;

I checked the resize status with this query,

-- returned 'Completed resizing buffer pool at 200920 13:46:20.'
SHOW STATUS WHERE Variable_name='InnoDB_buffer_pool_resize_status';

I execute around 2 queries a second, peaking at 250k a day spread out. I can't be certain, but surely this usage shouldn't be enough to hurt performance?

How am I checking performance?

Below is a list of queries I ran and the time the server took to respond. I have tested these queries in Navicat, DataGrip, and the CLI with similar results.

I wasn’t sure what queries to include here to give as much information as possible, so if I haven’t included anything useful I can update it upon request.

-- Fetching 100k rows from a 3.1m rows table
-- Time took: 21.248s
SELECT * FROM `profile_connections` LIMIT 100000;

-- (SECOND TIME) Fetching 100k rows from a 3.1m rows table
-- Time took: 1.735s
SELECT * FROM `profile_connections` LIMIT 100000;

-- Fetching a random row from a 3.1m row table
-- Time took: 0.857s
SELECT * FROM `profile_connections` WHERE `id` = 2355895 LIMIT 1;

-- (SECOND TIME) Fetching a random row from a 3.1m row table 
-- Time took: 0.850s
SELECT * FROM `profile_connections` WHERE `id` = 2355895 LIMIT 1;

-- Fetching all rows from a 20 row table
-- Time took: 40.010s
SELECT * FROM `profile_types`

-- (SECOND) Fetching all rows from a 20 row table
-- Time took: 0.850s
SELECT * FROM `profile_types`

But at times I can run all of the above queries and get a response in 2-5 seconds. Performance seems to be hit or miss; there are huge differences in the time taken for the same query depending on when it is run, which I am currently struggling to diagnose.

I ran mysqltuner and got these performance metrics back:

[--] Up for: 47m 39s (38K q (13.354 qps), 1K conn, TX: 403M, RX: 63M)
[--] Reads / Writes: 50% / 50%
[--] Binary logging is disabled
[--] Physical Memory     : 7.8G
[--] Max MySQL memory    : 146.8G
[--] Other process memory: 0B
[--] Total buffers: 6.0G global + 954.7M per thread (151 max threads)
[--] P_S Max memory usage: 72B
[--] Galera GCache Max memory usage: 0B
[!!] Maximum reached memory usage: 21.9G (281.61% of installed RAM)
[!!] Maximum possible memory usage: 146.8G (1888.34% of installed RAM)
[!!] Overall possible memory usage with other process exceeded memory
[OK] Slow queries: 3% (1K/38K)
[OK] Highest usage of available connections: 11% (17/151)
[OK] Aborted connections: 0.67%  (9/1342)
[!!] name resolution is active : a reverse name resolution is made for each new connection and can reduce performance
[OK] Query cache is disabled by default due to mutex contention on multiprocessor machines.
[OK] Sorts requiring temporary tables: 0% (0 temp sorts / 41 sorts)
[OK] No joins without indexes
[OK] Temporary tables created on disk: 4% (82 on disk / 1K total)
[OK] Thread cache hit rate: 98% (17 created / 1K connections)
[OK] Table cache hit rate: 63% (667 open / 1K opened)
[OK] table_definition_cache(1400) is upper than number of tables(302)
[OK] Open file limit used: 1% (55/5K)
[OK] Table locks acquired immediately: 100% (1K immediate / 1K locks)
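The mysqltuner warnings above are internally consistent and worth unpacking (arithmetic mine): "Max MySQL memory: 146.8G" is simply the 6.0G of global buffers plus 954.7M per thread times the 151 max threads, which is why the tool flags 1888% of installed RAM:

```shell
# global buffers + per-thread buffers * max threads, expressed in GiB
awk 'BEGIN { printf "%.1f G\n", (6*1024 + 954.7*151) / 1024 }'
```

The 954.7M per-thread figure is dominated by session buffers (sort/join/read buffers and similar); on an 8 GB VM either those buffers or max_connections would need to come down so the worst case fits in RAM alongside the 6G buffer pool.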

Slow query logs
I run a lot of the same queries, so I’ve truncated it to include just a few.

# Time: 2020-09-20T16:45:04.230173Z
# User@Host: root(root) @  (51.132.38.176)  Id:     7
# Query_time: 1.022011  Lock_time: 0.000084 Rows_sent: 1  Rows_examined: 1058161
SET timestamp=1600620304;
SELECT @id := `id`,`item`
                    FROM `queue_items`
                    WHERE `processed_at` IS NULL AND `completed_at` IS NULL AND `confirmed` = '1' ORDER BY `id` ASC
                    LIMIT 1
                    FOR UPDATE;
# Time: 2020-09-20T16:45:09.676613Z
# User@Host: root(root) @  (51.132.38.176)  Id:     5
# Query_time: 1.198063  Lock_time: 0.000000 Rows_sent: 0  Rows_examined: 0
SET timestamp=1600620309;
COMMIT;
# Time: 2020-09-20T16:45:22.938081Z
# User@Host: root(root) @  (51.105.34.135)  Id:     4
# Query_time: 5.426964  Lock_time: 0.000133 Rows_sent: 0  Rows_examined: 1
SET timestamp=1600620322;
UPDATE `queue_items` SET `completed_at` = '2020-09-20 16:45:17', `updated_at` = '2020-09-20 16:45:17' WHERE `id` = 1818617;
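One detail in the slow log above stands out: the first query examines 1,058,161 rows to return 1. Whether an index helps depends on the actual schema, but as a sketch (the index name and column order are my assumption, not from the question):

```shell
cat <<'SQL'
-- The WHERE clause filters on processed_at, completed_at and confirmed,
-- then orders by the primary key. A composite index over the filter
-- columns would let InnoDB stop after the first matching row instead of
-- scanning a million:
ALTER TABLE queue_items
  ADD INDEX idx_queue_pick (confirmed, processed_at, completed_at);
SQL
```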

innodb – order by slowing down query with multiple joins and limit/offset on larger result sets

I am having trouble with the following query taking quite a long time to process when the result set is large. The LIMIT and OFFSET can change, as this is used for pagination. The range on capture_timestamp can also change, but in this example it finds ALL results (between 0 and 9999999999; this field is an int UTC timestamp). The ORDER BY seems to take up most of the processing time. It looks like the query uses user_id for the table join, but then never uses any index for the ordering.

On the logs table I have the following indexes :

PRIMARY : activity_id
user_id : (user_id, capture_timestamp)
capture_timestamp : capture_timestamp (added this to see if by itself would make a difference - it did not)

There are keys setup for all the ON joins.

This particular query, for example, has 2440801 results (the logs table itself currently holds 18332067 rows), but I am only showing the first 10 sorted by capture_timestamp, and it takes roughly 7 seconds to return the results.

SELECT
    logs.activity_id,
    users.username,
    computers.computer_name,
    computers.os,
    logs.event_title,
    logs.event_target,
    logs.capture_timestamp

FROM computers
INNER JOIN users
    ON users.computer_id = computers.computer_id
INNER JOIN logs
    ON logs.user_id = users.user_id AND logs.capture_timestamp BETWEEN :cw_date_start AND :cw_date_end
    
WHERE computers.account_id = :cw_account_id AND computers.status = 1
ORDER BY logs.capture_timestamp DESC
LIMIT 0,10

analyze:

Array
(
    [0] => Array
        (
            [ANALYZE] => {
  "query_block": {
    "select_id": 1,
    "r_loops": 1,
    "r_total_time_ms": 6848.2,
    "filesort": {
      "sort_key": "logs.capture_timestamp desc",
      "r_loops": 1,
      "r_total_time_ms": 431.25,
      "r_limit": 10,
      "r_used_priority_queue": true,
      "r_output_rows": 11,
      "temporary_table": {
        "table": {
          "table_name": "computers",
          "access_type": "ref",
          "possible_keys": ["PRIMARY", "account_id_2", "account_id"],
          "key": "account_id_2",
          "key_length": "4",
          "used_key_parts": ["account_id"],
          "ref": ["const"],
          "r_loops": 1,
          "rows": 294,
          "r_rows": 294,
          "r_total_time_ms": 0.4544,
          "filtered": 100,
          "r_filtered": 100,
          "attached_condition": "computers.`status` = 1"
        },
        "table": {
          "table_name": "users",
          "access_type": "ref",
          "possible_keys": ["PRIMARY", "unique_filter"],
          "key": "unique_filter",
          "key_length": "4",
          "used_key_parts": ["computer_id"],
          "ref": ["db.computers.computer_id"],
          "r_loops": 294,
          "rows": 1,
          "r_rows": 3.415,
          "r_total_time_ms": 0.7054,
          "filtered": 100,
          "r_filtered": 100,
          "using_index": true
        },
        "table": {
          "table_name": "logs",
          "access_type": "ref",
          "possible_keys": ["user_id", "capture_timestamp"],
          "key": "user_id",
          "key_length": "4",
          "used_key_parts": ["user_id"],
          "ref": ["db.users.user_id"],
          "r_loops": 1004,
          "rows": 424,
          "r_rows": 2431.1,
          "r_total_time_ms": 4745.3,
          "filtered": 100,
          "r_filtered": 100,
          "index_condition": "logs.capture_timestamp between '0' and '9999999999'"
        }
      }
    }
  }
}
        )

)

Is there anything I can do here to speed these up? When the result set is smaller, everything is pretty much immediate, although I guess that is because there isn't as much sorting to do.

Additions:

CREATE TABLE `computers` (
  `computer_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `account_id` int(10) unsigned NOT NULL,
  `status` tinyint(1) unsigned NOT NULL,
  `version` varchar(10) COLLATE utf8_unicode_ci NOT NULL,
  `os` tinyint(1) unsigned NOT NULL,
  `computer_uid` varchar(64) COLLATE utf8_unicode_ci NOT NULL,
  `computer_name` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
  `last_username` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
  `uninstall` tinyint(1) unsigned NOT NULL,
  `capture_timestamp` int(10) unsigned NOT NULL,
  PRIMARY KEY (`computer_id`),
  UNIQUE KEY `account_id_2` (`account_id`,`computer_uid`),
  KEY `account_id` (`account_id`,`status`),
  CONSTRAINT `computers_ibfk_1` FOREIGN KEY (`account_id`) REFERENCES `accounts` (`account_id`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=14362124 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci    

CREATE TABLE `users` (
  `user_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `computer_id` int(10) unsigned NOT NULL,
  `username` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
  `changed` tinyint(1) unsigned NOT NULL,
  `timestamp` int(10) unsigned NOT NULL,
  `ctimestamp` int(10) unsigned NOT NULL,
  `stimestamp` int(10) unsigned NOT NULL,
  PRIMARY KEY (`user_id`) USING BTREE,
  UNIQUE KEY `unique_filter` (`computer_id`,`username`),
  CONSTRAINT `users_ibfk_1` FOREIGN KEY (`computer_id`) REFERENCES `computers` (`computer_id`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=54312 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci   

CREATE TABLE `logs` (
  `activity_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `user_id` int(10) unsigned NOT NULL,
  `event_title` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
  `event_target` text COLLATE utf8_unicode_ci NOT NULL,
  `capture_timestamp` int(10) unsigned NOT NULL,
  `timestamp` int(10) unsigned NOT NULL,
  `demo` tinyint(1) unsigned NOT NULL DEFAULT 0,
  PRIMARY KEY (`activity_id`) USING BTREE,
  KEY `timestamp` (`timestamp`,`demo`),
  KEY `user_id` (`user_id`,`capture_timestamp`) USING BTREE,
  KEY `capture_timestamp` (`capture_timestamp`),
  CONSTRAINT `logs_ibfk_1` FOREIGN KEY (`user_id`) REFERENCES `users` (`user_id`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=444156934 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci   
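A rewrite sometimes tried for this shape of query (my sketch, not guaranteed to win here) is a "deferred join": first find the 10 newest activity_ids using only the indexed columns, then join back for the display columns. Because event_target is TEXT, this at least keeps the wide columns out of the filesort's temporary rows; whether it beats the original depends on how selective account_id is:

```shell
cat <<'SQL'
SELECT l.activity_id, u.username, c.computer_name, c.os,
       l.event_title, l.event_target, l.capture_timestamp
FROM (
    SELECT logs.activity_id
    FROM logs
    INNER JOIN users     ON users.user_id = logs.user_id
    INNER JOIN computers ON computers.computer_id = users.computer_id
    WHERE computers.account_id = :cw_account_id
      AND computers.status = 1
      AND logs.capture_timestamp BETWEEN :cw_date_start AND :cw_date_end
    ORDER BY logs.capture_timestamp DESC
    LIMIT 0, 10
) AS page
INNER JOIN logs      l ON l.activity_id = page.activity_id
INNER JOIN users     u ON u.user_id = l.user_id
INNER JOIN computers c ON c.computer_id = u.computer_id
ORDER BY l.capture_timestamp DESC;
SQL
```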

How to safely mirror an InnoDB table in MySQL?

I have one MySQL InnoDB table with heavy row-update traffic. The table is also frequently used for heavy SELECT queries (especially GROUP BY for summary reports). As I understand it, these heavy SELECT queries interfere with, or reduce, update performance. So my idea is to mirror this table, so the heavy SELECT queries run against the mirror table (a 15-minute delay is acceptable).

FYI, this table is approximately 10 GB. I have used a MySQL event (scheduled every 15 minutes) to copy the table to another one with an INSERT INTO ... SELECT query. The copy actually takes more than 60 seconds, and the UPDATE queries are still impacted while this event runs.

So, is there any best/common practice for mirroring a MySQL table with minimal impact on the current queries (especially UPDATEs) on the master table?

Note: I want to do this on the same server.
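One common pattern (a sketch, not part of the question; the table names are illustrative) is to build the snapshot under a scratch name and swap it in with an atomic RENAME TABLE, so report readers never block on, or see, a half-copied table:

```shell
cat <<'SQL'
-- Build the snapshot under a scratch name:
CREATE TABLE report_copy_new LIKE busy_table;
INSERT INTO report_copy_new SELECT * FROM busy_table;

-- Atomic swap: readers see either the old snapshot or the new one,
-- never a partial copy. The old snapshot is dropped afterwards.
RENAME TABLE report_copy TO report_copy_old,
             report_copy_new TO report_copy;
DROP TABLE report_copy_old;
SQL
```

The copy itself still reads busy_table, though; to take even that load off the master, the usual answer is a replica, which can live on the same host as a second mysqld instance.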

InnoDB mysql 5.7 cluster won't join after reboot

I get the following when I try to recover the cluster from a complete outage. Can anyone advise what's going on?

cluster = dba.rebootClusterFromCompleteOutage()

Dba.rebootClusterFromCompleteOutage: Invalid value for localAddress,
string value cannot be empty. (ArgumentError)

Thanks

innodb – MySQL Query Fetch Time increases when LEFT JOIN with big table

I am fetching the same amount of data with 2 different queries. However, one of them has a fetch time roughly 130x that of the other, the only difference between the two queries being a LEFT JOIN with a big table (4M rows).

Specifically, my problem goes like this:
I have table_a with 200K rows, table_b with 100 rows, and table_c with 4M rows. The fields involved are indexed.

My query looks something like this:

       SELECT
          a.id
       FROM
          table_a a
          LEFT JOIN table_b b ON b.id = a.b_id
          LEFT JOIN table_c c ON c.id = b.c_id
       GROUP BY a.id;

MySQL Workbench tells me that this query takes ~4s of Duration and 130 seconds of Fetch Time.
However, when I remove the second LEFT JOIN with the big table, the query takes <1s of Duration and <1s Fetch Time.

I clearly understand why the query duration is increased; I am doing a rather heavy left join. But my question is: why is the fetch time so much higher if the fetched data is the same?

I have already increased innodb_buffer_pool_size with no success.

I am working in MySQL 8.0.19, with InnoDB as the table engine.

Is there something I am missing here?
Thanks in advance for the help!

innodb – Issues with MySQL 8.0.17

Recently we upgraded the MySQL version on our UAT database server from 5.7.18 to 8.0.17.

We are facing issues with our web application: the website loads slowly when more users access it, even if only 10-20 users open the application simultaneously. We didn't face this issue with the previous version, MySQL 5.7.18. The only major difference in settings is that the query cache is removed in MySQL 8.0.17; we were using the query cache in the previous version. Could this be the cause? The total size of the database is about 1 TB, and it is hosted on Windows Server.

Here are the MySQL variable values (most of them were the same in the older version as well):

default-character-set=utf8mb4
skip_ssl
event_scheduler=OFF
collation-server = utf8mb4_unicode_ci
init-connect='SET NAMES utf8mb4'
character-set-server = utf8mb4
port=3306
default-storage-engine=MYISAM
sql-mode="STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION"
log-output=FILE
skip-log-bin
lower_case_table_names=1
max_connections=1500
table_open_cache=2000
tmp_table_size=16M
thread_cache_size=9
myisam_max_sort_file_size=100G
myisam_sort_buffer_size=32M
key_buffer_size=3584M
read_buffer_size=512K
wait_timeout = 480
read_rnd_buffer_size=1M
skip-innodb
innodb_flush_log_at_trx_commit=1
innodb_log_buffer_size=8M
innodb_buffer_pool_size=72M
innodb_log_file_size=48M
innodb_thread_concurrency=8
innodb_autoextend_increment=64M
innodb_buffer_pool_instances=8
innodb_concurrency_tickets=5000
innodb_old_blocks_time=1000
innodb_open_files=300
innodb_stats_on_metadata=0
innodb_file_per_table=1
innodb_checksum_algorithm=0
back_log=70
flush_time=0
join_buffer_size=256K
max_allowed_packet=1060M
max_connect_errors=100
open_files_limit=4110
sort_buffer_size=1M
table_definition_cache=1400
binlog_row_event_max_size=8K
wait_timeout = 480
sync_master_info=10000
sync_relay_log=10000
sync_relay_log_info=10000
loose-local-infile = 1

Note: we don't use InnoDB tables in our application; all tables are MyISAM. I am aware that all internal system tables are InnoDB since MySQL version 8.

In production we expect about 200-500 users accessing the portals simultaneously. Can anyone please suggest settings changes so that our application's loading time will improve?
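Two things in the posted configuration are worth checking against MySQL 8.0 (the arithmetic and observations are mine, not from the question): skip-innodb no longer disables InnoDB in 8.0, since the data dictionary itself lives in InnoDB, and the per-connection buffers multiplied by max_connections=1500 add a sizeable worst case on top of the 3584M key buffer:

```shell
# Worst-case per-connection buffers from the posted config, in KiB:
# read_buffer_size=512K + read_rnd_buffer_size=1M + sort_buffer_size=1M
# + join_buffer_size=256K
per_conn_kib=$((512 + 1024 + 1024 + 256))
echo "per connection: ${per_conn_kib} KiB"

# Across max_connections=1500, on top of key_buffer_size=3584M:
awk -v kib="$per_conn_kib" \
    'BEGIN { printf "worst case: %.1f G\n", (kib*1500/1024 + 3584) / 1024 }'
```

That worst case is fine if the box has plenty of RAM, but it is worth recomputing against the server's actual memory before raising max_connections further.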