linux – Using nftables to drop UDP packets on a random basis

I have an iptables rule that I use to drop UDP packets on a random basis. I’m trying to convert this to nftables, but I’m likely making a syntax error, because nftables complains that the “dport” argument is unexpected.

Here’s my iptables rule: iptables -A INPUT -m statistic --mode random --probability 0.10 -p udp --destination-port 2020 -i eth0 -j DROP

I tried using iptables-translate but I get the same rule back:

$ iptables-translate -A INPUT -m statistic --mode random --probability 0.10 -p udp  --destination-port 2020 -i eth0  -j DROP
nft # -A INPUT -m statistic --mode random --probability 0.10 -p udp --destination-port 2020 -i eth0 -j DROP

I tried to create my own nftables table, chain and rule using:

$ sudo nft add table ip filter
$ sudo nft add chain ip filter mychain

But when I try to use the rule, I get the error that dport is unexpected:

$ sudo nft add rule ip filter mychain input udp dport 2020 drop
Error: syntax error, unexpected dport, expecting end of file or newline or semicolon
add rule ip filter mychain input udp dport 2020 drop
                                     ^^^^^          
  1. What am I doing wrong?
  2. Why can’t iptables-translate translate this rule?
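For reference, a sketch of what an equivalent nftables ruleset might look like (illustrative and untested here: the chain needs a hook specification before it sees input traffic, and `numgen random` can express the 10% probability; table and chain names are taken from the question):

```
table ip filter {
    chain mychain {
        type filter hook input priority 0; policy accept;
        iif "eth0" udp dport 2020 numgen random mod 10 == 0 drop
    }
}
```

Loaded with `nft -f file.nft`, the expression `numgen random mod 10 == 0` would match roughly one packet in ten.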

linux networking – Why are “Relayed” multicast packets not received?

I wrote a test program to diagnose multicast routing problems.
The program has several modes of operation:

  • sender (send out a number of multicast packets)
  • receiver (receive a number of multicast packets)
  • requester (send a multicast packet, then time-wait for a response, repeat a number of times)
  • responder (receive a multicast packet, then send a response, repeat a number of times)
  • relay (like responder, but don’t respond to the sending address, but to the multicast address)

The “relay” mode was added most recently. All the other modes work as expected, but “relay” does not, even though it does more or less the same as the other modes:
The relay only receives its own responses, while the requester does not receive any response.

I compared a combination of (requester, responder) with (requester, relay) on the same host:

Requester 1

~/src/C/multicast> ./mc-tester -l 224.7.7.7/123 -d 224.7.7.7/1234 -m requester -c100 -v1
(1) verbosity = 1
(1) Sending 100 requests on 3...
(1) Sending "224.7.7.7/1234: v04 request #1/100" size 39 to 224.7.7.7/1234 on 3
(1) Receiving message #1 on 3...
(1) v04 received #1/1 from 172.20.16.35/60248 (TTL -1): "172.20.16.35/35949 v04: response #1/10 for #1"
(1) Sending "224.7.7.7/1234: v04 request #2/100" size 39 to 224.7.7.7/1234 on 3
(1) Receiving message #1 on 3...
(1) v04 received #1/2 from 172.20.16.35/60248 (TTL -1): "172.20.16.35/35949 v04: response #2/10 for #2"
(1) Sending "224.7.7.7/1234: v04 request #3/100" size 39 to 224.7.7.7/1234 on 3
(1) Receiving message #1 on 3...
(1) v04 received #1/3 from 172.20.16.35/60248 (TTL -1): "172.20.16.35/35949 v04: response #3/10 for #3"
^C

(“TTL -1” means the received TTL is unknown)

Responder 1

~windl/src/C/multicast/mc-tester -v3 -l 224.7.7.7/1234 -m responder -d 224.7.7.7/1234 -c10
(1) verbosity = 3
(2) /home/windl/src/C/multicast/mc-tester: 224.7.7.7/1234 -> 224.7.7.7/1234 (16)
(1) op_mode = 3
(2) /home/windl/src/C/multicast/mc-tester: 224.7.7.7/1234 -> 224.7.7.7/1234 (16)
(1) msg_count = 10
(2) socket(PF_INET, SOCK_DGRAM, 0)...
(2) setsockopt(3, SO_REUSEADDR, 1)...
(2) socket(PF_INET, SOCK_DGRAM, 0)...
(2) setsockopt(4, SO_REUSEADDR, 0)...
(2) bind(3, 224.7.7.7/1234)...
(2) recv_socket: getsockname(3) returned 224.7.7.7/1234
(2) setsockopt(3, IP_MULTICAST_LOOP, 0)...
(2) setsockopt(3, IP_RECVTTL, 1)...
(2) setsockopt(3, IP_ADD_MEMBERSHIP, 224.7.7.7/1234)...
(2) setsockopt(4, IP_MULTICAST_TTL, 3)...
(1) Receiving 10 messages on 3...
(1) v04 received #1/10 from 172.20.16.35/35949 (TTL 3): "224.7.7.7/1234: v04 request #1/100"
(1) Sending "172.20.16.35/35949 v04: response #1/10 for #1" size 50 to 172.20.16.35/35949 on 4
(1) v04 received #2/10 from 172.20.16.35/35949 (TTL 3): "224.7.7.7/1234: v04 request #2/100"
(1) Sending "172.20.16.35/35949 v04: response #2/10 for #2" size 50 to 172.20.16.35/35949 on 4
(1) v04 received #3/10 from 172.20.16.35/35949 (TTL 3): "224.7.7.7/1234: v04 request #3/100"
(1) Sending "172.20.16.35/35949 v04: response #3/10 for #3" size 50 to 172.20.16.35/35949 on 4
^C

So that combination worked as expected.
Now the combination that did not:

Requester 2

/src/C/multicast> ./mc-tester -l 224.7.7.7/123 -d 224.7.7.7/1234 -m requester -c100 -v1
(1) verbosity = 1
(1) Sending 100 requests on 3...
(1) Sending "224.7.7.7/1234: v04 request #1/100" size 39 to 224.7.7.7/1234 on 3
select timed out
(1) Sending "224.7.7.7/1234: v04 request #2/100" size 39 to 224.7.7.7/1234 on 3
select timed out
(1) Sending "224.7.7.7/1234: v04 request #3/100" size 39 to 224.7.7.7/1234 on 3
^C

(“select timed out” refers to receiving, not sending)

Relay

~windl/src/C/multicast/mc-tester -v3 -l 224.7.7.7/1234 -m relay -d 224.7.7.7/1234 -c10
(1) verbosity = 3
(2) /home/windl/src/C/multicast/mc-tester: 224.7.7.7/1234 -> 224.7.7.7/1234 (16)
(1) op_mode = 4
(2) /home/windl/src/C/multicast/mc-tester: 224.7.7.7/1234 -> 224.7.7.7/1234 (16)
(1) msg_count = 10
(2) socket(PF_INET, SOCK_DGRAM, 0)...
(2) setsockopt(3, SO_REUSEADDR, 0)...
(2) socket(PF_INET, SOCK_DGRAM, 0)...
(2) setsockopt(4, SO_REUSEADDR, 1)...
(2) bind(3, 224.7.7.7/1234)...
(2) recv_socket: getsockname(3) returned 224.7.7.7/1234
(2) setsockopt(3, IP_MULTICAST_LOOP, 0)...
(2) setsockopt(3, IP_RECVTTL, 1)...
(2) setsockopt(3, IP_ADD_MEMBERSHIP, 224.7.7.7/1234)...
(2) setsockopt(4, IP_MULTICAST_TTL, 3)...
(2) setsockopt(4, IP_ADD_MEMBERSHIP, 224.7.7.7/1234)...
(1) Relaying 10 messages on 3...
(1) v04 received #1/10 from 172.20.16.35/33488 (TTL 3): "224.7.7.7/1234: v04 request #1/100"
(1) Sending "224.7.7.7/1234 v04: relay #1/10 for #1" size 43 to 224.7.7.7/1234 on 4
(1) v04 received #2/10 from 172.20.16.35/44217 (TTL 3): "224.7.7.7/1234 v04: relay #1/10 for #1"
(1) Sending "224.7.7.7/1234 v04: relay #2/10 for #1" size 43 to 224.7.7.7/1234 on 4
(1) v04 received #3/10 from 172.20.16.35/44217 (TTL 3): "224.7.7.7/1234 v04: relay #2/10 for #1"
(1) Sending "224.7.7.7/1234 v04: relay #3/10 for #2" size 43 to 224.7.7.7/1234 on 4
(1) v04 received #4/10 from 172.20.16.35/44217 (TTL 3): "224.7.7.7/1234 v04: relay #3/10 for #2"
(1) Sending "224.7.7.7/1234 v04: relay #4/10 for #3" size 43 to 224.7.7.7/1234 on 4

(1) v04 received #9/10 from 172.20.16.35/44217 (TTL 3): "224.7.7.7/1234 v04: relay #8/10 for #7"
(1) Sending "224.7.7.7/1234 v04: relay #9/10 for #8" size 43 to 224.7.7.7/1234 on 4
(1) v04 received #10/10 from 172.20.16.35/44217 (TTL 3): "224.7.7.7/1234 v04: relay #9/10 for #8"
(1) Sending "224.7.7.7/1234 v04: relay #10/10 for #9" size 44 to 224.7.7.7/1234 on 4
(1) Received 10 messages
(2) setsockopt(3, IP_DROP_MEMBERSHIP)...
(2) setsockopt(4, IP_DROP_MEMBERSHIP)...
(2) close(4)...
(2) close(3)...

So the messages in the relay are looping locally.
The test was done with Linux (SLES12 SP4).

I decided not to lengthen the question with the C source of the program, but when requested I can present the relevant parts or an ltrace/strace of the relay.

proxy – How can I route usb modem packets with squid

I have two USB modems connected to a Linux machine. Both connections are made through wvdial and are working; I verified that with:
curl https://api.myip.com/ --interface pppx and ping www.google.com -I pppx

Below are the responses to some commands that help show my config.

# ifconfig | grep -e 'ppp[0-1]' -A 1
ppp0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1500
        inet 10.120.178.170  netmask 255.255.255.255  destination 10.64.64.64
--
ppp1: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1500
        inet 10.114.188.103  netmask 255.255.255.255  destination 10.64.64.65

# ip rule

0:  from all lookup local 
32764:  from 10.114.188.103 lookup ppp1 
32765:  from 10.120.178.170 lookup ppp0 
32766:  from all lookup main 
32767:  from all lookup default 

# ip route
default via 192.168.12.1 dev wlan0 proto dhcp src 192.168.12.150 metric 303 mtu 1500 
10.64.64.64 dev ppp0 proto kernel scope link src 10.120.178.170 
10.64.64.65 dev ppp1 proto kernel scope link src 10.114.188.103 
169.254.0.0/16 dev wwan0 scope link src 169.254.169.80 metric 242 
169.254.0.0/16 dev wwan1 scope link src 169.254.110.247 metric 244 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 
192.168.12.0/24 dev wlan0 proto dhcp scope link src 192.168.12.150 metric 303 mtu 1500 

I am trying to have packets routed through one of these two USB modems depending on which Squid port is used. For that purpose I'm using a localport acl with tcp_outgoing_address, like this:

http_port 3130
http_port 3131
acl thirdport  localport 3130
acl forthport  localport 3131
tcp_outgoing_address 10.114.188.103 thirdport
tcp_outgoing_address 10.120.178.170 forthport

the full squid.conf:

acl localnet src 192.168.12.0/24    # RFC1918 possible internal network
acl SSL_ports port 443
acl Safe_ports port 80      # http
acl Safe_ports port 21      # ftp
acl Safe_ports port 443     # https
acl Safe_ports port 70      # gopher
acl Safe_ports port 210     # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280     # http-mgmt
acl Safe_ports port 488     # gss-http
acl Safe_ports port 591     # filemaker
acl Safe_ports port 777     # multiling http
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager 
http_access deny manager
http_access allow localnet
http_access allow localhost
http_access deny all
http_port 3128

http_port 3130
http_port 3131


acl thirdport  localport 3130
acl forthport  localport 3131

tcp_outgoing_address 10.114.188.103 thirdport
tcp_outgoing_address 10.120.178.170 forthport

coredump_dir /var/spool/squid
refresh_pattern ^ftp:       1440    20% 10080
refresh_pattern ^gopher:    1440    0%  1440
refresh_pattern -i (/cgi-bin/|?) 0 0%  0
refresh_pattern (Release|Packages(.gz)*)$      0       20%     2880
refresh_pattern .       0   20% 4320
via off
forwarded_for off
request_header_access Allow allow all
request_header_access Authorization allow all
request_header_access WWW-Authenticate allow all
request_header_access Proxy-Authorization allow all
request_header_access Proxy-Authenticate allow all
request_header_access Cache-Control allow all
request_header_access Content-Encoding allow all
request_header_access Content-Length allow all
request_header_access Content-Type allow all
request_header_access Date allow all
request_header_access Expires allow all
request_header_access Host allow all
request_header_access If-Modified-Since allow all
request_header_access Last-Modified allow all
request_header_access Location allow all
request_header_access Pragma allow all
request_header_access Accept allow all
request_header_access Accept-Charset allow all
request_header_access Accept-Encoding allow all
request_header_access Accept-Language allow all
request_header_access Content-Language allow all
request_header_access Mime-Version allow all
request_header_access Retry-After allow all
request_header_access Title allow all
request_header_access Connection allow all
request_header_access Proxy-Connection allow all
request_header_access User-Agent allow all
request_header_access Cookie allow all
request_header_access All deny all

The problem is that when I connect to port 3130 (ppp1 interface) it works without problem, but when I try port 3131 (ppp0 interface) it doesn’t work and I get the following output in /var/log/squid/access.log:

1615513848.373 239749 192.168.12.145 NONE/503 0 CONNECT api.twitter.com:443 - HIER_NONE/- -
1615513848.373 239442 192.168.12.145 NONE/503 0 CONNECT api.twitter.com:443 - HIER_NONE/- -
1615513867.431   1232 192.168.12.145 TCP_MISS/204 183 GET http://connectivitycheck.gstatic.com/generate_204 - HIER_DIRECT/172.217.171.227 -
1615513874.402 119994 192.168.12.145 NONE/503 0 CONNECT api.myip.com:443 - HIER_NONE/- -
1615513886.416 120303 192.168.12.145 NONE/503 0 CONNECT api.myip.com:443 - HIER_NONE/- -

Am I doing something wrong here? Are there steps I can take to get more information about the problem?
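One diagnostic sketch (hypothetical, not verified against this setup): the `ip rule` entries send traffic from the two modem addresses to tables `ppp0` and `ppp1`, so each of those tables needs a usable default route, or packets sourced from Squid's bound addresses have nowhere to go:

```
# inspect the per-interface tables referenced by ip rule
ip route show table ppp0
ip route show table ppp1

# if a table is empty, a per-table default route (illustrative) would be:
ip route add default dev ppp0 table ppp0
ip route add default dev ppp1 table ppp1
```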

linux – How to block outgoing packets to IP range with iptables?

I want to block outgoing packets to an IP range but the iptables command I’m using does not seem to work.

 sudo iptables -P OUTPUT ACCEPT
 sudo iptables -A OUTPUT -s 157.240.0.0/16 -j REJECT
 sudo iptables -A OUTPUT -s 31.13.0.0/16 -j REJECT
 sudo iptables -A OUTPUT -s 192.229.0.0/16 -j REJECT
 sudo iptables -A OUTPUT -s 104.244.0.0/16 -j REJECT

Isn’t what I need to do to

  1. allow all outgoing packets by default (the ACCEPT policy), and
  2. then block the specific ranges?
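For comparison (a sketch, which may not be the whole story): `-s` matches the *source* address of a packet, while outgoing traffic *to* a range is normally matched with `-d`. The same rules rewritten to match on destination (addresses copied from the question):

```
sudo iptables -P OUTPUT ACCEPT
sudo iptables -A OUTPUT -d 157.240.0.0/16 -j REJECT
sudo iptables -A OUTPUT -d 31.13.0.0/16 -j REJECT
sudo iptables -A OUTPUT -d 192.229.0.0/16 -j REJECT
sudo iptables -A OUTPUT -d 104.244.0.0/16 -j REJECT
```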

internet – what will be the initial sequence (first 6, say) of packets that flow on the network from or to this computer?

Consider a computer in Hostel 2 (Ethernet LAN) with the following network configuration: IP 10.2.3.5, netmask 255.255.0.0, gateway 10.2.1.1. The computer is switched on, a web browser is started, and the user types in the URL http://www.rediff.com and begins browsing this site. Assuming that the DNS server is 10.200.1.11 and that there is no other network traffic, what will be the initial sequence (the first 6, say) of packets that flow on the network from or to this computer?

networking – Send Ethernet Packets under unicast

I am trying to create this type of Ethernet frame:

Dest address (first bits zero) + source address + source IP address + dest IP address + data

But I don’t know whether the EtherType for IPv4 (08 00), or anything else, is necessary.

I want peer-to-peer communication over the Ethernet protocol, so I want to use unicast frames, but how do I do that? How can I prepare an Ethernet frame for unicast? I searched on Google but didn’t find anything useful on how to continue. I have an STM32 with lwIP and an Ethernet switch for this purpose.

How can I create unicast point-to-point Ethernet packets? Please help me.

wifi – Does my wireless adapter support packet injection if it can send deauth packets?

So, I want to learn a bit about Wi-Fi security and I have some questions. When somebody sends deauth packets, does that count as packet injection? And does my wireless adapter support packet injection if it can send deauth packets?

performance – mariadb: Aborted connection .. Got timeout reading communication packets

What is the typical cause of warnings such as this? They appear periodically, sometimes multiple times per day then not for a day or so.

2021-01-08 13:20:46 203939 [Warning] Aborted connection 203939 to db: 'lsv' user: 'finder' host: '23.227.111.186' (Got timeout reading communication packets)

This database server is only queried by a few hosts, and it seems to happen with all hosts and all databases on the host. This server is connected by a 1gbit link to the Internet as well as a 10gbit local link to a web server.

This is a mariadb-10.4.17 server on Fedora 33 with a 5.9.16 kernel and 128GB of RAM; it’s the only function of this box. It’s been happening for quite some time, and it doesn’t seem to matter which host is involved. How do I troubleshoot this? Could this be a networking problem?
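As a troubleshooting starting point (a sketch using standard MariaDB commands): this warning is typically logged when a client sits idle past the server's timeout or disconnects without a clean quit, so it helps to compare the aborted-connection counters against the configured timeouts:

```sql
-- how many connections died without a proper close vs. failed to connect
SHOW GLOBAL STATUS LIKE 'Aborted_c%';
-- the timeouts that cause the server to drop idle sessions
SHOW GLOBAL VARIABLES LIKE '%timeout%';
```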

I would appreciate any ideas you might have. Here is the contents of the my.cnf.

# cat my.cnf |grep -Ev '^$|^#'
[client]
port            = 3306
socket          = /var/lib/mysql/mysql.sock
default-character-set = utf8mb4

[mysqld]
character-set-client-handshake = FALSE
character-set-server = utf8mb4
collation-server = utf8mb4_unicode_ci
max_connections=600
replicate_do_db='txrepdb'
replicate_do_db='sqlgrey'
replicate_do_db='sbclient'
port            = 3306
socket          = /var/lib/mysql/mysql.sock
skip-external-locking
key_buffer_size = 256M
max_allowed_packet = 512M
join_buffer_size = 2M 
read_rnd_buffer_size = 4M
myisam_sort_buffer_size = 64M
query_cache_size = 0
query_cache_type = 0
relay_log_space_limit = 500M
relay_log_purge = 1
log-slave-updates = 1
local_infile = OFF
binlog_format = ROW
max_heap_table_size = 1024M 
tmp_table_size = 1024M 
performance_schema=ON
performance-schema-instrument='stage/%=ON'
performance-schema-consumer-events-stages-current=ON
performance-schema-consumer-events-stages-history=ON
performance-schema-consumer-events-stages-history-long=ON
relay-log=havoc-relay-bin
log_bin                 = /var/log/mariadb/mysql-bin.log
expire_logs_days        = 2
max_binlog_size         = 500M
plugin_load=server_audit=server_audit.so
plugin_load_add = query_response_time
server_audit_events=connect,query
server_audit_file_path                  = /var/log/mariadb/server_audit.log
server_audit_file_rotate_size           = 1G
server_audit_file_rotations             = 1
slow-query-log = 1
slow-query-log-file = /var/log/mariadb/mariadb-slow.log
long_query_time = 1
log_error = /var/log/mariadb/mariadb-error.log
binlog_format=mixed
server-id       = 590
report-host=havoc.example.com
innodb_data_home_dir = /var/lib/mysql
innodb_defragment=1
innodb_file_per_table
innodb_data_file_path = ibdata1:10M:autoextend:max:500M
innodb_buffer_pool_size=60G
innodb_log_file_size = 1G
innodb_flush_log_at_trx_commit = 2
innodb_flush_method=O_DIRECT
innodb_lock_wait_timeout = 50
innodb_buffer_pool_instances = 40
open_files_limit=30000  # from 1222 for ~ 50% of planned ulimit -a Open Files of 65536
innodb_open_files=10000  # from 512 to match table_open_cache
innodb_log_buffer_size=64M  # from 8M for ~ 30 minutes log buffered in RAM
innodb_page_cleaners=15  # from 4 to expedite page cleaning
innodb_purge_threads=15  # from 4 to expedite purge processing
innodb_write_io_threads=64  # from 4 to expedite multi core write processing SE5666 Rolando
innodb_read_io_threads=64  # from 4 to expedite multi core read processing SE5666 9/12/11
read_rnd_buffer_size=262144  # from 4M to reduce handler_read_rnd_next of 124,386 RPS
innodb_io_capacity=2100  # from 1100 to allow higher SSD iops
innodb_lru_scan_depth=100  # from 1024 to conserve CPU cycles every SECOND
max_connect_errors=10
table_open_cache=10000  # from 512 to reduce opened_tables RPS of 1
read_buffer_size=1572864 # from 1M to reduce handler_read_next of 32,317 RPS
table_definition_cache=10000  # from 400 to reduce opened table_definitions RPS of 1
log_slow_verbosity=explain  # from nothing or ADD ,explain to enhance SLOW QUERY log
query_prealloc_size=32768 # from 24K to reduce CPU malloc frequency
query_alloc_block_size=32768 # from 16K to reduce CPU malloc frequency
transaction_prealloc_size=32768 # from 4K to reduce CPU malloc frequency
transaction_alloc_block_size=32768 # from 8K to reduce CPU malloc frequency
innodb_fast_shutdown=0
aria_pagecache_division_limit=50  # from 100 for WARM blocks percentage
aria_pagecache_age_threshold=900
innodb_adaptive_max_sleep_delay=20000  # from 150000 ms (15 sec to 2 sec) delay when busy
innodb_flushing_avg_loops=5  # from 30 to minimize innodb_buffer_pool_pages_dirty count
max_seeks_for_key=64  # from ~ 4 Billion to conserve CPU
max_write_lock_count=16  # from ~ 4 Billion to allow RD after nn lck requests
optimizer_search_depth=0  # from 62 to allow OPTIMIZER autocalc of reasonable limit
innodb_print_all_deadlocks=ON  # from OFF to log event in error log for DAILY awareness
wait_timeout=7200
innodb_flush_neighbors=0 # from ON to conserve CPU cycles when you have SSD/NVME
interactive_timeout=7200
innodb_buffer_pool_dump_pct=90  # from 25 to minimize WARM time on STOP / START or RESTART
innodb_fill_factor=93
innodb_read_ahead_threshold=8  # from 56 to reduce delays by ReaDing next EXTENT earlier
sort_buffer_size=1572864 # from 1M to reduce sort_merge_passes RPS of 1
innodb_stats_sample_pages=32  # from 8 for optimizer to use more accurate cardinality
min_examined_row_limit=1  # from 0 to reduce clutter in slow query log
query_cache_limit=0  # from 2M to conserve RAM because your QC is OFF, as it should be.
query_cache_min_res_unit=512  # from 4096 to increase QC capacity, if EVER used

[mysqldump]
quick
max_allowed_packet = 16M

[mysql]
no-auto-rehash
default-character-set = utf8mb4

[myisamchk]
key_buffer_size = 128M
sort_buffer_size = 128M
read_buffer = 2M
write_buffer = 2M

[mysqlhotcopy]
interactive-timeout

ipsec – ikev2 handshake : 4 or 8 packets?

Unlike IKEv1, the IKEv2 exchange is variable. At best, it can exchange as few as
four packets. At worst, this can increase to as many as 30 packets (if
not more), depending on the complexity of authentication, the number
of Extensible Authentication Protocol (EAP) attributes used, as well
as the number of SAs formed.

So, eight packets is within the acceptable range for an IKEv2 negotiation.
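For reference, the minimal four-packet exchange (per RFC 7296) consists of two request/response pairs:

```
1. Initiator -> Responder : IKE_SA_INIT request   (SAi1, KEi, Ni)
2. Responder -> Initiator : IKE_SA_INIT response  (SAr1, KEr, Nr)
3. Initiator -> Responder : IKE_AUTH request      (IDi, AUTH, SAi2, TSi, TSr)
4. Responder -> Initiator : IKE_AUTH response     (IDr, AUTH, SAr2, TSi, TSr)
```

Anything beyond that, such as EAP rounds, additional Child SA negotiations, or message fragmentation, adds packets, which is why eight is unremarkable.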