mysql – 8 vCPU, 32 GB memory, 250 GB SSD server needs 1 to 2 seconds for a query with 12 SUM functions and a JOIN on a table with 80,000 rows

It was recommended that I ask my question here on DBA; sorry if that counts as “double posting”. A bit of information up front:

I have already made the following settings (my.cnf):

innodb_buffer_pool_size = 8G
innodb_file_per_table = 1
innodb_log_buffer_size = 16M
innodb_log_file_size = 1G
innodb_read_io_threads = 64
innodb_write_io_threads = 64
innodb_sort_buffer_size = 2M
join_buffer_size = 4M
read_buffer_size = 3M
read_rnd_buffer_size = 4M
sort_buffer_size = 4M

Also, I now do the calculation before the “GROUP BY”. That is, I perform the aggregation in a derived table, SELECT ... FROM (SELECT ... GROUP BY ...) t, and then JOIN onto it. Now to the actual topic…


I have the following 2 tables, browsers and metrics. browsers is a “dimension table” which stores the name and version of a browser. metrics is a “fact table” which holds the browser_id and the metrics, together with a date. According to EXPLAIN SELECT (...), no key is used on metrics and the primary key is used on browsers.

--- old query
SELECT browsers.name AS browser_name,
    SUM(visits_count) AS visits_count,
    SUM(clicks_count) AS clicks_count,
    IFNULL((100 / SUM(visits_count)) * SUM(clicks_count), 0) AS ctr,
    SUM(cost_integral) AS cost_integral,
    IFNULL((SUM(cost_integral) / SUM(visits_count)), 0) AS cpv_integral,
    IFNULL((SUM(cost_integral) / SUM(clicks_count)), 0) AS cpc_integral,
    SUM(conversions_count) AS conversions_count,
    IFNULL((100 / SUM(clicks_count)) * SUM(conversions_count), 0) AS cvr,
    SUM(revenue_integral) AS revenue_integral,
    IFNULL((SUM(revenue_integral) / SUM(clicks_count)), 0) AS epc_integral,
    (SUM(revenue_integral) - SUM(cost_integral)) AS profit_integral,
    IFNULL((SUM(revenue_integral) - SUM(cost_integral)) / SUM(cost_integral) * 100, 0) AS roi
FROM metrics
JOIN browsers ON browsers.id = browser_id
GROUP BY browsers.name
--- new query
SELECT browsers.*, `t`.*
FROM (
    SELECT browser_id,
        SUM(visits_count) AS visits_count,
        SUM(clicks_count) AS clicks_count,
        IFNULL((100 / SUM(visits_count)) * SUM(clicks_count), 0) AS ctr,
        SUM(cost_integral) AS cost_integral,
        IFNULL((SUM(cost_integral) / SUM(visits_count)), 0) AS cpv_integral,
        IFNULL((SUM(cost_integral) / SUM(clicks_count)), 0) AS cpc_integral,
        SUM(conversions_count) AS conversions_count,
        IFNULL((100 / SUM(clicks_count)) * SUM(conversions_count), 0) AS cvr,
        SUM(revenue_integral) AS revenue_integral,
        IFNULL((SUM(revenue_integral) / SUM(visits_count)), 0) AS epv_integral,
        IFNULL((SUM(revenue_integral) / SUM(clicks_count)), 0) AS epc_integral,
        (SUM(revenue_integral) - SUM(cost_integral)) AS profit_integral,
        IFNULL((SUM(revenue_integral) - SUM(cost_integral)) / SUM(cost_integral) * 100, 0) AS roi
    FROM `metrics`
    GROUP BY `browser_id`
    ) AS `t`
INNER JOIN `browsers` ON `browsers`.`id` = `browser_id`

Server:

  • 8 vCPU, 32 GB Memory, 250 GB SSD
  • MySQL 8

Without all the SUM functions, the ~900 ms drops to about 250 to 300 ms. Without the GROUP BY it even drops to single- or two-digit milliseconds. Unfortunately I need the GROUP BY, as well as all of the SUM functions.

What could cause such a server to need between 1 and 2 seconds to execute the query on a table with only 80,000 rows? According to EXPLAIN ANALYZE, the SUM functions take 96% of the total time (actual time=845.038..845.052).

-- browsers-Table

CREATE TABLE `browsers` (
  `id` bigint(20) UNSIGNED NOT NULL,
  `name` varchar(100) COLLATE utf8mb4_unicode_ci NOT NULL,
  `version` varchar(100) COLLATE utf8mb4_unicode_ci DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

ALTER TABLE `browsers`
  ADD PRIMARY KEY (`id`),
  ADD KEY `b_n` (`name`),
  ADD KEY `b_v` (`version`),
  ADD KEY `b_n_v` (`name`,`version`),
  ADD KEY `b_v_n` (`version`,`name`);

ALTER TABLE `browsers`
  MODIFY `id` bigint(20) UNSIGNED NOT NULL AUTO_INCREMENT;
-- metrics-Table

CREATE TABLE `metrics` (
  `reference_date` date NOT NULL,
  `browser_id` bigint(20) UNSIGNED NOT NULL,
  `visits_count` bigint(20) NOT NULL DEFAULT 0,
  `cost_integral` bigint(20) NOT NULL DEFAULT 0,
  `clicks_count` bigint(20) NOT NULL DEFAULT 0,
  `conversions_count` bigint(20) NOT NULL DEFAULT 0,
  `revenue_integral` bigint(20) NOT NULL DEFAULT 0
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

ALTER TABLE `metrics`
  ADD UNIQUE KEY `mu` (`reference_date`,`browser_id`),
  ADD KEY `metrics_browser_id_foreign` (`browser_id`);

ALTER TABLE `metrics`
  ADD CONSTRAINT `metrics_browser_id_foreign` FOREIGN KEY (`browser_id`) REFERENCES `browsers` (`id`) ON DELETE CASCADE ON UPDATE CASCADE;

Even on my local server, with the same data, it takes only ~10 ms, so I suspect a faulty server setting (according to mysqltuner there are no notable suggestions).
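One avenue worth testing (a sketch, not a verified fix; the column order is an assumption): a covering index on metrics that contains browser_id plus every summed column, so the GROUP BY aggregation can be served entirely from the index instead of the clustered rows.

```sql
-- Sketch: covering index so the aggregation reads only the index.
-- Putting the grouping column first is an assumption.
ALTER TABLE `metrics`
  ADD KEY `m_browser_sums` (`browser_id`, `visits_count`, `clicks_count`,
                            `cost_integral`, `conversions_count`, `revenue_integral`);
```

If the optimizer picks it up, EXPLAIN should show “Using index” for the scan of metrics.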


Edit #1:

| -> Nested loop inner join  (actual time=182.931..182.937 rows=1 loops=1)
    -> Table scan on landing_pages  (cost=0.35 rows=1) (actual time=0.016..0.020 rows=1 loops=1)
    -> Index lookup on t using <auto_key0> (landing_page_id=landing_pages.id)  (actual time=0.003..0.004 rows=1 loops=1)
        -> Materialize  (actual time=182.911..182.912 rows=1 loops=1)
            -> Table scan on <temporary>  (actual time=0.001..0.002 rows=1 loops=1)
                -> Aggregate using temporary table  (actual time=182.830..182.830 rows=1 loops=1)
                    -> Index lookup on metrics using metrics_campaign_id_foreign (campaign_id=1)  (cost=2065.15 rows=18004) (actual time=0.124..44.976 rows=36266 loops=1)

TikTok is increasing the maximum length of videos from 60 seconds to 3 minutes


All three minutes can be recorded, edited, and uploaded directly within the TikTok app. This update gives creators a canvas to work with that’s three times larger than what they had before.

It’s going to be interesting to see what creators do to hold viewers’ attention for longer than a minute. Longer TikTok videos have been in testing with a limited number of creators, so it’s possible you may have encountered one already.

 

Google cloud: CDN (https) gives static up to 20 seconds

I am using Google cloud.

In the load balancer logs it can be seen that the CDN (HTTPS) takes anywhere from 2 to 20 seconds to serve static assets. This happens often, as soon as even a couple of people use the site.

(screenshots of the load balancer logs showing the large request times)

Also, the problem is not only with the CDN but also with Cloud Run, which sometimes takes 2 seconds to serve static files. There it is hard, but at least possible, to look for an explanation. In the case of the CDN I cannot find an explanation at all.

Region: europe1 (Belgium)

Simple AJAX code that refreshes every x seconds?

I read that admin-ajax.php sends a request every 15 seconds.

Knowing that, is there a simple code that I could use to take advantage of that?

What I mean by that: let’s say I have some text that changes.

But to see the changes, I currently have to refresh the page every time, which I don’t want.

Instead, I want to wrap my code in the AJAX call:

<ajax code updates every 15 second>

My code goes here

</ajax code updates every 15 second>

so that it updates in real time.

That’s a very rough example, just to get the point.

Is there any simple way to do that?

I wish WordPress had a simple snippet that you could wrap your code in so that it updates in real time.

Any help would be appreciated.

I want to refresh the code in the single.php template.
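The kind of polling I have in mind could presumably look like the sketch below. The action name "refresh_content" and the element id "live-text" are hypothetical; a matching wp_ajax_* handler would have to be registered in functions.php.

```javascript
// Minimal polling sketch: call fetchText on a fixed interval and hand
// each result to render. Returns the timer id so polling can be stopped.
function pollEvery(intervalMs, fetchText, render) {
  return setInterval(async () => {
    render(await fetchText());
  }, intervalMs);
}

// Browser wiring against WordPress's standard admin-ajax.php endpoint
// (hypothetical action name and element id):
//
// const timer = pollEvery(15000,
//   () => fetch('/wp-admin/admin-ajax.php?action=refresh_content').then(r => r.text()),
//   html => { document.getElementById('live-text').innerHTML = html; });
// // clearInterval(timer) stops the refresh.
```

Stopping the timer with clearInterval is important if the element is removed from the page.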

How to configure jest to fail when it takes more than x seconds

Is there a way to fail a jest test when it exceeds x number of seconds?

There is this property: https://jestjs.io/docs/configuration#slowtestthreshold-number, but that is only for reporting, right?
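For comparison, the option that (if I read the docs correctly) does fail a test past a time limit is `testTimeout`; a config sketch:

```javascript
// jest.config.js (sketch): fail any test still running after 5 seconds.
module.exports = {
  testTimeout: 5000,
};

// A single test can also override this via the third argument:
// test('finishes quickly', async () => { /* ... */ }, 2000);
```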

unity – How can I switch targets using coroutine each x seconds?

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class Test : MonoBehaviour
{
    public FLookAnimator lookAnimator;
    public Transform[] targets;

    // Start is called before the first frame update
    void Start()
    {
        StartCoroutine(WaitSome());
    }

    // Update is called once per frame
    void Update()
    {
        
    }

    IEnumerator WaitSome()
    {
        lookAnimator.ObjectToFollow = targets[0];

        yield return new WaitForSeconds(10f);

        lookAnimator.ObjectToFollow = targets[1];
    }
}

This changes the target after 10 seconds, but it only does it once, and I don’t want to assign each target by hand like targets[0] and targets[1]; in this case I have two targets, but what if I have 20?

How can I loop over the targets and switch to the next one every 10 seconds, cycling through all of them?
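For reference, the kind of repeating switch I am after could presumably be a single looping coroutine (an untested sketch; it assumes targets is a Transform[] as above):

```csharp
// Sketch (untested): advance to the next target every 10 seconds,
// wrapping back to the first one with the modulo operator.
IEnumerator CycleTargets()
{
    int i = 0;
    while (true)
    {
        lookAnimator.ObjectToFollow = targets[i];
        i = (i + 1) % targets.Length;   // works for 2 targets or 20
        yield return new WaitForSeconds(10f);
    }
}
```

This would be started once from Start() with StartCoroutine(CycleTargets()) in place of WaitSome().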

json rpc – cpuminer 64bit-ARM Mobile device: Getting Error:”HTTP request failed: Empty reply from server” “json_rpc_call failed, retry after 10 seconds”

So basically, what I tried here is this: I installed Kali Linux on my Android phone using UserLand. My device is a 64-bit ARM (aarch64) machine, so I installed cpuminer, which supports ARM64, and I followed the instructions they provide. The pool I am using to mine also supports the Stratum protocol, so that should not be the issue.
The same pool address and the same wallet address work with a different miner on a device with a different architecture, but not here.
Kindly, someone help me out 🙂

Here is a screenshot of the error I am getting (the yellow bar is the wallet address).

warning messages – What should I make of “Simplify: Time spent on a transformation exceeded -4.03955*10^12 seconds”?

Running version “12.0.0 for Microsoft Windows (64-bit) (April 6, 2019)”, in case that matters. I fed Mathematica the command

Simplify[
  Integrate[
    e*Exp[-e*t]*F*Exp[-F*t]*Exp[-\[Lambda] - b*t]*
    Integrate[
      Sum[(b*s + \[Lambda])^y/y! *
        Sum[2^(-y)*Binomial[y, z]*(b*(t - s))^(x - z)/(x - z)!, {z, 0, x}],
      {y, 0, Infinity}],
    {s, 0, t}],
  {t, 0, Infinity}],
e > 0 && F > 0 && \[Lambda] > 0 && b > 0 && Element[x, Integers]]

(line breaks added for readability, though I don’t think that matters). It output the warning message

Simplify::time: Time spent on a transformation exceeded -4.03955*10^12 seconds, and the transformation was aborted. Increasing the value of TimeConstraint option may improve the result of simplification.

In fact, it gave me that message three times, and then

General::stop: Further output of Simplify::time will be suppressed during this calculation.

Twelve hours after that, it gave me an answer. The warning bugs me for a couple of reasons. First, 4.03955*10^12 seconds is thousands of years. Second, I don’t know what to make of the negative number of seconds. Third, when I Googled I didn’t see any results about this. (The only results on this site that I found are all of the form “Time spent on a transformation exceeded 300 seconds”, which is rather different.)

The number -4.03955*10^12 makes me think there’s some type of overflow error going on, but I’ve got no idea other than that.

ffmpeg – Added an overlay sound using filter_complex, but the overlay sound is 2 seconds behind where it needs to be, how do I adjust this?

I’m relatively new to ffmpeg, based on examples I found I was able to get this to successfully overlay an audio track over the top of a video with sound and successfully stream to Twitch – however, I need a way to start the audio (audio.ogg) at +2 seconds from where it starts off. I’m not sure how I’d change that here.

ffmpeg -re -y -i video.webm \
-filter_complex "amovie=audio.ogg:loop=999,volume=20dB,asetpts=N/SR/TB[aud];[0:a][aud]amix[a]" \
-map 0:v -map '[a]' \
-c:v libx264 -preset veryfast -b:v 3000k -maxrate 3000k \
-bufsize 6000k -pix_fmt yuv420p -g 50 -c:a aac -b:a 256k -shortest \
-f flv "rtmp://fra05.contribute.live-video.net/app/live_xxxxxxxxxxxxx"

How do I start the audio at +2 seconds (audio.ogg) from the beginning?
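One approach that might work (an untested sketch): insert the adelay filter before the mix, so 2000 ms of silence are prepended to the overlay track. adelay takes the delay in milliseconds, and all=1 applies it to every channel.

```shell
# Untested sketch: prepend 2000 ms of silence to the overlay audio with adelay
# before mixing; the remaining options stay as in the command above.
ffmpeg -re -y -i video.webm \
  -filter_complex "amovie=audio.ogg:loop=999,volume=20dB,adelay=2000:all=1,asetpts=N/SR/TB[aud];[0:a][aud]amix[a]" \
  ...
```

Because adelay inserts actual silence samples, the later asetpts should not undo the shift.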

Google Cloud connecting fine at first through FTP but after a few seconds reverting to permission denied on write (the file goes blank in this case)

I am connecting to my project files through FileZilla (hosted on Google Cloud). It was working fine until yesterday; since then I have been getting these issues:

When I restart the compute engine, FileZilla works fine for a few seconds. Then it reverts to the permission issue or shows “Failed” while saving files.

I checked the permissions and everything looks fine. My project files are inside ‘/var/www/html’.

Filezilla log:

Status: Connected to {cloud_ip_here}
Status: Starting download of /var/www/html/{app_name}/api/application/controllers/Test.php
Status: File transfer successful, transferred 1,920 bytes in 1 second
Status: Starting upload of C:\Users\{my_pc_name}\AppData\Local\Temp\fz3temp-2\Test.php
Command: put "C:\Users\{my_pc_name}\AppData\Local\Temp\fz3temp-2\Test.php" "Test.php"
Error: /var/www/html/{app_name}/api/application/controllers/Test.php: open for write: permission denied
Error: File transfer failed