How to prove that the performance, Big Omega, of building a binary heap using the recursive method is Ω(n log n)

We can learn from the wiki that the big-O of building a binary heap using the recursive method is O(n log n):

"This approach, known as Williams' method after the inventor of binary heaps, is easily seen to run in O(n log n) time: it performs n insertions at a cost of O(log n) each."

We also know that if we build the heap using other methods, the big-O can be better. Where should we start if we want to prove that the performance, Big Omega, of building a binary heap using the recursive method is Ω(n log n), identical to the big-O?
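One standard starting point for a lower bound is to exhibit a worst-case input family and count the work it forces. A minimal sketch in Python (all names here are mine, not from the question): inserting n keys in decreasing order into a min-heap makes every sift-up climb all the way to the root, so the i-th insertion costs floor(log2 i) swaps, and the total sum of floor(log2 i) for i = 1..n is Ω(n log n).

```python
import math

def sift_up_insert(heap, key, stats):
    """Insert `key` into min-heap `heap` by sift-up, counting swaps."""
    heap.append(key)
    i = len(heap) - 1
    while i > 0:
        parent = (i - 1) // 2
        if heap[parent] <= heap[i]:
            break
        heap[i], heap[parent] = heap[parent], heap[i]
        stats['swaps'] += 1
        i = parent

def build_by_insertion(keys):
    heap, stats = [], {'swaps': 0}
    for k in keys:
        sift_up_insert(heap, k, stats)
    return heap, stats['swaps']

n = 1024
# Decreasing keys: each inserted key is smaller than everything in the
# heap, so each sift-up climbs to the root.
_, swaps = build_by_insertion(range(n, 0, -1))
# The insertion that grows the heap to size i does floor(log2(i)) swaps.
assert swaps == sum(int(math.log2(i)) for i in range(1, n + 1))
```

Since at least n/2 of these insertions each do at least floor(log2(n/2)) swaps, the total is Ω(n log n), matching the O(n log n) upper bound.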

php – Limits and performance – SaaS with a MySQL DB

I am gathering requirements and putting together the architecture for a SaaS tool.

For the database:
1 authentication schema
n schemas for customers. Each customer can create multiple systems of their own.

Would there be a limit on the number of MySQL schemas in a single instance/server, or would this limit/performance only reflect the server's capacity?

If anyone has any answers or advice, they would be welcome.

Thank you

Performance Tuning – Creating Left-Truncatable Primes with Nest(List), While, Fold

Here's a video about truncatable primes. I tried to build them myself. Here is my attempt at the video's method for left-truncatable primes.

myNextList[n_] := Select[10^(Length[IntegerDigits[n]])*Range[9] + n, PrimeQ];
SetAttributes[myNextList, Listable];

So, a little test:

myNextList[91]

{191, 491, 691, 991}

myNextList[3947]

{}

and

myNextList[{13, 23, 43, 53, 73, 83}]

{{113, 313, 613}, {223, 523, 823}, {443, 643, 743}, {353, 653, 853, 953}, {173, 373, 673, 773}, {283, 383, 683, 883, 983}}

Now the goal is to repeat the process until none of the returned numbers yields a further prime. So I tried

NestWhileList[myNextList, 7, AllTrue[#, PrimeQ] &]
NestWhile[myNextList, 7, AllTrue[#, PrimeQ] &]

{{317, 617}, {137, 337, 937}, {347, 547, 647, 947}, {167, 367, 467, 967}, {197, 397, 797, 997}}

which should be continued …

myNextList[%]

{{{6317, 8317}, {2617, 3617}}, {{2137, 3137, 9137}, {4337, 6337, 9337}, {4937, 7937}}, {{2347, 3347, 5347}, {3547 , 4547, 6547, 7547, 9547}, …

I think there is an easy solution; the problem is with the test part, where I used AllTrue[#, PrimeQ] &. But I do not know how to fix it.

If I instead use

Nest[myNextList, 1, 8]
Nest[myNextList, 7, 8]
Nest[myNextList, 7, 16]

for example, everything works well. But I want to reproduce the result from the video and find all 1422 end points (as shown in the video at 4:49).
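As a cross-check of what one extension step computes, here is the same operation as myNextList sketched in plain Python (names are my own; trial-division primality is used for brevity): prepend each digit 1-9 to n and keep the results that are prime.

```python
def is_prime(n):
    """Simple trial-division primality test (fine for small n)."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def next_list(n):
    """All primes obtained by prepending a single digit 1-9 to n."""
    shift = 10 ** len(str(n))
    return [d * shift + n for d in range(1, 10) if is_prime(d * shift + n)]
```

Repeating next_list on every branch until it returns an empty list, and collecting the numbers where that happens, is exactly the iteration the NestWhile attempt above is trying to express; a branch is an "end point" when next_list returns [].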

performance – SQL Server: Memory Pressure, Increased User Connections, and CPU Usage

I have the SQL Server environment described below.

  1. SQL Server 2016 Standard Edition.
  2. 128 GB of RAM.
  3. 96 GB allocated to SQL Server.
  4. 8 tempdb data files of 1 GB each.
  5. One main database of about 2.5 TB.
  6. Reporting is also done from the same SQL instance. Reporting is minimal.

There is memory pressure: I notice that my plan cache is being flushed periodically, and I also see the indicator that signals memory pressure when I query the ring buffers. In addition, when I look at the execution plan of a slow query, I sometimes see warnings about spills to tempdb and large memory grants.
Most reads hitting the database involve complex queries, and to improve performance these reads run in parallel with the settings below.

  • MAXDOP: 8
  • Cost threshold for parallelism: 20

I've tried to improve performance and have succeeded to some extent. But on some days the server becomes slow and transactions (reads, for the most part) take about 30 seconds or even a minute to finish. Since most read transactions run in parallel, I see CXCONSUMER waits on them, but there are resource waits as well.

I'm confused about where I should start.
Do I need more memory to support data that is roughly 20x the memory allocated here?

Additional information:
I use the Grafana/TICK-stack monitoring tool to watch CPU usage and other performance counters. Sometimes I see CPU usage become very high and, at the same time, the number of user connections also spikes. The thing is, these are not real users (I checked with the team); some other process is the cause.
I do not know whether a locking mechanism could be behind this kind of problem, where user connections soar along with CPU usage.

performance query – Optimizing a range join (x BETWEEN a AND b) in MariaDB

My query has a join using a BETWEEN clause and causes a full table scan, if I understand the EXPLAIN output correctly.

Query

EXPLAIN
 SELECT g.id, g.name, c.id AS child_id, c.date
   FROM g
   JOIN c
     ON c.parent_id = g.id
    AND c.date BETWEEN g.start_date AND g.end_date;

Keys

  • Primary key of g: (id, date_start)
  • Primary key of c: (id)
  • Foreign key on c.parent_id references g.id
  • Index on c: (parent_id)

EXPLAIN output

+----+-------------+-------+------+-------------------+-------------------+---------+-------------+-------+----------+-------------+
| id | select_type | table | type |   possible_keys   |        key        | key_len |     ref     | rows  | filtered |    Extra    |
+----+-------------+-------+------+-------------------+-------------------+---------+-------------+-------+----------+-------------+
|  1 | SIMPLE      | g     | ALL  | PRIMARY           |                   |         |             | 18342 |      100 | ""          |
|  1 | SIMPLE      | c     | ref  | idx_parent_id     | idx_parent_id     |       4 | square.g.id |     3 |      100 | Using where |
+----+-------------+-------+------+-------------------+-------------------+---------+-------------+-------+----------+-------------+

In the g table there are no rows with overlapping start/end dates for a given id, but the database cannot know that, because as far as I know there is no way to declare it.

All date columns are of type date.

How can I avoid the full table scan, or, if I cannot, what else can I do to improve performance?

This query is used to create a view. The application runs many SELECTs against this view and joins it to other tables, so it is important to optimize it.

performance – Python primality test

Here is a Python implementation of a primality test. Is there anything I could change in the code to improve the runtime?

from sympy import *
from sympy.abc import x

n=int(input("Enter a number : "))

def xmat(r,n):
    return Matrix(((2*x, -1), (1, 0)))*rem(1,x**r-1)*(1%n)

def smallestr(n):
    if n==1 or n%2==0:
        return 0
    else:
        r=3
        while r<1000000:
            u=n%r
            if u==0 and r<n:
                return r  # n has a small odd factor r
            r+=2
        return 0

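Since the code above relies on trial division, one common alternative worth comparing against (offered only as a sketch, independent of the sympy code) is a deterministic Miller-Rabin test; the witness set below is known to be sufficient for all n below roughly 3*10^24.

```python
def is_prime(n: int) -> bool:
    """Deterministic Miller-Rabin primality test for n < ~3*10^24."""
    if n < 2:
        return False
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in small:
        if n % p == 0:
            return n == p
    # write n - 1 as d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in small:  # sufficient witnesses for this range
        v = pow(a, d, n)
        if v in (1, n - 1):
            continue
        for _ in range(s - 1):
            v = v * v % n
            if v == n - 1:
                break
        else:
            return False
    return True
```

Each witness costs one modular exponentiation, so the runtime grows polynomially in the number of digits rather than with the square root of n.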

performance query – Speeding up a simple SELECT returning 180+ rows with PostgreSQL

I have performance issues with a SELECT statement that returns 180+ rows out of the more than 220 million rows in my managed PostgreSQL instance. The structure of the table is as follows:

CREATE TABLE members (
    id bigserial NOT NULL,
    customer varchar(64) NOT NULL,
    group varchar(64) NOT NULL,
    member varchar(64) NOT NULL,
    has_connected bool NULL DEFAULT false,
    CONSTRAINT members_customer_group_members_key UNIQUE (customer, group, member),
    CONSTRAINT members_pkey PRIMARY KEY (id)
);

The "guilty" SELECT query is:

SELECT
    group,
    member,
    has_connected
FROM
    members
WHERE
    customer = :customer;

I've already indexed the table:

CREATE INDEX members_idx ON members USING btree (customer, group, has_connected, member);

and the query behaves well for most values of customer. However, I have one customer, let's call it 1234, which represents 80% of the table; for this customer the query planner prefers to scan the entire table, as shown by the following EXPLAIN ANALYZE result:

Seq Scan on public.members  (cost=0.00..5674710.80 rows=202271234 width=55) (actual time=0.018..165612.655 rows=202279274 loops=1)
  Output: community, member, has_connected
  Filter: ((members.customer)::text = '1234'::text)
  Rows Removed by Filter: 5676072
Planning time: 0.106 ms
Execution time: 175174.714 ms

As I said before, this is a managed PostgreSQL 9.6.14 instance hosted on Google Cloud Platform with 10 vCPUs and 30 GB of RAM, and I am rather limited in the settings available to me. The only PostgreSQL options set on this instance are:

max_connections: 1000
work_mem: 131072 KB
maintenance_work_mem: 2000000 KB

What are my options to solve this problem and greatly reduce the query time, preferably to less than 30 seconds if possible?

seo – How can I improve my website's performance score in GTmetrix?

I'm testing my website with GTmetrix. The result looks like this:

(screenshot: GTmetrix results)

I want to improve the grade for "Add Expires headers", "Compress components with gzip", and any other recommendation that has a bad grade.

How can I do it?

Update :

See my updated images:

(screenshots: GTmetrix recommendations)

Tell me how to improve just these 4 recommendations: Add Expires headers, Make fewer HTTP requests, Use cookie-free domains, and Reduce DNS lookups.

performance tuning – Integrating an interpolating function takes an unacceptably long time

I have a simple integration that, when it uses an interpolating function, takes far too long to compute:

c = 2.99792*10^5;
A = 3.87624*10^-14;
FreeElectronFractionData = {{3000, 1.0829044}, {2984.9246, 1.0828562}, {2969.8493, 1.0828473}, {2954.7739, 1.0828366}, {2939.6985, 1.0828238}, {2924.6231, 1.0828083}, {2909.5478, 1.0827898}, {2894.4724, 1.0827674}, 
    {2879.397, 1.0827404}, {2864.3217, 1.0827077}, {2849.2463, 1.0826683}, {2834.1709, 1.0826207}, {2819.0955, 1.0825632}, {2804.0202, 1.0824939}, {2788.9448, 1.0824106}, {2773.8694, 1.0823111}, {2758.7941, 1.0821927}, 
    {2743.7187, 1.0820531}, {2728.6433, 1.08189}, {2713.5679, 1.0817016}, {2698.4926, 1.0814865}, {2683.4172, 1.0812441}, {2668.3418, 1.0809745}, {2653.2664, 1.0806783}, {2638.1911, 1.0803569}, {2623.1157, 1.0800119}, 
    {2608.0403, 1.0796454}, {2592.965, 1.0792594}, {2577.8896, 1.0788561}, {2562.8142, 1.0784377}, {2547.7388, 1.0780061}, {2532.6635, 1.0775631}, {2517.5881, 1.0771101}, {2502.5127, 1.0766486}, {2487.4374, 1.0761797}, 
    {2472.362, 1.0757042}, {2457.2866, 1.0752228}, {2442.2112, 1.074736}, {2427.1359, 1.0742441}, {2412.0605, 1.0737472}, {2396.9851, 1.0732455}, {2381.9098, 1.0727387}, {2366.8344, 1.0722267}, {2351.759, 1.0717092}, 
    {2336.6836, 1.0711857}, {2321.6083, 1.070656}, {2306.5329, 1.0701194}, {2291.4575, 1.0695754}, {2276.3822, 1.0690234}, {2261.3068, 1.0684627}, {2246.2314, 1.0678928}, {2231.156, 1.0673129}, {2216.0807, 1.0667222}, 
    {2201.0053, 1.06612}, {2185.9299, 1.0655055}, {2170.8545, 1.064878}, {2155.7792, 1.0642365}, {2140.7038, 1.0635802}, {2125.6284, 1.0629083}, {2110.5531, 1.0622197}, {2095.4777, 1.0615136}, {2080.4023, 1.060789}, 
    {2065.3269, 1.0600449}, {2050.2516, 1.0592803}, {2035.1762, 1.0584941}, {2020.1008, 1.0576853}, {2005.0255, 1.0568526}, {1989.9501, 1.0559951}, {1974.8747, 1.0551114}, {1959.7993, 1.0542004}, {1944.724, 1.0532609}, 
    {1929.6486, 1.0522915}, {1914.5732, 1.051291}, {1899.4979, 1.0502581}, {1884.4225, 1.0491914}, {1869.3471, 1.0480894}, {1854.2717, 1.0469509}, {1839.1964, 1.0457743}, {1824.121, 1.044558}, {1809.0456, 1.0433006}, 
    {1793.9703, 1.0420003}, {1778.8949, 1.0406552}, {1763.8195, 1.0392634}, {1748.7441, 1.0378224}, {1733.6688, 1.0363292}, {1718.5934, 1.0347802}, {1703.518, 1.0331707}, {1688.4426, 1.0314941}, {1673.3673, 1.0297415}, 
    {1658.2919, 1.0279002}, {1643.2165, 1.025952}, {1628.1412, 1.0238707}, {1613.0658, 1.0216182}, {1597.9904, 1.0191391}, {1582.915, 1.0163525}, {1567.8397, 1.0131417}, {1552.7643, 1.0093401}, {1537.6889, 1.0047152}, 
    {1522.6136, 0.99895508}, {1507.5382, 0.99166793}, {1492.4628, 0.9824059}, {1477.3874, 0.97072308}, {1462.3121, 0.95625418}, {1447.2367, 0.93878259}, {1432.1613, 0.91826926}, {1417.086, 0.89483803}, 
    {1402.0106, 0.86873404}, {1386.9352, 0.84027554}, {1371.8598, 0.80981198}, {1356.7845, 0.77769325}, {1341.7091, 0.74424993}, {1326.6337, 0.70978283}, {1311.5584, 0.67455941}, {1296.483, 0.63881506}, 
    {1281.4076, 0.60275746}, {1266.3322, 0.56657254}, {1251.2569, 0.53043096}, {1236.1815, 0.4944942}, {1221.1061, 0.45891987}, {1206.0307, 0.4238656}, {1190.9554, 0.3894916}, {1175.88, 0.35596148}, {1160.8046, 0.32344154}, 
    {1145.7293, 0.29209849}, {1130.6539, 0.26209576}, {1115.5785, 0.23358875}, {1100.5031, 0.20671934}, {1085.4278, 0.18161021}, {1070.3524, 0.1583594}, {1055.277, 0.1370358}, {1040.2017, 0.11767584}, 
    {1025.1263, 0.10028183}, {1010.0509, 0.084822106}, {994.97554, 0.07123293}, {979.90017, 0.05942197}, {964.8248, 0.049273035}, {949.74943, 0.040651617}, {934.67406, 0.033410795}, {919.59868, 0.027397081}, 
    {904.52331, 0.022455872}, {889.44794, 0.018436284}, {874.37257, 0.015195242}, {859.2972, 0.012600771}, {844.22182, 0.01053444}, {829.14645, 0.008892887}, {814.07108, 0.007588385}, {798.99571, 0.006548454}, 
    {783.92034, 0.005714659}, {768.84496, 0.005040865}, {753.76959, 0.004491226}, {738.69422, 0.004038206}, {723.61885, 0.003660784}, {708.54348, 0.003342951}, {693.46811, 0.003072499}, {678.39273, 0.002840079}, 
    {663.31736, 0.002638491}, {648.24199, 0.002462145}, {633.16662, 0.002306669}, {618.09125, 0.00216861}, {603.01587, 0.002045214}, {587.9405, 0.00193427}, {572.86513, 0.001833981}, {557.78976, 0.001742876}, 
    {542.71439, 0.00165974}, {527.63902, 0.001583562}, {512.56364, 0.001513494}, {497.48827, 0.001448819}, {482.4129, 0.001388928}, {467.33753, 0.001333298}, {452.26216, 0.00128148}, {437.18678, 0.001233084}, 
    {422.11141, 0.00118777}, {407.03604, 0.001145243}, {391.96067, 0.00110524}, {376.8853, 0.001067531}, {361.80992, 0.001031911}, {346.73455, 0.000998196}, {331.65918, 0.000966223}, {316.58381, 0.000935844}, 
    {301.50844, 0.000906924}, {286.43307, 0.00087934}, {271.35769, 0.00085298}, {256.28232, 0.00082774}, {241.20695, 0.000803522}, {226.13158, 0.000780232}, {211.05621, 0.000757781}, {195.98083, 0.00073608}, 
    {180.90546, 0.000715041}, {165.83009, 0.00069457}, {150.75472, 0.00067457}, {135.67935, 0.000654928}, {120.60397, 0.000635516}, {105.5286, 0.000616174}, {90.453231, 0.000596693}, {75.377859, 0.000576785}, 
    {60.302487, 0.000556011}, {45.227116, 0.000533637}, {30.151744, 0.000508209}, {15.076372, 0.000475883}, {0, 0.000410148}}; 

FreeElectronFraction := Interpolation[FreeElectronFractionData, InterpolationOrder -> 1]

ElectronNumberDensity[\[Eta]_] := (redShift = 6.64*^18^2/((c - Sqrt[c]*Sqrt[c - 2.*A*\[Eta]])/A)^2 - 1.; FreeElectronFraction[redShift]*1.42*^-7*(1. + redShift)^3)

Plot[NIntegrate[ElectronNumberDensity[eta], {eta, \[Eta], 3.78*^18}, MaxRecursion -> 15], {\[Eta], 1.47*^17, 2.66*^17}]

ListPlot[FreeElectronFractionData]

Twenty-five seconds may not seem like much, but this calculation sits inside another integral that did not complete in eight hours. As far as I can tell, this integral is the culprit, more precisely the interpolating function.

I have seen other solutions suggested on this site, but none of them worked for me. One solution seemed promising: create a pure function based on the interpolated data and use it in the integral, but that goes beyond my skills.
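To illustrate the "pure function" idea in language-neutral terms: a first-order InterpolatingFunction is just a piecewise-linear table lookup, which a plain closure over the sorted data can reproduce. A sketch in Python (function names are my own; the Mathematica analogue would compile the same arithmetic into a pure function):

```python
import bisect

def make_linear_interpolator(points):
    """Build a piecewise-linear interpolating function from (x, y) pairs."""
    pts = sorted(points)
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]

    def f(x):
        # clamp outside the tabulated range
        if x <= xs[0]:
            return ys[0]
        if x >= xs[-1]:
            return ys[-1]
        i = bisect.bisect_right(xs, x)  # xs[i-1] <= x < xs[i]
        t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
        return ys[i - 1] + t * (ys[i] - ys[i - 1])

    return f
```

The point is that evaluating f is a binary search plus one multiply-add, with no symbolic machinery in the inner loop of the integrator.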

performance – Is there a better alternative for creating an Optional Set of elements from an Iterable in Java?

I feel uncomfortable with this service method that I wrote:

    @Override
    public Optional<Set<TenantDto>> getAllTenantsOfUsers(Set<Long> usersIds) {

        return Optional.ofNullable(usersIds)
                .map((ids) -> tenantRepository.findAll(QTenant.tenant.users.any().id.in(usersIds)))
                .map((it) -> StreamSupport.stream(it.spliterator(), false))
                .map((strm) -> strm.map((tenant) -> tenantMapper.toDto(tenant)).collect(Collectors.toSet()));

    }

My tenantRepository.findAll returns an Iterable of the Tenant entity.

Is this the best approach in terms of performance, while keeping the code clean?