java – Why is multithreading not giving me the speed-up I expected?

I recently decided to learn how to multithread Java programs, so I made a small program to compare the performance of serial and multithreaded programs that perform the same task.

I created a serial program that calculates the number of primes from 1 to 10 million, and timed it 50 times using a test program. Here’s the code for the serial program:

import java.util.Locale;
import java.text.NumberFormat;
/**
 * A serial program that calculates the number of primes less than a
 * certain number (which is hard coded). Used as a basis for
 * benchmarking the multi-threaded programs that do the same thing.
 *
 * @author Tirthankar Mazumder
 * @version 1.2
 * @date 2nd May, 2021
 */
public class PrimalityTestSerial {
    public static final long max = 10_000_000;
    public static void main(String[] args) {
        final long startTime = System.currentTimeMillis();
        
        long num_primes = primeCalculator();
        
        final long endTime = System.currentTimeMillis();
        
        NumberFormat nf = NumberFormat.getInstance(Locale.US);
         
        System.out.println("Number of primes less than " + nf.format(max) + ": " + num_primes);
        System.out.println("Took " + (endTime - startTime) + " ms.");
        System.out.println();
    }
    
    private static boolean isPrime(long l) {
        long upper_bound = (long) Math.floor(Math.sqrt(l));
        
        for (long i = 2; i <= upper_bound; i++) {
            if (l % i == 0)
                return false;
        }
        
        return true;
    }
    
    public static long primeCalculator() {
        long num_primes = 0;
        
        for (long i = 2; i <= max; i++) {
            if (isPrime(i))
                num_primes++;
        }
        return num_primes;
    }
}

Here’s the code for the worker class used in the multithreaded version:

/**
 * A worker class for calculating the number of primes from start to end,
 * which are private member variables. Instances of this class are used
 * in the multithreaded version of PrimalityTestSerial.
 *
 * @author Tirthankar Mazumder
 * @version 1.2
 * @date 3rd May, 2021
 */
public class PrimalityTestWorker1 implements Runnable {
    //Member variables
    public static long totalPrimeCount = 0;
    private final long start;
    private final long end;
    
    public PrimalityTestWorker1(long start, long end) {
        this.start = start;
        this.end = end;
    }
    
    private synchronized void increment(long num) {
        totalPrimeCount += num;
    }
    
    private static boolean isPrime(long l) {
        long upper_bound = (long) Math.floor(Math.sqrt(l));
        
        for (long i = 2; i <= upper_bound; i++) {
            if (l % i == 0)
                return false;
        }
        
        return true;
    }
    
    private void numPrimes() {
        long primeCount = 0;
        for (long i = start; i <= end; i++) {
            if (isPrime(i))
                primeCount++;
        }
        increment(primeCount);
    }
    
    public void run() {
        numPrimes();
        Thread.yield();
    }
}

Here’s the main program, which uses instances of PrimalityTestWorker1 to create the threads:

import java.util.Locale;
import java.text.NumberFormat;
/**
 * The master program for the multithreaded primality test that creates
 * objects of the PrimalityTestWorker1 to make threads, and then collates
 * the results and prints them to stdout.
 *
 * @author Tirthankar Mazumder
 * @version 1.2
 * @date 3rd May, 2021
 */
public class PrimalityTestParallel1Runner {
    public static final int cores = Runtime.getRuntime().availableProcessors();
    //We will spawn as many threads as there are cores on the system, and not
    //more than that because we are not I/O bound here.

    public static final long max = PrimalityTestSerial.max;
    //For consistency.

    public static void main(String[] args) {
        long startTime = System.currentTimeMillis();

        primeCalculator();

        long endTime = System.currentTimeMillis();
        
        NumberFormat nf = NumberFormat.getInstance(Locale.US);
        
        System.out.println("Number of primes less than " + nf.format(max) + ": " +
            PrimalityTestWorker1.totalPrimeCount);
            
        System.out.println("Took " + (endTime - startTime) + " ms.");
        System.out.println();
    }
    
    public static void primeCalculator() {
        Thread[] arrThreads = new Thread[cores];

        long chunk = max / cores;
        long threadStart = 2;
        long threadEnd = threadStart + chunk;

        for (int i = 0; i < cores; i++) {
            Thread t = new Thread(new PrimalityTestWorker1(threadStart, threadEnd));
            t.start();
            arrThreads[i] = t;

            threadStart = threadEnd + 1;
            threadEnd = (threadEnd + chunk > max) ? max : threadEnd + chunk;
        }

        for (int i = 0; i < cores; i++) {
            try {
                arrThreads[i].join();
            } catch (InterruptedException e) {
                System.out.println("Was interrupted.");
                return;
            }
        }
    }
}

Finally, here’s the code for the testing program, which runs each program 50 times and then calculates the average runtimes:

import java.util.Arrays;
/**
 * A wrapper class that handles benchmarking the performances of
 * PrimalityTestSerial and PrimalityTestParallel1Runner and then
 * prints information about the results to stdout.
 *
 * @author Tirthankar Mazumder
 * @version 1.0
 * @date 8th May, 2021
 */
public class PrimalityTestSuite {
    public static final int n = 50;
    //Number of test runs to perform
    
    public static void main(String[] args) {
        long totalSerialTime = 0;
        long totalParallelTime = 0;
        
        long[] serialTimes = new long[n];
        
        double avgSerialTime = 0;
        double avgParallelTime = 0;
        
        System.out.println("Starting Serial runs...");
        long startTime = System.currentTimeMillis();
        for (int i = 0; i < n; i++) {
            PrimalityTestSerial.primeCalculator();
            serialTimes[i] = System.currentTimeMillis();
        }
        
        for (int i = 0; i < n; i++) {
            serialTimes[i] -= startTime;
            for (int j = 0; j < i; j++) {
                serialTimes[i] -= serialTimes[j];
                //to get rid of the time taken by the previous runs
            }
            avgSerialTime += serialTimes[i];
        }
        
        avgSerialTime /= n;
        
        long[] parallelTimes = new long[n];
        
        System.out.println("Starting parallel runs...");
        startTime = System.currentTimeMillis();
        for (int i = 0; i < n; i++) {
            PrimalityTestParallel1Runner.primeCalculator();
            parallelTimes[i] = System.currentTimeMillis();
        }
        
        for (int i = 0; i < n; i++) {
            parallelTimes[i] -= startTime;
            for (int j = 0; j < i; j++) {
                parallelTimes[i] -= parallelTimes[j];
                //to get rid of the time taken by the previous runs
            }
            avgParallelTime += parallelTimes[i];
        }
        
        avgParallelTime /= n;
        
        Arrays.sort(serialTimes);
        Arrays.sort(parallelTimes);
        
        double bestThreeSerialAvg = (serialTimes[0] + serialTimes[1]
                                     + serialTimes[2]) / 3;
        double bestThreeParallelAvg = (parallelTimes[0] + parallelTimes[1]
                                     + parallelTimes[2]) / 3;
        
        System.out.println();
        System.out.println("Results:");
        
        System.out.println("Average of " + n + " Serial Runs: " + avgSerialTime + " ms.");
        System.out.println("Average of " + n + " Parallel Runs: " + avgParallelTime + " ms.");
        
        System.out.println();
        System.out.println("Average speed-up: " + avgSerialTime / avgParallelTime + "x");
        System.out.println();
        
        System.out.println("Average of best 3 Serial Runs: " + bestThreeSerialAvg + " ms.");
        System.out.println("Average of best 3 Parallel Runs: " + bestThreeParallelAvg + " ms.");
        
        System.out.println();
        System.out.println("Average speed-up (w.r.t. best run times): "
                            + bestThreeSerialAvg / bestThreeParallelAvg + "x");
        System.out.println();
    }
}

Here are the results from the test program:

Starting Serial runs...
Starting parallel runs...

Results:
Average of 50 Serial Runs: 4378.92 ms.
Average of 50 Parallel Runs: 1529.2 ms.

Average speed-up: 2.8635364896678x

Average of best 3 Serial Runs: 4328.0 ms.
Average of best 3 Parallel Runs: 1297.0 ms.

Average speed-up (w.r.t. best run times): 3.3369313801079414x

From this, it is clear that the average speed-up is only around 3x. This surprised me, because I expected the multithreaded program to run 7 to 8 times faster (the serial program uses just 1 core, whereas the multithreaded program should use all 8 cores on my system).
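
For reference, the number of threads comes from Runtime.getRuntime().availableProcessors(); here is a minimal sketch of that check (the class name is just for illustration). As far as I understand, this call reports logical processors, which on a hyper-threaded CPU can be double the number of physical cores, so I am not sure whether "8 cores" really means 8 independent cores:

public class CoreCountCheck {
    public static void main(String[] args) {
        // availableProcessors() returns the number of logical processors
        // visible to the JVM; on hyper-threaded CPUs this may be twice
        // the number of physical cores.
        int logical = Runtime.getRuntime().availableProcessors();
        System.out.println("Processors reported to the JVM: " + logical);
    }
}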

So my question is, why is the multi-threaded program not as fast as I expect it to be?

plugin wp seo yoast – WordPress categories endpoint giving error 404

I have a WordPress blog, Librarier.

PROBLEM

It gives a 404 error when I try to access some of the categories, but it works perfectly for other categories.

For example, these categories don’t work:

https://librarier.com/category/lifestyle/

https://librarier.com/category/health/

This category does work:

https://librarier.com/category/health/

WHAT I’VE TRIED

  1. Changed the category base in the Permalinks settings, as some articles told me to do.

OTHERS
When I change only the capitalization of the category slug in the URL, it works:

https://librarier.com/category/lifestyle/ (error 404)

https://librarier.com/category/Lifestyle/ (working)

NOTE
WordPress: 5.7.1

(Theme)
Veen: 2.1.1

(Plugins)

All-in-One WP Migration: 7.4.2

LiteSpeed Cache: 3.6.4

Site Kit by Google: 1.31.0

Veen Theme Functions: 2.1.4

WPForms Lite: 1.6.6

WPS Hide Login: 1.8.5

Yoast SEO Premium: 15.5

Why is my 3rd party lens giving a Canon Error 01 (camera/lens communication fault)?

What you describe is often caused by ribbon cables inside the lens beginning to crack so that continuity along one or more of the channels is lost only when the cable is in certain positions.

Without knowing what aperture settings are selected when the camera is set to Aperture Priority, it’s difficult to draw much from the fact that you do not experience the issue when using Aperture Priority.

If we can assume that you’ve always got the aperture set to the widest available setting (lowest available f-number), then that would remove the need for the camera to send an instruction to the lens to stop down. It would also remove the need for the lens to confirm to the camera that the aperture diaphragm is now in the requested position. So it may be that the issue is caused when the lens attempts to stop down the aperture diaphragm or when it then attempts to confirm the position of the aperture diaphragm immediately before taking the image.

In your case, this inability to stop down and/or confirm the position of the aperture diaphragm seems to occur only when the ribbon cable that the lens’s main PC board uses to communicate with the aperture diaphragm assembly is in the position it takes when the lens is fully zoomed.

Based on the fact that the lens works fine when not fully zoomed, we can eliminate another possible cause: an older third-party lens that was reverse engineered to work with the camera models available when it was designed, but that fails with newer camera models which introduced parts of Canon’s lens-camera communication protocol the older models didn’t use. I’ve got a 1990s vintage Sigma 70-300/4-5.6 that works fine with my EOS film camera but only works with my EOS digital bodies if the aperture is set to the wide-open position, regardless of the focal length.

sharepoint online – web part giving error in SPO but runs ok in SP2013

During the migration of an SP2013 farm to SPO, I came across a Content Editor web part that has a linked HTML file containing the following script:

<script type="text/javascript">
    function loadJs(path) {
        document.write('\x3Cscript type="text/javascript" src="' + path + '">\x3C/script>');
    }
    function loadCSS(path) {
        document.write('\x3Clink rel="stylesheet" type="text/css" href="' + path + '">\x3C/link>');

    }
    loadCSS(_spPageContextInfo.siteServerRelativeUrl + '/_catalogs/masterpage/Applets/StockTicker/Content/App.min.css');

    loadJs(_spPageContextInfo.siteServerRelativeUrl + '/_catalogs/masterpage/Applets/StockTicker/Scripts/AppletConfig.js');

    loadJs(_spPageContextInfo.siteServerRelativeUrl + '/_catalogs/masterpage/Applets/StockTicker/Scripts/App.js');
</script>


<script  type="text/javascript">
    jQuery(document).ready(function () { SP.SOD.executeFunc("sp.js", "SP.ClientContext", function () { Insite.StockTicker.StockTickerFactory.create(); }); });
 
</script>

When I try to load the web part, the console gives me an error that the .create method could not be found, since Insite.StockTicker.StockTickerFactory is undefined. App.js is the file that contains Insite.StockTicker and all its methods. I did a bit of research and found one similar problem where it was suggested to load these files in this exact order:

<script src="/_layouts/1033/init.js" type="text/javascript"></script>
<script src="/_layouts/MicrosoftAjax.js" type="text/javascript"></script>
<script src="/_layouts/sp.core.js" type="text/javascript"></script>
<script src="/_layouts/sp.runtime.js" type="text/javascript"></script>
<script src="/_layouts/sp.js" type="text/javascript"></script>

I am aware that SPO and SP2013 are different, but can someone explain why the web part works on SP2013 and not on SPO? Correct me if I’m wrong, but as far as I understand, those scripts are preloaded in SPO and I only have to reference them where needed, in my case in the linked HTML file.

custom post types – CPT-UI plugin giving me a parameter URL

So I have three CPTs with some taxonomies in them. They were all created with the CPT-UI plugin.

At first, all my posts were giving me a 404 on the single view, so I changed the Rewrite option:

If I set the Rewrite option to True, my single post will work, but the URL will be …com/?post-type=post-name

If I set the Rewrite option to False, my single post will return a 404 Not Found page.

What I want is …com/post-type/taxonomy/post-name

This is the same for all 3 CPTs.

dnd 5e – Is giving a demigod increased stats reasonable or not?

I made a character to help my players because the game is going to have a ton of hard enemies. Is it reasonable to have an extremely overpowered DMPC early on? The DMPC is meant to help the party deal with things like high-level demons and other stuff.

The stats are: Str 27, Dex 29, Con 20, Int 21, Wis 22, Cha 24, HP 16, AC 31.

dnd 5e – Would giving Sorcerer archetypes extra “spells known” unbalance them?

Sorcerers don’t have many spells. If I compare them to other full casters (I’m not including Warlocks in this because they work entirely differently), namely Bards, Clerics, Druids and Wizards, then they seem to come up short.

By level 20:

  • Bards know 22 spells;
  • Clerics/Druids can have 24-25 spells prepared (assuming WIS of 18-20);
  • Wizards know a whopping 44 spells (only including spells gained via levelling) but can only prepare about 25, as per Cleric/Druid;
  • Sorcerers only know 15;

Sorcerers are the clear losers here. Not that I expect them to have as many as Wizards (since that’s their thing), but 15 just seems way too few to me, and I don’t think that Metamagic (which is all they seem to have besides spells) alone makes up that difference. So I was thinking of giving them a free (but specific) spell at each spell level from 1st to 5th, much the same as I did for the PHB Rangers.

This way, the Sorcerer would know 20 spells; still fewer than everyone else, but not by as great a margin. As with my other question, the spells I have picked suit their flavour and include spells from within their own spell list and from other spell lists (which would count as sorcerer spells for them), but only one extra spell per spell level (up to 5th). I’m not listing them all here, though, as it would take up too much room (I’ve done this for the UA archetypes, too).

I suppose the player of these “enhanced sorcerers” could potentially swap out the free spell for any old sorcerer spell, effectively giving them 20 total spells without the flavour, but as covered in this question, I’m going to assume they can’t do that, much the same as with the extra Ranger spells (or at the very least, I would rule that they can’t, given that I’d be the DM in this situation, what with it being my homebrew).

So, my question is: does the increase in the number of spells known impact the strength of the class in a way that might make it preferable to the other spellcasters? I’ve always felt they were the weaker option, but that might just be because I don’t use Metamagic to its full potential or something, so will these additional spells and the increased flexibility they provide unbalance anything beyond what I have anticipated?


As a caveat, I’m aware that the XGTE archetypes (and the new Giant Soul Sorcerer archetype from UA, but I don’t care as much about that because it’s just UA, and my main balance concerns are with the official released archetypes) already include an additional spell; namely the Divine Soul’s Divine Magic feature (XGTE, pg. 50):

In addition, choose an affinity for the source of your divine power: good, evil, law, chaos, or neutrality. You learn an additional spell based on that affinity (…)

and the Shadow Sorcerer’s Eyes of the Dark feature (XGTE, pg. 51):

When you reach 3rd level in this class, you learn the darkness spell, which doesn’t count against your number of sorcerer spells known.

Therefore, RAW, these Sorcerers would know 16 spells in total by level 20. I’m planning on not giving Divine Soul sorcerers an extra 1st-level spell or Shadow Sorcerers an extra 2nd-level spell, so that their total is still 20 known spells by the time they hit level 20. I personally don’t think this matters too much since it’s only one spell’s difference, and their class features do provide them with other benefits, so it’s not like I’m taking their class features away from them and giving them to every other sorcerer archetype.

I thought I’d mention this in case it impacts balance, and therefore any potential answers, in a way I haven’t anticipated; but this is an aside, just some extra information. My question is not specifically about this aside, but about the impact of having any sorcerers with 20 spells known.

wordpress.org – WordPress blog posts permalinks giving 404 on nginx

I have the following nginx configuration to serve the blog at the /blog/ URL:

server {
        listen 80;
        server_name sqcg.in www.example.in;
        root /var/www/website;

        index index.html index.htm index.php;

        # Serve blog
        location /blog {
                return 301 /blog/;
        }

        location /blog/ {
                autoindex on;
                alias /var/www/blog/;
                index index.php index.html index.htm;
                try_files $uri $uri/ /index.php$args;

                location ~ \.php$ {
                        include snippets/fastcgi-php.conf;
                        fastcgi_param  SCRIPT_FILENAME    $request_filename;
                        fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
                }
        }

        # Serve other files
        location / {
                try_files $uri $uri/ =404;
        }

        location ~ \.php$ {
                include snippets/fastcgi-php.conf;
                fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
        }

        location ~ /\.ht {
                deny all;
        }
}

The homepage is working fine at https://example.in/blog, and the admin panel is also working perfectly at https://example.in/blog/wp-admin/.

When the blog post permalink structure is set to Plain, the blog posts open fine with the URL

https://example.in/blog/?p=123

But on changing the permalink structure to another format, blog/blog/2021/04/16/sample-post/, it gives a 404:

https://example.in/blog/blog/2021/04/16/sample-post/

rest – Delphi Rad Server – Giving feedback between some long actions

I have a requirement and don’t know how to search for possible solutions.
I set up a RAD Server that serves information via REST, and it works great so far. That server has some, let’s call them, trigger functions. One of those functions imports a huge set of data from another web service. That import can run for about 3-5 minutes depending on network speed. So my problem is the following.

From my Client Application, I can call lets say: “https://myurl.com/functions/importmuchinformations”

That would trigger my RAD Server to do the import. But my response to that request will only arrive after the whole import is done. I would like to give the client some information in between, something like "10000 datasets imported", and then 5 seconds later the next count of imported datasets. But I don’t know how to achieve that.

mysql – Mysqld draining memory, using way more than assigned and not giving any back

This is a production server.

Whenever mysqld is restarted, memory consumption slowly builds up to about 75 to 90% of my total memory (while only 8 of 32 GB is allocated to mysqld).
Sometimes this takes 5 hours, sometimes 16 hours.
Usually it stays around that percentage for a good number of hours.
If that were all, it would be fine, but after some time it starts using even more memory, until it overflows, sometimes causing my 32 GB swap drive to fill up to 100%.
Then things rapidly slow down to a crawl with mostly timeouts. It does recover after some time, to the point that it no longer times out for a while, but the memory never frees up, so it’s constantly on the verge of dropping out again.

Does anyone have insight into how this can be?
Why is mysqld using so much memory and not returning any of it?
And why is it not respecting the buffer limits that were set at all?
I have played around with mariadb.conf for some time, but even reverting everything back to default doesn’t seem to have much impact on memory usage.

Running

mysqld  Ver 10.3.27-MariaDB-0+deb10u1

top stats

mysql     20   0   33.2g  25.1g   3760 S 521.6  80.1   1122:26 mysqld

Memory/buffer specs from MySQLTuner

(--) Physical Memory     : 31.3G
(--) Max MySQL memory    : 8.0G
(--) Other process memory: 1.4G
(--) Total buffers: 7.5G global + 2.9M per thread (151 max threads)
(--) P_S Max memory usage: 104M
(--) Global Buffers
(--)  +-- Key Buffer: 128.0M
(--)  +-- Max Tmp Table: 256.0M
(--) Query Cache Buffers
(--)  +-- Query Cache: OFF - DISABLED
(--)  +-- Query Cache Size: 0B
(--) Per Thread Buffers
(--)  +-- Read Buffer: 128.0K
(--)  +-- Read RND Buffer: 256.0K
(--)  +-- Sort Buffer: 2.0M
(--)  +-- Thread stack: 292.0K
(--)  +-- Join Buffer: 256.0K

Mariadb conf

skip-name-resolve
performance_schema      = ON
query_cache_type        = 0
query_cache_size        = 0   
tmp_table_size          = 256M
max_heap_table_size     = 256M
innodb_log_files_in_group       = 4
innodb_buffer_pool_size        = 7G 
innodb_status_file                      #extra reporting
innodb_file_per_table                   #enable always
innodb_flush_log_at_trx_commit  = 2     #2/0 = perf, 1 = ACID
innodb_table_locks              = 0  
innodb_lock_wait_timeout        = 60
innodb_thread_concurrency       = 24 
innodb_commit_concurrency       = 2 
innodb_log_file_size   = 384M
innodb_buffer_pool_instances = 7