High CPU from Facebook

Hi,

My website has high CPU usage due to access from Facebook. Is it possible to reduce it?

[code]173.252.87.17 – – [30/Oct/2020:13:16:… | Read the rest of https://www.webhostingtalk.com/showthread.php?t=1827274&goto=newpost

performance – Thread Synchronization – CPU usage exceeds 200% – C program

I have a working program that reads user input keys and echoes them back to the screen using the producer/consumer paradigm (the project requires you to use threads).

While this program does work, it is unfortunately very inefficient. When I run the 'top' command, the CPU usage is over 200%. Any suggestions to modify this code so it's not using up a significant percentage of the CPU? This is my first program using threads, so I'm not sure if I am doing something wrong or if this is normal. Thanks in advance.

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

#define NITEMS 10       // number of items in shared buffer

// shared variables
char shared_buffer[NITEMS]; // echo buffer
int shared_count;       // item count

pthread_mutex_t mutex;      // pthread mutex
unsigned int prod_index = 0;    // producer index into shared buffer
unsigned int cons_index = 0;    // consumer index into shared buffer

// function prototypes
void * producer(void *arg);
void * consumer(void *arg);

int main() 
{ 
    pthread_t prod_tid, cons_tid1, cons_tid2; 

    // initialize pthread variables
    pthread_mutex_init(&mutex, NULL);
    
    // start producer thread
    pthread_create(&prod_tid, NULL, producer, NULL);

    // start consumer threads
    pthread_create(&cons_tid1, NULL, consumer, NULL);
    pthread_create(&cons_tid2, NULL, consumer, NULL);
    
    // wait for threads to finish
    pthread_join(prod_tid, NULL);
    pthread_join(cons_tid1, NULL);
    pthread_join(cons_tid2, NULL);
            
    // clean up
    pthread_mutex_destroy(&mutex);
    
    return 0;
}

// producer thread executes this function
void * producer(void *arg)
{
    char key;

    printf("Enter text for producer to read and consumer to print, use Ctrl-C to exit.nn");

    // this loop has the producer read in from stdin and place on the shared buffer
    while (1)
    {
        // read input key
        scanf("%c", &key);

        // this loop is used to poll the shared buffer to see if it is full:
        // -- if full, unlock and loop again to keep polling
        // -- if not full, keep locked and proceed to place character on shared buffer
        while (1)
        {
            // acquire mutex lock
            pthread_mutex_lock(&mutex);

            // if buffer is full, release mutex lock and check again
            if (shared_count == NITEMS)
                pthread_mutex_unlock(&mutex);
            else
                break;
        }

        // store key in shared buffer
        shared_buffer[prod_index] = key;

        // update shared count variable
        shared_count++;

        // update producer index
        if (prod_index == NITEMS - 1)
            prod_index = 0;
        else
            prod_index++;
        
        // release mutex lock
        pthread_mutex_unlock(&mutex); 
    }

    return NULL;
}

// consumer thread executes this function
void * consumer(void *arg)
{
    char key;

    long unsigned int id = (long unsigned int)pthread_self();

    // this loop has the consumer get a character from the shared buffer and print it to stdout
    while (1)
    {
        // this loop is used to poll the shared buffer to see if it is empty:
        // -- if empty, unlock and loop again to keep polling
        // -- if not empty, keep locked and proceed to get character from shared buffer
        while (1)
        {
            // acquire mutex lock
            pthread_mutex_lock(&mutex);

            // if buffer is empty, release mutex lock and check again
            if (shared_count == 0)
                pthread_mutex_unlock(&mutex);
            else
                break;
        }

        // read key from shared buffer
        key = shared_buffer[cons_index];
        
        // echo key
        printf("consumer %lu: %cn", (long unsigned int) id, key);

        // update shared count variable
        shared_count--;

        // update consumer index
        if (cons_index == NITEMS - 1)
            cons_index = 0;
        else
            cons_index++;
    
        // release mutex lock
        pthread_mutex_unlock(&mutex);
    }

    return NULL;
}
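
The CPU burn comes from the inner while (1) polling loops: each thread keeps locking and unlocking the mutex even when there is nothing to do, so all three threads spin at full speed. For comparison, here is a minimal sketch of the same producer/consumer exchange using pthread condition variables, so that a thread sleeps while the buffer is full (producer) or empty (consumers). This is only an illustration of the technique, not necessarily what the assignment expects:

#include <stdio.h>
#include <pthread.h>

#define NITEMS 10

char shared_buffer[NITEMS];
int shared_count = 0;
unsigned int prod_index = 0;
unsigned int cons_index = 0;

pthread_mutex_t mutex    = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t not_full  = PTHREAD_COND_INITIALIZER;   // signalled when space is freed
pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;   // signalled when an item is added

void *producer(void *arg)
{
    char key;

    printf("Enter text for producer to read and consumer to print, use Ctrl-C to exit.\n\n");

    while (scanf("%c", &key) == 1)
    {
        pthread_mutex_lock(&mutex);

        // sleep (instead of spinning) while the buffer is full
        while (shared_count == NITEMS)
            pthread_cond_wait(&not_full, &mutex);

        shared_buffer[prod_index] = key;
        prod_index = (prod_index + 1) % NITEMS;
        shared_count++;

        pthread_cond_signal(&not_empty);   // wake one sleeping consumer
        pthread_mutex_unlock(&mutex);
    }
    return NULL;
}

void *consumer(void *arg)
{
    while (1)
    {
        pthread_mutex_lock(&mutex);

        // sleep (instead of spinning) while the buffer is empty
        while (shared_count == 0)
            pthread_cond_wait(&not_empty, &mutex);

        char key = shared_buffer[cons_index];
        cons_index = (cons_index + 1) % NITEMS;
        shared_count--;

        pthread_cond_signal(&not_full);    // wake the producer if it was waiting
        pthread_mutex_unlock(&mutex);

        printf("consumer %lu: %c\n", (long unsigned int)pthread_self(), key);
    }
    return NULL;
}

int main()
{
    pthread_t prod_tid, cons_tid1, cons_tid2;

    pthread_create(&prod_tid, NULL, producer, NULL);
    pthread_create(&cons_tid1, NULL, consumer, NULL);
    pthread_create(&cons_tid2, NULL, consumer, NULL);

    pthread_join(prod_tid, NULL);
    pthread_join(cons_tid1, NULL);
    pthread_join(cons_tid2, NULL);

    return 0;
}

With pthread_cond_wait() the waiting thread atomically releases the mutex and sleeps until it is signalled, so idle threads no longer consume CPU time.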

javascript – A piece of JS code that renders a cherry animation is eating up 66% of my CPU, how do I deal with it?

var RENDERER = {
    INIT_CHERRY_BLOSSOM_COUNT : 30,
    MAX_ADDING_INTERVAL : 10,
    
    init : function(){
        this.setParameters();
        this.reconstructMethods();
        this.createCherries();
        this.render();
    },
    setParameters : function(){
        this.$container = $('#jsi-cherry-container');
        this.width = this.$container.width();
        this.height = this.$container.height();
        this.context = $('<canvas />').attr({width : this.width, height : this.height}).appendTo(this.$container).get(0).getContext('2d');
        this.cherries = [];
        this.maxAddingInterval = Math.round(this.MAX_ADDING_INTERVAL * 1000 / this.width);
        this.addingInterval = this.maxAddingInterval;
    },
    reconstructMethods : function(){
        this.render = this.render.bind(this);
    },
    createCherries : function(){
        for(var i = 0, length = Math.round(this.INIT_CHERRY_BLOSSOM_COUNT * this.width / 1000); i < length; i++){
            this.cherries.push(new CHERRY_BLOSSOM(this, true));
        }
    },
    render : function(){
        requestAnimationFrame(this.render);
        this.context.clearRect(0, 0, this.width, this.height);
        
        this.cherries.sort(function(cherry1, cherry2){
            return cherry1.z - cherry2.z;
        });
        for(var i = this.cherries.length - 1; i >= 0; i--){
            if(!this.cherries[i].render(this.context)){
                this.cherries.splice(i, 1);
            }
        }
        if(--this.addingInterval == 0){
            this.addingInterval = this.maxAddingInterval;
            this.cherries.push(new CHERRY_BLOSSOM(this, false));
        }
    }
};
var CHERRY_BLOSSOM = function(renderer, isRandom){
    this.renderer = renderer;
    this.init(isRandom);
};
CHERRY_BLOSSOM.prototype = {
    FOCUS_POSITION : 300,
    FAR_LIMIT : 600,
    MAX_RIPPLE_COUNT : 100,
    RIPPLE_RADIUS : 100,
    SURFACE_RATE : 0.5,
    SINK_OFFSET : 20,
    
    init : function(isRandom){
        this.x = this.getRandomValue(-this.renderer.width, this.renderer.width);
        this.y = isRandom ? this.getRandomValue(0, this.renderer.height) : this.renderer.height * 1.5;
        this.z = this.getRandomValue(0, this.FAR_LIMIT);
        this.vx = this.getRandomValue(-2, 2);
        this.vy = -2;
        this.theta = this.getRandomValue(0, Math.PI * 2);
        this.phi = this.getRandomValue(0, Math.PI * 2);
        this.psi = 0;
        this.dpsi = this.getRandomValue(Math.PI / 600, Math.PI / 300);
        this.opacity = 0;
        this.endTheta = false;
        this.endPhi = false;
        this.rippleCount = 0;
        
        var axis = this.getAxis(),
            theta = this.theta + Math.ceil(-(this.y + this.renderer.height * this.SURFACE_RATE) / this.vy) * Math.PI / 500;
        theta %= Math.PI * 2;
        
        this.offsetY = 40 * ((theta <= Math.PI / 2 || theta >= Math.PI * 3 / 2) ? -1 : 1);
        this.thresholdY = this.renderer.height / 2 + this.renderer.height * this.SURFACE_RATE * axis.rate;
        this.entityColor = this.renderer.context.createRadialGradient(0, 40, 0, 0, 40, 80);
        this.entityColor.addColorStop(0, 'hsl(330, 70%, ' + 50 * (0.3 + axis.rate) + '%)');
        this.entityColor.addColorStop(0.05, 'hsl(330, 40%,' + 55 * (0.3 + axis.rate) + '%)');
        this.entityColor.addColorStop(1, 'hsl(330, 20%, ' + 70 * (0.3 + axis.rate) + '%)');
        this.shadowColor = this.renderer.context.createRadialGradient(0, 40, 0, 0, 40, 80);
        this.shadowColor.addColorStop(0, 'hsl(330, 40%, ' + 30 * (0.3 + axis.rate) + '%)');
        this.shadowColor.addColorStop(0.05, 'hsl(330, 40%,' + 30 * (0.3 + axis.rate) + '%)');
        this.shadowColor.addColorStop(1, 'hsl(330, 20%, ' + 40 * (0.3 + axis.rate) + '%)');
    },
    getRandomValue : function(min, max){
        return min + (max - min) * Math.random();
    },
    getAxis : function(){
        var rate = this.FOCUS_POSITION / (this.z + this.FOCUS_POSITION),
            x = this.renderer.width / 2 + this.x * rate,
            y = this.renderer.height / 2 - this.y * rate;
        return {rate : rate, x : x, y : y};
    },
    renderCherry : function(context, axis){
        context.beginPath();
        context.moveTo(0, 40);
        context.bezierCurveTo(-60, 20, -10, -60, 0, -20);
        context.bezierCurveTo(10, -60, 60, 20, 0, 40);
        context.fill();
        
        for(var i = -4; i < 4; i++){
            context.beginPath();
            context.moveTo(0, 40);
            context.quadraticCurveTo(i * 12, 10, i * 4, -24 + Math.abs(i) * 2);
            context.stroke();
        }
    },
    render : function(context){
        var axis = this.getAxis();
        
        if(axis.y == this.thresholdY && this.rippleCount < this.MAX_RIPPLE_COUNT){
            context.save();
            context.lineWidth = 2;
            context.strokeStyle = 'hsla(0, 0%, 100%, ' + (this.MAX_RIPPLE_COUNT - this.rippleCount) / this.MAX_RIPPLE_COUNT + ')';
            context.translate(axis.x + this.offsetY * axis.rate * (this.theta <= Math.PI ? -1 : 1), axis.y);
            context.scale(1, 0.3);
            context.beginPath();
            context.arc(0, 0, this.rippleCount / this.MAX_RIPPLE_COUNT * this.RIPPLE_RADIUS * axis.rate, 0, Math.PI * 2, false);
            context.stroke();
            context.restore();
            this.rippleCount++;
        }
        if(axis.y < this.thresholdY || (!this.endTheta || !this.endPhi)){
            if(this.y <= 0){
                this.opacity = Math.min(this.opacity + 0.01, 1);
            }
            context.save();
            context.globalAlpha = this.opacity;
            context.fillStyle = this.shadowColor;
            context.strokeStyle = 'hsl(330, 30%,' + 40 * (0.3 + axis.rate) + '%)';
            context.translate(axis.x, Math.max(axis.y, this.thresholdY + this.thresholdY - axis.y));
            context.rotate(Math.PI - this.theta);
            context.scale(axis.rate * -Math.sin(this.phi), axis.rate);
            context.translate(0, this.offsetY);
            this.renderCherry(context, axis);
            context.restore();
        }
        context.save();
        context.fillStyle = this.entityColor;
        context.strokeStyle = 'hsl(330, 40%,' + 70 * (0.3 + axis.rate) + '%)';
        context.translate(axis.x, axis.y + Math.abs(this.SINK_OFFSET * Math.sin(this.psi) * axis.rate));
        context.rotate(this.theta);
        context.scale(axis.rate * Math.sin(this.phi), axis.rate);
        context.translate(0, this.offsetY);
        this.renderCherry(context, axis);
        context.restore();
        
        if(this.y <= -this.renderer.height / 4){
            if(!this.endTheta){
                for(var theta = Math.PI / 2, end = Math.PI * 3 / 2; theta <= end; theta += Math.PI){
                    if(this.theta < theta && this.theta + Math.PI / 200 > theta){
                        this.theta = theta;
                        this.endTheta = true;
                        break;
                    }
                }
            }
            if(!this.endPhi){
                for(var phi = Math.PI / 8, end = Math.PI * 7 / 8; phi <= end; phi += Math.PI * 3 / 4){
                    if(this.phi < phi && this.phi + Math.PI / 200 > phi){
                        this.phi = Math.PI / 8;
                        this.endPhi = true;
                        break;
                    }
                }
            }
        }
        if(!this.endTheta){
            if(axis.y == this.thresholdY){
                this.theta += Math.PI / 200 * ((this.theta < Math.PI / 2 || (this.theta >= Math.PI && this.theta < Math.PI * 3 / 2)) ? 1 : -1);
            }else{
                this.theta += Math.PI / 500;
            }
            this.theta %= Math.PI * 2;
        }
        if(this.endPhi){
            if(this.rippleCount == this.MAX_RIPPLE_COUNT){
                this.psi += this.dpsi;
                this.psi %= Math.PI * 2;
            }
        }else{
            this.phi += Math.PI / ((axis.y == this.thresholdY) ? 200 : 500);
            this.phi %= Math.PI;
        }
        if(this.y <= -this.renderer.height * this.SURFACE_RATE){
            this.x += 2;
            this.y = -this.renderer.height * this.SURFACE_RATE;
        }else{
            this.x += this.vx;
            this.y += this.vy;
        }
        return this.z > -this.FOCUS_POSITION && this.z < this.FAR_LIMIT && this.x < this.renderer.width * 1.5;
    }
};
$(function(){
    RENDERER.init();
});
html, body{
  width: 100%;
  height: 100%;
  margin: 0;
  padding: 0;
  overflow: hidden;
}
.container{
  width: 100%;
  height: 100%;
  margin: 0;
  padding: 0;
  background-color: #000000;
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<div id="jsi-cherry-container" class="container"></div>

wordpress – How do you know which plugin is consuming CPU?

My site goes down from time to time. I see in the server log that wp-load is requested a lot, so I blocked external requests to this file, but I still get a 5xx error once a day. I have a lot of plugins and my server settings are good. How do I determine which plugin is causing this issue?

memory – Will a newer CPU with fewer cores or an older CPU with more cores compile code faster?

I am a university student studying computer science, and I am just starting to explore the world of servers. I currently use a 2018 15″ MacBook Pro as my main computer for writing, compiling, and running code (2.6 GHz 6-core i7, 32 GB 2400 MHz DDR4, Radeon Pro 560X 4 GB GPU). My MacBook Pro is great as my daily driver and fairly quick at compiling code, although the fans run at high speed the whole time the code is compiling and running.

I have recently discovered X11 forwarding (this article explains it a bit), which gives me the ability to view GUIs from applications running on a server without using a VNC viewer or anything too laggy. Given X11 forwarding, I am now wondering if running an IDE on a server and connecting to it with X11 forwarding would yield better compilation times than my laptop. I've been looking at an HP ProLiant DL380p Gen8 that has dual 8-core Sandy Bridge Xeon processors (Xeon E5-2660s), as well as 64 GB of DDR3 ECC memory. The stock GPU in the HP (Matrox G200eH) seems abysmal compared to my Radeon Pro 560X, but I'm fairly certain that Java, Node.js, and the like rely much more heavily on the CPU than on the GPU. I also know DDR3 memory runs significantly slower than DDR4, but I'm still wondering if the additional 10 cores and 20 threads (with Hyper-Threading) on the HP would yield better compilation times than my MacBook Pro. My house has gigabit Ethernet, so networking shouldn't be a bottleneck. Additionally, I would probably put an SSD in the HP for the operating system and apps to run on, so storage speed shouldn't be a bottleneck either.

My question is this: would running my IDEs on a dual CPU server with more CPU cores, albeit much older CPUs and hardware as a whole, be faster than running my IDEs on my MacBook Pro with fewer cores, but newer hardware? I almost exclusively use the JetBrains suite of IDEs (based on a custom Java Runtime Environment), mainly IntelliJ IDEA, WebStorm, and CLion.

Similarly, would the HP be faster for HandBrake video encoding and transcoding? I'm a teaching assistant, and converting MOV files to MP4s is something I have to do regularly; it runs my laptop quite hot and takes a long time.

cpu – If the integer representation used is "0 through 4,294,967,295 (2^32 − 1)", does this mean the register cannot handle negative numbers?

From Wikipedia:

A 32-bit register can store 2^32 different values. The range of integer values that can be stored in 32 bits depends on the integer representation used. With the two most common representations, the range is 0 through 4,294,967,295 (2^32 − 1) for representation as an (unsigned) binary number, and −2,147,483,648 (−2^31) through 2,147,483,647 (2^31 − 1) for representation as two’s complement.


So if the integer representation used is "0 through 4,294,967,295 (2^32 − 1)", does this mean the register cannot handle negative numbers?


From a similar standpoint, if the integer representation used is "−2,147,483,648 (−2^31) through 2,147,483,647 (2^31 − 1)", does this mean that the register cannot handle numbers greater than 2,147,483,647?
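
A register only stores a bit pattern; whether that pattern stands for a large unsigned value or a negative two's-complement value is decided by how the instructions (or, in C, the types) interpret it, not by the register itself. A small C sketch of the idea, assuming a typical two's-complement machine:

#include <stdio.h>
#include <inttypes.h>

int main(void)
{
    uint32_t bits = 0xFFFFFFFFu;   // all 32 bits set

    // read as an unsigned binary number: 4,294,967,295
    printf("unsigned view:         %" PRIu32 "\n", bits);

    // the same bit pattern read as two's complement: -1
    // (the conversion is implementation-defined in C, but on a
    // two's-complement machine it keeps the bit pattern)
    int32_t as_signed = (int32_t)bits;
    printf("two's complement view: %" PRId32 "\n", as_signed);

    return 0;
}

The same 32 bits print as 4,294,967,295 or −1 depending only on which interpretation the code chooses.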

Why does Windows 10 lock the processor's base frequency to 2.39 GHz when I have a 3 GHz CPU?

I have an Intel Core 2 Duo E6850 with a 3 GHz base frequency, but after installing Windows 10 (I had Windows 7 before) the processor never goes up to that mark; even when I am running heavy apps like Android Studio, the max frequency is about 2.4 GHz. Task Manager shows the same value as the base frequency, while the processor's actual base frequency is 3 GHz. Here is the screenshot of Task Manager: [Task Manager screenshot]

Also, here is what System Properties shows: [System Properties screenshot]

I googled it and have already tried the following:

  1. setting the power option to High Performance
  2. changing the max and min performance values of the power plan
  3. modifying the Start value of intelppm in the registry to 4
  4. running the cmd command "ren intelppm.sys intelppm.sys.bak"
  5. using ThrottleStop and unchecking "BD PROCHOT"

What does work: when I boot from a Linux live USB (Zorin OS), I am able to perform tasks and reach that 3 GHz speed. I don't understand why Windows 10 cannot do that. Should I install Windows 7 again, or perhaps Linux?

High CPU when executing a Postgres query spanning multiple tables

We have an application which executes jobs. The job usually consists of a single transaction in which around 4k rows are inserted/updated across 13 relations.
The transaction takes around 2 minutes to execute.

This job leads to a CPU spike on the server.

Is there a way to fine-tune the config to avoid this spike?

performance – What can cause higher CPU time and duration for a given set of queries in trace(s) ran on two separate environments?

I'm troubleshooting a performance issue in a SQL Server DR environment for a customer. They are running queries that consistently take longer in their environment than in our QA environment. After analyzing traces performed in both environments with the same parameters/filters, the same version of SQL Server (2016 SP2), and the exact same database, we observed that both environments were picking the same execution plan(s) for the queries in question, and the number of reads/writes was close in both environments; however, the total duration of the process in question and the CPU time logged in the trace were significantly higher in the customer environment. Duration of all processes in our QA environment was around 18 seconds while the customer's was over 80 seconds, and our CPU time was close to 10 seconds while theirs was also over 80 seconds. Also worth mentioning, both environments are currently configured to MAXDOP 1.

The customer has less memory (~100GB vs 120GB) and slower disks (10k HDD vs SSD) than our QA environment, but more CPUs. Both environments are dedicated to this activity and should have little to no external load that wouldn't match. I don't have all the details on the CPU architecture they are using; I'm waiting for some of that information now. The customer has confirmed they have excluded SQL Server and the data/log files from their virus scanning. Obviously there could be a ton of issues in the hardware configuration.

I'm currently waiting to see a recent snapshot of their wait stats and system DMVs; the data we originally received didn't appear to show any major CPU, memory, or disk latency pressure. I recently asked them to check whether the Windows power setting was in performance or balanced mode, though I'm not certain whether that would have the impact we're seeing if the CPUs were being throttled.

My question is: what factors can affect CPU time and ultimately total duration? Is CPU time, as shown in a SQL trace, based primarily on the speed of the processors, or are there other factors I should be taking into consideration? The fact that both are generating the same query plans, with all other things being as close to equal as possible, makes me think it's related to the hardware SQL Server is installed on.