algorithm – How to make Print() method memory & CPU efficient?

You are receiving n objects in a random order, and you need to print them to stdout correctly ordered by sequence number.

The sequence numbers start from 0 (zero) and you have to wait until you get a complete, unbroken sequence batch of j objects before you output them.

You have to process all objects without loss. The program should exit once it has output the first 50,000 objects. The batch size is j = 100.

The object is defined as such (a matching Go struct sketch follows the table below):

    {
    "id" : "object_id", // object ID (string)
    "seq" : 0, // object sequence number (int64, 0-49999)
    "data" : "" // ()bytes
    }
    Step    Input Value    Output state (j = 1)    Output state (j = 3)
    0       6
    1       0              0
    2       4              0
    3       2              0
    4       1              0,1,2                   0,1,2
    5       3              0,1,2,3,4               0,1,2
    6       9              0,1,2,3,4               0,1,2
    7       5              0,1,2,3,4,5,6           0,1,2,3,4,5
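
For reference, a minimal Go struct matching the JSON shape above could look like this (the type and field names are illustrative assumptions, not taken from the original post):

    // Object mirrors the JSON object shown above; names are illustrative only.
    type Object struct {
        ID   string `json:"id"`   // object ID
        Seq  uint64 `json:"seq"`  // sequence number, 0-49999
        Data []byte `json:"data"` // payload bytes
    }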

func (receiver *Receiver) Print(seqNumber uint64, batchSize uint64, outputFile io.Writer) (error, bool) {

    fmt.Fprintf(outputFile, "( ")
    if seqNumber >= receiver.outputSequence.length {
        receiver.outputSequence.bufferSizeIncrease(seqNumber)
    }
    receiver.outputSequence.sequence[seqNumber] = true // mark this sequence number as received

    printedCount := uint64(0) // check for MAX_OBJECTS_TO_PRINT
    var nthBatchStartingIndex uint64
    MaxObjectsToPrint := config.GetMaxPrintSize()
Loop:
    for nthBatchStartingIndex < receiver.outputSequence.length { // check unbroken sequence
        assessIndex := nthBatchStartingIndex
        for j := assessIndex; j < nthBatchStartingIndex+batchSize; j++ { // Assess nth batch
            if j >= receiver.outputSequence.length { //index out of range - edge case
                break Loop
            }
            if !receiver.outputSequence.sequence[j] { // gap found; this batch is not complete yet
                break Loop
            }
        }

        count, printThresholdReached := receiver.printAssessedBatchIndexes(assessIndex, printedCount, batchSize, MaxObjectsToPrint, outputFile)
        if printThresholdReached { // print sequence threshold reached MAX_OBJECTS_TO_PRINT
            fmt.Fprintf(outputFile, " )  ")
            fmt.Fprintf(outputFile, " ----for input value %dn", seqNumber)
            return nil, false
        }
        printedCount += count
        if printedCount >= MaxObjectsToPrint { // print sequence threshold reached MAX_OBJECTS_TO_PRINT
            fmt.Fprintf(outputFile, " )  ")
            fmt.Fprintf(outputFile, " ----for input value %dn", seqNumber)
            receiver.Log.Printf("****MaxObjectsToPrint threshold(%d) reached n", MaxObjectsToPrint)
            return nil, false
        }
        nthBatchStartingIndex = assessIndex + batchSize // next batch
    }
    fmt.Fprintf(outputFile, " )  ")
    fmt.Fprintf(outputFile, " ----for input value %dn", seqNumber)
    return nil, true
}

The above is from the complete solution written for this problem.


Print() is the method that does the heavy lifting in this code, with growing memory use and heavy CPU usage:

  1. How can receiver.outputSequence be made memory efficient by using a data structure other than an array? The current growth strategy, newBufferSize := 2 * seqNumber, keeps doubling memory…

  2. How can the Print method be made CPU efficient? (One possible approach is sketched below.)
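
This is not the original author's code, only a minimal sketch of one possible alternative: keep the out-of-order sequence numbers in a min-heap (container/heap) instead of a growing []bool, and only flush when a full contiguous batch sits at the front. Memory then stays proportional to the objects currently buffered out of order rather than to the highest seq seen, and there is no re-scan from index 0 on every call. In a real implementation the heap (or a companion map keyed by seq) would hold the full objects, not just their sequence numbers; the Receiver fields and the Print signature here are assumptions for illustration.

    // Sketch only: an alternative data structure, not the original implementation.
    package main

    import (
        "container/heap"
        "fmt"
        "io"
        "os"
    )

    // seqHeap is a min-heap of sequence numbers (container/heap boilerplate).
    type seqHeap []uint64

    func (h seqHeap) Len() int            { return len(h) }
    func (h seqHeap) Less(i, j int) bool  { return h[i] < h[j] }
    func (h seqHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
    func (h *seqHeap) Push(x interface{}) { *h = append(*h, x.(uint64)) }
    func (h *seqHeap) Pop() interface{} {
        old := *h
        v := old[len(old)-1]
        *h = old[:len(old)-1]
        return v
    }

    // Receiver is a stand-in for the original type; only the fields this sketch needs.
    type Receiver struct {
        pending seqHeap // out-of-order seq numbers not yet printable
        next    uint64  // lowest seq number not yet printed
        printed uint64  // total printed so far
    }

    // Print buffers seqNumber and flushes every complete batch that becomes available.
    // It returns true once maxObjects have been printed.
    func (r *Receiver) Print(seqNumber, batchSize, maxObjects uint64, out io.Writer) bool {
        heap.Push(&r.pending, seqNumber)

        // Only attempt a flush while the smallest buffered seq is the next one needed.
        for uint64(len(r.pending)) >= batchSize && r.pending[0] == r.next {
            // Pop candidates; they form a full batch iff each equals the expected value.
            batch := make([]uint64, 0, batchSize)
            for uint64(len(batch)) < batchSize &&
                len(r.pending) > 0 && r.pending[0] == r.next+uint64(len(batch)) {
                batch = append(batch, heap.Pop(&r.pending).(uint64))
            }
            if uint64(len(batch)) < batchSize {
                // A gap remains: push the candidates back and wait for more input.
                for _, v := range batch {
                    heap.Push(&r.pending, v)
                }
                return false
            }
            for _, v := range batch {
                fmt.Fprintf(out, "%d ", v)
                r.printed++
                if r.printed >= maxObjects {
                    return true
                }
            }
            r.next += batchSize
        }
        return false
    }

    func main() {
        r := &Receiver{}
        for _, s := range []uint64{6, 0, 4, 2, 1, 3, 9, 5} { // same input as the table above
            r.Print(s, 3, 50000, os.Stdout)
        }
        fmt.Println()
    }

With batchSize = 3 and the input 6, 0, 4, 2, 1, 3, 9, 5 from the table, this prints 0 1 2 after the fifth value and 3 4 5 after the last one, matching the j = 3 column.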

macbook pro – What’s the purpose of the metal sponges around the CPU?

While repasting my Retina late 2013 MacBook Pro, I noticed that one of the spongy metal disks around the CPU partially broke.

What’s the purpose of these?

I reassembled the machine and it seems to work fine, but I'm wondering what problems it will have with one of these items missing.

TIA,
Edoardo

logic board close-up

sql server – High CPU Usage on certain query when changing DB Compatibility Level

We migrated from SQL 2008R2 to SQL 2016 a few years ago and have been running on Compatibility Level 100 since that point.

We are looking to make use of memory optimization introduced in SQL 2016 and that requires setting the compatibility level to 130.

We have had a few instances where a certain query maxes out the CPU. It normally takes only a few seconds to run on compatibility level 100, but when we change to level 130 the CPU maxes out and the query takes 2 minutes and 37 seconds to run, versus about 2 seconds on the old compatibility level.

I pulled the execution plan for each of these:

CompatibilityLvl100 Plan

CompatibilityLvl130 Plan

I am not a DBA, so while I can see there are differences between the execution plans, I could not see any reason for it to max out the CPU and take almost 6000% longer to run the same query with the same indexes etc.

I also executed this query to get some more details about the running query; I have uploaded the results as a comparison to a Google Sheet here.

One thing I noticed is that the slow-running query was using 192 worker threads, whereas the fast-running query used only 32. But I still couldn't figure out what was causing the high CPU usage.
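
One background detail that may be relevant: moving from compatibility level 100 to 130 also switches the optimizer to the newer cardinality estimator, which is a common source of plan regressions like this. One way to test that theory is sketched below; it assumes SQL Server 2016 SP1 or later and keeps the database at level 130 while forcing the legacy estimator for just this query (or database-wide):

    -- Per-query test: run the affected query with the legacy cardinality estimator.
    -- (Replace the SELECT with the actual slow query.)
    SELECT ...
    OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));

    -- Or database-wide, while staying on compatibility level 130:
    ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;

If the query is fast again with this hint, the regression is down to the new cardinality estimator rather than level 130 as such.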

I'd be glad of anyone's help or direction on something I may have missed!

SteadyTurtle.com – Since 2012 – Xeon 4c/8t CPU – 32GB – Unlimited traffic – 2x2TB – $95/m


SteadyTurtle has been in the hosting industry for over 7 years, providing dedicated servers and connectivity services at wholesale prices. Our team offers friendly, dedicated support and is always willing to go the extra mile to help our clients. We provide any kind of support to our clients, even with scripts etc. Anything!

Budget Server features:

250 Mbps Bandwidth

Full root access

Detailed report of IPs and resources on your control panel

IPs up to /24

Quick rDNS Setup

Plesk, cPanel/WHM, or CloudLinux licenses available

32GB

Xeon D-1520 – 4c/8t CPU

32GB DDR4 RAM

2x2TB Disks

250 Mbps Bandwidth

Free anti-DDoS protection

Canada, UK, France, Poland, Germany : $95 monthly



Order

64GB

Xeon D-1541 – 8c/16t CPU

64GB DDR4 RAM

2x2TB Disks

250 Mbps Bandwidth

Free anti-DDoS protection

Canada, UK, France, Australia : $165 monthly



Order

Payment Methods

We accept payments via PayPal, Bitcoin, Bitcoin Cash, Litecoin, Ethereum, Payza (AlertPay), Perfect Money, and all credit cards.

Support

You may contact us via live chat, Skype, email, or tickets (recommended).

To open a ticket, visit here! We reply to tickets within 60 minutes.

Email: sales@steadyturtle.com

Warm Regards,

Steadyturtle Sales
http://steadyturtle.com

cpu – What subfields in computer sciences may one study without learning Object Oriented Programming?

Object Oriented Programming is a type of programming paradigm.

A Computer Science degree is mostly theoretical (not only machine learning and applied statistics! Believe me, there is so much more), so you won't see any of this in most courses; however, in a Software Engineering degree I suppose you do learn more about OOP.

Anyway, OOP is always good to know. It's not as complicated as you might think from its fancy name, it gives you a nice way to write organized code, and most programming languages support that kind of programming.

However, there are some programming languages that use a different programming paradigm called “functional programming”. I recommend taking a look at that too.

If you are wondering about what kinds of things there are in a CS degree, feel free to ask me!

BTW: This Stack Exchange site is for theoretical computer science, so questions about theoretical computer science problems are seen here all of the time.

MRT uses 100% of CPU

Note: I have already read MRT Process using large unbounded amount of memory (but the context there is different: "I’d rather not remove it") and a few similar questions/articles like MRT is peaking my cpu, but I could not easily find a good, complete solution.

On my MacBook Pro running High Sierra, the CPU fan was making a loud noise, and after running top in Terminal I noticed that a process called MRT was using ~100% CPU.

How to remove MRT (Malware Removal Tool)?

Is it logical to enable the turbo speed of a VM node CPU?

Hi,

Just wondering, is it logical to enable the turbo speed of a VM node CPU?… | Read the rest of https://www.webhostingtalk.com/showthread.php?t=1813539&goto=newpost

One process in my server using 100% cpu

Please check below

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 2561 redis     20   0 2761160   2.3g   1028 S 200.0 30.0  58:33.49 sysupdate
 2576 redis     30  10  577576  61120    540 S 194.7  0.8  54:20.02 networkservice

I have no idea where they came from. How can I manage them?

maximising openvino throughput on cpu

I am trying to use my OpenVINO model on my server's CPU. I want to load all 8 cores that my CPU has (or at least 4-5), and I want to maximise throughput. I use https://github.com/openvinotoolkit/model_server, but I don't understand which parameters I have to tweak to achieve maximal throughput.

sudo docker run -d -v /open_vino_model_mod_five_brands_b2:/models/logodetection/1 -e LOG_LEVEL=DEBUG -p 9000:9000 openvino/ubuntu18_model_server \
    /ie-serving-py/start_server.sh ie_serving model --model_path /models/logodetection --model_name logodetection --port 9000 --shape auto --grpc_workers 4 --nireq 4 --plugin_config '{"CPU_THROUGHPUT_STREAMS": "4"}'

So I think I should change these values: grpc_workers, nireq, and CPU_THROUGHPUT_STREAMS, but when I try to change them my FPS is even lower than with the default values.

ubuntu – nginx won’t use 100% cpu

I'm just benchmarking a default install of nginx, trying to get it to serve as many requests per second as possible. I expected it to use 100% CPU under load like other web servers do, but I can't get it to use more than about 40%.

I tried on Ubuntu and Manjaro with the same results.

ab -k -c 500 -n 1000000 http://localhost/ gets about 150,000 requests per second. I imagine it could serve at least twice as many if it used the full CPUs, but I can't figure out how. It's just serving the default nginx HTML page. Is this normal? What could be blocking it?

I've made sure to use these settings (a sketch of where each one goes follows the list):

    access_log off;
    worker_processes auto;
    worker_connections 1024;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    gzip on;
    expires 365d;
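
For reference, these directives live in different contexts, so a minimal nginx.conf applying all of them would look roughly like this; it is a sketch of a default-style layout (paths and listen port assumed), not the poster's actual file:

    # Sketch of where each directive belongs; not a tuned configuration.
    worker_processes auto;          # main context: one worker per CPU core

    events {
        worker_connections 1024;    # per-worker connection limit
    }

    http {
        access_log off;
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        gzip on;

        server {
            listen 80;
            root /usr/share/nginx/html;   # default page location (assumed)
            expires 365d;                 # typically set in a server/location block
        }
    }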