memory – My PC gets frequent random blue screens

As the title says, my PC gets frequent, random blue screens.
My PC is overclocked using the minimal overclock preset, selected via a knob on my motherboard.
I stress-tested my PC with a program called AIDA64 Extreme: the blue screen occurs only, and consistently, when stress testing the “FPU”.
Stress testing any of the other components (CPU, GPU, memory, SSD) does not cause a blue screen. Any advice on what the problem might be?

PC specs:

  • Cooler Master 750 W PSU
  • MSI MEG Z390 GODLIKE
  • NZXT Kraken X63
  • Intel i9-9900K
  • MSI RTX 2080S Gaming X Trio
  • Corsair Vengeance Pro 3200 MHz
  • Fractal Design R6
  • Samsung 970 EVO

tips and tricks – Is there a way to rescue orphaned and expiring Qantas Frequent Flyer miles?

Yes, there is: since transfers are free, unlimited, and can be any amount of 5,000 miles or more, you can transfer some miles to the account with the orphaned miles and then transfer the combined sum back! Here’s how it works:

  • Family member has (say) 4,000 orphaned miles
  • You transfer 5,000 miles (the minimum) to your family member
  • Family member now has over 5,000 miles, so they can transfer the combined 9,000 miles to you

That’s it! And if you have multiple people in the same situation (say, kids/spouse who flew together on the same flight), you can reduce the overhead a bit by chaining the transfers: A->B->C->A.
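The arithmetic of the steps above can be sketched in a few lines of Python; the 5,000-mile minimum is the figure quoted in this answer, and the function name and balances are illustrative only:

```python
# Sketch of the rescue arithmetic; 5,000 is the minimum transfer quoted
# in the answer above, and all names/balances here are illustrative.
MIN_TRANSFER = 5000

def rescue(orphaned, my_balance):
    """Send the minimum transfer to the orphaned account, then have it
    send its (now eligible) combined balance back."""
    my_balance -= MIN_TRANSFER          # you -> family member
    orphaned += MIN_TRANSFER
    assert orphaned >= MIN_TRANSFER     # they can now transfer out
    my_balance += orphaned              # family member -> you
    return my_balance

print(rescue(4000, 20000))  # → 24000: the 4,000 orphaned miles are recovered
```

The net effect is that your balance grows by exactly the orphaned amount; the 5,000 miles you send out come straight back.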

Anyone Suddenly Experiencing New Root User SSH Attacks on a Frequent Basis

G’day,

We run a cluster of 9 servers.

In the past we’d receive alerts for failed SSH logins about once or twice a day across the enti… | Read the rest of https://www.webhostingtalk.com/showthread.php?t=1826154&goto=newpost

mysql – DataBase Design for frequent delete operations

We have a use case where we want to store incoming data and invalidate all previously existing data belonging to one parent entity.

That is, parent p1 has List<Child> old_data (already stored in the table), and now some new data arrives as List<Child> new_data.

P.S. The table has only one index, on the foreign key p1.ID.

EXAMPLE

EXISTING DATA

    ID | Parent.ID | DATA
    ---+-----------+-----
     1 | P1.ID     | D1
     2 | P1.ID     | D2
     3 | P1.ID     | D3

INCOMING DATA (D4, D5, D6)

WHAT WE WANT

    ID | Parent.ID | DATA
    ---+-----------+-----
     1 | P1.ID     | D4
     2 | P1.ID     | D5
     3 | P1.ID     | D6

So we want to delete all the old_data present in the table first and then insert the new_data.

The major concern is performance degradation from the frequent delete operation, which will run every time new_data arrives (the frequency of incoming data per parent is high). We are not going with the soft-delete approach of setting isValid = 0, since it would bloat the table in no time.

What’s the best possible way to approach this? We are currently planning to use MariaDB.

Should we move to another RDBMS, or to a NoSQL DB that gives better delete performance? Or any other suggestions?
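As a sketch of the delete-then-insert pattern the question describes, here is the operation expressed in Python with SQLite standing in for MariaDB; the table and column names are made up for illustration. Wrapping both statements in one transaction keeps the swap atomic, so readers never observe a mix of old and new children:

```python
import sqlite3

# SQLite used as a stand-in for MariaDB; schema names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE child (id INTEGER PRIMARY KEY, parent_id TEXT, data TEXT)")
conn.execute("CREATE INDEX idx_parent ON child(parent_id)")  # the one FK index

def replace_children(conn, parent_id, new_data):
    """Delete all rows for parent_id and insert new_data in a single
    transaction (commit on success, rollback on error)."""
    with conn:
        conn.execute("DELETE FROM child WHERE parent_id = ?", (parent_id,))
        conn.executemany(
            "INSERT INTO child (parent_id, data) VALUES (?, ?)",
            [(parent_id, d) for d in new_data],
        )

replace_children(conn, "P1", ["D1", "D2", "D3"])   # old_data
replace_children(conn, "P1", ["D4", "D5", "D6"])   # new_data replaces it
rows = [r[0] for r in conn.execute(
    "SELECT data FROM child WHERE parent_id = 'P1' ORDER BY id")]
print(rows)  # → ['D4', 'D5', 'D6']
```

Because the delete is driven by the indexed parent_id, each replacement touches only that parent’s rows; whether this scales at your write frequency is exactly the open question of the post.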

algorithms – Finding the most frequent element, given that it’s Theta(n)-frequent?

Very partial answer: At least for $\alpha > 0.5$, yes.

  1. $\text{candidate} \leftarrow$ (null value), $\text{count} \leftarrow 0$

  2. For each element $x$ in the array:

    1. If $x = \text{candidate}$ then

      1. increment $\text{count}$
    2. else

      1. If $\text{count} = 0$

        1. $\text{candidate} \leftarrow x$, $\text{count} \leftarrow 1$
      2. else

        1. decrement $\text{count}$
The candidate remaining at the end of the array is the majority element. A potential-function argument can show this to be the case (I was taught this in a teaser for an online algorithms class).
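The procedure above is the classic Boyer–Moore majority vote; a minimal Python sketch (when a majority is not guaranteed, the returned candidate must still be verified with a second counting pass):

```python
def majority_candidate(arr):
    """Boyer-Moore majority vote: one pass, O(1) extra space.
    If some element occupies more than half of arr, it is returned;
    otherwise the result is arbitrary and needs a verification pass."""
    candidate, count = None, 0
    for x in arr:
        if count == 0:
            candidate, count = x, 1
        elif x == candidate:
            count += 1
        else:
            count -= 1
    return candidate

print(majority_candidate([3, 1, 3, 2, 3]))  # → 3
```

Pairing each decrement with one occurrence of a different value is the intuition behind the potential-function argument mentioned above: a strict majority can never be fully cancelled out.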

This can be extended to $\alpha = 0.5$ by first finding two distinct elements of the array, then running the above on the array with one of them removed, then on the array with the other removed, and finally checking the actual frequencies of the two candidates those runs produce.

But such a trick will probably not work for lower values of $\alpha$.

icons – How to indicate frequent flight routes when booking flights

I’m designing a flow that will reduce the likelihood of users getting zero results when they search for a flight. While looking at how other airline apps handle this, I came across the following example (from the Scoot airline app).

I’m trying to figure out whether the black airplane icon indicates more common flight routes. I find it not very intuitive.

What would be a better design approach for this?

[screenshot of the Scoot app example referenced above]

algorithms – Counting a number of sequences where the item is the most frequent one

Consider an array $a$ whose elements are numbers, and consider all subsequences of $a$. For each subsequence $s$, we find the element $k$ with the largest number of occurrences in $s$ (if there are several options, we choose the smallest such $k$).

Problem: for each number $k$, find the number of subsequences of $a$ for which $k$ is the chosen number.

Example:

Input: (1, 2, 2, 3)
Output:
1 -> 6 ((1), (1, 2), (1, 2'), (1, 3), (1, 2, 3), (1, 2', 3))
2 -> 8 ((2), (2'), (2, 3), (2', 3), (2, 2'), (1, 2, 2'), (2, 2', 3), (1, 2, 2', 3))
3 -> 1 ((3))

My idea: build a map $cnt$ where $cnt(x)$ is the number of times $x$ occurs in the array $a$. To find the answer for a given $k$, I create a map $temp$ where $temp(i) = \min(cnt(k), cnt(i))$. Then, letting $m = cnt(k)$, the element $k$ can occur anywhere from $0$ to $m$ times in a subsequence; for each occurrence count from $0$ to $m$, counting the qualifying subsequences is a simple number-of-subsets problem, and summing over the occurrence counts gives my answer.

But the complexity of my algorithm is bad, as it is quadratic or worse. Can it be improved?
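For reference, the example output can be reproduced by brute force in Python; this is exponential, so it is only useful as a sanity check against a faster algorithm (the function name is illustrative):

```python
from collections import Counter
from itertools import combinations

def chosen_counts(a):
    """Brute force over all nonempty subsequences of a: tally, for each
    value k, how often k is the most frequent element (ties -> smallest k).
    Exponential in len(a); only for validating a faster algorithm."""
    result = Counter()
    for r in range(1, len(a) + 1):
        for idx in combinations(range(len(a)), r):
            freq = Counter(a[i] for i in idx)
            # max count wins; among equal counts, the smallest value wins
            k = min(freq, key=lambda x: (-freq[x], x))
            result[k] += 1
    return dict(result)

print(chosen_counts([1, 2, 2, 3]))  # → {1: 6, 2: 8, 3: 1}
```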

matlab – Does frequent changing of the random seed reduce the randomness of results?

I wrote a Matlab program whose algorithm is like:

for epoch = 1:1000,
    rng('shuffle') %seed based on time
    for generation = 1:100,
        % solve the puzzle using the random number to shuffle values in the puzzle
    end
end

rng('shuffle') seeds the random number generator based on the current system time. I’m using Matlab’s default random number generator, and I put rng inside the epoch loop because I wanted to make sure the puzzle was solved differently each time.

But, one of the conference reviewers wrote a review comment that said:

“One normally seeds a PRNG (pseudo-random number generator) once
during initialisation. Changing the seed repeatedly REDUCES the
randomness of results!!!! Move this out of your algorithm. Low
diversity in a PRNG can actually improve results!”

Is this actually true? Would my program have produced better randomness if the seed was initialized like this?

rng('shuffle') %seed based on time
for epoch = 1:1000,
    for generation = 1:100,
        % solve the puzzle using the random number to shuffle values in the puzzle
    end
end

Thinking it through, I realized the reviewer may have meant that reseeding at the start of every epoch can cause two or more epochs to start from the same seed (the system clock has limited resolution), and that is why it may reduce randomness. Is there any other explanation, or is the reviewer’s understanding flawed?
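That seed-collision concern can be illustrated in Python (used here as a stand-in for Matlab's rng): two epochs that happen to seed with the same value produce identical, not independent, streams. The seed value 1234 is arbitrary:

```python
import random

# If two epochs read the same coarse clock value when reseeding, they
# start from the same seed, and their "random" streams are identical.
random.seed(1234)
epoch_a = [random.random() for _ in range(5)]
random.seed(1234)                      # same seed as epoch_a
epoch_b = [random.random() for _ in range(5)]
print(epoch_a == epoch_b)  # → True: the two epochs are exact duplicates
```

Seeding once before the outer loop avoids this entirely, since the generator then just continues its single stream across all epochs.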

macbook pro – Frequent NVRAM resetting for Wi-Fi hardware

I have a MacBook Pro (13-inch, Mid 2012) running macOS 10.14.6 (Mojave). Lately I’ve been experiencing a lot of problems with the Wi-Fi hardware frequently “shutting down”, i.e. going into an inoperative state. I found that resetting the NVRAM fixes it for me, but I have to do it very frequently.

My MacBook habits: I keep a lot of tabs open in Safari and a lot of PDFs open in Preview. I mostly keep my Pro in a sleep state; rarely do I shut it down! And sometimes I code. I moved to Mojave from the previous OS, where the problem actually started, thinking a software update would fix things permanently.

What I would like help with is understanding how the NVRAM functions:

  • Why does resetting the NVRAM fix the Wi-Fi?
  • What makes the Wi-Fi hardware go bust again?
  • Is it a hardware problem, i.e. is my MacBook too old and in need of a Wi-Fi hardware repair? If so, why does resetting the NVRAM temporarily fix things?

I’d like some insight into this situation I am stuck in!