## Time complexity – In the right-rotation case of red-black tree insertion there is a less efficient way to recolor; why is it no longer O(log n)?

So, the first time I implemented recoloring for this insertion case, I ended up recoloring on the right side as well; of course, the left-side recoloring is more efficient because the loop terminates at that point. If, however, in the right case we check whether the node's grandparent is the root (and color it black) and, if not, continue the loop from that node, I have read that this makes the recoloring no longer O(log n). Why is that? It still looks like O(log 2n) at worst to me, even if the number of rotations performed is no longer O(1).
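
For context, here is a minimal sketch (Python, illustrative; the node layout and names are my own, not from the question) of the recolor-only part of the standard insert fixup. Each iteration of the red-uncle case climbs two levels, so even when recoloring propagates all the way to the root there are at most about height/2 = O(log n) iterations of O(1) work each:

```python
class Node:
    def __init__(self, color, parent=None):
        self.color = color          # "red" or "black"
        self.parent = parent
        self.left = self.right = None

def insert_fixup_recolor(z):
    """Recolor-only portion of red-black insert fixup; counts iterations."""
    iterations = 0
    while z.parent is not None and z.parent.color == "red":
        g = z.parent.parent
        uncle = g.right if z.parent is g.left else g.left
        if uncle is not None and uncle.color == "red":
            z.parent.color = uncle.color = "black"   # push blackness down
            g.color = "red"
            z = g                                    # climb two levels
            iterations += 1
        else:
            break   # rotation cases omitted in this sketch
    return z, iterations

def build_worst_case(levels):
    """Chain of red-uncle configurations so recoloring propagates to the root."""
    root = Node("black")
    g = root
    for _ in range(levels - 1):
        p, u = Node("red", g), Node("red", g)
        g.left, g.right = p, u
        g = p.left = Node("black", p)
    p, u = Node("red", g), Node("red", g)
    g.left, g.right = p, u
    z = p.left = Node("red", p)
    return root, z
```

Here `build_worst_case(10)` produces a tree of height about 21, and `insert_fixup_recolor` climbs it in 10 iterations. That is why even root-to-leaf recoloring stays O(log n); it is only the rotation count that the "efficient" (terminating) variant keeps at O(1).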

## SQL Server – More efficient calculation of a moving average over 3 periods

I have a table of numerical answers to survey questions. The goal is to call a stored procedure, with a parameter for PeriodType ("Quarter", "Month" or "Year"), that returns this output:

``````Year    Period    Score    WeightedAverage
2019    1         85.7     85.7
2019    2         87.6     85.9
2019    3         90.5     88.6
...    ...        ....     ....
``````

Couple of notes:

• WeightedAverage should be a moving average over 3 periods, but NOT the average of the current and previous two period averages. It must take into account all individual responses, so that a period with 1,000 responses has a greater effect than a period with 50 responses. Windowed functions such as LEAD/LAG or AVG() OVER() do not seem to be the solution.
• The setup is shown below and the #answers table contains approximately 302,000 rows. The last query takes 35 seconds. I would like this optimized as much as possible.
• The scan of the #answers table (alias c) creates an index spool (eager) then a table spool (lazy) in the execution plan. Estimated rows: 34; estimated executions: 2,705; actual rows: 160,773,768; actual executions: 33,037 (yikes). There must be a faster way that my feeble brain cannot work out.
• Running SQL Server 2017 Enterprise, but compatibility level 100 (SQL 2008)

``````
CREATE TABLE #answers (
    ResponseID int,
    QuestionID int,
    Answer int,
    CompletedDate datetime,
    dateID int
)

-- the code to insert 302,000 answers would go here

-- create a table of all possible dates (there are no gaps), with an identity for each
SELECT id = IDENTITY(int, 1, 1),
       [year] = DATEPART(year, CompletedDate),
       period = CASE @PeriodType
                    WHEN 'Month' THEN DATEPART(month, CompletedDate)
                    WHEN 'Quarter' THEN DATEPART(quarter, CompletedDate)
                    ELSE NULL END
INTO #dates
FROM #answers
GROUP BY DATEPART(year, CompletedDate),
         CASE @PeriodType
             WHEN 'Month' THEN DATEPART(month, CompletedDate)
             WHEN 'Quarter' THEN DATEPART(quarter, CompletedDate)
             ELSE NULL END
ORDER BY DATEPART(year, CompletedDate),
         CASE @PeriodType
             WHEN 'Month' THEN DATEPART(month, CompletedDate)
             WHEN 'Quarter' THEN DATEPART(quarter, CompletedDate)
             ELSE NULL END

-- update the answers table with the dateID assigned
UPDATE a SET dateID = b.id
FROM #answers a
INNER JOIN #dates b
    ON DATEPART(year, a.CompletedDate) = b.[year]
    AND ((CASE @PeriodType
              WHEN 'Month' THEN DATEPART(month, a.CompletedDate)
              WHEN 'Quarter' THEN DATEPART(quarter, a.CompletedDate)
              ELSE NULL END) = b.period
         OR (b.period IS NULL AND @PeriodType NOT IN ('Month', 'Quarter')))

-- join the answers to themselves back to (dateID - 2) for a 3-period score
SELECT #dates.[year], #dates.[period], internal.score, internal.weightedaverage
FROM #dates
LEFT OUTER JOIN (
    SELECT b.[year], b.period,
           score = CAST(10 * AVG(a.Answer) AS decimal(10, 1)),
           weightedaverage = CAST(10 * AVG(c.Answer) AS decimal(10, 1))
    FROM #answers a
    INNER JOIN #dates b ON a.dateID = b.id
    INNER JOIN #answers c ON c.dateID >= a.dateID - 2 AND c.dateID <= a.dateID
    GROUP BY b.[year], b.period
) internal
    ON #dates.[year] = internal.[year] AND #dates.[period] = internal.[period]
ORDER BY #dates.[year], #dates.[period]
``````

My apologies for the formatting; I could not see how to indent and clean up how the T-SQL is displayed. Thank you for your help.
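
For what it's worth, the weighting requirement can be met without re-joining individual response rows: pre-aggregate one sum and one count per period, then divide the 3-period sum of sums by the 3-period sum of counts. A small Python sketch of the arithmetic (illustrative only; the toy data is made up):

```python
from collections import defaultdict

# Made-up toy responses: (dateID, answer); dateIDs are the gap-free ids from #dates.
responses = [(1, 9), (1, 8), (2, 9), (2, 9), (2, 10), (3, 7)]

# One pass: a sum and a count per period (302k rows collapse to one row per period).
sums, counts = defaultdict(float), defaultdict(int)
for pid, answer in responses:
    sums[pid] += answer
    counts[pid] += 1

# Weighted 3-period moving average = (sum of sums) / (sum of counts), so a
# 1,000-response period automatically outweighs a 50-response period.
weighted = {}
for pid in sorted(sums):
    window = [q for q in (pid - 2, pid - 1, pid) if q in sums]
    weighted[pid] = round(10 * sum(sums[q] for q in window)
                          / sum(counts[q] for q in window), 1)
```

In T-SQL terms this corresponds to grouping #answers by dateID once and then, if window frame syntax is usable on your instance despite compatibility level 100, something like `SUM(SUM(Answer)) OVER (ORDER BY id ROWS BETWEEN 2 PRECEDING AND CURRENT ROW)` divided by the matching `SUM(COUNT(*)) OVER (...)`; failing that, a self-join of the tiny pre-aggregated table (one row per period) is still vastly cheaper than self-joining 302,000 raw rows.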

## postgresql – The most efficient way to insert data into an archive table without affecting production?

For example, say we have table T:

``````create table if not exists T(
column1 int,
column2 varchar,
column3 date
);
``````

and archive table TArchive:

``````create table if not exists TArchive(
column1 int,
column2 varchar,
column3 date
);
``````

What would be the best approach for inserting data older than date `x` into TArchive without locking the T table in production? Assume that table T has a large number of rows.

I have been researching this for hours. In SQL Server, you have different approaches such as: https://www.brentozar.com/archive/2018/04/how-to-delete-just-some-rows-from-a-really-big-table/
But in PostgreSQL, I can hardly find anything.

Do you just extract the data directly from the T table and insert it into TArchive?

Do you first copy the data into a temporary table and then insert it into the archive table? If so, why would that approach be better, when you perform two inserts for the same data?

How many functions should you use? One function to rule them all? Or one function for archiving and another for deleting the old data?

Are there other approaches?
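
A common PostgreSQL pattern is to move rows in small batches, each in its own short transaction, e.g. with a data-modifying CTE (`WITH moved AS (DELETE FROM t WHERE column3 < x RETURNING *) INSERT INTO tarchive SELECT * FROM moved`) or an INSERT...SELECT followed by a DELETE over a limited key set. Below is a runnable sketch of the batching shape only, using SQLite purely so it is self-contained; PostgreSQL syntax differs (it has `ctid`/primary keys rather than `rowid`, and supports the CTE form above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t(column1 INT, column2 TEXT, column3 TEXT);
    CREATE TABLE tarchive(column1 INT, column2 TEXT, column3 TEXT);
""")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 [(i, f"row{i}", f"2019-01-{i % 28 + 1:02d}") for i in range(1000)])
conn.commit()

CUTOFF, BATCH = "2019-01-15", 100

while True:
    # Copy one small batch, then delete the same rows; keeping each
    # transaction short means production traffic on t is never blocked long.
    rows = conn.execute(
        "SELECT rowid, column1, column2, column3 FROM t "
        "WHERE column3 < ? LIMIT ?", (CUTOFF, BATCH)).fetchall()
    if not rows:
        break
    conn.executemany("INSERT INTO tarchive VALUES (?, ?, ?)",
                     [r[1:] for r in rows])
    conn.executemany("DELETE FROM t WHERE rowid = ?", [(r[0],) for r in rows])
    conn.commit()
```

The batch size is the tuning knob: big enough to finish in reasonable time, small enough that each transaction's locks and WAL volume stay negligible.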

## r – Why is the logistic regression model trained with tensorflow so inaccurate?

I trained a logistic regression model with tensorflow, but the accuracy of the model was quite bad (accuracy = 0.68). The model was trained on a simulated data set, so the result should be very good. Is there something wrong with the code?

``````
#simulated dataset
sim_data <- function(n=2000){
  library(dummies)
  age <- round(abs(rnorm(n,mean = 60, sd = 20)))
  lac <- round(abs(rnorm(n,3,1)),1)
  wbc <- round(abs(rnorm(n,10,3)),1)
  sex <- factor(rbinom(n,size = 1,prob = 0.6),
                labels = c("Female","Male"))
  type <- as.factor(sample(c("Med","Emerg","Surg"),
                           size = n,replace = T,
                           prob = c(0.4,0.4,0.2)))
  linPred <- cbind(1,age,lac,wbc,dummy(sex)[,-1],
                   dummy(type)[,-1]) %*%
    c(-30,0.2,4,1,-2,3,-3)
  pi <- 1/(1+exp(-linPred))
  mort <- factor(rbinom(n,size = 1, prob = pi),
                 labels = c("Alive","Died"))
  dat <- data.frame(age=age,lac=lac,wbc=wbc,
                    sex=sex,type=type,
                    mort = mort)
  return(dat)
}
set.seed(123)
dat <- sim_data()
dat_test <- sim_data(n=1000)
#logistic regression the conventional way
mod <- glm(mort~.,data = dat,family = "binomial")
library(tableone)
ShowRegTable(mod)
#diagnostic accuracy
pred <- predict.glm(mod,newdata = dat_test,
                    type = "response")
library(pROC)
roc(response = dat_test$mort,predictor = pred,ci=T)
Call:
roc.default(response = dat_test$mort, predictor = pred, ci = T)

Data: pred in 301 controls (dat_test$mort Alive) < 699 cases (dat_test$mort Died).
Area under the curve: 0.9862
95% CI: 0.981-0.9915 (DeLong)
predBi <- pred >= 0.5
crossTab <- table(predBi,dat_test$mort)
(crossTab[1]+crossTab[4])/sum(crossTab)

[1] 0.941

#choose different cutoffs for the accuracy
library(ggplot2)  # for qplot()
DTaccuracy <- data.frame()
for (cutoff in seq(0,1,by = 0.01)) {
  predBi <- pred >= cutoff
  crossTab <- table(predBi,dat_test$mort)
  accuracy = (crossTab[1]+crossTab[4])/sum(crossTab)
  DTaccuracy <- rbind(DTaccuracy,c(accuracy,cutoff))
}
names(DTaccuracy) <- c('Accuracy','Cutoff')
qplot(x=Cutoff, y = Accuracy, data = DTaccuracy)
# (image showing accuracy at varying cutoffs)

#tensorflow method
library(caret)
y = with(dat, model.matrix(~ mort + 0))
x = model.matrix(~.,dat[,!names(dat)%in%"mort"])
trainIndex = createDataPartition(1:nrow(x),
                                 p=0.7, list=FALSE,times=1)

x_train = x[trainIndex,]
x_test = x[-trainIndex,]
y_train = y[trainIndex,]
y_test = y[-trainIndex,]

# Hyper-parameters
epochs = 30            # Total number of training epochs
batch_size = 30        # Training batch size
display_freq = 10      # Frequency of displaying the training results
learning_rate = 0.1    # The optimization initial learning rate
#Then we will define the placeholders for features and labels:
library(tensorflow)
X <- tf$placeholder(tf$float32, shape(NULL, ncol(x)),
                    name = "X")
Y = tf$placeholder(tf$float32, shape(NULL, 2L), name = "Y")

#Now we will define the parameters. We will randomly initialize the weights with mean 0 and a standard deviation of 1. We will initialize the bias to 0.

W = tf$Variable(tf$random_normal(shape(ncol(x),2L),
                                 stddev = 1.0),
                name = "weights")
b = tf$Variable(tf$zeros(shape(2L)), name = "bias")

#Then we will compute the logits.

logits = tf$add(tf$matmul(X, W), b)
pred = tf$nn$sigmoid(logits)
#The next step is to define the loss function. We will use sigmoid cross entropy with logits as the loss function.

entropy = tf$nn$sigmoid_cross_entropy_with_logits(labels = Y,
                                                  logits = logits)
loss = tf$reduce_mean(entropy,name = "loss")
#The last step of the model construction is to define the training op. We will use gradient descent with a learning rate of 0.1 to minimize the cost.

optimizer = tf$train$GradientDescentOptimizer(learning_rate)$minimize(loss)
init_op = tf$global_variables_initializer()
#Now that we have built the model, let's evaluate it:

correct_prediction <- tf$equal(tf$argmax(logits, 1L), tf$argmax(Y, 1L),
                               name = "correct_pred")
accuracy <- tf$reduce_mean(tf$cast(correct_prediction, tf$float32),
                           name = "accuracy")
#Having constructed the graph, let's execute it:

with(tf$Session() %as% sess, {
  sess$run(init_op)
  for (i in 1:5000) {
    sess$run(optimizer,
             feed_dict = dict(X=x_train, Y=y_train))
  }
  sess$run(accuracy,
           feed_dict=dict(X = x_test, Y = y_test))
})
[1] 0.69
``````

The accuracy obtained with the glm() method is quite good (accuracy = 0.941), as expected; however, the accuracy was only 0.69 with the TensorFlow method. How can I solve the problem?
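
Not a diagnosis of the exact graph above, but one classic culprit when hand-rolled gradient descent badly underperforms glm() is feeding it unstandardized features (age ≈ 60 here) with a fixed learning rate; glm() is unaffected because it uses iteratively reweighted least squares. As a self-contained illustration (plain Python, simulation only loosely mirroring sim_data(); all names are mine), standardized features plus plain gradient descent recover good accuracy:

```python
import math
import random

random.seed(123)

def sigmoid(z):
    return 1 / (1 + math.exp(-z)) if z > -30 else 0.0   # guard against overflow

# Simulated data loosely following the question's generator (shapes assumed):
n = 2000
rows, labels = [], []
for _ in range(n):
    age = abs(random.gauss(60, 20))
    lac = abs(random.gauss(3, 1))
    wbc = abs(random.gauss(10, 3))
    p = sigmoid(-30 + 0.2 * age + 4 * lac + 1 * wbc)
    rows.append([age, lac, wbc])
    labels.append(1 if random.random() < p else 0)

# Standardize each feature: with a fixed learning rate, gradient descent on
# raw-scale inputs (age around 60) diverges or crawls.
means = [sum(r[j] for r in rows) / n for j in range(3)]
sds = [(sum((r[j] - means[j]) ** 2 for r in rows) / n) ** 0.5 for j in range(3)]
X = [[1.0] + [(r[j] - means[j]) / sds[j] for j in range(3)] for r in rows]

w = [0.0] * 4
lr = 0.5
for _ in range(300):
    grad = [0.0] * 4
    for xi, yi in zip(X, labels):
        err = sigmoid(sum(wj * xj for wj, xj in zip(w, xi))) - yi
        for j in range(4):
            grad[j] += err * xi[j]
    w = [wj - lr * g / n for wj, g in zip(w, grad)]

acc = sum((sum(wj * xj for wj, xj in zip(w, xi)) > 0) == (yi == 1)
          for xi, yi in zip(X, labels)) / n
```

If that is the issue, scaling the columns of x (and possibly lowering learning_rate, or using tf$nn$softmax_cross_entropy_with_logits for the two-column one-hot Y) would be the first things to try in the TF version.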

## How can I make my python code faster / more efficient? And is multiprocessing useful in this case?

I have code that constantly checks a web page for a price, and when the price matches a number I have set, the item is bought. I am looking for ways to increase its speed.

I have added multiprocessing and performance may have improved; but I do not know whether this is the best approach for this code, and there are probably better methods for speed and efficiency.

``````
from multiprocessing.dummy import Pool
from functools import partial
import re
import time
import requests

session = requests.session()

print("connected")

crsf_token = ""

def token():
    global crsf_token
    while True:
        crsf_token = re.search(r"XsrfToken.setToken\('(.*?)'\);",
                               session.get('https://www.example.com').text).group(1)
        time.sleep(5)

with Pool() as pool:
    while True:
        try:
            req = session.get("https://www.example.com")
            if req.status_code == 429:
                time.sleep(5)
                continue

            allposts = [f'https://www.example.com&Price={i["Price"]}'
                        for i in req.json()["data"]["Sellers"] if i["Price"] <= 10]
            if allposts:
                pool.map(partial(session.post, headers={"X-CSRF-TOKEN": crsf_token}),
                         allposts)
        except requests.urllib3.exceptions.ConnectTimeoutError:
            pass
``````

This question was asked on Stack Overflow and someone recommended asking it here. So here it is.
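
On the multiprocessing question: this loop is network-bound, so threads (which is what `multiprocessing.dummy` provides) help by overlapping waits, not by adding CPU. A self-contained sketch of that effect, with `time.sleep` standing in for the HTTP calls (all names here are illustrative):

```python
import time
from multiprocessing.dummy import Pool   # thread pool, as in the question

def fake_post(url):
    time.sleep(0.2)          # stand-in for the latency of session.post(url)
    return url

urls = [f"https://www.example.com/buy?item={i}" for i in range(10)]

start = time.perf_counter()
with Pool(10) as pool:       # 10 threads: the ten 0.2 s waits overlap
    results = pool.map(fake_post, urls)
elapsed = time.perf_counter() - start
# elapsed is a fraction of the ~2 s a sequential loop would take
```

For a single always-on poller like this, asyncio with aiohttp is the other common route; either way the win comes from issuing the posts concurrently, and respecting the 429 backoff matters more than raw loop speed.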

## design – Efficiently retrieving lots of data: UX + performance

The key here is to understand the requirements, and what the minimum amount of data is that your users need to see at a given time.

Your users may be happy with a reduced view, without the complex logic; then, when a particular record is selected, an HTTP request is made to retrieve a further subset of the details surrounding that record.

Another key part is knowing when to use caching. Your queries, and possibly your ORM (if any), may provide caching so that some data does not need to be queried again.

Finally, feel free to use the database, its additional features, and the power of its engine to help offload work from the application server.

## Why are the white countries richer and more efficient than the non-white countries? United States, Europe, Canada, Australia, New Zealand?

stop being a racist white boy.

many degenerate whites think that they are superior because they are white, even though they are the degenerates of the white race LOL

@John, you implied that Europe was a country and I corrected you; you are stupid and ignorant

why do you claim to be smart and superior to be white if you do not even know that Europe is a continent?

why are you racist when you are a degenerate loser?

## validation – C# console application: better or more efficient input validation

As you suggested yourself, you have a lot of redundant code. And rightly so, you want to adhere to the DRY principle.

All of these input methods follow the same pattern internally.

static void Main(string[] args)
{
GetWeightInput();
GetHeightInput();
GetWidthInput();
GetDepthInput();

}
``````

…ideally you would like to be able to call them like this:

static void Main(string[] args)
{
var weight = AskInteger("Enter weight of parcel: ");
var height = AskInteger("Enter height of parcel: ");
var width = AskInteger("Enter width of parcel: ");
var depth = AskInteger("Enter depth of parcel: ");

Console.WriteLine("press any key to terminate the application..");
}
``````

Or, if you prefer, provide a complex `Request` object with 4 properties and a lambda that supplies both the message shown to the user and a setter expression to materialize the request:

static void Main(string[] args)
{
var request = new Request();

Console.WriteLine("press any key to terminate the application..");
}
``````

How you would implement `AskInteger` is a challenge I leave to you.
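
For reference, the shape of such a helper is a prompt-validate-repeat loop; here is the idea sketched in Python rather than C# (the `reader`/`writer` parameters are my addition, so the loop can be exercised without a console):

```python
def ask_integer(prompt, reader=input, writer=print):
    """Keep prompting until the input parses as an integer: the one pattern
    shared by GetWeightInput, GetHeightInput, GetWidthInput and GetDepthInput."""
    while True:
        raw = reader(prompt)
        try:
            return int(raw)
        except ValueError:
            writer(f"'{raw}' is not a whole number, try again.")

# Exercising it with canned input instead of a console:
canned = iter(["abc", "", "42"])
value = ask_integer("Enter weight of parcel: ",
                    reader=lambda _prompt: next(canned),
                    writer=lambda _msg: None)
# value is 42 after two rejected attempts
```

The C# equivalent replaces `int(raw)` with `int.TryParse`, but the structure, one loop parameterized by its prompt, is the whole point of the refactoring.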

### Misc

• Do not pollute your code with regions that add no value: `#region User Weight Input Validation`
• Try to avoid meaningless comments: `_isInvalidInput = true;//reset value to true`

## c# – Is there a more efficient way to proceed than wrapping my switch statement in a try-catch?

I have a switch statement that can potentially hit null references. Instead of checking all the possible cases, I just put the entire statement in a try-catch and, on error, return DateTime.UtcNow, which is correct; but is it optimal from a performance/maintainability point of view?

``````
private DateTime GetEventTimeFor(Route route, Stop stop, NotificationSubscriptionType subscriptionType)
{
ServiceableStop serviceableStop = stop as ServiceableStop;
try
{
switch (subscriptionType)
{
return stop.ArrivalTime.Value;
return stop.DepartureTime.Value;
serviceableStop = stop as ServiceableStop;
return serviceableStop.ServiceStartTime.Value; //possible null reference exception
serviceableStop = stop as ServiceableStop;
return serviceableStop.CancelledTime.Value;
return route.StartTime.Value;
return route.DepartureTime.Value;
return serviceableStop.ArrivalTime.Value.Add(serviceableStop.DepartureTime.Value - serviceableStop.ServiceStartTime.Value); //possible null reference exception
return ((ServiceableStop)route.Stops.First(x => x.State == Stop.StopState.Servicing)).ServiceStartTime.Value;
return route.ArrivalTime.Value;
return ((MidRouteDepotStop)route.Stops.First(x => x.GetType() == typeof(MidRouteDepotStop))).ArrivalTime.Value;
return route.ArrivalTime.Value;
return route.CompleteTime.Value;
return route.StartTime.Value.Add(route.PostStartDelay.Value); //possible null reference exception
return route.PreStartDelayStartTime.Value; //possible null reference exception
return route.StartTime.Value.AddSeconds(route.PostStartDelay.Value.Seconds); //possible null reference exception
return ((UnknownStop)route.Stops.First(x => x.GetType() == typeof(UnknownStop))).ArrivalTime.Value;
return ((RestrictedStop)route.Stops.First(x => x.GetType() == typeof(RestrictedStop))).ArrivalTime.Value;
return ((RestrictedStop)route.Stops.First(x => x.GetType() == typeof(RestrictedStop))).ArrivalTime.Value;
default:
return DateTime.UtcNow;
}
}
catch (Exception e)
{
return DateTime.UtcNow;
}

}
``````
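
A more maintainable alternative than catching everything is a dispatch table of accessors with a single, narrow fallback; sketched here in Python rather than C# (the types and names are illustrative, not the original API):

```python
from datetime import datetime, timezone

def event_time_for(route, stop, subscription_type, accessors):
    """Look up the accessor for this subscription type; fall back to 'now'
    only when the accessor is missing or hits an absent/None value."""
    accessor = accessors.get(subscription_type)
    if accessor is None:
        return datetime.now(timezone.utc)
    try:
        value = accessor(route, stop)
    except AttributeError:        # the "possible null reference" cases
        value = None
    return value if value is not None else datetime.now(timezone.utc)

# Illustrative wiring (stand-ins for the real Route/Stop types):
class Stop:
    arrival_time = datetime(2020, 1, 1, tzinfo=timezone.utc)

accessors = {
    "arrived": lambda route, stop: stop.arrival_time,
    "service_started": lambda route, stop: stop.service_start_time,  # absent here
}
```

The catch stays narrow (AttributeError only), so genuinely unexpected failures still surface instead of being silently converted to DateTime.UtcNow; the C# analogue is a `Dictionary<NotificationSubscriptionType, Func<Route, Stop, DateTime?>>` combined with null-conditional access.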

## dnd 5e – Is it effective to take Polearm Master and Sentinel on a Barbarian?

I am building a level 8 Zealot Barbarian for an ongoing campaign and I am thinking of taking the Polearm Master and Sentinel feats, because it is a nice combo to protect your group and apply extra rage damage.

I would take the variant human for one of the feats, but that would cost some ability score points. I will end up with Str: 16, Dex: 14, Con: 16 -> the rest does not matter.

Will this build be effective in combat, or will taking these two feats leave my ability scores too low?