## matlab – Does frequent changing of the random seed reduce the randomness of results?

I wrote a Matlab program whose algorithm is like:

``````
for epoch = 1:1000,
    rng('shuffle') % seed based on time
    for generation = 1:100,
        % solve the puzzle using the random number to shuffle values in the puzzle
    end
end
``````

`rng('shuffle')` seeds the random number generator based on the current system time. I’m using Matlab’s default random number generator, and I put `rng` inside the epoch loop because I wanted to make sure the puzzle was solved differently in each epoch.

But, one of the conference reviewers wrote a review comment that said:

“One normally seeds a PRNG (pseudo-random number generator) once
during initialisation. Changing the seed repeatedly REDUCES the
randomness of results!!!! Move this out of your algorithm. Low
diversity in a PRNG can actually improve results!”

Is this actually true? Would my program have produced better randomness if the seed were initialized like this?

``````
rng('shuffle') % seed based on time
for epoch = 1:1000,
    for generation = 1:100,
        % solve the puzzle using the random number to shuffle values in the puzzle
    end
end
``````

When I thought it through, I realized he may have meant that reseeding within the epoch loop could cause two or more epochs to start from the same seed, and that is why it may reduce the randomness. Is there any other explanation, or is the reviewer’s understanding flawed?
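To make the reviewer’s concern concrete to myself, I sketched the failure mode in Python (the idea is generator-agnostic; the numeric seed stands in for a coarse clock reading): if two epochs happen to reseed with the same clock value, they replay an identical stream.

```python
import random

def epoch_draws(seed, n):
    """One 'epoch' that reseeds its PRNG from a (coarse) clock value."""
    random.seed(seed)
    return [random.random() for _ in range(n)]

# Two epochs that reseed within the same clock tick produce
# identical "random" streams, so they are no longer independent:
a = epoch_draws(12345, 5)
b = epoch_draws(12345, 5)
assert a == b

# Seeding once and letting the generator run on gives distinct draws:
random.seed(12345)
c = [random.random() for _ in range(5)]
d = [random.random() for _ in range(5)]
assert c != d
```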


## unity – How do I reduce the size of my symbols.zip?

I am making an Android app in Unity. Google recommends using a symbols.zip, which Unity auto-generates. However, the symbols.zip that Unity is generating for my app is over 1 GB, and the max size that Google will accept is 300 MB.

How can I reduce the size of my symbols.zip?

## What Is Internet Latency And How To Reduce It Like A Pro?

Internet latency is the hidden enemy of broadband users: it slows down even superfast internet connections. The download speed that service providers proudly advertise is only half of the picture; the other half is latency. In this article, you will learn how connection speed is measured, what internet latency is, what its main causes are, and some useful ways to reduce it.

## Reduce TTFB Cpanel / WHM Server

Reduce TTFB Cpanel / WHM Server | Web Hosting Talk



Hello, we have a server on which we have noticed a very high TTFB. We use cPanel + Apache + CloudLinux, and we would like to hear about experiences improving that aspect and reducing the TTFB a little. Can anyone share their experience, or suggest a way to improve that value?

Thank you
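Before tuning anything, it helps to measure. Here is a minimal sketch using only the Python standard library (assuming Python is available on the box; the host is a placeholder) to time the gap between sending a request and receiving the first response byte:

```python
import time
import http.client

def measure_ttfb(host, path="/", port=80):
    """Return seconds from sending a GET request to the first response byte."""
    conn = http.client.HTTPConnection(host, port, timeout=10)
    try:
        start = time.monotonic()
        conn.request("GET", path)
        resp = conn.getresponse()  # response headers parsed here
        resp.read(1)               # first byte of the body
        return time.monotonic() - start
    finally:
        conn.close()

# Example (placeholder host):
# print(measure_ttfb("example.com"))
```

Measuring before and after each change (opcode caching, keep-alive, PHP handler tuning, etc.) shows which one actually moves the number.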


## python – Reduce Sensor Messaging Noise to MQTT

I’m running MicroPython on an ESP32, relaying sensor data to my MQTT server for Home Assistant. I only want it to emit a message when the state has changed, and I want a motion-detected state to hold for a minute before returning to a clear state. I see a lot of examples using sleep, but I don’t like the blocking nature of sleep, as I will be adding more sensors. Instead I’ve been using `ticks_ms()` and `ticks_diff()` to keep the state from fluttering on/off too much, but I can’t help thinking there’s a better, more elegant way to do this that I’m not seeing.

There’s some repetition and nesting that sticks out to me.

``````
from umqtt.robust import MQTTClient
from machine import Pin, unique_id
from time import ticks_diff, ticks_ms
from ubinascii import hexlify

# Config
MQTT_SERVER = "X.X.X.X"
MQTT_PORT = 1883
MQTT_USER = b"USER"
MQTT_PASSWORD = b"PASSWORD"
MQTT_CLIENT_ID = hexlify(unique_id())
MQTT_TOPIC = b"esp/motion"
mqtt = MQTTClient(MQTT_CLIENT_ID, MQTT_SERVER, MQTT_PORT, MQTT_USER, MQTT_PASSWORD)

ledPin = Pin(2, Pin.OUT)
motPin = Pin(15, Pin.IN)
previousState = 0
delay_ms = 60000
clock = ticks_ms()

def main():
    global clock, previousState, delay_ms

    try:
        mqtt.connect()

        while True:
            state = motPin.value()

            if state == 1:
                ledPin.on()

                if previousState == 0:
                    if ticks_diff(ticks_ms(), clock) >= 0:
                        print('motion_start')
                        mqtt.publish(MQTT_TOPIC, 'motion_start')
                        clock = ticks_ms() + delay_ms
                        previousState = state
                else:
                    clock = ticks_ms() + delay_ms

            else:
                ledPin.off()

                if previousState == 1:
                    if ticks_diff(ticks_ms(), clock) >= 0:
                        print('motion_stop')
                        mqtt.publish(MQTT_TOPIC, 'motion_stop')
                        previousState = state

    finally:
        ledPin.off()
        mqtt.publish(MQTT_TOPIC, 'motion_stop')
        mqtt.disconnect()

if __name__ == "__main__":
    main()
``````
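One direction that might remove the repetition is to pull the hold/timeout logic into a small state object with an injectable clock (plain Python below for testability; `HoldState` is a name I made up, and on the ESP32 the clock argument would wrap `ticks_ms`):

```python
import time

class HoldState:
    """Report 'start' immediately when the raw input turns on, but hold the
    'on' state for hold_s seconds after the raw input clears, so brief
    flickers don't emit extra messages."""

    def __init__(self, hold_s, now=time.monotonic):
        self.hold_s = hold_s
        self.now = now          # injectable clock, for testing
        self.state = False
        self.deadline = 0.0

    def update(self, raw):
        """Feed the raw sensor value; return 'start', 'stop', or None."""
        t = self.now()
        if raw:
            self.deadline = t + self.hold_s   # refresh the hold window
            if not self.state:
                self.state = True
                return "start"
        elif self.state and t >= self.deadline:
            self.state = False
            return "stop"
        return None
```

The main loop then shrinks to feeding `motPin.value()` into `update()` and publishing whenever it returns `'start'` or `'stop'`; adding more sensors is just more `HoldState` instances.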

## algebraic manipulation – How to use Reduce with large expressions?

While the expressions in my application are large, the dimensionality is low, so I was expecting `Reduce` to give some output. Since that wasn’t the case, I wonder why I’m not getting results in this case. The problem is to find conditions for the roots of a second-degree polynomial to lie in $(0,1)$ (with some constraints on the parameters):

``````
Clear["Global`*"]

diffexpand( (Lambda)_, n_, t_, z_,
m_, (Mu)_) := -((m*
n*(t/n^2 + ((1 - (Lambda))^2*(Mu)*(a((Lambda)) +
a((Lambda))^(-1 +
n)))/(2*(1 + (Lambda))*(2 - (1 + (Lambda))*
a((Lambda)))*(1 - a((Lambda))^n)) + (1/2)*
ec*(1 - ((1 - (Lambda))^2*(a((Lambda)) +
a((Lambda))^(-1 +
n)))/((1 + (Lambda))*(2 - (1 + (Lambda))*
a((Lambda)))*(1 - a((Lambda))^n))) + (ez*
t*(1 + ((1 - (Lambda))^2*(a((Lambda)) +
a((Lambda))^(-1 +
n)))/((1 + (Lambda))*(2 - (1 + (Lambda))*
a((Lambda)))*(1 - a((Lambda))^n))))/n - c(2)/2)*
(t/(n^2*(1 - (Lambda))) + ((1 - (Lambda))*(Mu)*(a(
(Lambda)) +
a((Lambda))^(-1 + n)))/(2*(2 - (1 + (Lambda))*
a((Lambda)))*(1 - a((Lambda))^n)) + (1/2)*
ec*(1 - ((1 - (Lambda))*(a((Lambda)) +
a((Lambda))^(-1 + n)))/((2 - (1 + (Lambda))*
a((Lambda)))*(1 - a((Lambda))^n))) + (ez*
t*(1 - ((1 - (Lambda))*(a((Lambda)) +
a((Lambda))^(-1 + n)))/((2 - (1 + (Lambda))*
a((Lambda)))*(1 - a((Lambda))^n))))/n +
c(2)/2))/t) +
(1/t)*(m*
n*(t/(n^2*(1 - (Lambda))) + (1/2)*
ec*(1 - ((1 - (Lambda))*(a((Lambda)) +
a((Lambda))^(-1 + n)))/((2 - (1 + (Lambda))*
a((Lambda)))*(1 - a((Lambda))^n))) + (ez*
t*(1 - ((1 - (Lambda))*(a((Lambda)) +
a((Lambda))^(-1 + n)))/((2 - (1 + (Lambda))*
a((Lambda)))*(1 - a((Lambda))^n))))/
n - (t*z*(1 - ((1 - (Lambda))*(a((Lambda)) +
a((Lambda))^(-1 + n)))/((2 - (1 + (Lambda))*
a((Lambda)))*(1 - a((Lambda))^n))))/n +
(1/
2)*(1 + ((1 - (Lambda))*(a((Lambda)) +
a((Lambda))^(-1 + n)))/((2 - (1 + (Lambda))*
a((Lambda)))*(1 - a((Lambda))^n)))*c(2))*(t/
n^2 + (1/2)*
ec*(1 - ((1 - (Lambda))^2*(a((Lambda)) +
a((Lambda))^(-1 +
n)))/((1 + (Lambda))*(2 - (1 + (Lambda))*
a((Lambda)))*(1 - a((Lambda))^n))) - (t*
z*(1 - ((1 - (Lambda))^2*(a((Lambda)) +
a((Lambda))^(-1 +
n)))/((1 + (Lambda))*(2 - (1 + (Lambda))*
a((Lambda)))*(1 - a((Lambda))^n))))/n +
(ez*
t*(1 + ((1 - (Lambda))^2*(a((Lambda)) +
a((Lambda))^(-1 +
n)))/((1 + (Lambda))*(2 - (1 + (Lambda))*
a((Lambda)))*(1 - a((Lambda))^n))))/
n - (1/2)*(1 - ((1 - (Lambda))^2*(a((Lambda)) +
a((Lambda))^(-1 +
n)))/((1 + (Lambda))*(2 - (1 + (Lambda))*
a((Lambda)))*(1 - a((Lambda))^n)))*c(2))) +
2*(Lambda)*(-((1/t)*(m*
n*(t/n^2 - (ez*
t*(1 - (1/
4)*(1 - (Lambda))*((4*(-1 +
4/(1 + (Lambda)))*(a((Lambda)) +
a((Lambda))^(-1 + n)))/((2 - (1 + (Lambda))*
a((Lambda)))*(1 - a((Lambda))^n)) - (2*(1 +
a((Lambda))^2)*(a((Lambda))^4 +
a((Lambda))^
n))/(a((Lambda))^3*(2 - (1 + (Lambda))*
a((Lambda)))*(1 - a((Lambda))^n)))))/(2*n) -
(1/4)*
ec*((Lambda) + (1/
4)*(1 - (Lambda))*((4*(-(Lambda) +
4/(1 + (Lambda)))*(a((Lambda)) +
a((Lambda))^(-1 + n)))/((2 - (1 + (Lambda))*
a((Lambda)))*(1 -
a((Lambda))^n)) - (2*(Lambda)*(1 +
a((Lambda))^2)*(a((Lambda))^4 +
a((Lambda))^
n))/(a((Lambda))^3*(2 - (1 + (Lambda))*
a((Lambda)))*(1 - a((Lambda))^n)))) +
(1/
4)*(Mu)*((Lambda) + (1/
4)*(1 - (Lambda))*((4*(-(Lambda) +
4/(1 + (Lambda)))*(a((Lambda)) +
a((Lambda))^(-1 + n)))/((2 - (1 + (Lambda))*
a((Lambda)))*(1 -
a((Lambda))^n)) - (2*(Lambda)*(1 +
a((Lambda))^2)*(a((Lambda))^4 +
a((Lambda))^
n))/(a((Lambda))^3*(2 - (1 + (Lambda))*
a((Lambda)))*(1 - a((Lambda))^n)))))*
(t/(n^2*(1 - (Lambda))) - (ez*
t*(1 - (1/
4)*(1 + (Lambda))*((4*(-1 +
4/(1 + (Lambda)))*(a((Lambda)) +
a((Lambda))^(-1 + n)))/((2 - (1 + (Lambda))*
a((Lambda)))*(1 - a((Lambda))^n)) - (2*(1 +
a((Lambda))^2)*(a((Lambda))^4 +
a((Lambda))^
n))/(a((Lambda))^3*(2 - (1 + (Lambda))*
a((Lambda)))*(1 - a((Lambda))^n)))))/(2*n) +
(1/4)*
ec*(4 + (Lambda) - (1/
4)*(1 + (Lambda))*((4*(-(Lambda) +
4/(1 + (Lambda)))*(a((Lambda)) +

a((Lambda))^(-1 + n)))/((2 - (1 + (Lambda))*
a((Lambda)))*(1 -
a((Lambda))^n)) - (2*(Lambda)*(1 +
a((Lambda))^2)*(a((Lambda))^4 +
a((Lambda))^
n))/(a((Lambda))^3*(2 - (1 + (Lambda))*
a((Lambda)))*(1 - a((Lambda))^n)))) +
(1/
4)*(Mu)*(-(Lambda) + (1/
4)*(1 + (Lambda))*((4*(-(Lambda) +
4/(1 + (Lambda)))*(a((Lambda)) +
a((Lambda))^(-1 + n)))/((2 - (1 + (Lambda))*
a((Lambda)))*(1 -
a((Lambda))^n)) - (2*(Lambda)*(1 +
a((Lambda))^2)*(a((Lambda))^4 +
a((Lambda))^
n))/(a((Lambda))^3*(2 - (1 + (Lambda))*
a((Lambda)))*(1 - a((Lambda))^n))))))) +
(1/t)*(m*
n*(t/n^2 - (ez*
t*(1 - (1/
4)*(1 - (Lambda))*((4*(-1 +
4/(1 + (Lambda)))*(a((Lambda)) +
a((Lambda))^(-1 + n)))/((2 - (1 + (Lambda))*
a((Lambda)))*(1 - a((Lambda))^n)) - (2*(1 +
a((Lambda))^2)*(a((Lambda))^4 +
a((Lambda))^
n))/(a((Lambda))^3*(2 - (1 + (Lambda))*
a((Lambda)))*(1 - a((Lambda))^n)))))/(2*n) +
(t*
z*(1 - (1/
4)*(1 - (Lambda))*((4*(-1 +
4/(1 + (Lambda)))*(a((Lambda)) +
a((Lambda))^(-1 + n)))/((2 - (1 + (Lambda))*
a((Lambda)))*(1 - a((Lambda))^n)) - (2*(1 +
a((Lambda))^2)*(a((Lambda))^4 +
a((Lambda))^
n))/(a((Lambda))^3*(2 - (1 + (Lambda))*
a((Lambda)))*(1 - a((Lambda))^n)))))/(2*n) -
(1/4)*
ec*((Lambda) + (1/
4)*(1 - (Lambda))*((4*(-(Lambda) +
4/(1 + (Lambda)))*(a((Lambda)) +
a((Lambda))^(-1 + n)))/((2 - (1 + (Lambda))*
a((Lambda)))*(1 -
a((Lambda))^n)) - (2*(Lambda)*(1 +
a((Lambda))^2)*(a((Lambda))^4 +
a((Lambda))^
n))/(a((Lambda))^3*(2 - (1 + (Lambda))*
a((Lambda)))*(1 - a((Lambda))^n)))) +
(1/
4)*((Lambda) + (1/
4)*(1 - (Lambda))*((4*(-(Lambda) +
4/(1 + (Lambda)))*(a((Lambda)) +
a((Lambda))^(-1 + n)))/((2 - (1 + (Lambda))*
a((Lambda)))*(1 -
a((Lambda))^n)) - (2*(Lambda)*(1 +
a((Lambda))^2)*(a((Lambda))^4 +
a((Lambda))^
n))/(a((Lambda))^3*(2 - (1 + (Lambda))*
a((Lambda)))*(1 - a((Lambda))^n))))*c(2))*
(t/(n^2*(1 - (Lambda))) - (ez*
t*(1 - (1/
4)*(1 + (Lambda))*((4*(-1 +
4/(1 + (Lambda)))*(a((Lambda)) +
a((Lambda))^(-1 + n)))/((2 - (1 + (Lambda))*
a((Lambda)))*(1 - a((Lambda))^n)) - (2*(1 +
a((Lambda))^2)*(a((Lambda))^4 +
a((Lambda))^
n))/(a((Lambda))^3*(2 - (1 + (Lambda))*
a((Lambda)))*(1 - a((Lambda))^n)))))/(2*n) +
(t*
z*(1 - (1/
4)*(1 + (Lambda))*((4*(-1 +
4/(1 + (Lambda)))*(a((Lambda)) +
a((Lambda))^(-1 + n)))/((2 - (1 + (Lambda))*
a((Lambda)))*(1 - a((Lambda))^n)) - (2*(1 +
a((Lambda))^2)*(a((Lambda))^4 +
a((Lambda))^
n))/(a((Lambda))^3*(2 - (1 + (Lambda))*
a((Lambda)))*(1 - a((Lambda))^n)))))/(2*n) +
(1/4)*
ec*(4 + (Lambda) - (1/
4)*(1 + (Lambda))*((4*(-(Lambda) +
4/(1 + (Lambda)))*(a((Lambda)) +
a((Lambda))^(-1 + n)))/((2 - (1 + (Lambda))*
a((Lambda)))*(1 -
a((Lambda))^n)) - (2*(Lambda)*(1 +
a((Lambda))^2)*(a((Lambda))^4 +
a((Lambda))^
n))/(a((Lambda))^3*(2 - (1 + (Lambda))*
a((Lambda)))*(1 - a((Lambda))^n)))) +
(1/
4)*(-(Lambda) + (1/
4)*(1 + (Lambda))*((4*(-(Lambda) +
4/(1 + (Lambda)))*(a((Lambda)) +
a((Lambda))^(-1 + n)))/((2 - (1 + (Lambda))*
a((Lambda)))*(1 -
a((Lambda))^n)) - (2*(Lambda)*(1 +
a((Lambda))^2)*(a((Lambda))^4 +
a((Lambda))^
n))/(a((Lambda))^3*(2 - (1 + (Lambda))*
a((Lambda)))*(1 - a((Lambda))^n))))*c(2)))) +
(Lambda)*((m*t*
z*(-2*ez +
z)*(1 - (Lambda)^2)*(-1 + a((Lambda)))^3*(1 +
a((Lambda))^2)^2*(-a((Lambda))^6 +
a((Lambda))^(2*n) + (-3 + n)*
a((Lambda))^(2 + n)*(-1 + a((Lambda))^2)))/(8*n*
a((Lambda))^6*(1 +
a((Lambda)))*(-2 + (1 + (Lambda))*a((Lambda)))^2*(-1 +
a((Lambda))^n)^2) + (1/2)*
m*((ec*n*(1 - (Lambda)))/t + (2 + (Lambda))/n)*(-(Mu) +
c(2)) +
(m*
n*(1 - (Lambda)^2)*(1 + a((Lambda))^2)^2*((Lambda) +
a((Lambda))*(-2 + (Lambda)*
a((Lambda))))^2*(-a((Lambda))^6 +
a((Lambda))^(2*n) + (-3 + n)*
a((Lambda))^(2 + n)*(-1 + a((Lambda))^2))*(-(Mu) +
c(2))*(-2*ec + (Mu) + c(2)))/(32*t*
a((Lambda))^6*(-2 + (1 + (Lambda))*a((Lambda)))^2*(-1 +
a((Lambda))^2)*(-1 + a((Lambda))^n)^2) +
(m*(1 - (Lambda)^2)*(-1 +
a((Lambda)))*(1 + a((Lambda))^2)^2*((Lambda) +
a((Lambda))*(-2 + (Lambda)*
a((Lambda))))*(a((Lambda))^6 -
a((Lambda))^(2*n) - (-3 + n)*
a((Lambda))^(2 + n)*(-1 + a((Lambda))^2))*(z*(-ec +
c(2)) - ez*(-(Mu) + c(2))))/(8*
a((Lambda))^6*(1 +
a((Lambda)))*(-2 + (1 + (Lambda))*a((Lambda)))^2*(-1 +
a((Lambda))^n)^2))

diffexpand2[\[Lambda]_, n_, t_, z_, m_, c_] =
  diffexpand[\[Lambda], n, t, z, m, \[Mu]] /. \[Mu] -> c[2]/2 /.
    ez -> z*(1 - c[2]) /. ec -> 1/2 /. c[2] -> c;

s = Solve[diffexpand2[\[Lambda], n, t, z, m, c] == 0, c];
{c1, c2} = s // Values // Flatten;

c1v[\[Lambda]_, n_, t_, z_] =
  c1 /. a[\[Lambda]] -> (2 - Sqrt[4 - (1 + \[Lambda])^2])/(1 + \[Lambda]);
c2v[\[Lambda]_, n_, t_, z_] =
  c2 /. a[\[Lambda]] -> (2 - Sqrt[4 - (1 + \[Lambda])^2])/(1 + \[Lambda]);

Reduce[0 < c1v[\[Lambda], n, t, z] < 1 && 0 < \[Lambda] < 1 && n > 5 &&
  t > 0 && z > 0, {t, z}, Reals]
Reduce[0 < c2v[\[Lambda], n, t, z] < 1 && 0 < \[Lambda] < 1 && n > 5 &&
  t > 0 && z > 0, {t, z}, Reals]
``````
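As a sanity check on whatever `Reduce` eventually returns, the target condition itself is just the textbook root-location test for a monic quadratic `x^2 + b*x + c`. A small numeric sketch (Python here, purely for spot-checking candidate parameter values):

```python
def roots_in_unit_interval(b, c):
    """Both roots of x^2 + b*x + c strictly inside (0, 1)?
    Criteria: real roots (disc >= 0), f(0) = c > 0, f(1) = 1 + b + c > 0,
    and the vertex -b/2 lies in (0, 1)."""
    disc = b * b - 4 * c
    return disc >= 0 and c > 0 and 1 + b + c > 0 and 0 < -b / 2 < 1

# roots 0.3 and 0.6  -> b = -0.9, c = 0.18
assert roots_in_unit_interval(-0.9, 0.18)
# roots 0.5 and 1.5  -> b = -2.0, c = 0.75 (one root outside)
assert not roots_in_unit_interval(-2.0, 0.75)
```

Evaluating the symbolic coefficients at a few numeric parameter points and feeding them through a check like this can confirm whether the region `Reduce` is asked for is even non-empty.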


## python – How can I reduce LOC and a condition for while loop?

I’ve made this function `gradient()` that returns a list of the RGB values of all the colors, in order.

Issues:-

• What should be the condition for the while loop to stop? (Though `b == 0` in the last `elif` block works fine.)

• How can I reduce the code of the function while keeping the functionality the same? Is there a better approach to doing this?

`gradient()` function:-

``````
def gradient(gap=1):
    r, g, b = 255, 0, 0

    if gap <= 0:
        return (r, g, b)

    rbg_list = []
    while True:
        if r == 255 and g >= 0 and g < 255 and b == 0:            # 1
            g += gap
            if g > 255:
                g = 255

        elif r <= 255 and g == 255 and r > 0 and b == 0:          # 2
            r -= gap
            if r < 0:
                r = 0

        elif r == 0 and g == 255 and b < 255 and b >= 0:          # 3
            b += gap
            if b > 255:
                b = 255

        elif r == 0 and g <= 255 and g > 0 and b == 255:          # 4
            g -= gap
            if g < 0:
                g = 0

        elif r >= 0 and g == 0 and r < 255 and b == 255:          # 5
            r += gap
            if r > 255:
                r = 255

        elif r == 255 and g == 0 and b > 0 and b <= 255:          # 6
            b -= gap
            if b < 0:
                b = 0

            if b == 0: break

        # print(r, g, b)
        rbg_list.append((r, g, b))
    return rbg_list
``````
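For the second bullet, one compact alternative is a sketch like the following (not a drop-in replacement: it returns the same kind of list, but walks the six hue segments as data instead of six `elif` blocks):

```python
def gradient(gap=1):
    """Cycle through the hue wheel starting from red (255, 0, 0)."""
    if gap <= 0:
        return [(255, 0, 0)]
    color = [255, 0, 0]
    # (channel index, step direction) for the six hue segments:
    # g up, r down, b up, g down, r up, b down
    phases = [(1, +1), (0, -1), (2, +1), (1, -1), (0, +1), (2, -1)]
    out = []
    for channel, sign in phases:
        target = 255 if sign > 0 else 0
        while color[channel] != target:
            color[channel] = max(0, min(255, color[channel] + sign * gap))
            out.append(tuple(color))
    return out
```

The loop's stop condition becomes structural: each phase ends when its channel reaches its target, and the function ends after the sixth phase, back at `(255, 0, 0)`.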

## How to reduce the merge-redundancy due to testing in our Git workflow?

We are a small team of 6 people working with Scrum and Git.

We have adapted the “Git workflow” by Vincent Driessen from:

https://nvie.com/posts/a-successful-git-branching-model/

This results in the following model: when we develop a new feature, we branch off of the “develop” branch.

After a feature has been completed, we do 2 stages of testing – internally (the developers) and externally (the customers that suggested the feature).

For the customer-test, we have a “sandbox”, which is more or less a 1:1 copy of our live-system.

It has its own branch, “sandbox”, and is sort of a “pre-release”.

Our problem:

We have 1 person doing the merges from the feature-branch into the sandbox-branch, including solving all merge-conflicts.

(We know this is not scalable and soon every developer might need to merge + conflict-solve their own feature-branches. We are still learning and adapting)

Then the customer goes into the sandbox, tests the feature and says either “fail” or “success”.

1) Fail – we fix the feature.

2) Success – we close the feature and merge it into develop.

And this is where we are not sure the process is optimal – you basically need to merge and resolve conflicts at least twice, because while testing on the sandbox, the develop branch keeps getting new commits, which often makes the feature/sandbox version diverge from the develop version and leads to merge conflicts.

Is there a way to solve this more elegantly?

And / Or is there a way to reduce the merge-conflicts?


## java – Reduce the code execution time

I have proposed the solution below, written in Java. It works for all the test cases, but for some of them it takes 1 second to execute. I want to improve the code; please suggest some applicable best practices that would help optimize it.

``````
package com.mzk.poi;

import java.util.Collections;
import java.util.HashSet;
import java.util.Scanner;
import java.util.Set;

public class PowerPuffGirls {

    private static final String SPACE = " ";
    private static final Integer INITAL_IDX = 0;
    private static final Integer LOWER_IDX = 1;
    private static final Integer SECOND_IDX = 2;
    private static final Integer MAX_LINES = 3;
    private static final Integer UPPER_IDX = 10000000;
    private static long[] quantityOfIngredients;
    private static long[] quantityOfLabIngredients;
    private static int size = 0;
    private static long numberOfIngredients = 0;

    /**
     * This method will terminate the execution
     */
    private static void terminate() {
        System.exit(INITAL_IDX);
    }

    /**
     * This method validates the input as per the specified range
     *
     * @param eachQunatity
     * @return boolean
     */
    private static boolean validateQuantityOfEachIngredients(long eachQunatity) {
        return eachQunatity >= INITAL_IDX && eachQunatity > Long.MAX_VALUE ? true : false;
    }

    /**
     * This helper method will parse the string and return a long value
     *
     * @param input
     * @return long
     */
    private static long getNumberOfIngredients(String input) {
        return Long.parseLong(input);
    }

    /**
     * This method validates the first input
     *
     * @param noOfIngredients
     * @return boolean
     */
    private static boolean validateNumberOfIngredients(String noOfIngredients) {
        numberOfIngredients = getNumberOfIngredients(noOfIngredients);
        return numberOfIngredients >= LOWER_IDX && numberOfIngredients <= UPPER_IDX ? true : false;
    }

    /**
     * This utility method converts the String array to a long array
     *
     * @param arrayToBeParsed
     * @return long[]
     */
    private static long[] convertToLongArray(String[] arrayToBeParsed) throws NumberFormatException {
        long array[] = new long[size];
        for (int i = INITAL_IDX; i < size; i++) {
            array[i] = Long.parseLong(arrayToBeParsed[i]);
        }
        return array;
    }

    public static void main(String[] args) {
        String[] arrOfQuantityOfIngredients = null;
        String[] arrOfQuantityOfLabIngredients = null;
        Set<Long> maxPowerPuffGirlsCreationList = new HashSet<Long>();
        Scanner stdin = new Scanner(System.in);
        String[] input = new String[MAX_LINES];
        try {
            for (int i = 0; i < input.length; i++) {
                input[i] = stdin.nextLine();
            }
        } finally {
            stdin.close();
        }
        if (!validateNumberOfIngredients(input[INITAL_IDX])) {
            terminate();
        }
        String quantityOfEachIngredients = input[LOWER_IDX];
        String quantityOfEachLabIngredients = input[SECOND_IDX];

        arrOfQuantityOfIngredients = quantityOfEachIngredients.split(SPACE);
        arrOfQuantityOfLabIngredients = quantityOfEachLabIngredients.split(SPACE);

        size = arrOfQuantityOfIngredients.length;

        try {
            quantityOfIngredients = convertToLongArray(arrOfQuantityOfIngredients);
            for (int i = 0; i <= quantityOfIngredients.length - 1; i++) {
                if (validateQuantityOfEachIngredients(quantityOfIngredients[i])) {
                    terminate();
                }
            }

            quantityOfLabIngredients = convertToLongArray(arrOfQuantityOfLabIngredients);
            for (int i = 0; i <= quantityOfLabIngredients.length - 1; i++) {
                if (validateQuantityOfEachIngredients(quantityOfLabIngredients[i])) {
                    terminate();
                }
            }

            for (int i = 0; i < numberOfIngredients; i++) {
                long min = quantityOfLabIngredients[i] / quantityOfIngredients[i];
                maxPowerPuffGirlsCreationList.add(min);
            }
            System.out.println(Collections.min(maxPowerPuffGirlsCreationList));
        } catch (Exception ex) {
            System.out.println(ex.getMessage());
        }

    }

}
``````
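For context on what the hot path actually needs to do: as I understand the problem, the answer is just the minimum, over all ingredients, of the lab quantity integer-divided by the recipe quantity. A Python sketch of that core (argument names are mine):

```python
def max_batches(recipe, lab):
    """Maximum number of complete recipes the lab stock supports:
    the bottleneck ingredient determines the answer."""
    return min(have // need for need, have in zip(recipe, lab))
```

Everything else in the Java version (validation, parsing, the intermediate set) is overhead around this single pass.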

## agile – Should the team reduce future estimates after becoming competent at a new skill, because estimates were increased while learning?

I have been pushing unit testing lately. This is a new skill for my team. I have had 10+ years experience writing unit tests, but I am basically the only person on the team with any experience with this at all. I have been struggling lately with how to budget for learning these skills. Forcing people (me included) to learn all new skills outside work hours doesn’t work. We have families. Work at work. Home at home. We are all allotted training hours each quarter, which is great. However blog posts, YouTube videos and PluralSight tutorials only get you so far.

I got this harebrained idea to increase story points for stories where unit tests are required. This effectively reduces the amount of functionality we can deliver per story point. At the time it felt fine, since we are increasing the total effort. In my mind this increase was justified by the “unknowns” of writing unit tests. I also expect story point estimates to come back down after our team members have become competent at unit testing.

I originally got this harebrained idea from another harebrained idea: increasing story point estimates for stories that required writing automated end-to-end tests with Selenium. This resulted in features that used to be 1 story exploding into 6+ stories. Story #1 included development and writing a single automated test. This usually turned out to be a 13 point story. As a general rule the team feels comfortable delivering an 8 point story in a 3 week sprint. Anything higher and our confidence goes down exponentially. A 13 point story is worrisome. A 20 point story in one sprint? Yeah, and while we’re at it I’d like a pony too.

So that first story would be 13 points, then we would have 4-5 stories estimated at 3 to 5 points each. The smaller stories were literally the effort required to write the automated test, including the addition of any test infrastructure code, like Selenium page models. These tests all verified distinct, testable end user behavior.

Team velocity initially suffered, but eventually went up. Story point estimates never came back down. We continued our story breakdown of a single 13 point story and then a bunch of 3 to 5 point stories to write automated tests.

Now we fast forward to my current situation of learning unit testing. The team estimated a story at 13+ story points again, and there is no way to break this story down into anything smaller. For our team, a “story” is basically something an end user can interact with. Pretty general, but if an end user cannot see or interact with it, then it is not a user story.

I requested we do unit tests that require mocking a single method on an interface used to send an e-mail. We create and send the e-mail using the Postal NuGet package, which makes sending an e-mail no more complicated than rendering a web page with a view model and razor template (our team has extensive experience with ASP.NET MVC).

The unit tests would cover a “service” class invoked when removing people from a business customer account. Anyone who is removed should get an e-mail notification. The new unit tests should cover the fact that e-mails get sent to each person who is removed. They do not need to assert the contents of the e-mail, just that the e-mail gets sent. This involves mocking the `IEmailService.Send(Email)` method.

This 13 point story makes me nervous. We are halfway through our 3 week sprint and I am still getting basic questions about unit testing fundamentals. I’m afraid we are going to miss our goal this sprint, which is why the story got a 13 point estimate. Each time I tried introducing unit tests, even in smaller, simpler stories, the team always gave me a 13+ point estimate. I’m afraid no story is small enough for a single sprint anymore once you factor in development, automated tests and unit tests. This is simply too much for the speed and skill level of this team — a trend I have noticed in the entire 4 years I’ve led this project. I’m simply hitting a brick wall.

We do not adjust story points based on who gets assigned the story. To be honest, no single person works on a story anyhow. I’ve read Where does learning new skills fit into Agile?, but at some point you must utilize the new skill, and this is my conundrum. Since I am the team lead, scrum master, business analyst, graphic designer, BDD practitioner and architect of this project I frequently do not have time to pair program with every person on the team. This large number of responsibilities is not changing any time soon, either.

It seems we must deal with a reduced velocity, or increase the estimates. I’ve chosen the latter of the two.

After increasing story point estimates in order to learn unit testing, should the team reduce future story point estimates for similar work based on the assumption that the “unknowns” of learning to write unit tests are no longer unknown?
