## sharpness – What are the optical problems in these test images and what are the causes?

As recommended to me in the question *How to determine which lenses are good for testing the quality of teleconverters?*, I took test photos to try to determine the quality of a lens / teleconverter combination. No, I didn't have a proper test chart, but I figured the barcode on a box I had lying around would do as a stand-in.

The composite below joins two 1:1 crops from my tests; I corrected the color cast and the color fringing during raw conversion in Digikam.
The first image is the bare lens, while the second uses the Kenko Pro300 3X teleconverter.

Test photos were taken with a Canon 5D Mark II and an adapted SMC Pentax-M 135mm f/3.5.

• Focus: Manual
• Mode: Aperture priority
• Remote release: Yes
• Tripod: Yes
• Mirror lock-up: No (will use this when taking a new shot)

As we can see, neither image is sharp when examined at the pixel level, and the second image (with the teleconverter) is, unsurprisingly, even less sharp.

What are the optical issues in these images, and can they be identified from the attached files? Is my focus simply off? Is it the camera's built-in anti-aliasing filter? Or is it the quality of the lens (would a better lens be sharper with the rest of this setup)?

## user research – How to test USSD flows?

I am working on some USSD (Unstructured Supplementary Service Data) flows that help health workers in remote communities without Internet access collect basic data (height, age, weight) from local residents.

My question is: how can I test the usability and the wording (local-language translation) of USSD flows?

## Reset MongoDB's cache for test performance – Windows

I have recently been working with MongoDB, and I am trying to benchmark it against a collection of 2 million records. I have to clear the cache each time I run the test, and I don't want to restart the computer/server for every run. I tried clearing the collection's cache, but the result was not what I expected. Is there a way to clear the cache on Windows like there is on Linux?

## c++ – Programming test for a job in Game Dev – expected levels of documentation, etc.

If this is not the right place to ask this question – let me know and I'll be happy to ask it elsewhere!

I am finishing a C++ skills test for a "Junior Engine Programmer" position at a game studio in the United Kingdom. The test is to create an orientation demo and render it on screen. I won't go into the details of the test too much, but the brief doesn't mention any documentation, unit testing, etc.

One person told me that I should definitely include both, even though I'm not asked to, and another that I should use my time more wisely and focus on an excellent implementation. What is the done thing here? The only related point the brief mentions is to indicate where I have used third-party libraries.

Is there anything else I should consider submitting along with the implementation? I'm thinking of technical specifications such as class diagrams, or anything really.

Thank you!

## math software – Test if a polynomial is in the algebra of other polynomials

A collection $\Sigma$ of polynomials is an algebra if: (1) $\lambda f + \eta g \in \Sigma$ for all $f, g \in \Sigma$ and $\lambda, \eta \in \mathbb{R}$, and (2) $f, g \in \Sigma$ implies $fg \in \Sigma$. We say that $P$ is in the algebra of $\{P_1, \dots, P_n\}$ if $P$ is in the smallest algebra containing $P_1, \dots, P_n$.

I was wondering if there is a way, in some computer algebra software, to check whether a given $P$ is in the algebra of a given collection $P_1, \dots, P_n$.

Example: take $n \ge 1$ and let $P_1 = x_1 + \dots + x_n$, $P_2 = x_1^2 + \dots + x_n^2$, $\dots$, $P_n = x_1^n + \dots + x_n^n$. Then all $n$ of the following elementary symmetric functions are in the algebra generated by $P_1, \dots, P_n$:
$$x_1 + \dots + x_n$$
$$x_1 x_2 + \dots + x_{n-1} x_n$$
$$x_1 x_2 x_3 + \dots + x_{n-2} x_{n-1} x_n$$
$$\dots$$
$$x_1 \dots x_n$$
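One standard technique for this kind of question is subalgebra membership via Gröbner bases: introduce tag variables $y_i$, form the ideal generated by $y_i - P_i$, compute a Gröbner basis in an order that eliminates the $x$ variables, and reduce $P$; then $P$ lies in the subalgebra exactly when the remainder involves only the $y_i$ (and the remainder expresses $P$ in terms of the $P_i$). A minimal sketch in SymPy, for the $n = 2$ case of the example above:

```python
from sympy import symbols, groebner, expand

# Subalgebra membership via Groebner bases: P is in the algebra generated by
# P1..Pn iff the normal form of P modulo the ideal <y_i - P_i>, computed in an
# order eliminating the x variables, contains only y variables.
x1, x2, y1, y2 = symbols("x1 x2 y1 y2")
P1 = x1 + x2          # power sum p1
P2 = x1**2 + x2**2    # power sum p2
P = x1 * x2           # elementary symmetric e2, which equals (P1**2 - P2)/2

G = groebner([y1 - P1, y2 - P2], x1, x2, y1, y2, order="lex")
remainder = G.reduce(P)[1]  # normal form of P modulo the Groebner basis
in_subalgebra = remainder.free_symbols <= {y1, y2}
print(in_subalgebra, remainder)
```

If the test succeeds, substituting $y_i \mapsto P_i$ into the remainder recovers $P$, which gives the explicit representation as a bonus.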

I would appreciate any help.

## c# – Unit test of an asynchronous TCP server

I built an asynchronous multi-client TCP server for RPC use. It works well, but I have had trouble testing some behaviors individually:

1. Connect 2 clients; the client count is 2
2. Connect 1 client, then disconnect it; the client count is zero

I want to test that the server is robust in handling disconnections and multiple connections. The test below fails purely because of scheduling.

Unit test

```
[TestMethod]
public void Start_TwoConnections_ClientsIsTwo()
{
    var handler = new HandlerStub();
    using (server = new APIServer(handler))
    using (var client1 = new TcpClient())
    using (var client2 = new TcpClient())
    {
        server.Start();
        // await Task.Delay(500); <-- This will fix the problem, but is surely unreliable.
        Assert.AreEqual(2, server.Clients);
    }
}
```

Server extract

```
public void Start()
{
    // Root try-catch, for unexpected errors
    try
    {
        IsRunning = true;
        do // Retry loop
        {
            // Start server errors
            try
            {
                server.Start();
            }
            catch (SocketException ex)
            {
                Console.WriteLine(string.Format("Error {0}: Failed to start server.", ex.ErrorCode));
            }
        }
        while (!server.Server.IsBound && !IsDisposed);
    }
    catch (Exception ex)
    {
        IsRunning = false;
        Console.WriteLine(string.Format("Unexpected Error: {0}", ex.ToString()));
        throw ex;
    }
}

// (The signature of this async accept-loop method was lost from the post.)
{
    try
    {
        // Multi-client listener loop
        do
        {
            var connection = await AcceptConnection();
        }
        while (!IsDisposed && IsRunning);
    }
    catch (SocketException ex)
    {
        Console.WriteLine(string.Format("Error {0}: Server socket error.", ex.ErrorCode));
        CleanupConnections();
    }
}
```

How can this code be refactored to improve its testability?
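One general way to make such assertions deterministic is to have the server expose an awaitable "client count changed" signal, so the test awaits the exact condition instead of sleeping. Here is a sketch of that idea in Python's asyncio (the C# analogue would be an event or `TaskCompletionSource` the test can await; all names below are illustrative, not the poster's API):

```python
import asyncio

class CountingServer:
    # Illustrative sketch: the server publishes a "client count changed"
    # signal that tests can await, instead of guessing with a fixed delay.
    def __init__(self):
        self.clients = 0
        self._changed = asyncio.Event()

    async def start(self):
        self._server = await asyncio.start_server(self._handle, "127.0.0.1", 0)
        return self._server.sockets[0].getsockname()[1]  # ephemeral port

    async def _handle(self, reader, writer):
        self.clients += 1
        self._changed.set()
        try:
            await reader.read(-1)  # serve until the client disconnects (EOF)
        finally:
            self.clients -= 1
            self._changed.set()
            writer.close()

    async def wait_for_clients(self, n, timeout=5.0):
        # Await the exact condition under test, with a hard upper bound.
        while self.clients != n:
            self._changed.clear()
            await asyncio.wait_for(self._changed.wait(), timeout)

async def demo():
    server = CountingServer()
    port = await server.start()
    _, w1 = await asyncio.open_connection("127.0.0.1", port)
    _, w2 = await asyncio.open_connection("127.0.0.1", port)
    await server.wait_for_clients(2)   # deterministic: no Task.Delay guesswork
    two = server.clients
    w1.close()
    w2.close()
    await server.wait_for_clients(0)   # disconnects are observed the same way
    zero = server.clients
    server._server.close()
    await server._server.wait_closed()
    return two, zero

print(asyncio.run(demo()))
```

The timeout only bounds a failing test's runtime; a passing test returns as soon as the condition holds, unlike a fixed delay that always pays the full wait.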

## dnd 5e – Ramifications of substituting an attack roll for an ability check

What changes when I substitute an attack roll for an ability check? What mechanics apply to one, but not to the other?

Example 1:

A class feature that modifies the special shove attack

### Eldritch bash

You can use a bonus action to attempt to shove a creature within 5 feet of you with your shield, making a melee attack roll in place of your Strength (Athletics) check, contested by the opponent's Strength (Athletics) or Dexterity (Acrobatics) check rather than made against its AC.

Example 2:
A class feature/feat that allows picking a lock with a strike rather than a Dexterity (thieves' tools) check.

### Percussive Picking

As an action, you can make a melee attack against the lock, treating the lock's picking DC as its AC. On a hit, the lock opens.

(example wording subject to change)

## Strategies for testing environment variables – Software Engineering Stack Exchange

I am setting up an API endpoint for the OPTIONS request used in the pre-flight check on CORS calls.

The Allowed-Origins host differs between local, test, and prod, so I moved this value into a dotenv file.

Now, when I write a unit test to validate it, there is a problem.

Say local has a value of localhost:3000, test has test.site.com, and prod has site.com.

So the test that checks the headers

```
assertTrue(response.getHeaders()["Access-Control-Allow-Origin"][0] === "localhost:3000")
```

will only pass locally.

Some ideas I had, and why I think they wouldn't cover all cases:

Hard-coding a value, as above, for each env condition would leave the test failing in any other environment.

Loading the env file and validating against it is useless, because then I am just confirming that the file equals the file.

Loading the env file for each env and hard-coding the expectation in the assertion could work; the plan is not to run the tests in local or prod.
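A common way out is to have the test own the environment: set the variable inside the test to a sentinel value, exercise the code path, and assert against that same sentinel, so the test no longer depends on which dotenv file the machine happens to carry. A minimal sketch (the names `allowed_origin` and `ALLOWED_ORIGIN` are assumptions for illustration):

```python
import os

def allowed_origin():
    # Stand-in for the application code that reads the dotenv-provided value
    # (the variable name ALLOWED_ORIGIN is an assumption for illustration).
    return os.environ["ALLOWED_ORIGIN"]

def test_allowed_origin_header():
    # The test pins the environment itself, so the assertion is identical
    # in local, test, and prod pipelines.
    os.environ["ALLOWED_ORIGIN"] = "origin.example"
    try:
        assert allowed_origin() == "origin.example"
    finally:
        del os.environ["ALLOWED_ORIGIN"]

test_allowed_origin_header()
print("ok")
```

This tests the wiring (the header is read from the configured value), while which value each environment carries becomes a deployment concern rather than a test concern.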

## unit tests – Is it possible to correctly test if/else trees without coding to the implementation?

Brevity escapes me. I started commenting on randomized data-driven tests and just had to keep going.

In this case, with only four possible inputs, I agree with listing all the possibilities. However, this is a simple case.

I really want to emphasize that randomized data-driven tests are useful. They have the potential to express the behavior of a complex system better than unit tests. If testing-as-documentation is a goal, that's fine: it means you can look at the tests and know what to expect from the system. Moreover, this is documentation that is itself tested, so there is no chance of it drifting out of sync with the real system.

To be clear, I am not suggesting a single test that covers all possible inputs of the exposed API. I will come back to this.

We must be clear: randomized data-driven tests are not unit tests. Furthermore, their support in automated test frameworks is poor… because most of these frameworks were originally built for unit tests.

The common argument against them is the lack of repeatability: we need a test framework that will remember, record, and reproduce the failing conditions it finds. Not having a good framework can be a reasonable excuse for not using such tests.

We have the idea that if all the tests pass, the system is OK. This presupposes a methodology that ensures we have written enough tests to capture the behavior of the system. Otherwise, all tests passing could just mean we haven't done enough testing to find out where the system breaks. The methodology I would suggest for achieving this, where possible, is London-style test-driven design.

Note the "where possible". Sometimes it isn't. My pet example is testing a threading primitive. Most such tests will have an inherent race condition, so to prevent the test framework from freezing (which is a Bad Thing™), we introduce a timeout. At that point we have moved from unit tests to acceptance tests. This is why the common advice for building such threading code is to derive it from first principles… but enough about that.

Your tests characterize points in the input space. If the input space is large enough (say, all integers or all floats), you cannot write a test for every possible value… well, you could try, with some code generation, I suppose. However, note that the input space can be practically infinite, for example, the set of all possible strings.

There is a human in the loop; there has to be. Someone conceptualizes the behavior over an input space from the particular data points that are tested. It is useful to express these tests in more abstract terms. If I test an identity function, I can only test that it returns what I put in for a limited set of values. If a human picking a value arbitrarily is good enough for a unit test, then the system picking a value at random should also be good enough.
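To make the identity-function point concrete, a minimal sketch: the machine's random choice plays exactly the role the human's arbitrary choice plays in a hand-written unit test, only at far more points.

```python
import random

def identity(x):
    return x

# A hand-written unit test would pick a few arbitrary values; a randomized
# test lets the machine pick them, many more of them, each run.
for _ in range(1000):
    value = random.uniform(-1e9, 1e9)
    assert identity(value) == value
print("identity holds on 1000 random floats")
```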

Back to how to approach the situation at hand with randomized data-driven tests… you say:

An administrator can open any file

That's one test. You fix that the test user is an administrator, and you let the file be random. After all, the file shouldn't matter.

any public file can be opened by any user

Another test. You now fix that the file is public, and let the user be random.

A non-administrator user cannot open a non-public file

Another test. Nothing random in this one. We could say it is a non-randomized test… and then we can say that the randomized tests complement the non-randomized ones.

By the way, you would tell the system how many combinations of values to test. And thanks to that, yes, that is enough.
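The three properties above can be sketched directly; a toy `can_open` stands in for the real system, and the randomization uses the standard library rather than a property-testing framework:

```python
import random

def can_open(is_admin, is_public):
    # Toy access rule standing in for the system under test (an assumption
    # for illustration): admins open anything; public files open for anyone.
    return is_admin or is_public

# Property 1: an administrator can open any file -- fix the user, randomize the file.
for _ in range(100):
    assert can_open(True, random.choice([True, False]))

# Property 2: any public file can be opened by any user -- fix the file, randomize the user.
for _ in range(100):
    assert can_open(random.choice([True, False]), True)

# Property 3: a non-admin cannot open a non-public file -- nothing left to randomize.
assert not can_open(False, False)
print("all three properties hold")
```

Each test fixes exactly the part of the input the requirement constrains and randomizes the rest, which is the structure argued for above.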

Most importantly, if you are following a test-driven design methodology, you will need to find the simplest way to make those failing tests pass. And that will lead to the correct implementation.

This works because of the close correspondence between requirements and tests. If I hard-code the inputs, the simplest implementation that passes the tests could be to simply return the correct value for the hard-coded inputs, until I have hard-coded enough inputs… and in this case, I firmly believe in hard-coding all the combinations.

Of course, the fear with randomness is that I could end up in a situation where all the tests pass, yet the system is not OK, because by chance the values chosen by the tests were exactly those that avoided a latent bug.

The same is true if you, the human, choose the values you put in your unit tests. All tests can pass while a latent bug goes unfound, because it does not occur with the inputs you chose.

In this regard, randomized data-driven tests have the upper hand: they could, by chance, discover the bug. We must, of course, keep track of the values used in the randomized test so that we can reproduce it… which then also functions as a regression test.
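Keeping track of those values is usually as simple as logging the seed: re-running with the logged seed replays the exact failing inputs, which is what turns a chance discovery into a regression test. A sketch (the `TEST_SEED` variable name is an assumption):

```python
import os
import random

# Choose a fresh seed unless one is pinned via the environment, and log it:
# a failure report that includes the seed is fully reproducible.
seed = int(os.environ.get("TEST_SEED", random.randrange(2**32)))
print(f"randomized test run, seed={seed}")
rng = random.Random(seed)

inputs = [rng.randint(-1000, 1000) for _ in range(5)]
# Re-seeding replays the identical inputs -- the basis for a regression test.
replay = random.Random(seed)
assert inputs == [replay.randint(-1000, 1000) for _ in range(5)]
```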

Of course, you will avoid failing to characterize the behavior of the system in your tests if you have a good methodology (and you work in a problem domain where this is possible). Well, the same methodology can work with randomized data-driven tests. And those tests will be more succinct and express the behavior of the system, for the benefit of the humans who read them, in cleaner tests (assuming you have a good framework for this kind of testing).

## python – Test OCR on generated images

I would recommend thinking of `readimage.py`'s `tess()` and `cune()` as a sort of black box that applies OCR. Anyway, this code is intended for a science fair project where I am testing Tesseract's and Cuneiform's abilities to read text on images with different font sizes, colors, etc. Any thoughts, obvious mistakes, and so on?

`main.py:`

```
# Command-line arguments and other functionality
import os
import sys
import math
import random
import ast
import argparse

# Image handling and OCR
import drawimage
import readimage
import distance

# Constants
DIMENSIONS = (850, 1100, 50, 50)  # Width, Height, Side Margin, Top Margin
DICTLOC = "dict.txt"
HEADSIZE = 40  # Title font height (this value was lost from the post; 40 is an assumed placeholder)
COLORS = {
    "R": ((255, 0, 0), "Red"),
    "G": ((0, 255, 0), "Green"),
    "W": ((255, 255, 255), "White"),
    "B": ((0, 0, 0), "Black"),
    "Y1": ((255, 252, 239), "Yellow1"),
    "Y2": ((255, 247, 218), "Yellow2"),
    "Y3": ((255, 237, 176), "Yellow3"),
    "Y4": ((255, 229, 139), "Yellow4"),
}

parser = argparse.ArgumentParser()
parser.add_argument("-p", "--pages", type=int, help="Pages per Setting", default=1)
parser.add_argument("-f", "--fonts", help="Comma-Separated List of Fonts", default="freefont/FreeMono.ttf")
parser.add_argument("-tc", "--txtcolors", help="Comma-Separated Color Initials", default="B")
parser.add_argument("-bc", "--bgcolors", help="Comma-Separated Color Initials", default="W")
parser.add_argument("-bs", "--bodysizes", type=str, help="Comma-Separated Body Font Heights", default="25")
parser.add_argument("-v", "--verbose", action="store_true")  # restored: args.verbose was used below but the flag was lost from the post
args = parser.parse_args()
pages = args.pages
fonts = args.fonts.split(",")
# Lists, not generators: these are iterated repeatedly in the nested loops below
txtcolors = [COLORS[c] for c in args.txtcolors.split(",")]
bgcolors = [COLORS[c] for c in args.bgcolors.split(",")]
bodysizes = [int(s) for s in args.bodysizes.split(",")]
verbose = args.verbose

# Grab dictionary as a list of words
with open(DICTLOC) as f:
    worddict = f.read().split("\n")

def image_stats(file, correct, language="eng", tessconfig=""):
    # The two OCR calls were lost from the post; restored from readimage.py,
    # whose functions return (output, elapsed_time).
    tess = {}
    tess_out, tess["time"] = readimage.tess_ocr(file, language, tessconfig)
    tess_out = " ".join(tess_out.split()).strip()
    tess["dist"] = distance.lev(correct, tess_out)
    tess["per"] = round((len(correct) - tess["dist"]) / len(correct), 4)
    tess["tpc"] = round(tess["time"] / len(correct) * 1000, 4)

    cune = {}
    cune_out, cune["time"] = readimage.cune_ocr(file, language)
    cune_out = " ".join(cune_out.split()).strip()
    cune["dist"] = distance.lev(correct, cune_out)
    cune["per"] = round((len(correct) - cune["dist"]) / len(correct), 4)
    cune["tpc"] = round(cune["time"] / len(correct) * 1000, 4)
    return tess, cune

def main():
    if os.path.exists("fullout.txt"):
        os.remove("fullout.txt")
    if os.path.exists("avgout.txt"):
        os.remove("avgout.txt")
    header = "\tCuneiform\tTesseract" * 4 + "\n"
    fullout = open("fullout.txt", mode="a")
    fullout.write(header)
    avgout = open("avgout.txt", mode="a")
    avgout.write(header)
    for font in fonts:
        for txtcolor in txtcolors:
            for bgcolor in bgcolors:
                fullout.write(f"Font: {font}, {txtcolor[1]} on {bgcolor[1]}" + header)
                avgout.write(f"Font: {font}, {txtcolor[1]} on {bgcolor[1]}" + header)
                for bodysize in bodysizes:
                    cune_stats = []
                    tess_stats = []
                    avgout.write(f"{bodysize}")
                    for page in range(pages):
                        fullout.write(f"{bodysize}")
                        title = drawimage.generate_words(worddict, random.randint(1, 10))
                        body = drawimage.generate_words(worddict, 10000)
                        img, correct = drawimage.create_page(title, body, DIMENSIONS, txtcolor[0], bgcolor[0], HEADSIZE, bodysize, font)
                        img.save("img.png")
                        correct = " ".join(correct).replace("\n", " ")
                        tess, cune = image_stats("img.png", correct)
                        tess_stats.append(tess)
                        cune_stats.append(cune)
                        fullout.write(f"\t{cune['time']}\t{tess['time']}\t{cune['tpc']}\t{tess['tpc']}\t{cune['dist']}\t{tess['dist']}\t{cune['per']}\t{tess['per']}\n")
                    cune = {}
                    tess = {}
                    for stat in cune_stats[0]:
                        cune[stat] = round(sum(i[stat] for i in cune_stats) / len(cune_stats), 4)
                        tess[stat] = round(sum(i[stat] for i in tess_stats) / len(tess_stats), 4)
                    avgout.write(f"\t{cune['time']}\t{tess['time']}\t{cune['tpc']}\t{tess['tpc']}\t{cune['dist']}\t{tess['dist']}\t{cune['per']}\t{tess['per']}\n")
    fullout.close()
    avgout.close()

if __name__ == "__main__":
    main()
```

`drawimage.py:`

```
from PIL import Image, ImageDraw, ImageFont
import random

# Turn words into lines, based on size of page and font, then return lines and height of lines
def word_space(words, font, height, spaceh=30):
    linet = ""
    lines = []
    wordnum = 0
    while wordnum < len(words):
        if len(linet) > 0:
            linet += " "
        linet += words[wordnum]
        if font.getsize(linet)[0] > DIMENSIONS[0] - (2 * DIMENSIONS[2]):
            if spaceh * (len(lines) + 1) > height:
                linet = linet[:-(len(words[wordnum]) + 1)]
                break
            else:
                linet = linet[:-(len(words[wordnum]) + 1)]
                if font.getsize(words[wordnum])[0] > DIMENSIONS[0] - (2 * DIMENSIONS[2]):
                    print("Word too long, skipping: " + words[wordnum])
                    wordnum += 1
                else:
                    lines.append(linet)
                    linet = ""
        else:
            wordnum += 1
    if linet:
        lines.append(linet)

    return lines, spaceh * len(lines)

# Add text to an image, return new image
def add_text(img, text, pos, font, fcolor):
    d = ImageDraw.Draw(img)
    d.text(pos, text, font=font, fill=fcolor)
    return img

# Draw an entire page, return image and correct text
def create_page(title, body, DIM, txtcolor, bgcolor, titlesize, bodysize, font):
    global DIMENSIONS
    DIMENSIONS = DIM
    img = Image.new('RGBA', (DIMENSIONS[0], DIMENSIONS[1]), bgcolor + (255,))
    titlefont = ImageFont.truetype(font, titlesize)
    bodyfont = ImageFont.truetype(font, bodysize)
    titlespaced, titleh = word_space(title, titlefont, DIMENSIONS[1] - 40, spaceh=titlesize + 10)
    for i, line in enumerate(titlespaced):
        # (The body of this loop was lost from the post; drawing the title
        # lines analogously to the body lines below is a plausible reconstruction.)
        img = add_text(img, line, (50, (titlesize + 10) * i + 20), titlefont, txtcolor)
    bodyspaced, margin = word_space(body, bodyfont, DIMENSIONS[1] - 40 - titleh - 20, spaceh=bodysize + 10)
    for i, line in enumerate(bodyspaced):
        img = add_text(img, line, (50, ((bodysize + 10) * i) + titleh + 20), bodyfont, txtcolor)
    return img, titlespaced + bodyspaced

# Generate and return a given number of words
def generate_words(worddict, length):
    words = []
    for j in range(length):
        word = random.choice(worddict)
        mod = random.randint(1, 10)
        if mod == 1:
            word = word.upper()
        elif mod == 2:
            word = word.capitalize()
        elif random.randint(1, 15) == 1:
            word += "."
        words.append(word)
    return words
```

`readimage.py:`

```
import subprocess
import os
import time
from PIL import Image
import pytesseract

# Functions to run either OCR engine on a given image.

def tess_ocr(file, language="eng", config=""):
    # Run and time Tesseract, return output
    start = time.time()
    out = pytesseract.image_to_string(Image.open(file), lang=language, config=config)
    return out, round(time.time() - start, 4)

def cune_ocr(file, language="eng"):
    # Run Cuneiform on image
    start = time.time()
    subprocess.call(["cuneiform", "-o", "cuneout.txt", file], stdout=subprocess.PIPE)

    # Fetch and return output
    if os.path.exists("cuneout.txt"):
        # (The read of cuneout.txt was lost from the post; restored.)
        with open("cuneout.txt") as f:
            out = f.read()
        return out, round(time.time() - start, 4)
    else:
        print("Cuneiform reported no output, returning empty string")
        return "", round(time.time() - start, 4)
```

`distance.py:`

```
import numpy

# Find Levenshtein Distance between two strings
# (substitutions are weighted 2, i.e. counted as a deletion plus an insertion)
def lev(a, b):
    sizex = len(a) + 1
    sizey = len(b) + 1
    matrix = numpy.zeros((sizex, sizey))
    for x in range(sizex):
        matrix[x, 0] = x
    for y in range(sizey):
        matrix[0, y] = y

    for y in range(1, sizey):
        for x in range(1, sizex):
            cost = 0
            if a[x - 1] != b[y - 1]:
                cost = 2
            matrix[x, y] = min(
                matrix[x - 1, y] + 1,
                matrix[x, y - 1] + 1,
                matrix[x - 1, y - 1] + cost
            )

    return int(matrix[sizex - 1, sizey - 1])
```