data structures – Algorithm for Estimating Number of Unique Monthly Visitors

Is there a way to estimate the number of unique monthly visitors to a site based on a sample of, say, one week of data? This isn’t as simple as just multiplying the number of unique visitors the first week by 4, due to the hotel problem. If 10 people visit your site the first week and the same people are the only visitors to your site the second, third, and fourth week, the total number of monthly unique visitors to your site is only 10.

I know you can use HyperLogLog (HLL) to estimate the number of unique visitors to a site in O(1) space. I’m wondering whether there is a similar approach to estimating how many unique visitors there will be after a month.
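
One standard technique for this — my suggestion, not something from the question — is capture–recapture: treat two separate weeks as two "captures" and use the overlap of visitor IDs to estimate the size of the underlying visitor pool. A minimal sketch, assuming you can retain the sets of visitor IDs for two sample weeks:

```python
def lincoln_petersen(week1_ids, week2_ids):
    """Capture-recapture (Lincoln-Petersen) estimate of the size of
    the visitor pool, from two weekly samples of visitor IDs."""
    n1, n2 = len(week1_ids), len(week2_ids)
    overlap = len(set(week1_ids) & set(week2_ids))
    if overlap == 0:
        return None  # no overlap: the samples cannot bound the pool
    return n1 * n2 / overlap

# Example: two weeks of 100 visitors each with 50 returning visitors
# suggests a pool of roughly 100 * 100 / 50 = 200 unique people.
estimate = lincoln_petersen(set(range(100)), set(range(50, 150)))
```

Note that this handles the repeat-visitor scenario in the question gracefully: if the same 10 people visit every week, the overlap is total and the estimate collapses to 10.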

optics – Estimating the optical resolution of a lens

In Tony & Chelsea Northrup’s recent video titled 5 Lies Camera Companies Tell You, at around the 5-minute mark they mention that the optical resolution of a kit lens is often lower than the resolution of the sensor. Wikipedia has a section dedicated to measuring optical resolution.

However, that section is not comprehensive. Is there a relatively simple method of estimating the resolution of a lens? I don’t have an actual use for this information, so if it is significantly easier to estimate than to calculate, an estimate is satisfactory too.

Some lenses – such as the Raspberry Pi lenses for the High Quality Camera – do quote a number for resolution, but as far as I was able to find, none of the lenses for my system (Micro Four Thirds) do.
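
One crude but easy upper bound (my sketch; real kit lenses fall well short of it because of aberrations) is the diffraction limit: an ideal lens at f-number N cannot resolve more than about 1/(λN) line pairs per mm.

```python
def diffraction_cutoff_lp_mm(f_number, wavelength_nm=550.0):
    """Diffraction-limited cutoff spatial frequency, in line pairs
    per mm, for an ideal lens at the given f-number (550 nm is the
    middle of the visible spectrum, i.e. green light)."""
    wavelength_mm = wavelength_nm * 1e-6
    return 1.0 / (wavelength_mm * f_number)

# At f/8 this is about 227 lp/mm; comparing it against the sensor's
# pixel pitch shows whether the lens or the sensor limits first.
```

This only bounds the lens from above; measuring actual resolution still requires a test chart or MTF measurement.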

algorithms – Estimating a user’s standard deviation given the avg, min, and max for various tests

Given a series of tests, where for each test we are given one user’s score, the overall minimum, the overall maximum, and the overall average, how would I estimate the standard deviation of the total score (i.e., the sum of all of a user’s tests)?

We cannot assume that the lowest-scoring person on one test was also the lowest-scoring on the next, but I think it is fair to assume that people generally stay within some score bands (although if this can be done without that assumption, that would be better).

My intuition tells me this is some sort of application of Monte Carlo methods, but I can’t figure out how to actually do it.
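
One way to make the Monte Carlo idea concrete — a sketch under two assumptions of mine that are not given in the question: per-test scores follow a triangular distribution fitted to the given min/max/avg, and each user keeps the same percentile across tests (the "score bands" assumption):

```python
import math
import random
import statistics

def triangular_ppf(u, a, b, c):
    """Inverse CDF of a triangular distribution on [a, b] with mode c."""
    split = (c - a) / (b - a)
    if u < split:
        return a + math.sqrt(u * (b - a) * (c - a))
    return b - math.sqrt((1 - u) * (b - a) * (b - c))

def estimate_total_std(tests, n_sims=20_000, seed=0):
    """Monte Carlo estimate of the standard deviation of total scores.

    tests is a list of (min, max, avg) triples, one per test.
    Assumptions (mine): per-test scores are triangular with the mode
    chosen to match the given average, and a simulated user keeps the
    same percentile across all tests.
    """
    rng = random.Random(seed)
    totals = []
    for _ in range(n_sims):
        u = rng.random()  # the simulated user's persistent percentile
        total = 0.0
        for lo, hi, avg in tests:
            mode = min(max(3 * avg - lo - hi, lo), hi)  # mean = (a+b+c)/3
            total += triangular_ppf(u, lo, hi, mode)
        totals.append(total)
    return statistics.stdev(totals)
```

Dropping the score-bands assumption just means drawing a fresh `u` per test, which shrinks the spread of the totals; the truth presumably lies between the two extremes.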

What process should I follow when estimating a SharePoint project

I have a new customer requirement and want to establish an estimate for each component. What standard process should I follow?

Estimating the use of data for Google Voice App

I use the Google Voice app for iPhone. With a roaming cost of $0.20 per MB, I would like to understand what to expect from the iPhone GV application in terms of:

  • the cost of simply keeping it on, with no messages sent or received
  • the cost per message

The goal is to avoid an unreasonable bill ($50+) just for sending text messages.
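
I can’t measure the app’s actual traffic for you, but the budget arithmetic is simple. A sketch, where `kb_per_message` is a placeholder you should replace with your own measurement (e.g. from the iPhone’s cellular data statistics) — it is not a published Google Voice figure:

```python
def roaming_budget(budget_usd=50.0, rate_per_mb=0.20, kb_per_message=2.0):
    """How many MB, and how many messages of an assumed size, a
    roaming budget covers. kb_per_message is a placeholder value:
    measure your own per-message usage and substitute it here."""
    mb_allowed = budget_usd / rate_per_mb
    messages = int(mb_allowed * 1024 / kb_per_message)
    return mb_allowed, messages
```

At $0.20/MB, a $50 budget buys 250 MB, so messaging alone is unlikely to be the problem; background traffic from keeping the app connected is the figure worth measuring.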

statistics – Finding the method of moments estimator of $\theta$, the MLE, and the MSE for the Bernoulli distribution

Have I made mistakes? I feel like my MLE is a little messy, and I have not used the fact that $0 \le \theta \le 1/2$.

Let $X_1, \ldots, X_n$ be iid with $f(x \mid \theta) = \theta^{x} (1-\theta)^{1-x}$, $x = 0$ or $1$, and $0 \le \theta \le 1/2$.

First I try:

Take the sample mean estimator $\bar{X} = \frac{\sum_{i=1}^{n} x_i}{n}$, and $E(X) = \sum_{x=0,1} x\, \theta^{x} (1-\theta)^{1-x} = 0 + \theta = \theta$.

So $\hat{\theta} = \bar{X} = \frac{\sum_{i=1}^{n} x_i}{n}$ is my method of moments estimator.

With MSE:

$E\left(\left[\frac{\sum_{i=1}^{n} x_i}{n} - \theta\right]^2\right) = \mathrm{var}\left(\frac{\sum_{i=1}^{n} x_i}{n}\right) + \left[E\left(\frac{\sum_{i=1}^{n} x_i}{n}\right) - \theta\right]^2$

$= \mathrm{var}\left(\frac{\sum_{i=1}^{n} x_i}{n}\right) + [\theta - \theta]^2 = \mathrm{var}\left(\frac{\sum_{i=1}^{n} x_i}{n}\right)$

For the MLE:

$\prod_{i=1}^{n} \theta^{x_i} (1-\theta)^{1-x_i} = \theta^{\sum_i x_i} (1-\theta)^{\sum_i (1-x_i)}$

Taking the log:

$\ln(1-\theta) \sum_i (1-x_i) + \ln(\theta) \sum_i x_i$

Then solve $\frac{d}{d\theta} = 0$ for $\theta$, with $\frac{d}{d\theta} = \frac{\sum_i x_i}{\theta} - \frac{\sum_i (1-x_i)}{1-\theta}$.

So $\hat{\theta} = \frac{\sum_i x_i}{\sum_i x_i + \sum_i (1-x_i)}$ is my MLE.

With MSE:

$E\left(\left[\frac{\sum_i x_i}{\sum_i x_i + \sum_i (1-x_i)} - \theta\right]^2\right) = \mathrm{var}\left(\frac{\sum_i x_i}{\sum_i x_i + \sum_i (1-x_i)}\right) + \left[E\left(\frac{\sum_i x_i}{\sum_i x_i + \sum_i (1-x_i)}\right) - \theta\right]^2$

and

$\sum_i x_i + \sum_i (1-x_i) = n,$

$= \mathrm{var}\left(\frac{\sum_{i=1}^{n} x_i}{n}\right)$

And so they have the same MSE, but my method of moments estimator can take negative values, while the MLE is always positive when the $x_i$ are all positive or all negative. The MLE is therefore the better estimator, isn’t it?
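
A quick numerical sanity check (my addition, not part of the question): with the sign in the derivative corrected, both estimators reduce to the sample mean $\bar{X}$, whose MSE is $\mathrm{var}(\bar{X}) = \theta(1-\theta)/n$, and this is easy to verify by simulation:

```python
import random

def mse_sample_mean(theta, n, trials=100_000, seed=1):
    """Monte Carlo MSE of theta_hat = (sum_i x_i) / n for i.i.d.
    Bernoulli(theta) data; both the method of moments estimator and
    the MLE reduce to this sample mean."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        successes = sum(1 for _ in range(n) if rng.random() < theta)
        total += (successes / n - theta) ** 2
    return total / trials

# mse_sample_mean(0.3, 10) comes out close to
# theta * (1 - theta) / n = 0.3 * 0.7 / 10 = 0.021.
```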

computer vision – When estimating camera motion from the fundamental matrix, why can’t the scale of the translation be recovered using epipolar geometry?


pr.probability – Estimating the size of the remainder in a random partition

Choose a sequence of real numbers $x_i$ as follows. Put $x_0 = 1$. Once $x_i$ is chosen, choose $x_{i+1} \in [0, x_i]$ according to the uniform distribution. Obviously we have $x_i \rightarrow 0$ with probability 1. Put $I_i = [x_i, x_{i-1}]$.

Then we choose $n$ random numbers $y_1, \ldots, y_n$ in $[0, 1]$ independently from the uniform distribution. Let $J$ be the union of all $I_i$ for which there is a $y_j$ with $y_j \in I_i$. Again, it is obvious that $|J| \rightarrow 1$ as $n \rightarrow \infty$ with probability 1. My question is: how fast does $1 - |J|$ decay? I would expect that the distribution of $1 - |J|$ is not very concentrated, so I am interested in estimates of the expected value, the median, and the extreme values.

The above problem occurs quite naturally in the analysis of algorithms, so I would expect that it has been dealt with by someone. Several random structures have parts whose sizes follow the distribution of the $x_i$, and picking the $y_i$ corresponds to selecting random points in a random structure and studying the component containing each point. The question is then how much of the entire structure is left unexplored.
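
I don’t know a closed form, but the quantity is easy to simulate — a sketch of mine, not from the question:

```python
import random

def uncovered_mass(n, rng=random):
    """One sample of 1 - |J|: break [0, 1] into intervals
    I_i = [x_i, x_{i-1}] with x_{i+1} uniform on [0, x_i], and sum
    the lengths of the intervals containing none of n uniform points."""
    ys = sorted(rng.random() for _ in range(n))
    lowest_y = ys[0] if ys else 1.0
    hi = 1.0          # current x_{i-1}
    uncovered = 0.0
    while hi > lowest_y:
        lo = rng.uniform(0.0, hi)   # the next cut x_i
        if not any(lo <= y <= hi for y in ys):
            uncovered += hi - lo    # interval I_i was missed
        hi = lo
    # every remaining interval lies below the smallest y, so it is unhit
    return uncovered + hi

# Averaging many samples estimates E[1 - |J|] for a given n; the
# empirical quantiles give the median and tail behaviour.
```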

co.combinatorics – Estimating Maximum Clique Size via Matrix Rank

Let $\mathrm{M} \in \lbrace 0,1 \rbrace^{n \times n}$ be the adjacency matrix of a graph $\mathrm{G}\left(V, E \subseteq \lbrace \lbrace u,v \rbrace \mid u,v \in V \rbrace\right)$ of order $n$.

Question:
Is it true that
$$\mathrm{rank}(\mathrm{M}) = \max \lbrace k \mid \mathrm{K}_k \subseteq \mathrm{G} \rbrace?$$
Or, more generally, how is the rank of the adjacency matrix related to the size of the maximum clique?
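
The proposed identity can be checked by brute force on small graphs (my sketch); for instance, the 5-cycle has adjacency-matrix rank 5 but maximum clique size only 2, so the identity fails in general:

```python
import itertools
from fractions import Fraction

def matrix_rank(M):
    """Exact rank via Gaussian elimination over the rationals
    (safe for 0/1 matrices, no floating-point issues)."""
    A = [[Fraction(x) for x in row] for row in M]
    rank, n_rows, n_cols = 0, len(M), len(M[0])
    for col in range(n_cols):
        pivot = next((r for r in range(rank, n_rows) if A[r][col]), None)
        if pivot is None:
            continue
        A[rank], A[pivot] = A[pivot], A[rank]
        for r in range(n_rows):
            if r != rank and A[r][col]:
                f = A[r][col] / A[rank][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[rank])]
        rank += 1
    return rank

def max_clique_size(n, edges):
    """Brute force: the largest k with K_k a subgraph (small n only)."""
    E = {frozenset(e) for e in edges}
    for k in range(n, 1, -1):
        for S in itertools.combinations(range(n), k):
            if all(frozenset(p) in E for p in itertools.combinations(S, 2)):
                return k
    return 1 if n else 0

def adjacency(n, edges):
    M = [[0] * n for _ in range(n)]
    for u, v in edges:
        M[u][v] = M[v][u] = 1
    return M

# C_5: rank 5 but maximum clique 2 -- a counterexample to the identity.
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
```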