real analysis – The Cantor distribution is singular (with respect to Lebesgue measure)

If we define the Cantor distribution $\mu$ as the distribution that has $F=$ "Cantor function" as its cumulative distribution function, how do we show that $\mu$ is singular with respect to the Lebesgue measure? If $\lambda$ is the Lebesgue measure, I have to show that if $\lambda(A)=0$ then $\mu(A)=0$. For a point set $\{x\}\subset\mathbb{R}$ it does hold, since $\lambda(\{x\})=0$ and $$\mu(\{x\})=\mu\Big(\bigcap\limits_{n=1}^{\infty}\big(x-\tfrac1n,x\big]\Big)=\lim\limits_{N\to\infty}\mu\Big(\bigcap\limits_{n=1}^{N}\big(x-\tfrac1n,x\big]\Big)=\lim\limits_{N\to\infty}\mu\big((x-\tfrac1N,x]\big)=\lim\limits_{N\to\infty}\big(F(x)-F(x-\tfrac1N)\big)=0$$ since $F$ is continuous. But how does one show this for a general $\lambda$-measurable set $A$?
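For reference, a hedged sketch of the standard route (my addition, assuming the usual middle-thirds Cantor set $C$): mutual singularity $\mu \perp \lambda$ actually asks for a single Borel set carrying all of $\mu$ and none of $\lambda$, rather than the implication above, and $C$ itself works:
$$\lambda(C)=\lim_{n\to\infty}\left(\frac{2}{3}\right)^{n}=0, \qquad \mu(\mathbb{R}\setminus C)=0,$$
where the second equality holds because $F$ is constant on each of the countably many open intervals making up $[0,1]\setminus C$ (and on $(-\infty,0)$ and $(1,\infty)$), so each contributes $\mu$-measure $0$.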

How to measure text similarity (Jaro-Winkler) in Teradata?

In Oracle we can measure text similarity with Jaro-Winkler like the following:

SELECT UTL_MATCH.JARO_WINKLER_SIMILARITY('STACKEXCHANGE', 'STAMPEXCHANGE') MYSTRING
FROM DUAL;
--98

And it turns out that Teradata has Jaro-Winkler too, as explained here. Unfortunately I just don’t understand the doc and example there.

So far what I can do in Teradata is with EDITDISTANCE:

SELECT EDITDISTANCE('STACKEXCHANGE', 'STAMPEXCHANGE') MYSTRING;
--2

So, how do I measure text similarity with Jaro-Winkler in Teradata? Could anyone please give me a simple example?
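For what it's worth, the Teradata documentation describes jaro_winkler as one of the comparison types of the StringSimilarity table operator (ML Engine / Vantage). A hedged, minimal sketch of an invocation, with a made-up table mytable(id, str1, str2); exact syntax and availability depend on your Teradata version:

SELECT * FROM StringSimilarity (
    ON (SELECT id, str1, str2 FROM mytable) PARTITION BY ANY
    USING
    ComparisonColumnPairs ('jaro_winkler (str1, str2) AS jw_sim')
    Accumulate ('id', 'str1', 'str2')
) AS dt
ORDER BY id;

Unlike Oracle's 0-100 integer, the jw_sim column should come back as a similarity score between 0 and 1.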

unity – Measure a distance point to point in Unity3D?

There are some obvious mistakes here. Firstly:

Vector3 startPoint = cam.ScreenToWorldPoint(Input.mousePosition);

This does not store your start position in your member variable startPoint so that you can read it back on a later frame. It creates a new temporary local variable, also called startPoint, that is thrown away at the end of the function. Remove the Vector3 in front to assign to the member variable instead.

Secondly, you don’t provide a depth value to ScreenToWorldPoint. This can still work if your camera is orthographic, but if you’re using a perspective camera then you’re asking “What’s the position on this ray at a distance of 0 units from the camera?” – no matter what screen position you use for that ray, the answer is always the camera’s own position.

You may want to assign Input.mousePosition to a Vector3 variable, and set its z coordinate to your chosen depth. Or fire a ray using ScreenPointToRay and check where that ray intersects your scene, using the hit position as your start/endPoint.

Thirdly, this only measures distances along the world x axis. You can use Vector3.Distance() if you want to measure the distance on all axes. Or, if you want to measure a distance on your screen, you can skip the ScreenToWorldPoint conversion entirely, and work with your Input.mousePosition Vector2s directly.
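Putting the three fixes together, a minimal sketch (my own, not the asker's original script; the cam and depth fields are my own names):

using UnityEngine;

public class DistanceMeasurer : MonoBehaviour
{
    public Camera cam;
    public float depth = 10f;      // chosen measurement depth in front of the camera

    Vector3 startPoint;            // member variable: persists between frames

    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            // No "Vector3" in front: assign to the member, not a throwaway local.
            startPoint = MouseWorldPoint();
        }
        else if (Input.GetMouseButtonUp(0))
        {
            Vector3 endPoint = MouseWorldPoint();
            // Distance on all axes, not just world x.
            Debug.Log("Distance: " + Vector3.Distance(startPoint, endPoint));
        }
    }

    Vector3 MouseWorldPoint()
    {
        Vector3 screenPos = Input.mousePosition;
        screenPos.z = depth;       // supply a depth so perspective cameras work too
        return cam.ScreenToWorldPoint(screenPos);
    }
}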

gt.geometric topology – Thurston measure under finite covers

Let $S= S_{g,n}$ be a finite type orientable surface of genus $g$ and $n$ punctures and let $\mathcal{ML}(S)$ denote the corresponding space of measured laminations. The Thurston measure, $\mu^{Th}$, is a mapping class group invariant and locally finite Borel measure on $\mathcal{ML}(S)$ which is obtained as a weak-$*$ limit of (appropriately weighted and rescaled) sums of Dirac measures supported on the set of integral multi-curves.

The Thurston measure arises in Mirzakhani’s curve counting framework. Concretely, given a hyperbolic metric $\rho$ on $S_{g,n}$, let $B_{\rho} \subset \mathcal{ML}(S)$ denote the set of measured laminations with $\rho$-length at most $1$. Then $\mu^{Th}(B_{\rho})$ controls the top coefficient of the polynomial that counts multi-curves up to a certain $\rho$-length and living in a given mapping class group orbit.
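Schematically (a hedged paraphrase of Mirzakhani’s counting asymptotic, not a statement from the post): for a simple multi-curve $\gamma$,
$$\#\{\gamma' \in \mathrm{Mod}(S)\cdot\gamma \;:\; \ell_{\rho}(\gamma') \le L\} \;\sim\; \frac{c(\gamma)\,\mu^{Th}(B_{\rho})}{b_{g,n}}\, L^{6g-6+2n} \quad \text{as } L \to \infty,$$
with constants $c(\gamma)$ and $b_{g,n}$ independent of $\rho$, which is the sense in which $\mu^{Th}(B_{\rho})$ controls the top coefficient.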

Question: Fix a hyperbolic metric $\rho$ on $S$ and a finite (not necessarily regular) cover $p: Y \rightarrow S$. Let $\rho_{p}$ denote the hyperbolic metric on $Y$ obtained by pulling $\rho$ back to $Y$ via $p$. Is there a straightforward relationship between $\mu^{Th}(B_{\rho_{p}})$ and $\mu^{Th}(B_{\rho})$? For example, is the ratio
$$ \frac{\mu^{Th}(B_{\rho})}{\mu^{Th}(B_{\rho_{p}})} $$
uniformly bounded away from $0$ and $\infty$? Does it equal a fixed value, independent of $\rho$? If so, can it be easily related to the degree of the cover $Y \rightarrow S$?

It seems hard to approach the above by thinking about counting curves on $Y$ versus $S$, because “most” simple closed curves on $Y$ project to non-simple curves on $S$. But maybe the generalizations of curve counting to non-simple curves due to Mirzakhani (https://arxiv.org/pdf/1601.03342.pdf) or Erlandsson–Souto (https://arxiv.org/pdf/1904.05091.pdf) could be useful. Of course, both apply to counting curves in a fixed mapping class group orbit, so it’s not clear (to me) how to apply these results either, since multi-curves on $Y$ can project to curves on $S$ with arbitrarily large self-intersection.

Thanks for reading! I appreciate any ideas or reading suggestions.

fa.functional analysis – Measurable selection involving measure valued random variable

Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space and let $\mathcal{M}(\mathbb{R}^d)$ be the space of finite signed measures on $\mathbb{R}^d$ endowed with the narrow topology (i.e. the initial topology w.r.t. $C_b(\mathbb{R}^d)$, the set of real-valued, continuous and bounded functions on $\mathbb{R}^d$) and the corresponding Borel $\sigma$-algebra. Let $\mu: \Omega \to \mathcal{M}(\mathbb{R}^d)$ be measurable and let $a \in \mathbb{R}$ be a fixed real number. Let us define the multifunction
$$F : \Omega \rightrightarrows C_{0,1}(\mathbb{R}^d):=\{ \varphi \in C_0(\mathbb{R}^d) \mid \|\varphi\|_{\infty} \le 1 \}$$
(where $C_0(\mathbb{R}^d)$ is the Banach space of continuous functions vanishing at infinity with the supremum norm) as
$$F(\omega) := \left\{ \varphi \in C_{0,1}(\mathbb{R}^d) \;\middle|\; \int_{\mathbb{R}^d} \varphi \,\mathrm{d}\mu(\omega) \ge a \right\}.$$

Can we find a measurable selection $f: \Omega \to C_{0,1}(\mathbb{R}^d)$ of $F$, meaning that $f$ is measurable and $f(\omega) \in F(\omega)$ for every $\omega \in \Omega$?

I tried with the Kuratowski–Ryll-Nardzewski measurable selection theorem, but I am not able to prove that $\{ \omega \in \Omega \mid F(\omega) \cap U \neq \emptyset \}$ is measurable for every open $U \subset C_{0,1}(\mathbb{R}^d)$.

Any hint would be really appreciated!
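A hedged observation (mine, not the poster’s): $C_{0,1}$ is a closed subset of the separable Banach space $C_0(\mathbb{R}^d)$, hence Polish, and each $F(\omega)$ is closed in it, since for a finite signed measure
$$\left| \int_{\mathbb{R}^d} \varphi_n \,\mathrm{d}\mu(\omega) - \int_{\mathbb{R}^d} \varphi \,\mathrm{d}\mu(\omega) \right| \le \|\varphi_n - \varphi\|_{\infty}\, |\mu(\omega)|(\mathbb{R}^d) \longrightarrow 0$$
whenever $\varphi_n \to \varphi$ uniformly. So the Kuratowski–Ryll-Nardzewski hypotheses reduce to exactly the weak measurability (plus nonemptiness of each $F(\omega)$) that the question is about.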

Lebesgue measure and example function

Give functions $f, g$ such that $f$ is continuous on $[0,1] \times [0,1]$ and almost everywhere equal to $g$, while $g$ is not continuous at any point.

Can someone help me?
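A hedged sketch of one standard construction (my choice of functions): take $f \equiv 0$ and let $g$ be the indicator of a countable dense set,
$$f(x,y) = 0, \qquad g(x,y) = \mathbf{1}_{\mathbb{Q}^2 \cap [0,1]^2}(x,y).$$
Then $f = g$ outside the countable, hence Lebesgue-null, set $\mathbb{Q}^2 \cap [0,1]^2$, yet $g$ is discontinuous at every point, because every neighbourhood of every point contains both rational and irrational pairs.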

functional analysis – How do I show that null sets of a spectral measure are null sets on the induced complex measure?

I’m going through my lecture notes again for functional analysis, and I came upon this property that I can’t seem to prove. So $E(\omega)$ is your garden-variety spectral measure, and we define another complex-valued measure $\mu_{x,y}(\omega) = \langle x , E(\omega)y \rangle$. The notes state that any $E$-null set $\omega$ is also a $|\mu_{x,y}|$-null set, but I don’t know how to show this directly. It’s clear that $\omega$ is a $\mu_{x,y}$-null set, but I haven’t been able to move this to the absolute value.
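A hedged sketch of one way to bridge that gap (my argument, not from the notes): if $E(\omega) = 0$, then multiplicativity of the spectral measure gives, for every measurable $\omega' \subseteq \omega$,
$$E(\omega') = E(\omega' \cap \omega) = E(\omega')E(\omega) = 0,$$
so every term in the total-variation supremum vanishes:
$$|\mu_{x,y}|(\omega) = \sup\Big\{ \sum_i |\langle x, E(\omega_i) y\rangle| \;:\; \{\omega_i\} \text{ a measurable partition of } \omega \Big\} = 0.$$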

I also couldn’t find a name for this measure $mu_{x,y}$; if it does have a formal name, that would also be great to know!

How can I measure the exact range of focus of a given fixed focus webcam?

The concept of depth of field is really just an illusion, albeit a rather persistent one. Only a single distance will be at sharpest focus. What we call depth of field are the areas on either side of the sharpest focus that are blurred so insignificantly that we still see them as sharp. Please note that depth-of-field will vary based upon a change to any of the following factors: focal length, aperture, magnification/display size, viewing distance, etc.

There’s only one distance that is in sharpest focus. Everything in front of or behind that distance is blurry. The further we move away from the focus distance, the blurrier things get. The questions become: “How blurry is it? Is that within our acceptable limit? How far from the focus distance do things become unacceptably blurry?”

What we call depth of field (DoF) is the range of distances in front of and behind the point of focus that are acceptably blurry so that to our eyes things still look like they are in focus.

The amount of depth of field depends on two things: total magnification and aperture. Total magnification includes the following factors: focal length, subject/focus distance, enlargement ratio (which is determined by both sensor size and display size), and viewing distance. The visual acuity of the viewer also contributes to what is acceptably sharp enough to appear in focus instead of blurry.

The distribution of the depth of field in front of and behind the focus distance depends on several factors, primarily focal length and focus distance.

The front-to-rear DoF ratio of any given lens changes as the focus distance is changed. Most lenses approach 1:1 at the minimum focus distance. As the focus distance is increased, the rear depth of field increases faster than the front depth of field. There is one focus distance at which the ratio will be 1:2, that is, one-third in front and two-thirds behind the point of focus.

At short focus distances the ratio approaches 1:1. A true macro lens, which can project an image onto the sensor or film at the same size as the object being imaged, achieves 1:1 magnification at its closest focus. Even lenses that cannot achieve macro focus will demonstrate a DoF ratio very near 1:1 at their minimum focus distance.

At longer focus distances the rear of the depth of field reaches all the way to infinity and thus the ratio between front and rear DoF approaches 1:∞. The shortest focus distance at which the rear DoF reaches infinity is called the hyperfocal distance. The near depth of field will very closely approach one half the focus distance. That is, the nearest edge of the DoF will be halfway between the camera and the focus distance.
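For concreteness, the standard thin-lens approximations (my addition; $f$ = focal length, $N$ = f-number, $c$ = circle of confusion, $s$ = focus distance) are consistent with all of the above:
$$H = \frac{f^2}{Nc} + f, \qquad D_{\text{near}} = \frac{s\,(H-f)}{H + s - 2f}, \qquad D_{\text{far}} = \frac{s\,(H-f)}{H - s},$$
so focusing at $s = H$ gives $D_{\text{near}} = H/2$ and $D_{\text{far}} = \infty$, exactly the hyperfocal behaviour described above.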

For why this is the case, please see:

Why did manufacturers stop including DOF scales on lenses?
Is there a ‘rule of thumb’ that I can use to estimate depth of field while shooting?
How do you determine the acceptable Circle of Confusion for a particular photo?
Find hyperfocal distance for HD (1920×1080) resolution?
Why I am getting different values for depth of field from calculators vs in-camera DoF preview?
As well as this answer to Simple quick DoF estimate method for prime lens

ux field – How to measure where your site/app ranks in Anderson’s UX pyramid?

It would be quite difficult to get the same level of granularity as Anderson’s UX pyramid, but the way the different levels of user experience are ranked gives us a clue as to how we can possibly go about it as a starting point.

I suggest that the ‘chasm’ that is difficult to cross, which sits at the CONVENIENT level, at least allows you to work out on which side your organisation’s products and services lie. And immediately below CONVENIENT there is USABLE, a relatively well-defined quality that can be measured semi-quantitatively in many different ways (look for questions relating to measuring usability or usability testing).

There are more formal processes for dissecting the various levels of experience, again not at the granularity the pyramid describes, but a starting point would be Kano’s model of customer satisfaction, which defines ‘Delighters’ and can help you see whether there are elements of your products/services that lie at the DESIRABLE level.

convex geometry – The surface area measure in terms of support functions

$\def\RR{\mathbb{R}}$Let $K$ be a closed bounded convex body in $\RR^n$. The support function $h_K$ on $\RR^n$ is defined by
$$h_K(v) = \max_{w \in K} \langle v,w \rangle.$$

Let $S^{n-1}$ be the unit sphere. The surface area measure is the measure $\sigma$ on $S^{n-1}$ such that, for an open set $U$ in $S^{n-1}$, the measure $\sigma(U)$ is the $(n-1)$-dimensional Lebesgue measure of the set of $w \in \partial K$ at which some supporting hyperplane has outward normal lying in $U$. (I believe this is fairly standard terminology; I am reading Schneider’s “Convex Bodies: The Brunn–Minkowski Theory” as my reference.)

Now, if $K$ is smooth, then $\sigma$ is a smooth $(n-1)$-form, and we can compute $\tfrac{\sigma}{\mathrm{Area}}$ in terms of the Hessian of $h$. Namely, restrict $h$ to the affine hyperplane $v+v^{\perp}$ (in other words, the tangent plane to $S^{n-1}$ at $v$). Then $\tfrac{\sigma}{\mathrm{Area}}$ is the determinant of the Hessian of this restricted function.

On the other hand, suppose that $K$ is a polytope, so $h$ is piecewise linear. Then $\sigma$ is an atomic measure, concentrated on the normals to the facets of $K$. Then we can also compute $\sigma$ in terms of $h$ restricted to $v+v^{\perp}$: For $u$ in $v^{\perp}$, take the directional derivative $\tilde{h}(u) := \lim_{t \to 0^+} \tfrac{h(v+tu)-h(v)}{t}$. Then $\tilde{h}(u)$ is (I believe) the support function of the facet normal to $v$, and $\sigma(\{v\})$ is the volume of that facet.

What I am trying to understand is how to interpolate between these formulas. In general, is $\sigma$ expressible in terms of the restriction of $h$ to $v+v^{\perp}$? And why am I seeing second derivatives in the smooth case and first (directional) derivatives in the polytopal case?
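A hedged way to see the two computations as one formula (my sketch, via the subdifferential of $h$): for $v \in S^{n-1}$, the subdifferential $\partial h(v)$ is the contact set $F(K,v)$, the face of $K$ with outer normal $v$, and
$$\sigma(U) = \mathcal{H}^{n-1}\big(\{\, w \in \partial K : w \in \partial h(v) \text{ for some } v \in U \,\}\big).$$
When $h$ is smooth, $\partial h(v) = \{\nabla h(v)\}$ and the change of variables $v \mapsto \nabla h(v)$ has tangential Jacobian equal to the Hessian of $h|_{v+v^{\perp}}$ (homogeneity kills the radial direction), recovering the Hessian-determinant density; when $K$ is a polytope, $\partial h(v)$ at a facet normal is the whole facet, producing an atom of mass equal to the facet volume. So the first derivative locates the contact points, and the second derivative measures the rate at which they sweep out area; in this sense $\sigma$ is a Monge–Ampère-type measure of $h$ restricted to $v+v^{\perp}$.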