## Half precision floating point question — smallest non-zero number

There’s a floating point question that popped up and I’m confused about the solution. It states that

IEEE 754-2008 introduces half precision, which is a binary
floating-point representation that uses 16 bits: 1 sign bit, 5
exponent bits (with a bias of 15) and 10 significand bits. This format
uses the same rules for special numbers that IEEE754 uses. Considering
this half-precision floating point format, answer the following
questions: ….

What is the smallest positive non-zero number it can represent?

bias = 15
Binary representation is: $$0\;\;00000\;\;0000000001 = 2^{-14} \cdot 2^{-10} = 2^{-24}$$

I’ve understood the binary representation part, but how does it arrive at those powers of 2?
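To see where the two powers of two come from, here is a minimal Python sketch (the function name and bit patterns are my own) that decodes a 16-bit pattern by the half-precision rules: when the exponent field is all zeros the number is subnormal, there is no implicit leading 1, and the effective exponent is 1 − bias = −14, so the value is (fraction/2¹⁰) · 2⁻¹⁴:

```python
def half_to_float(bits):
    """Decode a 16-bit IEEE 754-2008 half-precision pattern to a Python float."""
    sign = (bits >> 15) & 0x1       # 1 sign bit
    exp = (bits >> 10) & 0x1F       # 5 exponent bits, bias 15
    frac = bits & 0x3FF             # 10 significand bits

    if exp == 0:
        # Subnormal: no implicit leading 1, exponent fixed at 1 - 15 = -14
        value = (frac / 2**10) * 2**-14
    elif exp == 0x1F:
        # All-ones exponent: infinity (frac == 0) or NaN
        value = float('inf') if frac == 0 else float('nan')
    else:
        # Normal: implicit leading 1, exponent exp - 15
        value = (1 + frac / 2**10) * 2**(exp - 15)

    return -value if sign else value

# 0 00000 0000000001: smallest positive subnormal, (1/2**10) * 2**-14 = 2**-24
print(half_to_float(0b0000000000000001))  # 5.960464477539063e-08
```

With `frac = 1`, the subnormal branch gives exactly $2^{-10} \cdot 2^{-14} = 2^{-24}$, matching the solution.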

## litecoin – easyminer question – Bitcoin Stack Exchange

So normally I use NiceHash and I like it: every 7 days I take out around $60-70.00 worth of bitcoin. I am trying EasyMiner (free version), and my question is: can it handle multiple computers with the same email and litecoin key?

For example, on one I see 30 shares, one says 45 shares, and the other says 120, so when I go to cash out, will it know all 3 are on the same system? Hope I am saying it right.

## A question on entailments in sequents

Suppose $$\Gamma \vdash A \vee \Delta$$, where as usual $$\Gamma$$ and $$\Delta$$ are thought of as sets of propositions and the turnstile stands for logical consequence, or entailment.

Given the assumption, may one consider the relation between the top line and the bottom line of the sequent $$\frac{\Gamma \vdash A, \Delta}{\Gamma \vdash A \vee B, \Delta}$$ to be an entailment on a par with – as in, having the same nature as – the relation between the left and the right side of $$\Gamma \vdash A \vee \Delta$$, or is there something which prohibits such a point of view?

## complexity theory – The subset sum problem is not in P because the question is about lossy compressed data? Why not?

Where is there a gap or error in my reasoning?

The subset sum problem deals with a set of n numbers, which is the result of lossy compression of an array r of numbers (with |r| = 2^n - 1).

The compression algorithm in brief:

1. Check whether the size of the set r plus 1 is a power of two. If not, terminate compression.
2. Check all combinations without repetition of size log2(|r| + 1) to see whether any of them can be decompressed (by summing all possible subsets except the empty subset) back to the set r of numbers. If so, return that combination.
3. If no suitable combination is found, terminate compression.
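The decompression in step 2 can be illustrated with a short Python sketch (the function name and example values are my own): summing every nonempty subset of n numbers yields 2^n - 1 sums.

```python
from itertools import combinations

def decompress(nums):
    """Sums of all nonempty subsets of nums: 2**len(nums) - 1 values."""
    return [sum(c) for k in range(1, len(nums) + 1)
                   for c in combinations(nums, k)]

# {1, 2, 4} decompresses to the 2**3 - 1 = 7 sums 1..7
print(sorted(decompress([1, 2, 4])))  # [1, 2, 3, 4, 5, 6, 7]
```

Powers of two as elements make the sums distinct; in general, different subsets may produce repeated sums.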

The subset sum problem is to check whether the set r of numbers (that is, the set of sums of subsets of the set of n numbers) contains the number you are looking for. To do this, you can represent the set r of numbers as an array and search it. Searching the array in the worst case requires checking each element, which is O(|r|) (for a decompressed set) or O(2^n) (for a compressed set). Faster methods such as binary search are known, but they require knowledge of the ordering of the numbers, so that by checking one number we also gain knowledge of where to look for the others. Without knowing anything about the ordering of the numbers in the array, you cannot search the array faster than linearly, because checking the numbers one by one gives no knowledge of where the number you are looking for might be.
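The two search regimes described above can be sketched as follows (the function names and example array are my own); binary search applies only once the array is known to be sorted:

```python
from bisect import bisect_left

def contains_linear(arr, target):
    # No ordering information: worst case probes every element, O(len(arr)).
    return any(v == target for v in arr)

def contains_sorted(arr, target):
    # Sorted array: binary search needs only O(log len(arr)) probes.
    i = bisect_left(arr, target)
    return i < len(arr) and arr[i] == target

sums = [1, 2, 3, 4, 5, 6, 7]  # sorted subset sums of {1, 2, 4}
print(contains_linear(sums, 5), contains_sorted(sums, 5))  # True True
```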

Does the compressed set of n numbers have information about the ordering of the array of r numbers? No.

The number of possible orderings of the array r of numbers is |r|!, or (2^n - 1)!, which is greater than n!. The pigeonhole principle says that if we put a objects into b drawers and a > b, then there must be at least one drawer containing at least two objects. By this rule, the set of n numbers cannot contain explicit information about the ordering of the elements in the array r, because many orderings of the array r compress to the same set of n numbers, with the same elements in the same order, and we obtain no information beyond this set. Since there are many ways to generate combinations in the compression algorithm, we can consider that every ordering of the array r can be compressed into every possible ordering of a set of n numbers.

If the subset sum problem is in P, then there exists a search algorithm that answers the subset sum problem in polynomial time measured in the size of the compressed set of n numbers. From this we could conclude that, to find a number in the array r in faster-than-linear time, it suffices to subject the array to lossy compression rather than sorting. When we compress the sorted array r, we would retain the ability to search it in faster-than-linear time, despite the compression being lossy and our having lost the ordering knowledge and most of the content. The data can be reconstructed, but only by generating the sums of every possible subset and sorting them, since we know from the pigeonhole principle that the original ordering knowledge has been lost and must be acquired anew.

Losing knowledge of the ordering of the array r (and most of its contents) while still being able to search it in faster-than-linear time does not make sense. It follows that no algorithm allows this, which means the subset sum problem is not in P; and since SSP is in NP, P != NP.

## I have a question. Can the price change if your transaction remains unconfirmed for a couple of hours?

I had to make a payment in bitcoin to a wallet for a specific amount. I made the exact payment, but the price of bitcoin was going up, and my transaction was still unconfirmed until the evening. They rejected my payment because it was worth \$3 more by the time they confirmed it. Can the price change if your transaction remains unconfirmed for a couple of hours?

## computational complexity – Question about PSPACE and polynomial time reduction to QSAT

If the decision version of a problem $$X$$ is in $$\mathsf{PSPACE}$$ and we know $$\mathsf{QSAT}$$ is $$\mathsf{PSPACE}$$-complete, then is it true that $$X \leq_p \mathsf{QSAT}$$? Or is it the case that $$\mathsf{QSAT} \leq_p X$$? I’m not quite sure I understand the concept of polynomial-time reducibility, especially with regard to $$\mathsf{PSPACE}$$. Any help would be appreciated!

## Newbie question about the security of mixing

If I am running my own node, and using a mixer to anonymize my BTC, will it not be connected anyway because the transactions are to/from my own node?

I mean I am sending the BTC to the mixer from my node, and then getting it back to another address, sure, but I am accessing that address from my own node…

Even if I am using Tor, is there not something that identifies the nodes and tells observers that these two addresses have something in common, namely the node on my machine?

## brownian motion – Question on the choice of boundary in the CUSUM test when we make some resampling

We are considering performing a CUSUM test on some economic time series $$X = (x_1, \ldots, x_n)$$. Suppose $$X$$ contains a lot of noise that needs to be removed in some way. Simply speaking, the CUSUM test considers the empirical process

$$W_n = \frac{1}{\sigma \sqrt{n}} \sum_{i=1}^{n} (x_i - \mu)$$

and checks whether it hits the boundary corresponding to the Brownian bridge. In practice, to execute the CUSUM test, we use an R package, `strucchange`. Now we have a question related to the scaling of the boundary, since we are considering resampling or filtering the raw data and then performing the CUSUM test again.
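For concreteness, the statistic above can be computed for each prefix length k as follows (a minimal sketch in Python rather than R; the function name and inputs are my own, and μ and σ are assumed known rather than estimated):

```python
import math

def cusum_process(x, mu, sigma):
    """Partial-sum process: W_k = sum_{i<=k}(x_i - mu) / (sigma * sqrt(n))."""
    n = len(x)
    w, s = [], 0.0
    for xi in x:
        s += xi - mu       # running centered sum
        w.append(s / (sigma * math.sqrt(n)))
    return w

print(cusum_process([1.0, 2.0, 3.0, 4.0], mu=2.5, sigma=1.0))
# [-0.75, -1.0, -0.75, 0.0]
```

The test then compares the path of these values against a boundary function; note the fixed 1/√n scaling, which is exactly what the question below is about when the sample length changes.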

The question is as follows: do we need to rescale the boundary when we filter the data? For instance, should we multiply the original boundary by $$\frac{1}{\sqrt{m}}$$ (m: sample length)? Since we are using the open-source R package `strucchange` to perform the CUSUM test, it is not clear to us whether we need to take care of such a rescaling factor in the boundary function.

## canon – Question About Charging and Recording


## soft question – How hard do mathematicians have to work to learn?

So this is a soft question, but I’m looking for a collection of specific instances or stories that provide a reasonable breadth or representative picture. The target audience for this question is those with PhD-level knowledge of math. That doesn’t mean you have to have a PhD to respond, but you should ideally possess the rough equivalent of PhD expertise in some mathematical content area even if self-taught.

How hard did you struggle with learning math? Let’s break it down into content categories:

1. Subject matter before calculus: typical pre-university topics such as arithmetic, algebra, introductory trig, and statistics; what is typically taught in secondary/high school or lower.
2. For calculus and other typical lower-level undergraduate university/college coursework.
3. For typical upper-level undergraduate courses/content.
4. Lower-level graduate, e.g. core courses or general electives.
5. Upper-level graduate, research level, and beyond.

Question: How common is it for PhD-level mathematicians to have struggled and to what degree of struggle:

• in high school or lower?
• in upper-level, research level, and beyond content?

Maybe this could be gotten at by a question like: how many hours do you spend coming to understand a typical research paper that is not within your field of expertise? When you want to learn a new concept, does it always come easily, or does it sometimes involve a lot of struggle (how many hours/days)?

I’ll argue that this is a useful question to ask as many undergraduates struggle with upper level content, for example, and might get discouraged thinking that it should be easy for them. I discuss this with my students and openly talk about what I struggled with and what I found easier to learn. I imagine my experience is fairly typical for PhD-level mathematicians, like I’m somewhere in the middle in terms of how hard I had to work. But I don’t know this for sure.

Maybe the way to answer the question is to give a rough estimate for how many hours outside of class you needed to work (or would have needed to work) to achieve A/B equivalent scores in the content.

As an example: math was essentially easy for me before college. I mean, there were times in high school where I played with equations trying to figure things out, but I rarely struggled to do what was required of an assignment. Calculus and trigonometry in college were a little more difficult, especially towards the later topics, and took a bit of practice and going to office hours. I truly struggled with some upper-level math content. Sometimes I was lazy about it, but sometimes I worked for many hours to learn it (and that generally paid off). Some (roughly half) of the upper-level content also came fairly easily. Graduate school was definitely much more difficult; learning required many hours of work outside of class. I still find high-level research mathematics extremely difficult (~15 years post-PhD). It comes faster and easier now conceptually, but much of it is still just way beyond me, and I have to work very hard. Before college, I would have said I was the best student at my school (having little to no struggle). During university, I was probably somewhere in the upper half (struggling less than most). In graduate school, I was roughly in the middle (struggling about as much as the average student, with about half of them learning with less effort than me). I feel that is a fairly honest assessment (to within a rough degree of accuracy).

What I’m looking for is similar assessments from people roughly at the PhD level of mathematics knowledge: anyone from professional researchers who don’t teach at all and have lots of grant funding, to those in teaching positions who do little to no professional research (I’m in the latter category), and maybe even recreational practitioners who study on their own but have no formal credentials. The key is being honest in the assessment. I’d like to normalize both struggling and having difficulty, and also excelling and having an easy time of it. I think a broad sampling of experiences would be very useful information for students in particular.

I know this is a soft question, but I’d like to argue that it isn’t opinion-based. It is asking for specific information, and presumably mathematicians are careful enough to give reasonably accurate answers. I didn’t know of another place to post such a question because it requires access to the specific audience that happens to be present at this site. It didn’t seem appropriate for math-meta either, nor for math educators forum because I don’t just want to target those who teach.