machine learning – CNN predicts only one class and accuracy stays stuck

My model is a binary classifier.

With the exact same architecture, the model sometimes reaches high accuracy (90%, etc.); at other times it predicts only one class (so the accuracy stays locked on a single value the whole time); and on other occasions the loss value is "nan" (too big or too small for the loss to be a representable number, I guess).

I've tried simplifying my architecture (down to 2 Conv2D layers and 2 dense layers), seeding the random number generators of the kernel initializers, and changing the learning rate, but none of this actually solves the inconsistency: the model may train once with great accuracy, but if I run it again without changing any code, I get a very different result (an unchanging accuracy, because it only predicts one class all the time, or a "nan" loss).

How can I solve these problems:
1. The model making the same prediction for the entire dataset (predicting only one class all the time).
2. Inconsistent, non-reproducible results (the above problems come and go without any code change).
3. Getting "nan" loss values at random. (How can I get rid of them permanently?)
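For the reproducibility part, seeding every random source before building the model is the usual first step. A minimal, framework-agnostic sketch of the idea, using NumPy as a stand-in for a kernel initializer (the kernel shape is a made-up example; in Keras/TensorFlow you would additionally seed the framework's own generator, e.g. its global random seed):

```python
import random

import numpy as np

def seeded_kernel(seed, shape=(3, 3, 32, 64)):
    """He-style initialization of a conv kernel from a fixed seed
    (the shape is an illustrative assumption). With identical seeds,
    two runs start from bit-identical weights."""
    random.seed(seed)
    np.random.seed(seed)
    fan_in = shape[0] * shape[1] * shape[2]
    return np.random.randn(*shape) * np.sqrt(2.0 / fan_in)

# Same seed -> identical starting weights, so runs become comparable.
assert np.allclose(seeded_kernel(42), seeded_kernel(42))
# Different seed -> a different starting point, which by itself can
# explain why one run trains well and another collapses to one class.
assert not np.allclose(seeded_kernel(42), seeded_kernel(43))
```

With the starting point pinned down, a run that still diverges to "nan" points at the optimization itself (learning rate, normalization, exploding gradients) rather than at luck of the draw.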

Thank you!!!!

linux – machine does not boot – drive order changed

So we have a beefy Dell file server: a set of drives in a RAID configuration and a small root/boot drive. All was well, but now the boot drive that used to be /dev/sda is /dev/sdq, and the machine will not boot. Nothing has changed from a hardware point of view.

How can I make this drive /dev/sda again so the machine boots? What determines the drive order? This is drive number 15 (as I said, beefy file server), but somehow it was /dev/sda before and is /dev/sdq now. Other machines with the same hardware configuration always have that drive under /dev/sda.

I have forgotten the model number, but it is new enough to have iDRAC9 management hardware.
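As background: Linux assigns /dev/sdX names in detection order, which is not guaranteed to be stable across boots or kernel versions. Rather than forcing the old name back, the robust fix is to reference the filesystem by UUID (or a /dev/disk/by-id/ path). A hedged config sketch — the UUID below is a placeholder, not a real value:

```shell
# Find the UUID of the root/boot filesystem (UUID shown is made up):
blkid /dev/sdq1
#   /dev/sdq1: UUID="1234abcd-..." TYPE="ext4"

# /etc/fstab: mount by UUID instead of by device name
UUID=1234abcd-...  /  ext4  defaults  0 1

# GRUB kernel command line, same idea:
#   linux /vmlinuz root=UUID=1234abcd-... ro
```

With UUID-based references, the machine boots the same way no matter which /dev/sdX name the controller hands out that day.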

computability – How can the VC dimension of Turing machines be finite?

The VC dimension of a hypothesis class $\mathcal{H}$ is defined as the size of the largest set $C$ that $\mathcal{H}$ can shatter. This paper shows that the VC dimension of the set of all Turing machines with $n$ states is $\Theta(n \log n)$.
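To make the definition concrete, shattering can be checked mechanically for small finite classes. A minimal sketch — threshold classifiers here are my own illustrative stand-in, not a class from the paper:

```python
def shatters(hypotheses, points):
    """A class `hypotheses` (callables point -> 0/1) shatters `points`
    iff it realizes every one of the 2^|points| possible labelings."""
    labelings = {tuple(h(x) for x in points) for h in hypotheses}
    return len(labelings) == 2 ** len(points)

# Illustrative class: threshold classifiers h_t(x) = 1 iff x >= t.
thresholds = [lambda x, t=t: int(x >= t) for t in range(5)]

# Any single point is shattered, so the VC dimension is at least 1...
assert shatters(thresholds, [2])
# ...but no pair is: no threshold labels the smaller point 1 and the
# larger point 0, so the VC dimension of this class is exactly 1.
assert not shatters(thresholds, [1, 3])
```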

However, suppose we take all of these Turing machines, with $n$ large enough that a universal Turing machine is part of $\mathcal{H}$. The result says there is a set $C$ (WLOG $C \subset \{0,1\}^*$) of size, say, $n^2$, that $\mathcal{H}$ cannot shatter. To my understanding, this means there is no member of $\mathcal{H}$ that can compute the function
$$f(x) = \begin{cases} 1 & \text{if } x \in C(1) \\ 0 & \text{otherwise} \end{cases}$$
where $C(1)$ is the set of points in $C$ labeled "1".

But $C$ is finite, so $f$ is clearly computable; hence there is a Turing machine $M_C$ that computes it, so $M_C$ can be simulated by the universal Turing machine, which is in $\mathcal{H}$, and that's a contradiction. Where is the flaw in this argument?

undecidability – Which languages, decided by a Turing machine, are decidable?

How do I decide whether a language is decidable and/or semi-decidable?

I have these languages:

a) { ⟨M⟩ | L(M) ⊆ 0* }

b) { ⟨M⟩ | L(M) contains at least one word of even length }

c) { ⟨M⟩ | L(M) is semi-decidable }

d) { ⟨M⟩ | L(M) is decidable }

My problem is that I do not know how to interpret this notation. I know ⟨M⟩ is the encoding of a Turing machine and L(M) is the language decided by the Turing machine.

But I'm not quite sure I understand "the language decided by a Turing machine."

Does it mean the following? I run the Turing machine M and "collect" all the possible words accepted by it. The collection of these words is the language?

Also: if L(M) means the language DECIDED by a Turing machine, doesn't that already imply decidability? How can L(M) be semi-decidable if it is supposed to be the language decided by the TM?

I think there is a major flaw in my thoughts, but which one is it?

For a) and b), I think they are not decidable because of Rice's theorem: there must be TMs whose language is a subset of 0* and TMs whose language is not, so the property is non-trivial, and it is also a semantic property of the TM's language. The same goes for b).

How do I find out whether languages a) and b) are semi-decidable? I could build a TM that accepts exactly the words of the form 0* (and no other words), which is why I would say a) is semi-decidable. But then I think I could just as well build a TM that rejects all words not of the form 0*. And that would mean it is decidable. But that contradicts my reading of Rice's theorem.

b) is harder (at least in my mind), because to check whether L(M) contains at least one word of even length, I would have to check all the words of L(M), and since L(M) could be infinite, that may not be possible. So it would not be decidable. But it would be semi-decidable, because if I run a TM for L(M) and it accepts a word of even length, I can accept.
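The "run and wait for an accept" intuition is usually formalized by dovetailing: simulate M on every even-length word for more and more steps, and accept as soon as any simulation accepts. A sketch of the scheduling idea, with step-bounded Python callables standing in for TM simulations (the machine `m` below is a made-up example):

```python
from itertools import count

def dovetail(machine, words):
    """Semi-decision sketch: run `machine` on more and more words
    for more and more steps; accept as soon as any run accepts.
    If no word is ever accepted, this loops forever -- which is
    exactly why the procedure is only a semi-decider."""
    for steps in count(1):
        for w in words[:steps]:
            # machine(w, steps) -> True (accepted), False (rejected),
            # or None (simulation not finished within `steps` steps)
            if machine(w, steps):
                return True

# Made-up stand-in for "simulate M for `steps` steps on input w":
# this "machine" finishes after len(w) steps and accepts iff len(w) >= 2.
def m(w, steps):
    if steps < len(w):
        return None   # not done yet
    return len(w) >= 2

# Even-length words over {0}; a real construction enumerates them lazily.
even_words = ["", "00", "0000"]
assert dovetail(m, even_words) is True
```

The point of interleaving words and step budgets is that no single non-halting simulation can block progress on the others.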

I know there are a lot of errors in my reasoning (but I do not know what they are). This topic is very new to me. I am grateful for tips on how to approach these kinds of decidability questions. Most of the examples I found online concern deciding properties of M itself, not of L(M): { ⟨M⟩ | M does this or that }.

If the human brain is a Turing machine, how can it be certain that some problems are undecidable?

I've recently read about the idea that the human brain could be a Turing machine (or Turing complete). If this is true, how is the brain able to tell that a certain problem is undecidable, for example the liar's paradox? I would suppose that a Turing machine cannot say that the liar's paradox statement is a logical paradox with no definite answer, and would instead stay stuck forever.

logic – Can undecidability theorems be detected by a machine?

This question was originally posted on MathOverflow, but a comment recommended that I repost it as a CS question.

This is not a mathematically formalized question. I'm sorry for that, but I think it is more a mathematical question than a philosophical one.

When we prove a theorem A that says "B is undecidable", we prove neither B nor (not B). Can a machine do the same thing? Can it detect the "meaning" of a statement, such as "something is undecidable"?

Here's one reason I do not think so.

Suppose a sentence

```
universal_Turing_machine(program, input, output)
```

is true if and only if running "program" on "input" produces "output". Of course, if the program does not halt, the sentence is false for every "input" and "output".

Now let x be the Gödel number of a sentence. Consider the following sentence:

```
there is no y such that:
  y is the Gödel number of a proof that ends with the sentence coded by x
  and universal_Turing_machine(program, y, true)
```

If "program" acts as a proof checker accepting valid proofs, this sentence plainly means "the sentence encoded by x is not provable". Otherwise, the sentence says nothing about undecidability. Therefore, if a machine can detect undecidability theorems, it must be able to detect programs that act as proof checkers accepting valid proofs.

But by Rice's theorem, detecting whether a program has a given non-trivial semantic property is impossible.

Do you think this reasoning makes sense? Since this is not a purely mathematical question, I would like to hear your opinions. Thank you.

How is it that Windows (or Linux) is apparently independent of the machine?

How can I, for example, insert a Windows installation DVD into almost any computer, whether it has an AMD or an Intel processor, and still boot it? How is the installer written so that it can run on either processor?

copy paste – Exchange Clipboard only with a virtual machine running in Parallels 14

I'm using Parallels Desktop 14 for Mac, I'm using Ubuntu in the virtual machine, with Parallels special software installed in Ubuntu.

For security reasons, I always turn on the `Isolate Linux from Mac` setting in the security panel configuration. Unfortunately, I would sometimes like to transfer text between the host Mac and the guest virtual machine.

Is there a way to allow only clipboard exchange between the Mac and the VM while keeping everything else isolated (no shared files, etc.)?

sandbox – Is there a way to safely run unreliable code on a local machine?

Generally speaking, the term you are looking for is "sandboxing" (as in a place where the kids can make a mess without affecting anything else). Sandboxing is a fairly hard problem, but it is also very useful, so it is used in many different places.

For example, all modern web browsers have not one but two layers of sandboxing. The older one is the JavaScript sandbox: people run untrusted JS code on their computers all the time, usually safely. This is implemented as an API sandbox: JS that you run in a browser simply has no functions you can call, and no way to define functions, that act in a particularly dangerous way (like reading or writing arbitrary files, opening arbitrary network connections, and so on); everything risky is either restricted to a safe subset or unavailable. However, JS is complicated and JS runtime environments are prone to security bugs. Modern browsers therefore also run their rendering and JS engines inside a sandbox. This second sandbox is implemented as privilege-restricted processes: the browser process that creates the window spawns a number of child processes, each with extremely limited permissions, and uses them to handle all untrusted code, communicating with them through very carefully secured, minimal inter-process communication channels. So even if a malicious script finds a way out of the API sandbox and can call arbitrary C functions, most of the interesting actions (such as reading files) will still fail, because the exploited process simply is not permitted to perform them.
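The API-sandbox idea — untrusted code can only reach a vetted subset of operations — can be illustrated outside a browser too. A toy sketch of my own in Python (not how browsers implement it): evaluate untrusted arithmetic by whitelisting AST node types, so touching files or the network is not even expressible.

```python
import ast
import operator

# Toy API sandbox: untrusted input may only use the whitelisted
# operations below; everything else is rejected before it can run.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}

def safe_eval(expr):
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("operation outside the sandboxed subset")
    return walk(ast.parse(expr, mode="eval"))

assert safe_eval("2 + 3 * 4") == 14
try:
    safe_eval("__import__('os').remove('x')")  # a Call node: rejected
    assert False, "escape should have been blocked"
except ValueError:
    pass
```

The security property comes from the whitelist, not from trying to enumerate dangerous operations: anything not explicitly allowed is refused.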

Process-based sandboxes are quite common these days, and all modern operating systems have at least some support for them. The Windows, Mac, iOS, and Android application stores all offer sandboxed apps. Linux provides sandboxing features used by the likes of Docker (and Chrome on Linux). FreeBSD (and its derivatives) has "jails", and so on. There are many ways to do it. A relatively simple sandbox can be built just with user permissions and access control lists: you create a new user account for the sandbox, give it no access by default (which is tricky, because normally there are many things every user can at least read), then grant that account access to exactly the things the sandboxed code is allowed to touch. A process started as that user will have only very limited access to the system, unless/until it finds a way out.
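The "dedicated low-privilege account" approach boils down to dropping privileges before running the untrusted code. A POSIX sketch using Python's os module — the injectable keyword parameters are my addition, existing only so the ordering can be demonstrated without actually being root:

```python
import os

def drop_privileges(uid, gid, *, _setgroups=os.setgroups,
                    _setgid=os.setgid, _setuid=os.setuid):
    """Drop to an unprivileged sandbox account before running
    untrusted code (the process must start as root). Order matters:
    groups and gid first -- after setuid there is no way back."""
    _setgroups([])   # drop supplementary groups
    _setgid(gid)     # set group while we still have the privilege
    _setuid(uid)     # irreversible: root is gone from here on

# Without root we can still verify the call order by injecting recorders:
calls = []
drop_privileges(1000, 1000,
                _setgroups=lambda g: calls.append("groups"),
                _setgid=lambda g: calls.append("gid"),
                _setuid=lambda u: calls.append("uid"))
assert calls == ["groups", "gid", "uid"]
```

The gid-before-uid ordering is the classic pitfall: once setuid succeeds, the process no longer has the privilege to change its group.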

Unfortunately, building a secure sandbox this way tends to be somewhat platform-specific, and it is complex on every platform. I have personally reviewed, and found holes in, the sandboxes used by products from several major software companies (you have heard of them; you may even have their products open right now). The app-store sandbox model, which gives the developer little control over the sandbox in exchange for the operating system handling its entire creation and enforcement, is appealing, and if you are writing for Mac or recent Windows, I recommend you consider it.

Another kind of sandbox, available on any modern desktop operating system but fairly expensive to run, is a virtual machine (VM) sandbox. Using any major VM platform (VMware, VirtualBox, Hyper-V, whatever), you can create a VM that has little or no access to the host operating system. This is how cloud computing providers usually operate: from Amazon's point of view, your little EC2 instance runs untrusted code but needs to share hardware with other untrusted users to be profitable, and virtual machines make that possible. It is also a common way to examine potentially malicious code, because the host operating system can monitor what the virtual machine does, while the virtual machine cannot control the host.

Is this language Turing-recognizable?

We have the following language:

$$\textsf{DEC-HALT} = \{\, \langle M \rangle \mid M \text{ is a TM and the set of words on which } M \text{ halts is Turing-recognizable} \,\}$$

I do not know how to prove if this language is recognizable, decidable or not.