My model is a binary classifier.
With the exact same architecture, the model sometimes reaches high accuracy (90% or so); at other times it only ever predicts one class (so the accuracy stays stuck at a single value the whole time); and on other runs the loss value is "nan" (I guess the loss over- or underflows until it is no longer a finite number).
I've tried simplifying my architecture (down to 2 Conv2D layers and 2 dense layers), seeding the random number generators and the kernel initializers, and changing the learning rate, but none of these actually solves the inconsistency: the model may train once with great accuracy, but if I run it again without changing any code, I get a very different result (an unchanging accuracy because it only ever predicts one class, or a "nan" loss).
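To show the kind of seeding I mean, here is a minimal, self-contained sketch (NumPy only, no Keras; the Glorot-uniform helper and the layer sizes are illustrative stand-ins for my Conv2D kernel initializers — with TensorFlow/Keras I additionally call `tf.random.set_seed`):

```python
import random
import numpy as np

def seed_everything(seed: int) -> None:
    """Seed the RNGs; in my real code I also call tf.random.set_seed(seed)."""
    random.seed(seed)
    np.random.seed(seed)

def glorot_uniform(fan_in: int, fan_out: int, rng: np.random.Generator) -> np.ndarray:
    """Stand-in for a Glorot-uniform kernel initializer."""
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

seed_everything(0)
# Two inits with the same seed should produce identical starting weights.
w1 = glorot_uniform(9, 16, np.random.default_rng(0))
w2 = glorot_uniform(9, 16, np.random.default_rng(0))
assert np.allclose(w1, w2)
```

Despite this kind of seeding, the run-to-run behaviour still differs, which is what I don't understand.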
How can I solve these problems:
1. The model making the same prediction for the entire dataset (predicting only one class all the time)?
2. Inconsistent, non-reproducible results (the problems above come and go without any code change)?
3. Getting "nan" loss values at random (how can I get rid of them permanently)?