optimization – What is the correct term / theory for predicting binary variables based on their continuous value?

I am working with a mixed-integer linear programming problem that has about 3500 binary variables. As a general rule, it takes IBM CPLEX about 72 hours to reach an objective with a gap of about 15-20% against the best bound. In the solution, about 85 to 90 of the binary variables take the value 1 and the rest are zero. The objective value is about 20 to 30 million. I have created an algorithm in which I predict (by fixing their values) 35 of the binaries to the value 1 and leave the remaining ones to be solved by CPLEX. This reduced the time needed to reach the same objective to around 24 hours (the best bound is slightly compromised). I have tried this approach on other problems of the same type and it worked for them too. I call this approach "probabilistic prediction", but I do not know what the standard term for it is in mathematics.

Here is the algorithm in pseudocode:

Let y = ContinuousObjective(AllBinariesSet);   // objective of the LP relaxation
WriteValuesOfTheContinuousSolution();
Let count = 0;
Let processedBinaries = EmptySet;
while (count < 35)
{
    // unfixed binary with the largest fractional value, between 0 and 1 (usually less than 0.6)
    Let maxBinary = AllBinariesSet.ExceptWith(processedBinaries).Max();
    processedBinaries.Add(maxBinary);
    maxBinary.LowerBound = 1;                  // fix maxBinary to 1
    Let z = y;
    y = ContinuousObjective(AllBinariesSet);   // re-solve the relaxation
    if (z > y + 50000)                         // objective degraded by more than 50000
    {
        // undo the fixing: restore the original bounds of maxBinary
        maxBinary.LowerBound = 0;
        maxBinary.UpperBound = 1;
        y = z;
    }
    else
    {
        WriteValuesOfTheContinuousSolution();
        count = count + 1;
    }
}
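
For concreteness, here is a minimal runnable sketch of the same loop in Python, using PuLP with its bundled CBC solver instead of CPLEX (an assumption made here so the example runs without a CPLEX licence); the toy model, the helper solve_relaxation, and the constants N_FIX and THRESHOLD are illustrative stand-ins for the real 3500-variable problem and its values 35 and 50000:

import pulp

def solve_relaxation(prob, binaries):
    # Solve the LP relaxation by temporarily making the binaries continuous.
    for v in binaries:
        v.cat = pulp.LpContinuous
    status = prob.solve(pulp.PULP_CBC_CMD(msg=False))
    for v in binaries:
        v.cat = pulp.LpBinary
    if status != pulp.LpStatusOptimal:
        return None
    return pulp.value(prob.objective)

# Toy maximization model standing in for the real problem.
prob = pulp.LpProblem("toy", pulp.LpMaximize)
x = [pulp.LpVariable(f"x{i}", lowBound=0, upBound=1, cat=pulp.LpBinary)
     for i in range(40)]
prob += pulp.lpSum((i % 7 + 1) * x[i] for i in range(40))  # objective
prob += pulp.lpSum(x) <= 10                                # one constraint

N_FIX, THRESHOLD = 5, 1.0   # the post uses 35 and 50000 on the real model
y = solve_relaxation(prob, x)
processed = set()
count = 0
while count < N_FIX and len(processed) < len(x):
    # Unfixed binary with the largest fractional value in the relaxation.
    candidates = [v for v in x if v.name not in processed]
    max_binary = max(candidates, key=lambda v: v.value() or 0)
    processed.add(max_binary.name)
    max_binary.lowBound = 1                 # fix it to 1
    z = y
    y = solve_relaxation(prob, x)
    if y is None or z > y + THRESHOLD:      # degraded too much (or infeasible)
        max_binary.lowBound = 0             # undo the fixing
        y = z
    else:
        count = count + 1

print("fixed to 1:", [v.name for v in x if v.lowBound == 1])
# The partially fixed model would now be handed back to the MIP solver.

The threshold acts as a guard: any fixing that degrades the relaxation objective by more than the threshold is rolled back, which keeps the relaxation bound close to the original one.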

In my opinion, this works because the solution is very sparse (only 85 to 90 of the 3500 binaries are at 1) and there are many good solutions.