You can look up the concrete difference between float and double variables easily by searching for those terms; this group is not for answering simple definition queries, and anyway the details vary depending on your programming language and implementation.
But you can reason from fundamental principles about what the answer must be like. Increased memory size for a data type is a cost. A cost must be offset by some benefit, otherwise people wouldn’t bother having the costly type in the first place. The principal assets that computing systems deal in are time and space. Therefore, increased space requirements probably allow us to perform some computation in less time.
Why would this be? Doesn't double simply support the same computation with higher precision? It is only the same computation as long as you don't actually care about that higher precision. If you do care about a certain level of precision, then double allows you to do things in one step that you would otherwise have to emulate with multiple float operations.
And there you have your answer: double allows you to perform calculations in a range, or at a precision, greater than what you could achieve with float values. This means it very much depends on the details of your task whether it is worthwhile to use the smaller or the larger numeric type. (And that is exactly the answer you'd get if you simply asked that question e.g. in a C programming forum.)