Friday, September 11, 2015

int or float to represent numbers that can only be whole or "#.5"

Situation

I am in a situation where I will have a lot of numbers, roughly in the range 0 - 15. The vast majority are whole numbers, but a few will have decimal values. All of the ones with a decimal value will be "#.5", so 1.5, 2.5, 3.5, etc., but never 1.1, 3.67, etc.

I'm torn between using float and int (with the value multiplied by 2 so the decimal is gone) to store these numbers.

Question

Because every fractional value will be exactly .5, can I safely use float without worrying about the weirdness that comes along with floating point numbers? Or do I need to use int? If I do use int, can every such smallish doubled value be divided by 2 to safely give the exactly correct float?

Is there a better way I am missing?
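For what it's worth, the exactness question can be checked directly. The sketch below is Python (whose float is an IEEE-754 double, but the same argument holds for a 32-bit float: every multiple of 0.5 in 0 - 15 needs only a handful of significand bits, so it is represented exactly). It verifies both that each half-step value is exact and that the "store the int doubled, divide by 2 on read" scheme round-trips losslessly:

```python
# Every multiple of 0.5 in the 0-15 range, i.e. 0.0, 0.5, 1.0, ..., 15.0.
# 0.5 is 2^-1, an exact power of two, so all of these values are exactly
# representable in binary floating point (float or double alike).
values = [k / 2 for k in range(31)]

for v in values:
    doubled = int(v * 2)          # the "int times 2" storage scheme
    assert doubled / 2 == v       # dividing by 2 recovers the value exactly
    assert v + v == doubled       # no hidden rounding error accumulates here
```

The usual floating point "weirdness" comes from values like 0.1 that have no finite binary representation; half-steps are not in that category.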

Other info

I'm not considering double because I don't need that kind of precision or range.

I'm storing these in a wrapper class; if I go with int, then whenever I need the value I will return the stored int cast to a float and divided by 2.
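A minimal sketch of that wrapper idea, again in Python for illustration (the class name and API are hypothetical, not from the question):

```python
class HalfValue:
    """Stores a multiple-of-0.5 value as a doubled integer, so 1.5 -> 3.

    Illustrative only: name and interface are assumptions, not the
    asker's actual class.
    """

    def __init__(self, value):
        doubled = value * 2
        if doubled != int(doubled):
            # Reject anything that is not a whole or #.5 number.
            raise ValueError("value must be a multiple of 0.5")
        self._doubled = int(doubled)

    @property
    def value(self):
        # Division of a small int by 2 is exact in binary floating point,
        # so this read path introduces no rounding error.
        return self._doubled / 2
```

For example, `HalfValue(2.5).value` gives back exactly `2.5`, and `HalfValue(1.1)` raises `ValueError`. Storing the doubled int also makes equality and hashing trivially exact, which is one practical argument for the int approach even though the float approach is safe for these values.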



via Chebli Mohamed
