I don’t really think I had to do anything more than this in ANY of the programs I have made before.
For reference, I am dividing the window height by the image height to get a ratio for scaling the width of the image. It SHOULDN’T be this difficult. I declared the resulting variable as a float, expecting it to automatically produce the proper decimal value, but since the image height is greater than the window height, I am getting nothing but zeros.
This is Xcode 5.0.2 on Mavericks, on a MacBook Air, using openFrameworks 0.8.0.
C++ will not promote types unless it has to, and you are dividing two integers, so the result is an integer. Only when the result is assigned to the float is it promoted, but by then it’s too late: the decimal part is already gone. If one of the operands on the right side of the “=” is a float, the others will be promoted to float too.
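A minimal sketch of that rule, using assumed example values (the actual window and image heights will differ):

```cpp
#include <iostream>

int main() {
    int windowHeight = 768;   // assumed example value
    int imageHeight  = 1024;  // assumed example value

    // int / int is integer division: 768 / 1024 truncates to 0,
    // and only that 0 is then converted to float.
    float truncated = windowHeight / imageHeight;

    // With one float operand, the other is promoted and the
    // division happens in floating point, giving 0.75f.
    float ratio = windowHeight / (float)imageHeight;

    std::cout << truncated << " " << ratio << std::endl;  // prints "0 0.75"
    return 0;
}
```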
Actually, what you need to convert to a float is the divisor, as in:
float jTest = iTest / float(iTest);
To be sure, I usually convert both operands to floats. It works this way because integer operations have traditionally been faster than floating-point ones; there are even separate parts of the CPU dedicated to integer and floating-point operations. So if you try to divide two integers, the compiler performs an integer division and returns an integer.
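For instance, casting both operands (again with assumed variable names and values):

```cpp
int windowHeight = 768, imageHeight = 1024;  // assumed example values

// Casting both operands makes the floating-point division explicit.
float ratio = float(windowHeight) / float(imageHeight);  // 0.75f
```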
Thanks, both of you, for the info. I just hadn’t had a problem with the more complicated mathematical things I had done in the past, and such a simple thing was causing headaches.
And, yeah… I had a bad case of the dumb, and it was the division by itself that was the main problem. If I could, I would erase this thread out of sheer embarrassment. I shouldn’t code past midnight.
Scratch that, the code didn’t work without the float() that Arturo suggested. This one works now:
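(The original snippet isn’t reproduced here, so the following is only a guess at what the working version probably looks like, assuming the heights sit in int variables; the names and the image height are made up.)

```cpp
// Hypothetical variables; the real code likely gets these from
// ofGetWindowHeight() and the loaded image, but the key point is the cast.
int windowHeight = ofGetWindowHeight();   // e.g. 768
int imageHeight  = 1024;                  // assumed image height

// float() on the divisor forces a floating-point division,
// so the ratio keeps its decimal part instead of truncating to 0.
float ratio = windowHeight / float(imageHeight);
```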
This only works with the float(). Should it be mentioned somewhere in the documentation? I couldn’t find any information online about this behaviour last night. (My google-fu may have been lacking.)