As long as you're working with rational numbers, or anything else that can be written as an expression, you can use integer data together with division or other operations to represent whatever decimal or fraction you want exactly.
This is a rather contrived solution, but say you have a number like 3.2 that you want to work with, and you don't want to use floating point because 3.2 has no exact binary floating-point representation. Rather than
Code:
float importantNumber = 3.2f;
perhaps you could do
Code:
int importantNumber[2] = {32, -1};
where the first number is an integer mantissa and the second is the power of ten you multiply it by to recover the value, so {32, -1} means 32 × 10^(-1) = 3.2 (exponentiation here, not C's ^ XOR operator). There's probably a better way to do it, but even this is better than using floats if you need exactness. If you needed to do a calculation with importantNumber, you'd operate on importantNumber[0] with integer arithmetic, keep track of how the power of ten changes, and store the result in the same {mantissa, exponent} form.
It's usually not important for smaller-scale applications, but where exactness is absolutely necessary (money is the classic case), you have to do something like this: binary floating point can't represent most decimal fractions exactly, and the rounding error compounds as you calculate. This includes doubles; they have more precision, but they're still representing the number as a floating point rather than an integer.