On the Wikipedia page for IEEE 754 I found a diagram that shows the relationship between floating-point precision and the actual floating-point value (https://en.wikipedia.org/wiki/IEEE_754#Formats):
Is it correct to interpret this diagram as meaning that smaller values are generally "more accurate" than bigger values?
In my concrete case, I have an application that accumulates mass flow (in the range of 0–90 grams/s) into a float accumulator/totalizer. With the precision diagram in mind, I thought it would be better to use a "bigger unit" for the totalizer, so that the actual float value always stays in a lower range: in my case that means using tons instead of grams as the totalizer unit.
Is that reasoning correct, or is there no benefit to having the float totalizer count in tons rather than grams?
The correct interpretation of the diagram is that precision is relative to magnitude.
Let's say your number is around 100 (because you're measuring in grams). That means your precision is around 1e-5 grams — about the mass of a small grain of sand.
Now let's say that you instead measure in solar masses (one solar mass is about 2e33 grams). Your number is now 5e-32 solar masses, and your precision is around 5e-39 solar masses. That's still about the mass of a small grain of sand.
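You can check this directly: `numpy.spacing` gives the gap to the next representable value, i.e. the absolute precision at a given magnitude. A small sketch (the 100 g value and the rough 2e33 g solar mass are just the numbers from the example above):

```python
import numpy as np

# Absolute precision ("unit in the last place") of float32
# at two very different magnitudes: the same 100 g mass,
# expressed in grams and in solar masses (~2e33 g).
grams = np.float32(100.0)
solar = np.float32(100.0 / 2e33)   # ~5e-32 solar masses

print(np.spacing(grams))           # ~7.6e-6 grams
print(np.spacing(solar))           # ~5.9e-39 solar masses

# The *relative* precision is about 1e-7 in both cases:
print(np.spacing(grams) / grams)
print(np.spacing(solar) / solar)
```

The absolute spacing differs by ~33 orders of magnitude, but relative to the stored value it is the same in both unit systems, which is exactly the point.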
So no, there is no benefit. The whole point of floating-point numbers is to keep you from having to worry about magnitude.
If you're worried about the overall accuracy of summing many numbers, use Kahan summation.
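A minimal sketch of Kahan summation, using numpy's float32 to mimic a single-precision accumulator (the 0.09 g sample value and the sample count are made up for illustration):

```python
import numpy as np

def kahan_sum_f32(values):
    """Kahan compensated summation with a float32 accumulator.

    The compensation term c captures the low-order bits that are
    lost when a small sample is added to a large running total,
    and feeds them back into the next addition.
    """
    total = np.float32(0.0)
    c = np.float32(0.0)
    for v in values:
        y = np.float32(v) - c
        t = total + y
        c = (t - total) - y
        total = t
    return float(total)

# Made-up illustration: one million 0.09 g samples,
# so the exact total is 90000 g.
sample = np.float32(0.09)
n = 1_000_000

naive = np.float32(0.0)
for _ in range(n):
    naive = naive + sample      # plain float32 accumulation

compensated = kahan_sum_f32([sample] * n)

print(float(naive))             # drifts away from 90000 g
print(compensated)              # stays close to 90000 g
```

Once the naive total grows large, each small 0.09 g addition is rounded to the accumulator's coarse spacing and the error compounds; the compensated sum stays within float32 precision of the true total.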