I have a series of experiments, where the outcome of each experiment is an integer with a known minimum value (which would be the best case) and an unknown maximum value. These values are different for every experiment.
Is there a way to normalize such experiments to, for example, [0,1] or any other interval?
The outcome of the experiments is a set of integers c1, c2, ..., cn like this:
f(g1) = c1 with 2 <= g1_min <= c1 <= inf,
f(g2) = c2 with 2 <= g2_min <= c2 <= inf,
...
f(gn) = cn with 2 <= gn_min <= cn <= inf.
I'm looking for a way to visualize the outcome of the experiments and to display how much each cn differs from its respective (optimal) gn_min.
Do you have any idea how to do this?
Greetings :-)
You could play around with something like:
(1.0 / (x - min + 1.0))
You can take the logarithm of the denominator if you want that kind of scale. Either way, the result equals 1 at the minimum and approaches 0 as x goes to infinity. Basic calculus.
1.0 / (log10(x - min + 1) + 1.0);
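A minimal sketch in Python of how you might apply both variants to your cn values, assuming each experiment is just a pair (c, g_min); the function names and sample data are made up for illustration:

import math

def normalize(c, g_min):
    # 1.0 at the optimum (c == g_min), falls toward 0 as c grows
    return 1.0 / (c - g_min + 1.0)

def normalize_log(c, g_min):
    # Same limits, but a much slower falloff for large deviations
    return 1.0 / (math.log10(c - g_min + 1.0) + 1.0)

# Hypothetical outcomes: (c_i, g_i_min) for each experiment
experiments = [(2, 2), (7, 3), (120, 5), (10000, 4)]

for c, g_min in experiments:
    print(f"c={c:>6}, g_min={g_min}: "
          f"linear={normalize(c, g_min):.4f}, "
          f"log={normalize_log(c, g_min):.4f}")

The log variant keeps large deviations from being squashed into nearly indistinguishable values close to 0, which helps when the cn span several orders of magnitude.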