I used this weighted random number generator:
import random

def weighted_choice(weights):
    # Build the running (cumulative) totals of the weights.
    totals = []
    running_total = 0
    for w in weights:
        running_total += w
        totals.append(running_total)
    # Draw a uniform number in [0, total weight) and return the first
    # index whose cumulative total exceeds it.
    rnd = random.random() * running_total
    for i, total in enumerate(totals):
        if rnd < total:
            return i
as follows:
# The meaning of this dict is a little confusing, so here's the explanation:
# the keys are numbers, each value is the weight of that number occurring,
# and 1 - value is the weight of it not occurring. You can think of it as a
# set of biased coins (except for 2, which is a fair coin).
probabilities = { 0 : 1.0, 1 : 1.0, 2 : 0.5, 3 : 0.45, 4 : 0.4, 5 : 0.35,
                  6 : 0.3, 7 : 0.25, 8 : 0.2, 9 : 0.15, 10 : 0.1 }
numberOfDeactivations = []
for number in probabilities.keys():
    x = weighted_choice([probabilities[number], 1 - probabilities[number]])
    if x == 0:
        numberOfDeactivations.append(number)
print "chance for ", repr(numberOfDeactivations)
I quite often see 7, 8, 9 and 10 in the result.
Is there some proof or guarantee that this is correct according to probability theory?
This is mathematically correct. It's an application of inverse transform sampling (although the reason it works in this case should be relatively intuitive).
I don't know Python, so I can't say whether there are any subtleties that make this particular implementation invalid.
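For intuition: the running totals form the discrete cumulative distribution of the weights, and rnd is a uniform draw over [0, total weight), so returning the first index whose cumulative total exceeds rnd is exactly inverting that distribution. Here is a minimal empirical check of the weighted_choice function from your question (a sketch; the trial count of 100000 and the test weight 0.25 are arbitrary choices of mine): an outcome given weight p should come back roughly a fraction p of the time.

import random

def weighted_choice(weights):
    # Same function as in the question.
    totals = []
    running_total = 0
    for w in weights:
        running_total += w
        totals.append(running_total)
    rnd = random.random() * running_total
    for i, total in enumerate(totals):
        if rnd < total:
            return i

p = 0.25           # test weight, e.g. the entry for 7 in your dict
trials = 100000    # arbitrary sample size
hits = sum(1 for _ in range(trials) if weighted_choice([p, 1 - p]) == 0)
print("expected about %.3f, observed %.3f" % (p, float(hits) / trials))

With a weight of 0.25, index 0 comes back in roughly a quarter of the trials, which is why 7, 8, 9 and 10 (weights 0.25 down to 0.1) can still show up fairly often in numberOfDeactivations.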