I'm taking a Machine Learning class and we have been given our first statistics/programming exercise.
So the exercise goes like this:
Recall the story from the lecture: “Two sellers at Amazon have the same price. One has 90 positive and 10 negative reviews. The other one 2 positive and 0 negative. Who should you buy from?” Write down the posterior probabilities for the reliability (as in the lecture). Calculate p(x > y|D1, D2) using numerical integration. You can generate Beta distributed samples with the function scipy.stats.beta.rvs(a, b, size).
What we know from the lecture is the following:
we applied two Beta-binomial models, p(x|D1) = Beta(x|91, 11) and p(y|D2) = Beta(y|3, 1), and computed the probability that seller 1 is more reliable than seller 2:
p(x > y | D1, D2) = ∫∫ [x > y] Beta(x|91, 11) Beta(y|3, 1) dx dy
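For concreteness, the two posteriors can be written down directly with scipy.stats (the variable names here are my own):
from scipy import stats
post_x = stats.beta(91, 11)   # p(x | D1): seller 1, 90 positive / 10 negative reviews
post_y = stats.beta(3, 1)     # p(y | D2): seller 2, 2 positive / 0 negative reviews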
So my attempt in Python looks like this:
In [1]: import numpy as np
from scipy import integrate, stats
In [2]: f = lambda x, y: stats.beta.rvs(91, 11, x) * stats.beta.rvs(3, 1, y)
In [3]: stats.probplot(result, x > y)
And I receive an error that states:
... The maximum number of subdivisions (50) has been achieved....
but ultimately there is an answer to the calculation, approx. 1.7 (we are told that the answer should be approx. 0.7).
My question is: how do I handle the [x > y] part, i.e. the probability that seller 1 (x) is more reliable than seller 2 (y)?
Almost right; I'd do something like:
from scipy import stats
N = 10000
# count how often a draw from seller 1's posterior beats a draw from seller 2's
P = sum(stats.beta.rvs(3, 1, size=N) < stats.beta.rvs(91, 11, size=N))
P / N   # Monte-Carlo estimate of p(x > y | D1, D2)
and if you want a graphical display:
import matplotlib.pyplot as plt
import numpy as np
X = np.linspace(0.6, 0.8, 501)
# posterior over the estimated probability: P "successes" out of N, uniform prior
Y = stats.beta.pdf(X, 1 + P, 1 + N - P)
plt.plot(X, Y)
plt.show()
there might be library code to do the plotting more nicely.
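If you want a number rather than a picture, the same Beta posterior over the estimate also gives an interval directly; a sketch, with the 95% level being my own choice:
lo, hi = stats.beta.interval(0.95, 1 + P, 1 + N - P)
print(lo, hi)   # central 95% interval for the Monte-Carlo estimate of p(x > y)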
The above gives a Monte-Carlo estimate of the answer. If you want a better numerical estimate you can get there with quadrature using:
from scipy.integrate import dblquad
from scipy import stats
a = stats.beta(91, 11)   # p(x | D1), seller 1's reliability
b = stats.beta(3, 1)     # p(y | D2), seller 2's reliability
# integrate the inner variable x from y to 1 and the outer variable y from 0 to 1
dblquad(
    lambda x, y: a.pdf(x) * b.pdf(y),
    0, 1, lambda x: x, lambda x: 1)
which gives me an estimate of ~0.712592804 (with an error of 2e-8).
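You can also reduce it to a one-dimensional integral, since p(x > y) = ∫ Beta(x|91, 11) F_y(x) dx where F_y is the CDF of Beta(3, 1); this is my own shortcut, not something the exercise asks for:
from scipy.integrate import quad
from scipy import stats
a = stats.beta(91, 11)
b = stats.beta(3, 1)
# P(x > y) = E_x[ F_y(x) ]: integrate the pdf of x times the cdf of y
val, err = quad(lambda t: a.pdf(t) * b.cdf(t), 0, 1)
print(val, err)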
If you want to get even more accurate, you'd want to do something analytic:
https://stats.stackexchange.com/questions/7061/binomial-probability-question
which involves the use of some transcendentals I struggle to get my head around.
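That said, for these particular numbers the analytic route collapses nicely: Beta(3, 1) has CDF F_y(t) = t^3, so p(x > y) is just the third moment of Beta(91, 11), i.e. (91/102)·(92/103)·(93/104). A quick exact check (my own aside):
from fractions import Fraction
# E[x^3] for x ~ Beta(a, b) is the product of (a + k) / (a + b + k) for k = 0, 1, 2
p = Fraction(91, 102) * Fraction(92, 103) * Fraction(93, 104)
print(float(p))   # ~0.71259, matching the quadrature result above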