I have an array named 'data_count' that stores the counts of 9 digit groups. I used Benford's Law to generate an expected-count array called 'expected_counts'. I want to test whether the two arrays have the same distribution. I have used both the stats.chisquare and stats.chi2_contingency functions, but the results turned out quite different. The [scipy guide](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.chi2_contingency.html) says they should give the same results:
res = chi2_contingency(obs, correction=False)
(res.statistic, res.pvalue) == stats.chisquare(obs.ravel(),
                                               f_exp=ex.ravel(),
                                               ddof=obs.size - 1 - dof)
Why didn't it work in my case? Here is my code:
import numpy as np
from scipy import stats
data_count = [34, 10, 8, 16, 14, 5, 4, 7, 4]
expected_counts = [31, 18, 13, 10, 8, 7, 6, 5, 5]
expected_percentage = [(i / sum(expected_counts)) * 100 for i in expected_counts]
data_percentage = [(i / sum(data_count)) * 100 for i in data_count]
# method 1
res1 = stats.chisquare(f_obs=data_percentage, f_exp=expected_percentage)
print(res1.pvalue)
# method 2
combined = np.array([data_count, expected_counts])
res2 = stats.chi2_contingency(combined, correction=False)
print(res2.pvalue)
The output is: 0.04329908403353834 and 0.45237501133745583.
The documentation of chi2_contingency does not suggest that your code will produce the same statistic and p-value for the two tests. It shows the relationship between the two tests only under specific conditions: the observed data form a contingency table, the expected frequencies are computed from that table under the null hypothesis of independence, the ddof passed to chisquare is adjusted to match the contingency-table degrees of freedom, and the continuity correction is disabled. In that case:
import numpy as np
from scipy import stats

# Example contingency table
observed = np.array([[10, 10, 20],
                     [20, 20, 20]])

# Expected frequencies under the null hypothesis of independence
expected = stats.contingency.expected_freq(observed)

# Degrees of freedom, per the documentation of `chi2_contingency`
dof = observed.size - sum(observed.shape) + observed.ndim - 1

res1 = stats.contingency.chi2_contingency(observed, correction=False)
res2 = stats.chisquare(observed.ravel(), f_exp=expected.ravel(),
                       ddof=observed.size - 1 - dof)

np.testing.assert_allclose(res1.statistic, res2.statistic)
np.testing.assert_allclose(res1.pvalue, res2.pvalue)
Neither of these conditions holds in your case.
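To see this concretely, here is a quick check (a sketch reusing the combined array from your code): chi2_contingency treats both rows as observed samples and derives its own expected frequencies under independence, which are nothing like your expected_counts.

import numpy as np
from scipy import stats

data_count = np.asarray([34, 10, 8, 16, 14, 5, 4, 7, 4])
expected_counts = np.asarray([31, 18, 13, 10, 8, 7, 6, 5, 5])

combined = np.array([data_count, expected_counts])
# The expected frequencies chi2_contingency works with internally:
# outer(row totals, column totals) / grand total
print(stats.contingency.expected_freq(combined))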
Instead, your data are the frequencies of observations in each category, and your null hypothesis is that the observations were drawn from a distribution with the given expected frequencies. In that case, chisquare is the appropriate function to use, provided the observed and expected counts sum to the same total. Under the assumption that the relative frequencies in your expected_counts are correct and that the array is just not normalized properly:
import numpy as np
from scipy import stats

data_count = np.asarray([34, 10, 8, 16, 14, 5, 4, 7, 4])
expected_counts = np.asarray([31, 18, 13, 10, 8, 7, 6, 5, 5])

# The observed and expected counts must sum to the same total.
# Assuming that the relative frequencies of your expected counts
# are correct and that the array is just not normalized properly:
expected_counts = expected_counts * np.sum(data_count) / np.sum(expected_counts)

# I don't know how you generated your expected frequencies, but
# assuming no `ddof` adjustment is needed:
res = stats.chisquare(data_count, f_exp=expected_counts)
print(res)
# Power_divergenceResult(statistic=16.255158633621637, pvalue=0.03887025202788101)
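As an aside, since your expected_counts come from Benford's law, the expected frequencies can also be built directly from the first-digit probabilities P(d) = log10(1 + 1/d) and scaled to the observed total, which removes the normalization step. This is only a sketch of that idea, not necessarily how your expected_counts were generated:

import numpy as np
from scipy import stats

data_count = np.asarray([34, 10, 8, 16, 14, 5, 4, 7, 4])

# Benford first-digit probabilities: P(d) = log10(1 + 1/d) for d = 1..9;
# they sum to exactly 1.
digits = np.arange(1, 10)
benford_probs = np.log10(1 + 1 / digits)

# Scale so the expected counts sum to the observed total, as chisquare requires.
expected = benford_probs * data_count.sum()

res = stats.chisquare(data_count, f_exp=expected)
print(res)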