This looks a bit confusing. To calculate the standard deviation you can simply use `np.std()`, and the standard deviation is the square root of the variance. However, when we calculate the variance of a *sample*, we divide by n - 1, so `np.std()` shouldn't give the correct output here.
Is there another way to calculate the standard deviation of a sample, or do we need to calculate it manually?
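For example (made-up numbers), the default divisor in `np.std()` is n, which disagrees with the manual n - 1 formula:

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])  # made-up sample

# Default np.std() divides by n (population standard deviation)
print(np.std(x))  # 2.0

# Sample standard deviation divides by n - 1
print(np.sqrt(((x - x.mean()) ** 2).sum() / (len(x) - 1)))  # ~2.1381
```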
You can specify the delta degrees of freedom when you use `np.std()`. Just pass the `ddof` parameter; the divisor used in the calculation is n - ddof, so `ddof=1` gives you the sample (n - 1) formula:
```python
np.std(x, ddof=1)
```
You can read more about it in the docs.
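A quick sanity check (same made-up numbers as above) that `ddof=1` matches the manual sample formula:

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])  # made-up sample

# ddof=1 makes the divisor n - 1 (sample standard deviation)
sample_std = np.std(x, ddof=1)

# Manual check: sqrt(sum((x - mean)^2) / (n - 1))
manual = np.sqrt(((x - x.mean()) ** 2).sum() / (len(x) - 1))

print(sample_std, manual)  # both ~2.1381
```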