I have asked a very similar question before, so I will mention the previous solutions at the end. I have a website that calculates π with the client's CPU while storing the results on a server. So far I've got 701,766,448,388 points inside the circle out of 893,547,800,000 points in total. These numbers are calculated using the code below (working example at: https://jsfiddle.net/d47zwvh5/2/).
// `iterations` is defined elsewhere on the page (the number of points to throw).
let inside = 0;
const size = 500;

for (let i = 0; i < iterations; i++) {
  const Xpos = Math.random() * size;
  const Ypos = Math.random() * size;
  const dist = Math.hypot(Xpos - size / 2, Ypos - size / 2);

  if (dist < size / 2) {
    inside++;
  }
}
The problem
(4 * 701,766,448,388) / 893,547,800,000 = 3.141483638

This is the result we get. It is only correct to three decimal places: the fourth decimal digit is 4 where π has a 5.
Previous problems:
Disclaimer
It might just be that I've reached a limit, but this demonstration used 1 million points and got 3.16. Considering I've got about 900 billion points, I think the result should be more precise.

I do understand that if I want to calculate π this isn't the right way to go about it, but I just want to make sure that everything is right. So I was hoping someone could spot something wrong, or tell me whether I just need more 'dots'.
EDIT: There were quite a few comments pointing out how unrealistic the numbers were. Those comments were correct, and I have now updated the numbers.
You could easily estimate what kind of error (error bars) you should get; that's the beauty of Monte Carlo. For this, you have to compute the second moment and estimate the variance and standard deviation. The good thing is that the collected sum of squares will be the same as the sum you collect for the mean, because you just add up 1 after 1 after 1 (each sample is 0 or 1, and squaring doesn't change it).

Then you can estimate the simulation's sigma and get error bars for the desired value. Sorry, I don't know enough JavaScript, so the code here is in C#:
using System;

namespace Pi
{
    class Program
    {
        static void Main(string[] args)
        {
            ulong N = 1_000_000_000UL;    // number of samples
            var rng = new Random(312345); // RNG
            ulong v = 0UL;                // collecting values for the mean here
            ulong v2 = 0UL;               // collecting squares, should be the same as the sum for the mean

            for (ulong k = 0; k != N; ++k)
            {
                double x = rng.NextDouble();
                double y = rng.NextDouble();

                var r = (x * x + y * y < 1.0) ? 1UL : 0UL; // 1 if the point is inside the quarter circle
                v += r;
                v2 += r * r;
            }

            var mean = (double)v / (double)N;
            var varc = ((double)v2 / (double)N - mean * mean) * ((double)N / (N - 1UL)); // variance
            var stdd = Math.Sqrt(varc);     // std.dev, should be sqrt(Pi/4 * (1 - Pi/4))
            var errr = stdd / Math.Sqrt(N); // standard error of the mean

            Console.WriteLine($"Mean = {mean}, StdDev = {stdd}, Err = {errr}");

            mean *= 4.0;
            errr *= 4.0;

            Console.WriteLine($"PI (1 sigma) = {mean - 1.0 * errr}...{mean + 1.0 * errr}");
            Console.WriteLine($"PI (2 sigma) = {mean - 2.0 * errr}...{mean + 2.0 * errr}");
            Console.WriteLine($"PI (3 sigma) = {mean - 3.0 * errr}...{mean + 3.0 * errr}");
        }
    }
}
After 10⁹ samples I've got
Mean = 0.785405665, StdDev = 0.410540627166729, Err = 1.29824345388086E-05
PI (1 sigma) = 3.14157073026184...3.14167458973816
PI (2 sigma) = 3.14151880052369...3.14172651947631
PI (3 sigma) = 3.14146687078553...3.14177844921447
which looks about right. It is easy to see that in the ideal case the variance would be equal to (π/4)·(1 − π/4). It is really not necessary to compute v2; just set it to v after the simulation.
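To sketch why (assuming the usual Bernoulli argument): each sample r succeeds with probability p = π/4, the area of the quarter circle inside the unit square, so

\[
\mathbb{E}[r] = \mathbb{E}[r^2] = p \quad (\text{since } r \in \{0,1\} \Rightarrow r^2 = r),
\]
\[
\operatorname{Var}(r) = \mathbb{E}[r^2] - \mathbb{E}[r]^2 = p(1-p) = \frac{\pi}{4}\left(1-\frac{\pi}{4}\right) \approx 0.1686,
\]
\[
\sigma = \sqrt{\operatorname{Var}(r)} \approx 0.4105,
\]

which matches the simulated StdDev above.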
Frankly, I don't know why you're not getting what's expected. Precision loss in the summation might be the answer, or, as I suspect, your simulation is not producing independent samples due to seeding and overlapping sequences (so the actual N is a lot lower than 900 billion).

But using this method you can control the error and check how the computation is going.
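The summation side is easy to rule out on the JavaScript end: integer counts stay exact in JS numbers up to 2^53 − 1, and your totals are far below that. A minimal sketch of a guarded accumulator (addBatch is a hypothetical name for however your server merges client results):

let inside = 0;
let total = 0;

// Merge one client's batch into the running totals.
function addBatch(batchInside, batchTotal) {
  inside += batchInside;
  total += batchTotal;
  // Integers are exact in JS numbers only up to 2^53 - 1 (~9.0e15);
  // fail loudly instead of silently drifting past that.
  if (!Number.isSafeInteger(inside) || !Number.isSafeInteger(total)) {
    throw new RangeError("counter exceeded 2^53 - 1; switch to BigInt");
  }
}

addBatch(701766448388, 893547800000); // your totals: both well within range
console.log((4 * inside) / total);    // 3.141483638...

Since both counts are exact at this scale, plain summation of whole counts can't explain the discrepancy, which points back at the sampling itself (e.g. overlapping random sequences).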
UPDATE
I've plugged in your numbers to show that you're clearly underestimating the value. The code:
N = 893_547_800_000UL;
v = 701_766_448_388UL;
v2 = v;
var mean = (double)v / (double)N;
var varc = ((double)v2 / (double)N - mean * mean ) * ((double)N/(N-1UL));
var stdd = Math.Sqrt(varc); // should be sqrt(Pi/4 (1-Pi/4))
var errr = stdd / Math.Sqrt(N);
Console.WriteLine($"Mean = {mean}, StdDev = {stdd}, Err = {errr}");
mean *= 4.0;
errr *= 4.0;
Console.WriteLine($"PI (1 sigma) = {mean - 1.0 * errr}...{mean + 1.0 * errr}");
Console.WriteLine($"PI (2 sigma) = {mean - 2.0 * errr}...{mean + 2.0 * errr}");
Console.WriteLine($"PI (3 sigma) = {mean - 3.0 * errr}...{mean + 3.0 * errr}");
And the output:
Mean = 0.785370909522692, StdDev = 0.410564786603016, Err = 4.34332975349809E-07
PI (1 sigma) = 3.14148190075886...3.14148537542267
PI (2 sigma) = 3.14148016342696...3.14148711275457
PI (3 sigma) = 3.14147842609506...3.14148885008647
So, clearly you have a problem somewhere (the code? accuracy lost in representation? accuracy lost in summation? repeated/non-independent sampling?). The true value π = 3.14159265... lies far outside even the 3-sigma interval: your estimate is roughly 60 standard errors below it.