I have an algorithm for computing a certain integral in parallel. The solution gives very good speed-up when multiple threads are used, and the more threads, the faster the calculation. I tested it up to -N4 and the speed-up factor reached 8, i.e. running the program on 4 cores computes the integral 8 times faster than running it on 1 core. Now I want to add Runge's rule for estimating the error, because at the moment the only way to increase the accuracy of the calculation is to increase N, which specifies how many parts the original segment is split into. How can I do this?
import Data.Time
import System.Environment
import Data.Massiv.Array as A
main = do
  begT <- getCurrentTime
  putStrLn $ show $ integrateA 100000 f 0.00005 10000
  endT <- getCurrentTime
  putStrLn $ init $ show $ diffUTCTime endT begT
f :: Double -> Double
f x = sin x * cos x*x*x
integrateA :: Int -> (Double -> Double) -> Double -> Double -> Double
integrateA n f a b =
  let step = (b - a) / fromIntegral n
      sz = size segments - 1
      segments = computeAs P $ A.map f (enumFromStepN Par a step (Sz (n + 1)))
      area y0 y1 = step * (y0 + y1) / 2
      areas = A.zipWith area (extract' 0 sz segments) (extract' 1 sz segments)
   in A.sum areas
You don't need to change anything in the integral estimator you already have in order to add precision control to it with Runge's rule. Something like this should do it, I think:
-- | Returns estimated integral up to a precision, or value estimated at max
-- number of steps
rungeRule ::
     Int -- ^ Maximum number of steps as an upper bound, to prevent unbounded computation
  -> Double -- ^ ε - precision
  -> Int -- ^ Starting value of @n@
  -> Double -- ^ Θ - either 1/3 for trapezoidal and midpoint rules or 1/15 for Simpson's
  -> (Int -> Double) -- ^ Integral estimator
  -> Either Double (Int, Double)
rungeRule nMax epsilon n0 theta integralEstimator =
  go (integralEstimator n0) (2 * n0)
  where
    go prevEstimate n
      | n >= nMax = Left prevEstimate
      | theta * abs (curEstimate - prevEstimate) < epsilon =
        Right (n, curEstimate)
      | otherwise = go curEstimate (2 * n)
      where
        curEstimate = integralEstimator n
trapezoidal ::
     Double -- ^ ε - precision
  -> (Double -> Double) -- ^ f(x) - function to integrate
  -> Double -- ^ a - from
  -> Double -- ^ b - to
  -> Either Double (Int, Double)
trapezoidal epsilon f a b =
  rungeRule 100000 epsilon 10 (1 / 3) (\n -> integrateA n f a b)
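For reference, the Θ factor comes from the standard Runge estimate: for a method of order p, the error of the refined estimate is approximately

    |I_2n - I_n| / (2^p - 1)

which gives 1/(2^2 - 1) = 1/3 for the trapezoidal and midpoint rules and 1/(2^4 - 1) = 1/15 for Simpson's rule.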
If we run it, we get promising results:
λ> trapezoidal 0.0005 (\x -> x * x) 10 20
Right (640,2333.333740234375)
λ> trapezoidal 0.00005 (\x -> x * x) 10 20
Right (2560,2333.3333587646484)
λ> trapezoidal 0.00000005 (\x -> x * x) 10 20
Right (81920,2333.3333333581686)
λ> trapezoidal 0.000000005 (\x -> x * x) 10 20
Left 2333.3333333581686
Side note: the way you wrote your function f suggests that
you expect: f x = (sin x) * (cos (x*x*x))
while in reality it parses as: f x = (sin x) * (cos x) * x * x
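If the former is what you meant, parenthesizing the argument of cos fixes it (a small sketch, assuming sin(x) * cos(x³) is the intended integrand):

f :: Double -> Double
f x = sin x * cos (x * x * x)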
Edit:
The solution presented above is general enough to work for all integral approximation rules, but there is some duplicate work happening at each iteration of the Runge rule: in the case of the trapezoidal rule, half of the elements are recomputed every time, which I saw as a potential optimization. What comes next is a bit more advanced usage of massiv, so I am not going to be able to elaborate on how it works, except to say that the segments array that is passed around is used to access the values computed at the previous step.
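Roughly, the idea is that when n doubles, all previously computed samples land on the even indices of the new segments array, so only the odd-index midpoints have to be evaluated. For n = 4, for example (just an illustration of the indexing in trapezoidalMemoized below, not extra code to add):

-- n = 4, step = (b - a) / 4, grid of 5 points (indices 0..4):
--   index 0: f a               <- prevSegments ! 0   (written at 0 * 2)
--   index 1: f (a + step)      <- newly computed     (written at 0 * 2 + 1)
--   index 2: f (a + 2 * step)  <- prevSegments ! 1   (written at 1 * 2)
--   index 3: f (a + 3 * step)  <- newly computed     (written at 1 * 2 + 1)
--   index 4: f (a + 4 * step)  <- prevSegments ! 2   (written at 2 * 2)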
trapezoidalMemoized ::
     Int
  -> Array P Ix1 Double
  -> (Double -> Double)
  -> Double
  -> Double
  -> (Double, Array P Ix1 Double)
trapezoidalMemoized n prevSegments f a b =
  let step = (b - a) / fromIntegral n
      sz = size segments - 1
      curSegments =
        fmap f (enumFromStepN Seq (a + step) (2 * step) (Sz (n `div` 2)))
      segments =
        computeAs P $
        makeLoadArrayS (Sz (n + 1)) 0 $ \w -> do
          A.iforM_ prevSegments $ \i e -> w (i * 2) e
          A.iforM_ curSegments $ \i e -> w (i * 2 + 1) e
      area y0 y1 = step * (y0 + y1) / 2
      areas = A.zipWith area segments (extract' 1 sz segments)
   in (A.sum areas, segments)
trapezoidalRungeMemo ::
     Double -- ^ ε - precision
  -> (Double -> Double) -- ^ f(x) - function to integrate
  -> Double -- ^ a - from
  -> Double -- ^ b - to
  -> Either Double (Int, Double)
trapezoidalRungeMemo epsilon f a b = go initEstimate initSegments 4
  where
    (initEstimate, initSegments) =
      trapezoidalMemoized 2 (A.fromList Seq [f a, f b]) f a b
    nMax = 131072 -- 2 ^ 17
    theta = 1 / 3
    go prevEstimate prevSegments n
      | n >= nMax = Left prevEstimate
      | theta * abs (curEstimate - prevEstimate) < epsilon =
        Right (n, curEstimate)
      | otherwise = go curEstimate curSegments (2 * n)
      where
        (curEstimate, curSegments) =
          trapezoidalMemoized n prevSegments f a b
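It is used the same way as trapezoidal above; a minimal usage sketch (result is just a hypothetical binding name, and the n it stops at will differ from the non-memoized version, since this one starts from n = 2 and doubles from there):

result :: Either Double (Int, Double)
result = trapezoidalRungeMemo 0.00005 (\x -> x * x) 10 20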
And making it parallelizable is even more involved:
-- Requires additional import: `Data.Massiv.Array.Unsafe`
trapezoidalMemoizedPar ::
     Int
  -> Array P Ix1 Double
  -> (Double -> Double)
  -> Double
  -> Double
  -> (Double, Array P Ix1 Double)
trapezoidalMemoizedPar n prevSegments f a b =
  let step = (b - a) / fromIntegral n
      sz = size segments - 1
      curSegments =
        fmap f (enumFromStepN Seq (a + step) (2 * step) (Sz (n `div` 2)))
      segments =
        computeAs P $
        unsafeMakeLoadArray Par (Sz (n + 1)) Nothing $ \scheduler _ w -> do
          splitLinearlyWith_
            scheduler
            (unSz (size prevSegments))
            (unsafeLinearIndex prevSegments) $ \i e -> w (i * 2) e
          splitLinearlyWith_
            scheduler
            (unSz (size curSegments))
            (unsafeLinearIndex curSegments) $ \i e -> w (i * 2 + 1) e
      area y0 y1 = step * (y0 + y1) / 2
      areas = A.zipWith area segments (extract' 1 sz segments)
   in (A.sum areas, segments)
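The driver for it is the same as trapezoidalRungeMemo with the parallel worker swapped in; a sketch under that assumption (trapezoidalRungeMemoPar is just a name I made up, it is not defined anywhere above):

trapezoidalRungeMemoPar ::
     Double -- ^ ε - precision
  -> (Double -> Double) -- ^ f(x) - function to integrate
  -> Double -- ^ a - from
  -> Double -- ^ b - to
  -> Either Double (Int, Double)
trapezoidalRungeMemoPar epsilon f a b = go initEstimate initSegments 4
  where
    -- Same bootstrap as the sequential memoized version, but every
    -- refinement step uses the parallel worker.
    (initEstimate, initSegments) =
      trapezoidalMemoizedPar 2 (A.fromList Seq [f a, f b]) f a b
    nMax = 131072 -- 2 ^ 17
    theta = 1 / 3
    go prevEstimate prevSegments n
      | n >= nMax = Left prevEstimate
      | theta * abs (curEstimate - prevEstimate) < epsilon = Right (n, curEstimate)
      | otherwise = go curEstimate curSegments (2 * n)
      where
        (curEstimate, curSegments) = trapezoidalMemoizedPar n prevSegments f a b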