I'm trying to check the convergence of the midpoint method for numerical integration. I can get a table of data, but I'm not sure how to graph it or add a log scale. Here is my code so far. Any help is much appreciated.
from math import exp, pi
import numpy as np
import matplotlib.pyplot as plt

def midpoint(f, a, b, n):
    h = float((b - a) / n)
    result = 0
    for i in range(n):
        result += f((a + h / 2.0) + i * h)
    result *= h
    return result
#g = lambda y: (y**3) / (exp(y) - 1)
g = lambda y: (exp(-y) * y**3) / (1 - exp(-y))

a = 0
b = 1000

print('      n measurement')
for i in range(1, 20):
    n = 2**i
    m = midpoint(g, a, b, n)
    print('%7d %.16f' % (n, m))
"""r=(pi**4/15)
plt.plot(n,m,label="numberical")
plt.plot(n,r,label="analytical")
plt.xlabel('n')
plt.ylabel('intergral')
plt.legend()
plt.show()"""
"""plt.semilogx(n,m)
plt.show()"""
I assume you want to plot the approximation rather than the error against the number of intervals. The key changes to make are:
1) n and m should be arrays or lists, not scalars;
2) since the exact result is a constant, which corresponds to a horizontal line, it should be plotted from (xmin, r) to (xmax, r);
3) use ax.set_xscale('log') to get a log-scaled x axis.
The complete code is as follows. I made a few more changes, using numpy to improve readability.
from math import exp, pi
import numpy as np
import matplotlib.pyplot as plt

def midpoint(f, a, b, n):
    # composite midpoint rule with n subintervals on [a, b]
    h = float((b - a) / n)
    result = 0
    for i in range(n):
        result += f((a + h / 2.0) + i * h)
    result *= h
    return result

g = lambda y: (exp(-y) * y**3) / (1 - exp(-y))
a = 0
b = 1000

n = 2 ** np.arange(1, 20)    # interval counts: 2, 4, ..., 2**19
f = np.vectorize(midpoint)   # lets midpoint accept the array n
m = f(g, a, b, n)            # one approximation per interval count
r = pi**4 / 15               # analytical value of the integral

plt.plot(n, m, label="numerical")
ax = plt.gca()
ax.set_xscale('log')
ax.set_ylim(auto=False)      # freeze y limits before adding the horizontal line
xmin, xmax = plt.xlim()
plt.plot([xmin, xmax], [r, r], label="analytical")
plt.xlabel('n')
plt.ylabel('integral')
plt.legend()
plt.show()
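If you do want to check convergence directly, a useful follow-up is to plot the absolute error |m - r| against n on log-log axes; the slope of that line reveals the observed order of the midpoint rule. A minimal sketch, using a numpy-vectorised midpoint helper for speed (this rewrite is my own, not from the question):

```python
from math import pi
import numpy as np
import matplotlib.pyplot as plt

def midpoint(f, a, b, n):
    # vectorised composite midpoint rule: evaluate f at all n midpoints at once
    h = (b - a) / n
    x = a + h / 2.0 + h * np.arange(n)
    return h * np.sum(f(x))

# use np.exp so g accepts arrays; exp(-y) underflows harmlessly to 0 for large y
g = lambda y: (np.exp(-y) * y**3) / (1 - np.exp(-y))
r = pi**4 / 15  # analytical value of the integral

n = 2 ** np.arange(1, 20)
err = np.array([abs(midpoint(g, 0, 1000, int(k)) - r) for k in n])

plt.loglog(n, err)
plt.xlabel('n')
plt.ylabel('|midpoint - analytical|')
plt.show()
```
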