I am using `evalf` and `subs` to evaluate an expression:

```python
result_1 = (1/x).evalf(subs={x: 3.0}, n=25)
result_2 = (1/x).subs(x, 3.0).evalf(25)
```
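For completeness, here is a self-contained version (assuming `x` is an ordinary SymPy symbol, which the snippet above leaves implicit):

```python
from sympy import symbols

x = symbols('x')

result_1 = (1/x).evalf(subs={x: 3.0}, n=25)
result_2 = (1/x).subs(x, 3.0).evalf(25)

print(result_1)  # 0.3333333333333333333333333
print(result_2)  # 0.3333333333333333148296163
```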
After evaluating the expression `1/x` with `evalf` first, `result_1` is approximately 0.3333333333333333333333333 (with the precision set to 25), but `result_2` is approximately 0.3333333333333333148296163.
I want to know how this works behind the scenes. I would be grateful for any information and links to resources.
Let's focus on this line of code:

```python
result_1 = (1/x).evalf(subs={x: 3.0}, n=25)
```
The dictionary `{x: 3.0}` gets processed by `evalf`, which creates a new dictionary in which every float is re-created at a specified higher precision. Essentially, the new dictionary looks like `{x: Float(3.0, 29)}`, i.e. a `Float` carrying 29 significant decimal digits. Note that I used 29 digits, not 25. This is not an error: as of SymPy 1.12, `evalf` increases the working precision by 4 units.
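You can observe this widening step directly. The snippet below is only a sketch of the idea, not the literal internal call; note that 3.0 is exactly representable in binary, so widening it introduces no garbage digits:

```python
from sympy import Float

wide = Float(3.0, 29)  # re-create the float with 29 significant decimal digits
print(wide)            # prints 3.0 padded out to 29 significant digits
print(1/wide)          # 1/3 computed at the widened precision
```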
Then this dictionary is substituted into the expression, as in `(1/x).subs({x: Float(3.0, 29)})`: the substitution triggers an evaluation that produces a new floating point number with 29 digits of precision. Finally, `evalf` rounds the result down to the user-specified precision, in this case n=25.
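Putting those steps together by hand reproduces `result_1` exactly (assuming `x = symbols('x')` as above; this illustrates the mechanism, it is not the literal internal code path):

```python
from sympy import symbols, Float

x = symbols('x')

# Step 1: the substituted float is widened to n + 4 = 29 digits.
intermediate = (1/x).subs({x: Float(3.0, 29)})

# Step 2: the final result is rounded to the requested 25 digits.
print(intermediate.evalf(25))            # 0.3333333333333333333333333
print((1/x).evalf(subs={x: 3.0}, n=25))  # same value
```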
Your second line of code:

```python
result_2 = (1/x).subs(x, 3.0).evalf(25)
```
Here, `subs` sympifies the number 3.0 to `Float(3.0, 15)` (15 decimal digits, the default precision). This number gets substituted into the expression, which triggers an evaluation and produces a new `Float` with only 15 digits of precision. Finally, `evalf` evaluates that number up to n=25 digits.
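The same steps can be replicated by hand; the 15-digit intermediate is where the binary noise comes from:

```python
from sympy import symbols, Float

x = symbols('x')

# subs sympifies 3.0 at the default 15 digits (53 bits).
intermediate = (1/x).subs(x, Float(3.0, 15))
print(intermediate)            # 0.333333333333333

# evalf(25) merely displays the stored 53-bit value with 25 digits;
# it cannot recover precision that was never computed.
print(intermediate.evalf(25))  # 0.3333333333333333148296163
```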
The main difference is that the first line of code produces a result that is correct to the full specified precision, n=25, whereas the second is only a 15-digit approximation displayed with 25 digits, because the initial evaluation was performed at precision=15.
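As a side note (my suggestion, beyond the question itself): if you substitute an exact quantity such as a `Rational` or an integer instead of a Python float, both call orders agree, because no precision is baked into the expression before the final `evalf`:

```python
from sympy import symbols, Rational

x = symbols('x')

print((1/x).subs(x, Rational(3)).evalf(25))      # 0.3333333333333333333333333
print((1/x).evalf(subs={x: Rational(3)}, n=25))  # 0.3333333333333333333333333
```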
Reference: the source code, specifically `sympy/core/evalf.py`.