I'm using Z3's Python API to do some linear real arithmetic. I have run into a situation where real coefficients very close to zero are somehow converted to 1.0/0.0, which in turn causes a floating point exception inside the C++ part of Z3.
For instance I have the following Python program:
from z3 import *
s = Solver()
s.add(0.00001 * Real("a") + 0.00001 * Real("b") > 0.0)
print(s.to_smt2())
result = s.check()
print(result)
print(s.model())
which produces the following output:
; benchmark generated from python API
(set-info :status unknown)
(declare-fun b () Real)
(declare-fun a () Real)
(assert
(let ((?x10 (+ (* (/ 1.0 0.0) a) (* (/ 1.0 0.0) b))))
(> ?x10 0.0)))
(check-sat)
Floating point exception (core dumped)
If I instead replace line 3 with
s.add(Q(1, 100000) * Real("a") + Q(1, 100000) * Real("b") > 0.0)
I get my expected output.
Does anyone have an explanation, and a way to get this to work while still using normal Python floats?
As it turns out, Z3 does not take the float as is; instead it converts it to a string and then parses that string. For small numbers, Python's str() defaults to scientific notation, which Z3 apparently cannot parse properly. Using "{:.20f}".format(my_small_float)
or a similar explicit conversion solved my problem.
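For reference, here is a minimal sketch of the two workarounds (the exact call sites are my assumption; RealVal and Q are z3py's constructors for real and rational constants):
from z3 import Real, RealVal, Q, Solver, sat

small = 0.00001
print(str(small))                  # '1e-05' -- scientific notation trips up Z3's parser
coeff = "{:.20f}".format(small)    # fixed-point notation instead, e.g. '0.00001000000000000001'

s = Solver()
a, b = Real("a"), Real("b")

# Workaround 1: hand Z3 the explicitly formatted string instead of the raw float.
s.add(RealVal(coeff) * a + RealVal(coeff) * b > 0)

# Workaround 2 (exact): build the coefficient as a rational with Q, avoiding floats entirely.
# s.add(Q(1, 100000) * a + Q(1, 100000) * b > 0)

result = s.check()
print(result)
if result == sat:
    print(s.model())
Wrapping the formatted string in RealVal keeps the float-based workflow, while the Q variant avoids floating point entirely and represents the coefficient exactly.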