I have started studying SICP and used repl.it for the code exercises. Now I want to write code locally, so I installed the mit-scheme app and tried to move my code from repl.it to my computer.
But when I run the program that calculates the square root, I get very strange output, with a VERY BIG NUMBER as the result:
1 ]=> (define (square x) (* x x))
;Value: square
1 ]=> (define (abs x) (if (<= x 0) (- x) x))
;Value: abs
1 ]=> (define (average x y) (/ (+ x y) 2))
;Value: average
1 ]=> (define (improve guess x) (average guess (/ x guess)))
;Value: improve
1 ]=> (define (good-enough? new_guess old_guess)
        (< (abs (- new_guess old_guess)) 0.000000000000001))
;Value: good-enough?
1 ]=> (define (sqrt-iter guess x)
        (define new_guess (improve guess x))
        (if (good-enough? new_guess guess)
            new_guess
            (sqrt-iter new_guess x)))
;Value: sqrt-iter
1 ]=> (define (sqrt x) (sqrt-iter 1 x))
;Value: sqrt
1 ]=> (sqrt 16)
;Value: 271050543121377825343773346473727756780989953/67762635780343597914988263490310774732975168
1 ]=>
End of input stream reached.
Here is the source code of the program; it works fine on repl.it:
(define (square x) (* x x))
(define (abs x)
  (if (<= x 0) (- x) x))
(define (average x y)
  (/ (+ x y) 2))
(define (improve guess x)
  (average guess (/ x guess)))
(define (good-enough? new_guess old_guess)
  (< (abs (- new_guess old_guess)) 0.000000000000001))
(define (sqrt-iter guess x)
  (define new_guess (improve guess x))
  (if (good-enough? new_guess guess)
      new_guess
      (sqrt-iter new_guess x)))
(define (sqrt x) (sqrt-iter 1 x))
(sqrt 16)
Note: OS: macOS Catalina; mit-scheme app version: 10.1.11
How can I fix this bug?
It's not a big number; in fact, it's just a rational number close to 4.0. Take a careful look (I added extra spaces for clarity):
271050543121377825343773346473727756780989953 / 67762635780343597914988263490310774732975168
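If you want to double-check, convert it to a float with exact->inexact (standard Scheme, available in MIT Scheme); the trailing digits are just the leftover error of the last iteration:

(exact->inexact
 271050543121377825343773346473727756780989953/67762635780343597914988263490310774732975168)
; => a flonum just a shade above 4.0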
You'll get the desired result by switching to inexact (floating-point) arithmetic; just make your initial guess a decimal value:
(define (sqrt x) (sqrt-iter 1.0 x))
Now it works as expected:
(sqrt 16)
=> 4.0
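The decimal guess works because of Scheme's contagion rule: combining an inexact (floating-point) number with exact ones yields an inexact result, so that single 1.0 turns every subsequent improve and average step into floating-point arithmetic. A minimal illustration of the rule (standard Scheme behavior):

(/ (+ 1 16) 2)     ; => 17/2 -- all operands exact, the result stays an exact rational
(/ (+ 1.0 16) 2)   ; => 8.5  -- one inexact operand makes the whole result inexact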
See the classic article What Every Computer Scientist Should Know About Floating-Point Arithmetic for more details on why this happens.