I have used many recursive functions now but still have trouble getting my head around how such a function exactly works (I'm familiar with the | n==0 = 1 line, but am not so familiar with the final line, | n>0 = fac (n-1) * n).
fac :: Int -> Int
fac n
    | n==0 = 1
    | n>0 = fac (n-1) * n
Recursive algorithms are very closely linked to mathematical induction. Perhaps studying one will help you better understand the other.
You need to keep two key principles in mind when using recursion:
The Inductive Step is often the most difficult piece, because it assumes that everything it relies upon has already been computed correctly. Making this leap of faith can be difficult (at least it took me a while to get the hang of it), but the leap is safe only because we've put preconditions on our functions; those preconditions (in this case, that n is a non-negative integer) must be specified so that the Inductive Step and Base Case always hold.
The Base Case is also sometimes difficult: say, you know that the factorial N! is N * (N-1)!, but how exactly do you handle the first step on the ladder? (In this case, it is easy: define 0! := 1. This explicit definition provides you with a way to terminate the recursive application of your Inductive Step.)
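The same two principles show up in any recursive definition. Here is a minimal sketch of a second example (the name sumTo is made up for illustration) that sums the integers from 0 to n, with each principle marked:

sumTo :: Int -> Int
sumTo n
    | n == 0 = 0                 -- Base Case: the empty sum is 0
    | n > 0  = sumTo (n-1) + n   -- Inductive Step: trust that sumTo (n-1) is already correct

Just as with fac, the precondition is that n is a non-negative integer.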
You can see that the type signature and guard patterns in this function provide the preconditions that guarantee the Inductive Step can be applied over and over again until it reaches the Base Case, n == 0. If the preconditions can't be met, recursive application of the Inductive Step would fail to reach the Base Case, and your computation would never terminate. (Well, it would when it runs out of memory. :)
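To see what happens when the precondition is dropped, here is a hedged sketch of a hypothetical variant (fac' is a made-up name) that replaces the n > 0 guard with a catch-all; given a negative argument it recurses -1, -2, -3, ... away from the Base Case, and in practice it only stops when memory runs out:

fac' :: Int -> Int
fac' n
    | n == 0    = 1
    | otherwise = fac' (n-1) * n   -- no guard: nothing stops a negative n from recursing endlessly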
One complicating factor, especially with functional programming languages, is the very strong desire to rewrite 'simple' recursive functions like the one you have here into variants that use Tail Calls or Tail Recursion.
Because this function calls itself, and then performs another operation on the result, you can build a call-chain like this:
fac 3  =  3 * fac 2
   fac 2  =  2 * fac 1
      fac 1  =  1 * fac 0
         fac 0  =  1
      fac 1  =  1
   fac 2  =  2
fac 3  =  6
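Another way to read the same trace is as one growing expression, where every multiplication has to wait until the Base Case finally supplies a number:

fac 3
  = 3 * fac 2
  = 3 * (2 * fac 1)
  = 3 * (2 * (1 * fac 0))
  = 3 * (2 * (1 * 1))
  = 6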
That deep call stack takes up memory; but a compiler that notices a function does no further work after its recursive call can optimize the recursion away, reusing the current stack frame instead of pushing a new one. Functions written this way typically pass along an accumulator argument. A fellow stacker has a very nice example: Tail Recursion in Haskell
factorial 1 c = c                      -- Base Case: the accumulator c already holds the answer
factorial k c = factorial (k-1) (c*k)  -- tail call: nothing is left to do after it returns
This very complicated change :) means that the previous call chain is turned into this:
factorial 3 1
   factorial 2 3
      factorial 1 6
         6
(The nesting is there just for show; the runtime system wouldn't actually store details of the execution on the stack.)
This runs in constant memory, regardless of the value of n, and thus this optimization can convert 'impossible' algorithms into 'possible' algorithms. You'll see this kind of technique used extensively in functional programming, much as you'd see char * frequently in C programming or yield frequently in Ruby programming.
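In practice the accumulator is usually hidden behind a small wrapper so callers never have to supply the initial 1 themselves. A minimal sketch (the names factorialT and go are made up for illustration, and the Base Case is moved to 0 so that factorialT 0 also works):

factorialT :: Int -> Int
factorialT n = go n 1
  where
    go 0 c = c                -- Base Case: the accumulator already holds the result
    go k c = go (k-1) (c*k)   -- tail call, so the stack never grows

Calling factorialT 3 steps through go 3 1, go 2 3, go 1 6, go 0 6 and returns 6.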