Recursion and reduction order in the lambda calculus
Welcome back. The midterm took half the class today, so now let’s finish the lambda calculus.
Quick recap
You can apply β-reduction to lambda calculus expressions, e.g.
(λx. E) M β=> E[M/x]
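For instance, taking E = (+ x x) and M = 5:
(λx. + x x) 5 β=> + 5 5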
And not all expressions have a normal form, e.g.
(λx . x x)(λx . x x) β=> (λx . x x)(λx . x x)
Expressing recursion
So how do you write useful programs in the lambda calculus? Obviously, you need to be able to loop or iterate. And as with functional languages, you do that through recursion.
But how do you express recursion in the lambda calculus? By using the Y combinator, also known as the fixpoint combinator (where “combinator” is really just another word for “function”).
The definition of the Y combinator is:
Y f <=> f (Y f)
In short, the Y operator finds the fixed point of f: if you pass the fixed point back into f, you get the same value back.
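To see why this gives recursion, unfold the definition repeatedly:
Y f <=> f (Y f) <=> f (f (Y f)) <=> f (f (f (Y f))) <=> ...
Each unfolding hands f another copy of itself to call.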
How do you use this to implement recursion? By making that “fixed point” value a function.
Example: factorial
Y (λf. λx. if (= x 0) 1
           (* x (f (- x 1))))
Let’s give a name to this whole expression: call it fac.
Suppose we want to apply fac to 3. You can do the following steps:
Y (λf. λx. ...) 3 <=> (λf. λx. ...) (Y (λ...)) 3
^ fac 3
β<=> if (= 3 0) 1
        (* 3 ((Y (λ...)) (- 3 1)))
              ^ This is just fac
δ<=> * 3 ((Y (λ...)) 2)
          ^ fac 2
To make this clearer, try it yourself: run through the β-reduction for this function and see how you get the same function back. We’ll go over this in recitation.
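If you’d like to experiment in a real language, here’s a minimal sketch in Haskell (our transcription, not part of the lecture). Haskell’s laziness lets us define the fixpoint function directly, and it plays exactly the role of Y:

-- fix f = f (fix f), just like Y f <=> f (Y f)
fix :: (a -> a) -> a
fix f = f (fix f)

-- factorial that never names itself, like the lecture's fac
fac :: Integer -> Integer
fac = fix (\f x -> if x == 0 then 1 else x * f (x - 1))

main :: IO ()
main = print (fac 3)  -- prints 6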
What is Y?
We’ve implemented recursive functions by… defining a recursive function, Y. Can we stop cheating and define Y as an expression in the lambda calculus?
As it turns out, yes:
(λh. (λx. h (x x))(λx. h (x x)))
Let’s demonstrate how this works by applying Y to f:
(λh. (λx. h (x x))(λx. h (x x))) f
β<=> (λx. f (x x)) (λx. f (x x))
At this point we use the second function as the first’s argument:
β<=> f ((λx. f (x x)) (λx. f (x x)))
But this is equivalent to f (Y f): the inner self-application is exactly what Y f reduced to in the first step.
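As an aside (a sketch of ours, not from the lecture), you can even transcribe this expression into Haskell. The self-application x x won’t type-check directly, so the Mu newtype below is scaffolding of our own invention to get it past the type checker:

-- Mu wraps a function that takes itself as an argument
newtype Mu a = Mu (Mu a -> a)

-- y h = (λx. h (x x)) (λx. h (x x)), with Mu mediating each x x
y :: (a -> a) -> a
y h = (\x -> h (unMu x x)) (Mu (\x -> h (unMu x x)))
  where unMu (Mu g) = g

-- y (\f x -> if x == 0 then 1 else x * f (x - 1)) 3  evaluates to 6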
Reduction order
Take this expression:
(λx. + x x)(+ 3 1)
We have two choices here: either reduce the argument through δ-reduction first:
δ=> (λx. + x x) 4
β=> + 4 4
δ=> 8
Or pass the argument into the function with β-reduction first:
β=> + (+ 3 1) (+ 3 1)
δ=> + 4 (+ 3 1)
δ=> + 4 4
δ=> 8
Note that you end up with 4 steps here as opposed to 3.
Most programming languages behave like the first example: evaluate the arguments first, then pass them in. Lazy languages, like Haskell, don’t: they only spend the time to evaluate arguments if they absolutely have to.
It’s also worth noting that the second example is essentially pass-by-name in a functional setting.
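You can watch this happen in GHC (a small sketch of ours, using Debug.Trace to make evaluation visible). Strictly speaking, Haskell is call-by-need rather than pure pass-by-name: the argument is passed as an unevaluated thunk, but the thunk is shared, so it’s evaluated at most once:

import Debug.Trace (trace)

-- the lecture's (λx. + x x)
double :: Int -> Int
double x = x + x

main :: IO ()
main = print (double (trace "forcing (3 + 1)" (3 + 1)))
-- the trace message prints once, even though x appears twice:
-- the thunk is evaluated on first use and its result is shared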
But two questions:
- Are there any functions where doing these reductions in a different order would produce different results?
- If you’re worried about having an infinite computation, like our example above, is it better to choose one order or another?
Recall redexes, from last week. In short, a redex is an expression that you can apply β-reduction or δ-reduction to.
And here’s another term: reduction order. It’s (quite obviously) the order in which redexes are chosen for reduction.
There are two common reduction orders:
- Normal Order Reduction: Always choose the leftmost, outermost redex to reduce.
- Applicative Order Reduction: Always choose the leftmost, innermost redex to reduce.
In our example, the first reduction was in applicative order, and the second was in normal order. In other words, applicative order says to evaluate the arguments first, and normal order says to apply the function first.
We’ve seen where applicative order is more efficient. Are there any cases where normal order is more efficient?
Sure. If a function doesn’t need its parameter, why evaluate it?
(λx. 3) (+ 4 2)
Note that x isn’t used in the body of the function. Why bother evaluating (+ 4 2) then?
Here’s one that’s even worse:
(λx. 3)((λx. x x)(λx. x x))
This one will not terminate if you evaluate it in applicative order. Normal order reduction, however, would terminate immediately.
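In Haskell terms (again a sketch of ours, not from the lecture), laziness means the diverging argument is never touched at all:

-- loop diverges, like (λx. x x)(λx. x x)
loop :: a
loop = loop

-- constThree ignores its argument, like (λx. 3)
constThree :: a -> Int
constThree _ = 3

main :: IO ()
main = print (constThree loop)  -- prints 3; a strict language would hang here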
Are there any examples where applicative order terminates but normal order doesn’t?
We only have three minutes left, so in short: the answer is no. Normal order will always terminate if any reduction sequence terminates. And that’s the answer to our second question.
And to answer our first question, no. If both reduction orders terminate, they will always have the same result.
These are two important theorems:
Church-Rosser Theorem 1: Two terminating reduction orders will always result in the same normal form.
Church-Rosser Theorem 2: If any reduction sequence leads to a normal form, then normal-order reduction will also lead to a normal form.
You can intuitively understand the first theorem if you think of functional languages: with no side effects, evaluating the arguments can’t possibly affect the body of the function (that’s not the case in imperative languages).
That’s all for today. Midterms back next week.