# The Gift of Inconsistency

Computer science is a fascinating and maddening thing. Even the most seemingly-esoteric topics turn out to be fundamental. Go out to the fringes of CS and you find yourself smack in the middle of Philosophy. You try to understand a single point and you suddenly find yourself embracing everything.

So this post comes from that infinitely-recursive rabbit hole of Wikipedia topics I’ve been falling down for the last few weeks, and you can probably expect a few more like this one; I’ll try to keep focused.

We begin (as do All Good Things) with the Untyped Lambda Calculus:

## The System

The lambda calculus, invented by Alonzo Church, is a formal system for modeling functional computation and logic. It has an extraordinarily simple set of semantics, has no notion of data as a separate idea, and yet (as was proven later) is Turing complete. It serves as the skeleton and sinew of the Lisp and Scheme programming languages.
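To get a feel for just how simple those semantics are, here’s a minimal sketch in Haskell (my own illustrative names, not from any particular library): the entire untyped lambda calculus is three constructors, and evaluation is just substitution.

```
-- The whole untyped lambda calculus: variables, abstractions, applications.
data Term = Var String       -- x
          | Lam String Term  -- λx. body
          | App Term Term    -- f a
  deriving (Eq, Show)

-- Capture-naive substitution: replace free occurrences of x with s.
-- (A real implementation must rename bound variables to avoid capture.)
subst :: String -> Term -> Term -> Term
subst x s (Var y)   = if x == y then s else Var y
subst x s (Lam y b) = if x == y then Lam y b else Lam y (subst x s b)
subst x s (App f a) = App (subst x s f) (subst x s a)

-- One beta-reduction step at the head of the term, if one applies.
betaStep :: Term -> Maybe Term
betaStep (App (Lam x b) a) = Just (subst x a b)
betaStep _                 = Nothing
```

For example, applying the identity function reduces in one step: `betaStep (App (Lam "x" (Var "x")) (Var "y"))` gives `Just (Var "y")`.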

At its invention, the system seemed to sit right at the crossroads of formal logic, mathematics, and computational theory, embracing all three fields. But it quickly became apparent that the lambda calculus had flaws as a universal system.

## The Paradox

In 1935 it was shown that the lambda calculus was *logically inconsistent*, via the Kleene–Rosser paradox. The “paradox” is an apparently self-contradictory lambda expression, and goes like this:

We have a lambda calculus expression we will call `k`, which takes an argument, applies it to itself, and negates (the ¬ symbol) the result (for some undefined, logical definition of “negate”).

```
k = λx.( ¬ (x x))
```

Okay, no problem. But look what happens when we apply the expression `k` to itself:

```
k k = (λx.( ¬ (x x))) k = ¬ (k k)
```

(**Edit 2/8/2012:** thanks to Evan H. for pointing out I had one too many ‘k’s above)

After we reduce the expression, we seem to be saying that `(k k)` is equal to the opposite of itself, a self-contradictory statement.

To address this apparent shortcoming of his system, Church went on to define a typed lambda calculus. This new system differentiated between function types and simple terms in order to disallow these kinds of problematic expressions.
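We can see the same restriction at work in Haskell, a typed descendant of Church’s system. A sketch (the function names are my own): ordinary higher-order application types fine, but self-application has no type, since it would require a type `t` equal to `t -> r`.

```
-- Higher-order functions are no problem for the type system:
applyTwice :: (a -> a) -> a -> a
applyTwice f x = f (f x)

-- Self-application, however, is rejected. Uncommenting the line below
-- produces an "occurs check: cannot construct the infinite type" error,
-- because f would need a type t satisfying t = t -> r.
-- selfApply f = f f
```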

But as with many paradoxes, this one was really an issue of perspective…

## The Foundation

Church’s new “Simply Typed Lambda Calculus” had an interesting property that the untyped lambda calculus didn’t have: the normalization property.

This more restricted version of the lambda calculus was “strongly
normalizing”, meaning that *every* sequence of rewrite/reduction steps on an
expression eventually reaches a *normal form* (i.e. a form that cannot be
reduced further). In programming terms, this means that every expression in
the *typed* lambda calculus is *guaranteed to terminate*.

That sounds pretty damn useful until you realize that this interesting
property also makes Church’s new typed version of his system *not Turing
complete*!

To prove this is so, consider this: imagine you encoded a program in the typed
lambda calculus that searched for (and halted when it found) an even integer
greater than two that was *not* the sum of two primes.

Merely *being able to write* such a function in a strongly normalizing system
would imply that the search terminates, which is to say that a counterexample
exists, thus disproving Goldbach’s conjecture, one of the great open problems
of mathematics. All without even needing to evaluate said expression! The
function referred to is of course not expressible in the typed lambda
calculus.
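In Haskell, which *is* Turing complete, we can write the search down without having any idea whether it halts. A naive sketch (my own function names, and a deliberately brute-force primality test):

```
-- Brute-force primality test; fine for illustration.
isPrime :: Integer -> Bool
isPrime n = n > 1 && all (\d -> n `mod` d /= 0) [2 .. n - 1]

-- Is n expressible as the sum of two primes?
isGoldbachSum :: Integer -> Bool
isGoldbachSum n = any (\p -> isPrime p && isPrime (n - p)) [2 .. n - 2]

-- Search the even integers > 2 for a counterexample to Goldbach's
-- conjecture. This expression halts if and only if the conjecture is
-- false -- and nobody knows which.
counterexample :: Integer
counterexample = head [n | n <- [4, 6 ..], not (isGoldbachSum n)]
```

Evaluating `counterexample` would settle the conjecture, one way or the other; the typed lambda calculus keeps itself consistent precisely by refusing to let us write it.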

It turns out that an apparent logical contradiction was actually the essential secret to computation.

### Embracing Inconsistency

When you think of the LC as a model of computation, rather than a framework for logical assertions, the Kleene–Rosser paradox becomes what we might call the “Kleene–Rosser useless function”.

Let’s go ahead and express a nearly identical paradox in Haskell:

```
k :: (Num a) => a
k = negate k
```

Haskell has no notion of mutability, so what we are saying here is “k is the negation of itself”. No logical fallacies or great existential questions here, just an infinite loop (and not a very useful one)!

Let’s express another similar “paradox”, one we can actually *use*:

```
babble :: [String]
babble = "blah" : babble
```

How can a list be equal to itself with an extra element added? Doesn’t that
imply a paradox? Well, I know the runtime isn’t swayed by that argument: it
happily produces an infinite stream of `"blah"`s. We can even *use* this
stream because of Haskell’s lazy evaluation strategy.
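A quick sketch of actually consuming the stream: laziness means the runtime only ever builds as much of the list as we demand.

```
babble :: [String]
babble = "blah" : babble

-- Only three cells of the infinite list are ever constructed here:
firstThree :: [String]
firstThree = take 3 babble  -- ["blah","blah","blah"]
```

Any finite demand works: `babble !! 1000` is just `"blah"`, reached after walking a thousand lazily-built cons cells.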

But note: it isn’t laziness that lets us get away with defining paradoxes; it just lets us make use of the more blatantly paradoxical expressions without them blowing up in our faces as soon as we call them.

### Conclusion and Further Thoughts

Interestingly, there are quite a number of non-Turing-complete programming languages (or language subsets) that have been created, both for theoretical purposes and for practical use; the total term languages of Agda and Coq are well-known examples. They seem to owe their existence and usefulness to an environment of Turing-complete systems.

So does this mean that formal logic is Turing incomplete? Let me know your thoughts, and please let me know if I harmed any knowledge in the making of this post.