Parker Glynn-Adey

The Hardware and Software of Theoretical Machines

Posted in Uncategorized by pgadey on 2015/08/01

(This blog entry is a written summary of a pair of talks given at Canada Math Camp on 2015-07-28 and 2015-07-30. The official notes are on pgadey.ca.)

Computers are all around us. This fact is so self-evident that writing it down, on a computer no less, is slightly embarrassing. And yet, very few people ever talk about exactly what a computer is. Whenever the question comes up, one feels like St. Augustine, who said: “What is a computer? If no one asks me, I know what it is. If I wish to explain it to him who asks, I do not know.”

This talk is an informal introduction to the formalizations of computation adopted by Alonzo Church and Alan Turing, the two Great Definers of Computation. We’ll also say something about Charles Babbage because he too was awesome.

The fact that there is now a formal and intuitively satisfying definition of computation can be interpreted as a major accomplishment of human culture. Many people worked on the problem, and it took an incredibly long time to get straight. There was a great deal of philosophical and mathematical debate about the problem. And yet, looking back on intellectual history, one gets the sense that the formalization of computation could have, but did not, come about centuries earlier.

It is an amusing speculative exercise to think of what would have happened to Western culture if the ancient Greeks had accidentally discovered computation while trying to formalize the proofs in Euclid’s Elements. Or, if Charles Babbage had decided to try to build the simplest possible computer instead of a machine for producing tables.

0.1. Hardware (Turing Machines)

This material was inspired by the work of Corrado Böhm.

\displaystyle \begin{array}{|c|c|c|c} \hline 0 & 0 & 0 & \dots \\ \hline \uparrow & & & \end{array}

Consider a machine with the following specifications: There is an infinite tape consisting of cells, each of which contains a non-negative integer. Every cell is initialized to zero. There is a head ({\uparrow}) which can move left or right one cell along the tape. The head may also add one to, or subtract one from, the contents of the cell beneath it.

We define a programming language for this machine:

  • + : Add one to the cell under the head.
  • - : Subtract one from the cell under the head.
  • > : Move the head one space to the right.
  • < : Move the head one space to the left.
  • ( : If the value under the head is non-zero then proceed to the next instruction. Otherwise, jump to the matching right bracket.
  • ) : Jump to the matching left bracket.
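To experiment with these six instructions, here is a minimal interpreter sketched in Python (the function name `run` and the calling convention are my own choices, not part of the talk):

```python
def run(prog, tape):
    """Interpret the six-instruction language on a list of cells.

    Cells are assumed to hold natural numbers; programs are assumed
    never to decrement a zero cell or run the head off the tape.
    """
    # Pre-compute matching bracket positions.
    match, stack = {}, []
    for i, c in enumerate(prog):
        if c == '(':
            stack.append(i)
        elif c == ')':
            j = stack.pop()
            match[i], match[j] = j, i

    head, pc = 0, 0
    while pc < len(prog):
        c = prog[pc]
        if c == '+':
            tape[head] += 1
        elif c == '-':
            tape[head] -= 1
        elif c == '>':
            head += 1
        elif c == '<':
            head -= 1
        elif c == '(' and tape[head] == 0:
            pc = match[pc]          # skip past the loop body
        elif c == ')':
            pc = match[pc] - 1      # return to the test at '('
        pc += 1
    return tape, head

tape, head = run('+++>++', [0, 0, 0, 0])  # tape becomes [3, 2, 0, 0], head over cell 1
```

The bracket pair acts as a while-loop on the current cell, so `(->+<)`, for example, drains cell 0 into cell 1.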

Exercise 1 Assume {a \neq 0}. Write code which will cause the machine to go from the first to second state depicted below.

\displaystyle \begin{array}{|c|c|c|c} \hline a & 0 & b & \dots \\ \hline \uparrow & & & \end{array} \quad \leadsto \quad \begin{array}{|c|c|c|c} \hline a & 0 & a+b & \dots \\ \hline & & \uparrow & \end{array}
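One possible solution to Exercise 1, among many (the text to the right of each fragment is commentary, not program text): drain cell 0 into both cell 1 and cell 2, restore cell 0 from the backup in cell 1, then step onto cell 2.

```
(->+>+<<)    while cell 0 is non-zero: decrement it, increment cells 1 and 2
>(-<+>)      move the backup in cell 1 back into cell 0
>            halt with the head over cell 2, which now holds a+b
```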

Exercise 2 Assume {a,b,c \neq 0}. Write code which will cause the machine to go from the first to second state depicted below.

\displaystyle \begin{array}{|c|c|c|c|c|c|c} \hline a & 0 & b & 0 & c & 0 & \dots \\ \hline \uparrow & & & & & & \end{array} \quad \leadsto \quad \begin{array}{|c|c|c|c|c|c|c} \hline a & 0 & b+a & 0 & c+a & 0 & \dots \\ \hline \uparrow & & & & & & \end{array}
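The same copying trick gives one possible solution to Exercise 2: drain cell 0 into cells 1, 2, and 4 at once, which leaves cells 2 and 4 holding {b+a} and {c+a}, then restore cell 0 from the backup in cell 1 (annotations are commentary, not program text).

```
(->+>+>>+<<<<)   while cell 0 is non-zero: decrement it, increment cells 1, 2, and 4
>(-<+>)          move the backup in cell 1 back into cell 0
<                return the head to cell 0
```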

Exercise 3 Given a polynomial {p(x) = p_0 + p_1 x + \dots + p_n x^n} with integer coefficients, devise a machine and program for computing {p(1), p(2), p(3)}, etc.
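Exercise 3 is, in essence, Babbage's method of finite differences: for a degree-{n} polynomial the {n}-th differences are constant, so once the first {n+1} values have been seeded, every further value of {p} requires only additions, which is exactly what our machine provides. A Python sketch of the idea (the function name and calling convention are my own):

```python
def poly_values(coeffs, count):
    """Return p(1), p(2), ..., p(count) for p(x) = sum(coeffs[k] * x**k),
    using only additions once the initial difference table is built."""
    n = len(coeffs) - 1
    p = lambda x: sum(c * x**k for k, c in enumerate(coeffs))

    # Seed the table from p(1), ..., p(n+1): afterwards diffs[k]
    # holds the k-th forward difference of p at 1.
    diffs = [p(1 + i) for i in range(n + 1)]
    for level in range(1, n + 1):
        for i in range(n, level - 1, -1):
            diffs[i] -= diffs[i - 1]

    values = []
    for _ in range(count):
        values.append(diffs[0])     # current value of p
        for i in range(n):          # roll the table forward: additions only
            diffs[i] += diffs[i + 1]
    return values
```

For {p(x) = x^2} this yields 1, 4, 9, 16, …: the second differences are the constant 2, just as in Babbage's engine.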

The remarkable thing about this exercise, which consists entirely of planning out a Difference Engine, is that it is incredibly easy to solve once you have abstracted away the ‘mere technicality’ of building a mechanism. The whole notion of a computer becomes much clearer once we stop thinking about any particular physical machine. This is, in my view, a prime example of Pólya’s insightful heuristic: “If you can’t solve a problem, solve a simpler problem”.

Once a new view on things has been established, old problems seem simple. The utility of this definition was, and continues to be, incredible. Certain famous long-standing problems in logic simply crumbled when it was made. Turing’s definition of computation is so shockingly crisp that it feels more like a discovery than an invention. When I first learned it, and really appreciated it, the definition surprised me in the same way that the Galois theory approach to Euclidean constructibility surprised me.

0.2. Software ({\lambda}-calculus)

“And now for something completely different.” The remarkable thing about the modern definition of computation is its seeming ubiquity. After hearing the Turing machine definition, one gets the sense that almost anything can compute. However, Church’s formalization of computability as {\lambda}-computability makes it feel like a very sleek, rarefied, algebraic thing. Turing-computability reminds me of a clunky old loom in comparison. And yet, remarkably, they are the same. Computer scientists and logicians have proven many interesting results, but the Church-Turing theorem stands out as a first milestone; it seems, and will probably always seem, like a monumental intellectual synthesis.

The two formalizations still linger with us today in the broad-scale division of computer programming languages into imperative and functional languages. With a mind towards the history of linguistics, one can say, in a whimsical sense, that Turing discovered proto-Imperative and Church discovered proto-Functional, the two great ancestor languages of (most) modern programming languages.

Church’s model of computation, {\lambda}-calculus, makes plain that all computation can be derived from an appropriate notation for functions. Everything important in {\lambda}-calculus consists of functions operating on each other according to simple algebraic-syntactic rules.

In some deep sense, {\lambda}-calculus must be Turing complete for the exact same reason that the word problem for groups must be undecidable: even very simple syntactic rules generate undecidable complexity.

The fundamental stuff of {\lambda}-calculus consists of “anonymous functions”. Every function is written {\lambda \mathtt{<name>} : \mathtt{<exp>}} and function application is written {\mathtt{<exp>} \, \mathtt{<exp>}}. As for syntactic sugar, we are given only the mere dose of parenthesization: {\mathtt{<exp>} = (\mathtt{<exp>})}.

Remark 1 When this talk was given to high school students there was, of necessity, a lot of discussion about functions.

It was helpful to remember that the notation “{\mathtt{<name>} \mapsto \mathtt{<exp>}}” is just as bad as “{\lambda \mathtt{<name>} : \mathtt{<exp>}}” but has the added appeal of being widely used.

Exercise 4 Figure out an appropriate definition of computation in {\lambda}-calculus.
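One standard answer, offered here as a hint rather than the unique one: take computation in {\lambda}-calculus to be {\beta}-reduction, the repeated rewriting of applications by substitution,

\displaystyle (\lambda \mathtt{x} : \mathtt{e}) \, \mathtt{v} \leadsto \mathtt{e}[\mathtt{x} := \mathtt{v}],

where {\mathtt{e}[\mathtt{x} := \mathtt{v}]} denotes {\mathtt{e}} with {\mathtt{v}} substituted for {\mathtt{x}}. A term admitting no further reductions is a result.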

Exercise 5 Define the Church numerals and successor function.
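One standard answer to Exercise 5, sketched in Python so it can be run (Python’s lambda stands in for {\lambda}; the to_int helper is my own scaffolding for checking, not part of the calculus):

```python
# Church numerals: the numeral n is the function that applies f n times.
zero = lambda f: lambda x: x                      # apply f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))   # one more application of f

three = succ(succ(succ(zero)))

# Helper (my own, for checking): convert a Church numeral to a Python int
# by applying "add one" that many times to 0.
to_int = lambda n: n(lambda k: k + 1)(0)
```

Here `to_int(three)` evaluates to 3, since `three` applies its first argument three times.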
