Computer Science · Asked on November 28, 2021
I can very clearly see "why" mu-recursion is a universal model of computation, i.e. why the Church-Turing thesis — that any physically computable algorithm can be executed with mu-recursion — holds for mu-recursion. It reflects exactly the type of algorithms that I can carry out with my own brain.
I cannot see an analogous intuition for understanding why the Turing machine can execute any physically computable algorithm — i.e. how did Turing "see" that the Turing machine was a good definition to use? Is there a good way to "imagine" the algorithms I perform in terms of the Turing machine, as opposed to general recursion as I am used to?
Imagine you are performing a computation by hand with a pencil and a stack of paper. [1] There is a limit on how many pieces of information you can keep in working memory at a time (sometimes claimed to be seven plus or minus two). So when you can't keep everything in your head, you write some of it down on a sheet of paper. And when you fill up a sheet, you put it in a pile for later reference and pull out another sheet. But there is a limit on how many sheets of paper you can look at at a time, too; you will have to flip between sheets as you work.
Turing machines are an abstraction of this idea of local computation. A Turing machine can write down as much auxiliary information as it wants, but it can only look at a finite amount of it at a time. A Turing machine head is like your brain's working memory—it can only store so much stuff before it has to write it down somewhere to avoid forgetting it.
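To make the locality point concrete, here is a minimal sketch (my own illustration, not from the original answer) of a single Turing-machine step in Python. The tape can hold unboundedly much information, but one step consults only the current control state and the single cell under the head, just as you consult only your working memory plus the spot on the page you are looking at.

```python
# A sketch of one Turing-machine step. The tape -- a dict from position to
# symbol -- can grow without bound, but a single step reads and writes exactly
# one cell: a constant amount of information.

def step(delta, state, tape, head):
    """`delta` maps (state, symbol) to (next_state, symbol_to_write, move),
    where move is -1 (left) or +1 (right)."""
    symbol = tape.get(head, '_')              # '_' stands for a blank cell
    next_state, write, move = delta[(state, symbol)]
    tape[head] = write
    return next_state, head + move
```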
The Church-Turing thesis says that any physically realizable computation does not require any "essentially nonlocal" operations. That is, any physically realizable computation can be broken down into a series of steps, each of which operates on $O(1)$ bits of information; there is no primitive operation that requires, say, $O(n)$ arguments and cannot be reduced to operations with fewer arguments. [2] Or: anything you can compute in the real world can be computed given an unlimited stack of pencils and paper.
[1] Recall that the word "computer" in Turing's time referred to a human profession!
[2] A primitive operation that accepts an unbounded number of arguments is exactly what the oracle in an oracle Turing machine provides—hence why oracle machines can be more powerful than Turing machines.
Answered by Aaron Rotenberg on November 28, 2021
I'll try to recount the history of this as I understand it, though I'm not an expert on the history of mathematics. I think the early history of this issue is key. I probably have some details wrong but the big picture right; I'd appreciate corrections and/or citations.
So, our story starts with Hilbert's problems, a lot of which deal with computation. Funnily enough, at this time there was no definition of what "computation" was! There was no mathematical model of what a computer was.
Many attempts were made. For a while people wondered if primitive recursion might be the ticket, but we found mechanically computable functions that were not primitive recursive, like the Ackermann function. Still, we knew that the primitive recursive functions could be computed with a physical machine, so we at least had that.
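For reference, here is the Ackermann function in Python (a standard illustration, not part of the original answer). It is clearly mechanically computable, yet it grows too fast to be primitive recursive: computing it needs nested, unbounded recursion rather than the bounded loops that primitive recursion provides.

```python
def ackermann(m: int, n: int) -> int:
    """Total and computable, but provably not primitive recursive."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))  # 9
print(ackermann(3, 3))  # 61
# ackermann(4, 2) already has 19,729 decimal digits -- don't try it this way.
```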
Eventually Alonzo Church proposed the lambda calculus as a universal model of computation and started a correspondence with Kurt Gödel (of incompleteness theorem fame). Gödel did not believe that the lambda calculus was a universal model of computation, and proposed an alternative definition: a set of functions from natural numbers to natural numbers, essentially the mu-recursive functions, which I believe at the time he just called "the recursive functions". Church made a bet with Gödel that he could prove the two models equivalent, and after some correspondence Church produced a proof. Gödel's reaction was not to accept that both were valid models but instead to assume that his recursive functions were not a sufficient model. It's hard to say what the intuition of these two giants was telling them at the time. Neither had our insight into computation, but both were geniuses who had studied such issues deeply, and I don't really feel I can place myself in their shoes, frankly. Clearly each had an intuition that his model of computation was "the" model of computation, but that intuition didn't pass for mathematics.
Along came Alan Turing, who produced the Turing machine model. I've heard that this model, aside from being described like a machine, was meant to be a model of how a human would do computation on paper. Regardless, the important point is that there was a written-down philosophical argument and intuition for why Turing machines captured the notion of computation. Turing proved that Turing machines were equivalent to these models (presumably by proving Turing machines equivalent to the lambda calculus, given that Church was Turing's advisor roughly around this period).
This proof that all three models of computation were equivalent, together with the collective intuitions for why these models are complete, finally convinced Gödel. These three, really just via some letters between each other, all had different intuitions. For many, including Gödel and Turing, Turing machines were the most obviously complete model. At some point in these discussions, however, Gödel certainly found mu-recursive functions to be intuitively universal. I'm not sure anyone ever thought the lambda calculus was intuitively universal at first, but Church and Kleene seem to have gained this intuition by working with Church encodings and other ways of computing (I'm not sure when the fixed point combinator was discovered, but it seems somewhat critical to me).
So frankly, I'm not sure there's a great way to answer your question. Intuition on these is clearly different for everyone. I find Turing machines mostly intuitive, but I also think the lambda calculus is intuitively universal after being shown the fixed point combinator and many examples of its use. You're just different from me, and that's OK!
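As an aside on that fixed-point-combinator intuition, here is a small sketch (my example, using Python as the host language, not something from the answer) of the Z combinator, the strict-evaluation variant of the Y combinator. It recovers general recursion from anonymous functions alone, which is a big part of why the lambda calculus starts to feel universal.

```python
# Z combinator: a fixed-point combinator that works under strict evaluation.
# Z = λf. (λx. f (λv. x x v)) (λx. f (λv. x x v))
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Factorial written with no named recursion at all: `self` is supplied by Z.
fact = Z(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))

print(fact(5))   # 120
print(fact(10))  # 3628800
```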
My best description of the intuition for Turing machines (I've forgotten my original source for this framing, but it is not mine): computation used to be something you did with a pencil. You'd write symbols down in an orderly way on a piece of paper, maybe cross some things out, write some new things down, and so on. Essentially any computable function can be carried out by a human with a finite amount of paper. Paper is of course 2D, but what matters is the relative locations of the symbols and that they're unambiguous, so the model can be simplified to use a 1D piece of paper: surely anything you can write on 2D paper can be translated to a 1D setting, right?

Now, in this 1D setting you move through different steps of the algorithm, look at different parts of the tape to determine what step to do next, and then you eventually write a symbol down and move on to the next step. Turing machines capture exactly this kind of behavior. Steps are really just like states, moving the read-write head along the tape is like scanning your eyes across the paper, and writing a symbol down at a location is no different from you using a pen or pencil. There is clearly a machine that could do this, and it seems intuitive that any algorithm you could carry out on pen and paper as a human could be captured by such a machine. I can't say whether this was the idea Turing had in mind, but maybe it helps? Turing's intuition might be stated in his original paper, if you're curious (I've never actually read it).
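To make that correspondence concrete, here is a toy example (my own sketch, not something from Turing's paper): a three-state machine that adds one to a binary number. The states play the role of the "steps", the head position is where your eyes are on the paper, and writing a symbol overwrites a cell.

```python
# A three-state Turing machine that increments a binary number by 1.
BLANK = '_'

# (state, symbol read) -> (next state, symbol to write, head movement)
delta = {
    ('right', '0'): ('right', '0', +1),    # scan right to the end of the number
    ('right', '1'): ('right', '1', +1),
    ('right', BLANK): ('carry', BLANK, -1),
    ('carry', '1'): ('carry', '0', -1),    # propagate the carry leftwards
    ('carry', '0'): ('done', '1', 0),      # absorb the carry and halt
    ('carry', BLANK): ('done', '1', 0),    # the number was all 1s: grow by a digit
}

def run(delta, tape, state='right', head=0):
    while state != 'done':
        symbol = tape.get(head, BLANK)
        state, write, move = delta[(state, symbol)]
        tape[head] = write
        head += move
    return tape

tape = {i: bit for i, bit in enumerate('1011')}   # 11 in binary
result = run(delta, tape)
print(''.join(result[i] for i in sorted(result)).strip(BLANK))  # -> 1100 (12)
```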
Answered by Jake on November 28, 2021