Nyarly's Journal: ToC: Lesson 4 Turing Machines

Recap: we've spent three lessons building up the ideas of strings, languages, recognition devices, and whatnot. Finally we've gotten up to the idea of Finite State Machines, in all their glory. But there's still something missing.

There's still an entire class of languages we can't even begin to recognize with anything we've seen so far. Consider a language defined as those strings containing the same number of each of three symbols. We'll even make it easy: all of the first symbol comes first, then all of the second, then the third. There just isn't an FSM that can recognize a string in that language. If I said "up to ten of each character" (or even "up to a million"), you could build the machine - it amounts to a chain of states that count across the first symbol (e.g. "A"), and at the first occurrence of the second character, a series of states that count the correct number of the second and then third symbols ("B" and "C"). If you get to the end of the B-C chain, accept the string; otherwise reject it. Well and good, but if I don't guarantee an upper limit, you need an infinite number of states. Can't we do better than an Infinite State Machine?

Yes, we can. It's called a Turing Machine. "Turing" as in Alan Turing, as in the Turing Test, the Church-Turing thesis, and cracking Enigma during the Second World War. He was later harassed by a grateful Britain to the point of suicide.

But leaving that aside for the moment, let's talk about what a Turing Machine is. Take a Finite State Machine. Now take a memory structure. The classic description is a tape, like a long strip of paper, composed of an infinite number of cells into each of which a single symbol can be written. It has a first cell, but no last cell. There's a head that can move left one cell, move right one cell, read the symbol on the tape at its position, or write a symbol to the tape at its position. If it tries to go left from the first cell, it remains where it is. Easy. Now, we attach the tape to the FSM with the following simple rules:

  • Instead of using just the next symbol in the string, the FSM uses the symbol under the head on the tape.
  • Consequently, rather than feeding the string into the FSM, we put it on the tape, one symbol per cell.
  • In addition to changing state, the machine either writes a symbol or moves the head left or right.

See: simple. Now we can accept our language of like numbers of A's, B's, and C's with what amounts to eight states. Quick overview: if you read an A, replace it with a D, and go right until you find a B, which you replace with a D, and then go right until you find a C, which you replace with a D, and then go left until you're back at the beginning. Go right over D's until you find an A and repeat, unless you get to the end of the string, in which case accept the string. If at any time you read a wrong character (an empty cell when you were looking for a C, a C instead of a B, or whatever), reject the string. If Slashdot let me do tables, I'd do up the states and outputs, but it doesn't, so actually representing the machine is an exercise for the reader.
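Since tables are out, the marking scheme above can be sketched as a small simulator instead. A couple of caveats: the state names and the simulator are my own invention, not part of the lesson; I assume the input already arrives in A...B...C order (verifying the format would cost a few extra states); and the simulator detects the left end of the tape directly rather than spelling the bounce rule out as transitions.

```python
BLANK = "_"

# delta[state][symbol] = (symbol_to_write, move, next_state); move is -1 (L) or +1 (R)
delta = {
    # q0: at the left end; skip D's, mark the next A, or accept on blank
    "q0": {"D": ("D", +1, "q0"), "A": ("D", +1, "q1"), BLANK: (BLANK, +1, "accept")},
    # q1: scan right for the first unmarked B and mark it
    "q1": {"A": ("A", +1, "q1"), "D": ("D", +1, "q1"), "B": ("D", +1, "q2")},
    # q2: scan right for the first unmarked C and mark it, then turn around
    "q2": {"B": ("B", +1, "q2"), "D": ("D", +1, "q2"), "C": ("D", -1, "q3")},
    # q3: sweep left over everything, back toward the first cell
    "q3": {s: (s, -1, "q3") for s in "ABCD"},
}

def run(word, max_steps=10_000):
    tape = list(word) or [BLANK]
    pos, state = 0, "q0"
    for _ in range(max_steps):
        if state == "accept":
            return True
        # left-end rule, handled by the simulator: once the leftward sweep
        # is back at cell 0, restart the marking pass
        if state == "q3" and pos == 0:
            state = "q0"
            continue
        symbol = tape[pos] if pos < len(tape) else BLANK
        if symbol not in delta.get(state, {}):
            return False            # no transition defined: reject
        new_symbol, move, state = delta[state][symbol]
        if pos >= len(tape):
            tape.append(BLANK)
        tape[pos] = new_symbol
        pos = max(0, pos + move)
    raise RuntimeError("step budget exhausted")

print(run("AABBCC"))   # True
print(run("AABBC"))    # False - one C short
print(run("AAABBC"))   # False
```

Each full pass marks one A, one B, and one C; when q0 sweeps right over nothing but D's and falls off the end of the input, the counts matched.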

Notice, as an interesting characteristic of a Turing Machine, that not only can it accept or reject a string, it can also fail to halt at all. This is new, and a necessary result of a Turing Machine's ability to deal with what we'll call recursively enumerable languages. The difference is that while an FSM always halts at the end of its input (and acceptance depends on whether that halt was in an accepting state or not), a Turing Machine has an infinite number of ways to never halt. Consider a state that, on reading an A, writes an A back and stays put. Result: it never halts, reading and writing A's forever. Or it could scan right until it finds an A, and then scan left until it reaches the beginning of the tape. Result: it sweeps back and forth ad infinitum.

There is, in fact, an entire class of languages based on this property. Consider the idea of semidecision, where a Turing Machine accepts a word into a language by halting when the word is given as input, but otherwise never halts. A language for which there is a Turing Machine that semidecides it is called recursively enumerable.
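The semidecision idea can be sketched directly: run the machine, and report acceptance only if it halts. Everything below is my own toy framing (a "machine" is just a step function); note that the step cap stands in for an impatient observer giving up - a true semidecider has no cap and simply runs forever on non-members.

```python
def semidecide(step, config, max_steps=None):
    """Accept by halting. On non-members, never answer (here: give up at the cap)."""
    steps = 0
    while config != "HALT":
        config = step(config)
        steps += 1
        if max_steps is not None and steps > max_steps:
            return None   # no verdict - the machine is still running
    return True           # it halted, so the word is in the language

# A toy machine that halts only on even numbers: odd inputs get stuck at 1 forever.
def toy_step(n):
    if n == 0:
        return "HALT"
    return n - 2 if n >= 2 else n

print(semidecide(toy_step, 6, max_steps=100))   # True: halted, 6 is accepted
print(semidecide(toy_step, 7, max_steps=100))   # None: no answer, only silence
```

The asymmetry is the whole point: a "yes" eventually arrives, but a "no" never does - you can't tell "still computing" apart from "never going to halt."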

When you come down to it, Turing Machines are the big deal in this subject of discourse. In fact, the single most important result of any Theory of Computation course deals with Turing Machines. And that result, and all its depressing implications, applies to the very machine you're reading this article on, and every other computer, too. But we'll discuss that next lesson.



Comments Filter:
  • Someone's trying to have real signal here in Slashdot. Scary.

    Seriously, do you think it's all that good to have Turing machines be the centerpiece of computation? Why not Church's equivalent lambdas, which are easier to keep in mind? Here's a link [weblogs.com] that explains a lot of this.

    I'm actually asking in earnest. I'm reading Sipser's intro to computation (have any other recommendations on books?), and I'm not absolutely sure whether Turing machines are more economical to make. In fact, one can make Turing machines that make Church's lambdas go fast.
    • Yeah, I'm trying for signal. Much good may it do me.

      I think it's mostly a Newton/Leibniz thing. Turing got his Machine popularized far further than Church's Lambda. Church is probably lucky to have his name on the Thesis. Not that Church is any less of a mathematician or computer scientist (although we all have our favorite science heroes (stupid Tesla, stupid coil)), just that Turing's idea got latched onto more.

      On the other hand, I don't know of any language that includes the keyword "machine", but I know several that denote function definitions with lambda(). Honestly, if you get anywhere in actual computer science (rather than exclaiming "neat, I've got my BS, let's go code something") you wind up learning lambda calculus, primarily I think because it's easier to include in papers.

      On the other hand, I'm not sure about your contentions of the economics and speed of Turing Machines and lambda calculus. I don't think anyone honestly wants to build a Turing Machine. They're mental constructs, so a cycle of a Turing Machine takes as much or as little time as you want, and they cost nearly nothing to build (paper and pencil costs, mainly).

      As far as ease of use goes, some people have an easier time considering how a Turing machine will behave than how lambdas will. On the other hand, my experience is that lambdas are much easier to reason about, which explains their popularity in the upper echelons of the CompSci Ivory Tower.

      Incidentally, I'd be honored to have your comments about future lessons in this journal.
