Journal Nyarly's Journal: ToC: Lesson 5 A Ragged End

We've come a ways. If all of your prior computing experience had been bashing out perl scripts, I hope you've learned something. If you hold a graduate CS degree, I hope you've had a good chuckle. It's my intention to tie up this Theory of Computation thing for a little while in an effort to pursue a couple of other topics in this journal. I do intend to come back to my Laughing Hyena impersonation in due time. Probably with algorithms next.

But there's still a little bit more to explore before I tie this off like a punctured femoral artery. First of all, let's talk about Turing Machines emulating one another. It's possible to build a Turing Machine such that, given a suitable input, it will behave exactly as if it were an entirely different Turing Machine running on its own input.

More importantly, the Machine that I emulate can have functionality utterly different from a basic Turing Machine. You can add multiple tapes, two-directional tapes (i.e. an infinite tape that the machine starts at the "middle" of), registers, RAM, what have you to a Turing Machine, and I can produce a standard states-and-one-ended-tape machine that will accept as input a series of symbols encoding both your super-fancy machine and its input, and it'll come up with the same result (even if that result is a semidecision). The only cost is that my machine might take more cycles to compute than yours will. But since Turing Machines are all in the mind, those cycles can each take an arbitrarily small time, so it doesn't really matter. Especially since, with one exception, any bell or whistle we can add to a Turing Machine reduces the number of cycles needed to decide an input only linearly. Now, "linearly" might sound like a lot (seeing as how "half the time" is a linear reduction), but in the big picture, it's not all that much. I'm going to leave that there in the hopes that it will spark some sort of debate or comment.
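The emulation idea can be sketched in a few lines of Python (my sketch, not from the original post - the machine below and its encoding are illustrative): the key observation is that a Turing Machine is just data, a transition table, so one generic interpreter can run any machine it is handed. That generic interpreter is the universal machine.

```python
# A minimal Turing machine interpreter. The machine itself is just data
# (a transition table), so this one generic run() function can "emulate"
# any machine it is handed -- the essence of a universal machine.

def run(delta, start, accept, tape, blank="_", max_steps=10_000):
    """delta maps (state, symbol) -> (new_state, new_symbol, move)."""
    tape = dict(enumerate(tape))       # sparse tape, indexed by position
    state, head = start, 0
    for _ in range(max_steps):
        if state == accept:
            return True
        key = (state, tape.get(head, blank))
        if key not in delta:
            return False               # no applicable rule: halt and reject
        state, tape[head], move = delta[key]
        head += 1 if move == "R" else -1
    return None                        # gave up: the machine may never halt

# A toy machine that accepts strings consisting only of 0s: scan right
# over 0s, accept on hitting a blank, get stuck (reject) on anything else.
delta = {
    ("s", "0"): ("s", "0", "R"),
    ("s", "_"): ("acc", "_", "R"),
}
run(delta, "s", "acc", "000")   # True
run(delta, "s", "acc", "001")   # False
```

Note the `max_steps` cutoff: a real universal machine has no such safety net, which is exactly the tension the Halting Problem below turns into a theorem.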

That said, here are the two big things to take away from any discourse on the Theory of Computation. Number one is the Halting Problem (yippee!). The Halting Problem runs essentially like this: say I have a Turing Machine, and I want to know whether it will halt given a particular input. Not too unlikely, not even unreasonable. You might not want to give it an input it will never stop on, right? Whatever; the point is, can you construct a Turing Machine that will decide whether a given Turing Machine M will halt on input P? And the answer is no. This is a problem that a Turing Machine cannot solve. Why? Well, consider how you'd solve it. Build a Turing Machine that will emulate M on input P. Then let it run: if it halts, M would, and if it doesn't, M wouldn't, right? The question is, when do you give up? When has our deciding machine "not halted"? There are more rigorous ways to demonstrate this, but it has always come down to "when is the answer no?" for me.

This, then, is the Turing Halting Problem, and its impact is immense. (Of course, it isn't hard at all to argue that the Halting Problem is merely a restatement of the Gödel Incompleteness Theorem, whose impact is still more frightening.) What we've just admitted is that a Turing Machine (of which every chip ever manufactured is merely a limited electronic analogue) has a problem it cannot solve (and there are infinitely many other, analogous problems) - there are limits to what we can do with a computer. Of course, there are also limits on how large some transfinite numbers are, so it's not too bad.
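The more rigorous demonstration I waved at is a diagonal argument, and it fits in a few lines of Python (my sketch; `halts` and `paradox` are hypothetical names, since the whole point is that `halts` cannot be written):

```python
# The standard contradiction, sketched in Python. Suppose a function
# halts(f, x) existed that always correctly answered "does f(x) halt?".
# Then we could write:
#
#   def paradox(f):
#       if halts(f, f):        # if f(f) would halt...
#           while True: pass   # ...loop forever
#       # ...otherwise halt immediately
#
# Now ask: does paradox(paradox) halt? Whatever halts() claims, paradox
# does the opposite. We can't run the infinite loop here, but we can
# check the diagonal logic mechanically:

def paradox_would_halt(claimed_answer):
    # paradox(paradox) halts exactly when halts() claimed it wouldn't
    return not claimed_answer

for claim in (True, False):
    assert paradox_would_halt(claim) != claim  # halts() is wrong either way
```

Since both possible answers lead to a contradiction, no such `halts` machine can exist.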

Number two is the idea of NP-completeness. Consider the set of languages that a Turing Machine can decide in a reasonable number of cycles - say the square of the number of elements in the input string, or the cube, or any arbitrary integer power. That's a lot of problems. We'll call that set P, which stands for polynomial - i.e. the number of elements in the string raised to some power (yes, plus or minus lower-order terms, but "these are not the terms you're looking for").

Now, remember how I alluded to the fact that there was one kind of modification you could make to a Turing Machine that would improve its speed better than linearly? Well, it's pretty radical. A basic Turing Machine can have only one state at a time. Which makes sense, but what if a Turing Machine could be in more than one state at a time? In effect, it would keep a list of its current states and their tapes. It's screwy, but there are languages it can decide very rapidly, by trying various possibilities concurrently. This class of Turing Machine is called Non-Deterministic. Now consider for just a moment the languages that a Nondeterministic Turing Machine can decide in polynomial time. We'll call this set NP. We can easily emulate Nondeterministic Turing Machines on deterministic ones, but there are problems for which the emulation will take an exponential number of cycles! This is bad - exponential means that you raise some constant to the power of the number of elements in the input string. You can do worse, but I don't know of any problems for which the best known algorithm takes factorial time.
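To see where that exponential cost comes from, here is an illustrative sketch (subset-sum is my example, not from the post): the nondeterministic machine "guesses" take-it-or-leave-it for each element, and emulating that deterministically means tracking every live branch at once - a set that can double at every step.

```python
# Deterministic emulation of nondeterministic guessing, illustrated with
# subset-sum: can some subset of nums add up to target? The frontier set
# plays the role of "all states the nondeterministic machine is in".

def subset_sum(nums, target):
    frontier = {0}                     # all sums reachable so far
    for n in nums:
        # branch every live state two ways: skip n, or take n
        frontier = frontier | {s + n for s in frontier}
        # len(frontier) can double each step -> up to 2**len(nums) states
    return target in frontier

subset_sum([3, 9, 8, 4], 12)   # True  (3 + 9, or 8 + 4)
subset_sum([3, 9, 8], 5)       # False
```

The nondeterministic machine "finds" the answer in one polynomial-length pass; the deterministic emulation pays for that with an exponentially growing frontier.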

It's possible that P = NP, though. There is a subset of NP known as the NP-complete languages. For a language (call it L) to be NP-complete, every other NP language must be convertible to L in polynomial time by a deterministic Turing Machine. If we can ever show that any NP-complete language is in P, then P = NP. It's every computer scientist's secret wet dream that this be true, though, so the fact that no one has managed to prove any NP-complete language is in P doesn't bode well. But no one has proven that P != NP either, so maybe some brilliant mind will figure out a proof one way or the other.
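One concrete way to picture membership in NP (my illustration, again using subset-sum, which is a known NP-complete problem): the nondeterministic machine's lucky guesses correspond to a short certificate that a plain deterministic machine can check in polynomial time. Finding the answer may be hard; checking a proposed answer is easy.

```python
# NP in certificate terms: a "yes" instance comes with a short witness
# that a deterministic machine verifies in polynomial time. For
# subset-sum, the witness is just the chosen subset.

def verify(nums, target, chosen_indices):
    # a single linear pass over the witness: clearly polynomial time
    return sum(nums[i] for i in chosen_indices) == target

verify([3, 9, 8, 4], 12, [0, 1])   # True:  3 + 9 == 12
verify([3, 9, 8, 4], 12, [2])      # False: 8 != 12
```

P = NP would mean that whenever answers are this easy to check, they are also (polynomially) easy to find.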

Okay, kids, it's been fun. On to other things.
