
Journal Journal: Technology triumphing over scarcity, part 2

Comments regarding my last journal entry tipped me over the mental edge into something I thought was kind of novel. I was going to reply in the thread, but the response got out of hand, so:

Except the matter duplicator argument does bring up the essential difference between paying for food or furniture and paying for software or music. We have money - in fact we have economics - to solve the problem of scarcity: that not everyone can have as much of everything as they want. But that's not true for music or software. I can give it away without losing it. (<hippy>Like love...</hippy>) Digitally stored data is like that; I can make perfect copies, and at almost no cost.

Ultimately then, it is not like walking into a music store and stealing the CD. The CD represents an investment of resources that has a cost. The music had a cost to produce, but copies of it cost nothing further to make. The CD is scarce, but the music is not.

But, because the producers of music and software participate in an economy of scarcity, and presumably depend on it for food and shelter, they're forced to present a non-scarce product as if it were scarce. Part of that process is manufacturing its scarcity - a process that may actually cost more than producing the music or software itself. Distant recollection suggests that "anti-piracy" efforts, which manufacture the scarcity of all music or all software (perhaps only of a single producer), certainly cost more than the production of a single unit of music or software.

Ultimately, though, I see three alternatives for us, as a culture that includes producers of non-scarce goods:

  1. Tolerate this manufacture of scarcity. After all, musicians, actors and programmers gotta eat, even if producers and managers are gonna eat more.
  2. Refuse to tolerate the manufacture of scarcity and collectively acknowledge that, in a situation where food and shelter are scarce, music, movies and software are sucker's games. This also means accepting that as industries these professions will dissolve, leaving only talented amateurs. Some might argue this is a good thing, but most would also want to see The Two Towers this Christmas.
  3. Refuse to tolerate a manufacture of scarcity and devise some other method by which these industries might be maintained at their present levels. For instance, there was a long-standing tradition of software contracts, where programmer service was for sale, not the software itself. On the other hand, this tradition predates the ubiquity of the personal computer; the service was still scarce. I for one can't think of a realistic alternative means of support.

In the present situation, we live with alternative one, while we long for number three. Honestly, as a society, we tolerate the fiction that music is scarce almost out of politeness. Perhaps the annoyance we feel is not that of the righteous against The Man. We aren't rebels or pirates for believing that "information wants to be free" or that we have rights to fair use, or even that maybe we should have the right to freely distribute music. I think the annoyance we feel is more akin to what we might direct at the extremely rude cripple, or the underqualified minority candidate suing for employment discrimination. The data production industries are like that. They are abusing a societal politeness regarding their handicap, the politeness of pretending that their goods are like any other. A politeness codified as copyright law.


Journal Journal: Technology triumphing over scarcity 2

Due to a link in a sig, I was recently exposed to what I think is a weak, although superficially compelling, argument regarding copyright and patent laws. It's an EFF article, so it's, of course, a bit to the left of center.

Here goes my restatement:

Hypothesize that sometime in the near future, we invent a device that can duplicate entire objects. By so doing, we would effectively eliminate the basis of conventional economics: scarcity. Once anyone can get whatever they want as cheaply as breathing, there's no longer cause to establish economic systems to control the distribution of resources.

One might argue, though, that manufacturers of every stripe would fight to have such a technology quashed, or at least limited so that it wouldn't affect their industry. This observation stems, by analogy, from the efforts of the RI and MP double-A (which looks confusing, but it's how I like to say it) to limit devices that can duplicate their products perfectly.

This seems to be an extraordinarily weak argument against the behavior of the MPAA and RIAA, since we cannot eat music or live in movies. While the scarcity of "content" is largely artificial, removing music and movies from an economy based on the notion of scarcity removes, to a degree, the producers and distributors of those goods from being able to participate in that economy. While food and shelter are still scarce (in the economic sense of the term), it seems unfair to admit that a thing has value, but remove it from an economics of scarcity.

On the other hand, if food and shelter were not scarce, if they were as available as dirt or air or water, the basis of that economy would be destroyed, and there would be no reason to squabble over money. Why kill yourself for a buck when you don't need it?

It's this fundamental difference between a matter duplicator (or, as an example, the Matter Pipe in Diamond Age) and a CD burner that weakens this argument to the point that it's effectively worthless. I begin to think it shouldn't have been included in an EFF article at all, since by extension, it weakens their position.

I don't want to set up a straw man just to knock him down. My views on property rights in general, and IP in particular, are in constant flux; I'm certainly not standing with the copyright cartels. What I'd like to see is a clear path from a matter duplicator to a CDRW, or a crucial similarity between food and music - essentially, a flaw in my reasoning here, because without such a flaw, it's a very compelling argument.

Fair reference: the article is here. And the sig was grumpygrodyguy's.


Journal Journal: Philosophy track 23

I've been thinking about a synthesis of a couple of ideas which I think is valid, and about its implications.

Idea No. 1: Complexity theory, and self-organizing systems. This is big, and fascinating, and slippery. I feel like I don't know the half of the basic premise, but it matches up with a number of ideas that occurred to me in vacuo, which is always a good way to get me to latch onto an idea. I'm applying this mainly because it suggests that the contention that Order implies an Orderer is false - that Order can arise from very subtle effects in initial conditions (or even just previous conditions).

Idea No. 2 is that of N degrees of separation. That everyone in the world can be connected to everyone else by means of some number of associations. It's kind of goofy, but what I'm really trying to get at is that every fact of our society is the result of the interrelation of 6 billion people, and that patterns in that interrelation result in every observable characteristic of anything one might call society.

Really, I don't think #2 works without #1, although I think it's observable on its own. Every editorial tries to find the reasons behind this or that trend. How often does political conversation turn to cause and effect? And aren't the causes almost always in terms of structures of people? Very rarely do other causes make their way into such discussions. We argue that the rise in sexuality in popular media has to do with, say, the women's liberation movement or pressure from advertisers, rather than spectra of light, new broadcasting technologies, or hormone levels in the atmosphere.

But self-organization explains why we see trends and cycles in social situations. Picture the human relation graph as having a state - a very complicated state, but a single state nonetheless. I'm not entirely sure what contributes to the state, but I tend to think we all observe parts of it. What else do we mean by "the state of the World?" I do think that human wills influence it, though.

Now, the synthesis completed: Credo that this state is complex, and self-organizing (or else it would have either devolved into a simple trend, or collapsed - I tend to think that the threat of nuclear war is one form of collapse that could occur), and so very sensitive to its previous states. Therefore a single human will, properly informed and acting correctly, could influence the whole state in a significant way. The problem, then, is to determine what actions will lead to what results. I suspect that while the simple answers of political activism, etc. do have an effect, they may not be the most effective. I also suspect that a perfect calculation may be outside of the scope of the human mind (at least mine) and that of modern hardware, but that some set of rules might be deduced. What those rules might be, or what the result of their discovery would be, I can't speculate yet.


Journal Journal: ToC: An Appendix 1

Two things that terrify and fascinate me that relate to ToC:
  1. Goedel Incompleteness. Basically, Goedel demonstrated that any sufficiently powerful logical structure either cannot prove its own consistency or, if it can, is actually inconsistent. This upsets me because of all that I've invested in the virtues of logic and reason. At least arithmetic is consistent.
  2. Chaitin's Construction. Often called Chaitin's Constant and designated by a capital omega, this is one scary number. It's the probability that a randomly chosen program will halt on a given universal Turing Machine. But we already decided that the Halting Problem was undecidable, right? Well, omega is uncomputable. And random, even though it's constructed in terms of a polynomial. The best you can do is pin down finitely many of its digits: the first N digits might be ones (if you express it in binary), and the N+1st might be zero. Or not. It disturbs me greatly that, for any given Turing machine, you can't prove that omega is not an endless string of ones - which would be unity. With the Halting Problem, at least you had the solace that some string would cause the machine to halt. (The standard definition is sketched just below.)
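For the record, here's the usual definition written out (my notation, for a prefix-free universal machine U): omega is the sum, over every program p that halts on U, of two to the minus the length of p.

```latex
\Omega \;=\; \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|}
```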

(and it really bugs me that Slashdot eats the O-umlaut element in "Goedel")


Journal Journal: ToC: Lesson 5 A Ragged End

We've come a ways. If all of your prior computing experience has been bashing out Perl scripts, I hope you've learned something. If you hold a graduate CS degree, I hope you've had a good chuckle. It's my intention to tie up this Theory of Computation thing for a little while in an effort to pursue a couple of other topics in this journal. I do intend to come back to my Laughing Hyena impersonation in due time. Probably with algorithms next.

But there's still a little bit more to explore here before I tie it off like a punctured femoral artery. First of all, let's talk about Turing Machines emulating one another. It's possible to build a Turing Machine such that I can give it an input which will cause it to behave as if it were an entirely different Turing Machine given a certain input.

More importantly, the Machine that I emulate can have functionality utterly different from a basic Turing Machine. You can add multiple tapes, two directional tapes (i.e. an infinite tape that the machine starts at the "middle" of), registers, RAM, what have you to a Turing Machine, and I can produce a standard states-and-one-ended-tape machine that will accept as input a series of symbols that encodes both your super-fancy machine and its input, and it'll come up with the same result (even if that result is a semidecision). The only cost is that my machine might take more cycles to compute than yours will. But since Turing Machines are all in the mind, those cycles can all take an arbitrarily small time, so it doesn't really matter. Especially since, with one exception, any bell or whistle we can add to a Turing Machine reduces the number of cycles to decide an input only linearly. Now, "linearly" might sound like a lot (seeing as how "half the time" is a linear reduction), but in the big picture, it's not all that much. I'm going to leave that there in the hopes that it will spark some sort of debate or comment.

That said, here are the two big things to take away from any discourse on the Theory of Computation: Number one is the Halting Problem (yippee!). The Halting Problem runs essentially like this. Say I have a Turing Machine, and I want to know whether it will halt given a particular input. Not too unlikely, not even unreasonable. Might not want to give it input that it won't stop with, right? Whatever, the point is, can you construct a Turing Machine that will decide whether a given Turing Machine M will halt if given input P? And the answer is no. This is a problem that a Turing Machine cannot solve. Why? Well, consider how you'd solve it. Build a Turing Machine that will emulate M for input P. Then let it run, and if it halts, M would, and if it doesn't, M wouldn't, right? The question is, when do you give up? When has our deciding machine "not halted?" There are more rigorous ways to demonstrate this, but it has always come down to "when is the answer no?" for me. This, then, is the Turing Halting Problem, and its impact is immense. (Of course, it isn't hard at all to argue that the Halting Problem is merely a restatement of the Goedel Incompleteness Theorem, whose impact is still more frightening.) What we've just admitted is that a Turing Machine (of which every chip ever manufactured is merely a limited electronic analogue) has a problem that it cannot solve (and there are infinitely many other problems which are analogous) - there are limits to what we can do with a computer. Of course, there are limits on how large some transfinite numbers are, so it's not too bad.
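If you'd like the more rigorous version of that hand-wave, here's the classic diagonal argument as a Python sketch. The halts() function is the purely hypothetical part - the whole point is that nothing like it can be written:

```python
def halts(f, x):
    """Hypothetical oracle: True iff f(x) would eventually halt.
    Turing's result is that no such function can actually exist."""
    raise NotImplementedError

def troublemaker(f):
    # Ask the oracle about f run on itself, then do the exact opposite.
    if halts(f, f):
        while True:       # f(f) would halt, so loop forever
            pass
    else:
        return "done"     # f(f) would loop, so halt immediately

# Now ask: does troublemaker(troublemaker) halt? If halts() says yes, it
# loops forever; if halts() says no, it halts immediately. Either way the
# oracle is wrong about at least one input, so it can't exist.
```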

Number two is the idea of NP-completeness. Consider the set of languages that a Turing Machine can decide in a reasonable number of cycles - say the square of the number of elements in the input string. Or even the cube, or any arbitrary integer power. That's a lot of problems. We'll call that set P (which stands for polynomial - i.e. the number of elements in a string to some power (yes, plus or minus the number of elements to a smaller power, but "these are not the terms you're looking for")).

Now, remember how I alluded to the fact that there was one kind of modification you could make to a Turing Machine that would improve its speed better than linearly? Well, it's pretty radical. A basic Turing Machine can have only one state at a time. Which makes sense, but what if a Turing Machine could be in more than one state at a time? Instead, it would have a list of its current states and their tapes. It's screwy, but there are languages it can decide very rapidly, by trying various possibilities concurrently. This class of Turing Machine is called Non-Deterministic. Now consider for just a moment the languages that a Nondeterministic Turing Machine can decide in polynomial time. We'll call this set NP. We can easily emulate Nondeterministic Turing Machines on deterministic ones, but there are problems for which the straightforward emulation takes an exponential number of cycles! This is bad - exponential means that you raise some arbitrary constant to the power of the number of cells in the input string. You can do worse, but I don't know any problems for which the best algorithm is factorial time.

It's possible that P = NP though. There is a subset of NP known as the NP-complete languages. In order for a language (call it L) to be NP-complete, every other NP language needs to be convertible to L in polynomial time by a deterministic Turing Machine. If we can ever show that any NP-complete language is in P, then P = NP. It's every computer scientist's secret wet dream that this is true, though, so the fact that no one has proven that any NP-complete language is in P doesn't bode well. But, no one has proven that P != NP either, so maybe some brilliant mind will figure out a proof one way or another.
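To put some flesh on "exponential emulation of a guessing machine," here's a Python sketch using subset sum, one of the classic NP-complete problems (my choice of example): checking one proposed subset takes no time at all, but a deterministic search has two-to-the-n subsets to grind through.

```python
from itertools import combinations

def check(subset, target):
    """Polynomial-time check of a single guess."""
    return sum(subset) == target

def brute_force(numbers, target):
    """Deterministic emulation of the guessing: try every one of the 2^n subsets."""
    for size in range(len(numbers) + 1):
        for subset in combinations(numbers, size):
            if check(subset, target):
                return subset
    return None

print(brute_force([3, 34, 4, 12, 5, 2], 9))   # (4, 5)
```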

Okay, kids, it's been fun. On to other things.


Journal Journal: ToC: Lesson 4 Turing Machines 2

Recap: we've spent three lessons building up the ideas of strings and languages and recognition devices and whatnot. Finally we've gotten up to the idea of Finite State Machines, in all their glory. But there's still something missing.

There's still an entire class of languages we can't even begin to recognize with anything we've seen so far. Consider a language defined as those strings containing the same number of each of three symbols. We'll even make it easy: all of the first symbol will come first, then all the second, then the third. There just isn't an FSM that can recognize a string in that language. If I said "up to ten of each character" (or even "up to a million"), you could build the machine - it amounts to a chain of states that count across the first symbol (e.g. "A"), and at the first occurrence of the second character, you have a series of states to count for the correct number of the second and then third symbols ("B" and "C"). If you get to the end of the B-C chain, accept the string, otherwise reject it. Well and good, but if I don't guarantee an upper limit, you need an infinite number of states. Can't we do better than an Infinite State Machine?

Yes, we can. It's called a Turing Machine. "Turing" as in Alan Turing, as in the Turing Test, the Church-Turing thesis, and cracking Enigma during the Second World War. And later harassed by a grateful Britain to the point of suicide.

But leaving that aside for the moment, let's talk about what a Turing Machine is. Take a Finite State Machine. Now take a memory structure. The classic description is a tape, like a long strip of paper, composed of an infinite number of cells into each of which a single symbol can be written. It has a first cell, but no last cell. There's a head that can move left one cell, move right one cell, read the symbol on the tape at its position, or write a symbol to the tape at its position. If it tries to go left from the first cell, it remains where it is. Easy. Now, we attach the tape to the FSM with the following simple rules:

  • Instead of using just the next symbol in the string, the FSM uses the symbol under the head on the tape.
  • Consequently, rather than feeding the string into the FSM, we put it on the tape, one symbol per cell.
  • In addition to changing state, the machine either writes a symbol or moves the head left or right.

See: simple. Now we can accept our language of like numbers of A's, B's and C's with what amounts to eight states. Quick overview: if you read an A, replace it with a D, and go right until you find a B, which you replace with a D, and then go right until you find a C, which you replace with a D, and then go left until you're back at the beginning. Go right over D's until you find an A and repeat, unless you get to the end of the string, in which case accept the string. If at any time you read a wrong character (an empty cell when you were looking for a C, or a C instead of a B, or whatever), reject the string. If Slashdot let me do tables, I'd do up the states and outputs, but it doesn't, so actually representing the machine is an exercise for the reader - though there's a code sketch of one way to write it out just below.
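Since I can't do tables, here's the next best thing: a Python sketch of a tiny table-driven simulator, plus a transition table for the marking scheme just described. One cheat on my part: I drop a marker into the first cell so the rewind pass knows where to stop, so my state count doesn't line up exactly with the eight in the prose.

```python
BLANK = '_'

def run_tm(delta, start, accept, reject, word, max_steps=10_000):
    """Run the machine; True means accept, False reject, None means it gave up."""
    tape = list('^' + word)          # cell 0 holds a left-end marker (my cheat)
    pos, state = 1, start            # the head starts on the first input symbol
    for _ in range(max_steps):
        if state == accept:
            return True
        if state == reject:
            return False
        symbol = tape[pos] if pos < len(tape) else BLANK
        if (state, symbol) not in delta:   # no rule for this situation: reject
            return False
        state, write, move = delta[(state, symbol)]
        if pos == len(tape):
            tape.append(BLANK)
        tape[pos] = write
        pos = max(0, pos + (1 if move == 'R' else -1))
    return None                      # step budget exhausted; maybe it never halts

# Mark one A, one B and one C (as D) per pass, rewind, and repeat.
delta = {
    ('start', 'D'):   ('start', 'D', 'R'),     # skip A's already marked
    ('start', 'A'):   ('findB', 'D', 'R'),     # mark an A, go hunt for a B
    ('start', BLANK): ('accept', BLANK, 'R'),  # everything marked: accept
    ('findB', 'A'):   ('findB', 'A', 'R'),
    ('findB', 'D'):   ('findB', 'D', 'R'),
    ('findB', 'B'):   ('findC', 'D', 'R'),     # mark a B, go hunt for a C
    ('findC', 'B'):   ('findC', 'B', 'R'),
    ('findC', 'D'):   ('findC', 'D', 'R'),
    ('findC', 'C'):   ('rewind', 'D', 'L'),    # mark a C, head back to the start
    ('rewind', 'A'):  ('rewind', 'A', 'L'),
    ('rewind', 'B'):  ('rewind', 'B', 'L'),
    ('rewind', 'D'):  ('rewind', 'D', 'L'),
    ('rewind', '^'):  ('start', '^', 'R'),
}

for w in ['ABC', 'AABBCC', 'AABC', 'ABCC']:
    print(w, run_tm(delta, 'start', 'accept', 'reject', w))
```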

Notice, as an interesting characteristic of a Turing Machine, that not only can it accept or reject a string, it can also fail to halt at all. This is new, and a necessary result of a Turing Machine's ability to deal with what we'll call recursive languages. The difference is that while an FSM always halts at the end of its input (and acceptance depends on whether that halt was in an accepting state or not), a Turing machine has an infinite number of ways to never halt. Consider a state where if it reads an A it writes an A. Result: it never halts, reading and writing A's forever. Or it could scan right until it finds an A, and then scan left until it reaches the beginning of the tape. Result: it sweeps back and forth ad infinitum.
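Using the little simulator above, here's a machine of my own in the same spirit: it just marches right forever, and the simulator eventually gives up on it.

```python
# A machine that never halts: whatever it reads, it re-writes and moves right.
spin = {
    ('spin', 'A'):   ('spin', 'A', 'R'),
    ('spin', BLANK): ('spin', BLANK, 'R'),
}
print(run_tm(spin, 'spin', 'accept', 'reject', 'A'))   # None: the step budget ran out
```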

There is, in fact, an entire class of languages based on this property. Consider the idea of semidecision, where a Turing Machine accepts a word into a language by halting if it is entered as input, but otherwise does not halt. A language for which there is a Turing Machine that halts on exactly its words is called recursively enumerable.

When you come down to it, Turing Machines are the big deal in this subject of discourse. In fact, the single most important result of any Theory of Computation course deals with Turing Machines. And that result, and all its depressing implications, applies to the very machine you're reading this article on, and every other computer, too. But we'll discuss that next lesson.


Journal Journal: A brief aside

This has nothing at all to do with Theory of Computation. I just learned something new, and I'm terribly impressed and amazed at how simple and correct the right answer can be sometimes. I'm also thoroughly disgusted at how much of a complete geek I've actually become.

HSRP. Hot Standby Router Protocol. This is not only how all networking should be, it's how everything should be.

Keeping with the pedantic theme of this journal, I'll pause for a quick explanation. Two or more routers need to back up for each other. But, since they're the default router for all sorts of not very advanced nodes, they need to do this in a very clever way. There's no way you're going to go into every Wintel box on your network and somehow get them to understand that two routers are their default. Not that it can't be done, but it's a real pain. And actually, sometimes it can't be done. Besides, every right-thinking person agrees that if at all possible, a thing should be done once and no more.

So, what HSRP is, see, is this agreement between the routers, that they'll all pretend to be one router, with one of them being the active router. But if the main one goes down, three seconds later another router in the group can take over, and the hosts using that router are clueless.
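Here's a toy model of the idea in Python, just to make the hand-off concrete. This is my own sketch of the concept (a shared virtual router, an active router sending hellos, a standby that promotes itself when the hellos stop), not the real protocol's packet formats or state machine, and the timer values are made up.

```python
import time

class Router:
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority   # real HSRP uses priority for elections; unused in this toy
        self.role = "standby"
        self.last_hello = time.monotonic()

    def hear_hello(self):
        self.last_hello = time.monotonic()

    def check(self, hold_time):
        # The standby takes over once the active router has gone quiet too long.
        if self.role == "standby" and time.monotonic() - self.last_hello > hold_time:
            self.role = "active"
            print(f"{self.name} takes over the virtual router")

# Hypothetical group sharing one virtual address: the active peer dies after 3 seconds.
r2 = Router("r2", priority=100)
for second in range(6):
    if second < 3:
        r2.hear_hello()          # hellos from the active router still arriving
    r2.check(hold_time=2.5)      # toy hold time, not a real default
    time.sleep(1)
```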

Such simplicity. So neatly orchestrated. This is why people pay Cisco the big bucks.


Journal Journal: ToC: Lesson 3 State Machines 1

(There I was thinking that no one was interested. Thanks for the emails. Next lesson begins immediately.)

Next step. We've got the idea of a language being a subset of all the possible strings given a specific alphabet. All right. And we can neatly describe a language as being all the strings accepted by a regular expression, which is a really nifty shorthand, but for real fun we need to move on to state machines.

There are two ways to explain state machines. One is geared for the mathematically inclined - a state machine is a 4-tuple that consists of a directed graph, a set of subsets of an alphabet that are associated with the edges in the graph, a single node from the graph, and a subset of nodes; the single node is the beginning state and the subset is the set of ending states. The other way is figurative, and I'm really thinking a diagram would be great right now. Here's a link: a state machine.

I'm not perfectly thrilled with this example because it's not using the notation that I like, and it implies that each edge can only be used for one symbol, but I'll deal. Just keep in mind that you'll sometimes see different ways of marking the beginning and end states, and that you can use a comma-separated list of symbols on an edge (one of the arrows between circles) and it's okay.

Okay, great, that's a state machine. Here's what it's for. First stop: instead of nodes, or circles, or arrows, a state machine consists of states (the circles or nodes) and edges (the arrows). Basically, any string "drives" a state machine. If the last symbol in a string leaves the machine in an end state, then it's accepted. If the string runs out when we're not in an end state, or includes a wrong symbol, then it's not accepted. Simple.

But what do I mean by "drives?" Basically, follow these steps (there's a little code sketch after the list):

  1. Start in the beginning state. Put your finger on it if you want to.
  2. Start at the beginning of the string. Put your other finger on it if you want.
  3. Look at the current symbol in the string, and compare it to the symbols on the edges leading from your current state.
    • If the symbol is there, update your current state to the node on the other end of the edge.
    • If it's not there, stop. The string is not accepted.
  4. Move to the next symbol in the string. If there aren't any more, you're done. Now:
    • If the current state is an end state, the string is accepted.
    • If the current state isn't an end state, the string is not accepted.
  5. Repeat from step 3.
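Here's that driving procedure as a little Python sketch, with the machine written out as a dictionary. The two-state machine below is a made-up example of mine (it accepts strings of a's and b's that end in b), not the one behind the link.

```python
def accepts(delta, start, end_states, string):
    state = start
    for symbol in string:
        if (state, symbol) not in delta:   # wrong symbol: not accepted (step 3)
            return False
        state = delta[(state, symbol)]
    return state in end_states             # out of symbols: check the end state (step 4)

# Hypothetical machine: accepts strings of a's and b's that end in b.
delta = {
    ('q0', 'a'): 'q0',
    ('q0', 'b'): 'q1',
    ('q1', 'a'): 'q0',
    ('q1', 'b'): 'q1',
}

for s in ['ab', 'abba', 'b', '']:
    print(repr(s), accepts(delta, 'q0', {'q1'}, s))
```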

What say you to an example? Try using the sample state machine (remember the link?) to check if the following strings are in its language:

  1. VXXXVS
  2. TPTXX
  3. VXXXVPXVGPS
  4. TTXXXPS
  5. TPPPPPTXXXXXVPS

Bonus question: how many strings does our sample machine accept?

The other thing to notice is that with not too much ingenuity, we can represent a regex with a state machine and vice versa. Just for fun, compare the sample machine to ((TP*T)|(V)(X*VP?)?)|(VV)S. Or try drawing a machine to accept 'abb*cow(end)?'. You can kind of see how regex and state machines are pretty equivalent, eh? Good.

Okay, state machines handled. Strictly speaking we could have a state machine with an infinite number of states, but there are many different reasons not to like the idea. Besides, they're cool enough with a finite number of states, which is where we get Finite State Machines, or FSMs. By loosening the definition of how an FSM works, we can describe all sorts of useful stuff, like networking stacks and burglar alarms.

If I were you, though, I wouldn't be satisfied with burglar alarms as justification for all this malarkey about languages that aren't C# or Scheme or anything else useful. In fact, I'd probably be crying "What in the name of Alan Turing is all this garbage good for?"

Which is a very good question. See you next lesson.


Journal Journal: ToC: Lesson 2 Accepting Languages.

Okay, so, you've read lesson 1, or you somehow understand how strings and languages are defined in this context. And there was something of a cliffhanger last lesson about accepting strings and languages.

Strictly speaking, a language is really the set of strings accepted by some means. But you could also define a means to accept a specific language of strings. Since a means is usually defined by the language it accepts, it seems to make more sense to me to define them in that order, but it really comes down to a toss-up.

"By some means" isn't very descriptive, and it's the filling in of that blank that I'll spend most of this entry on. I'm going to start with the idea of regular expressions, for two reasons. First, it's much easier for me to explain and illustrate a regular expression in a text-based format than a state machine (the other common option). Second, I'm sure there are some who will be pleasantly surprised to see where regex (as they're usually called) come from.

Quick aside (for the regex-literate; everyone else, skip on): this may not be exactly what you expect of regex (or is the plural regexes? regexi?). Specifically, these regex are anchored, and they aren't intended for replacement, so don't worry about \1 - \9 and whatnot.

Okay, a regex can accept a language of exactly one string by listing that string. So if the regex is "cow", then it accepts the language {cow} exactly. Simple, no?

Too simple. What if I want to accept {cow, chickens}? The first operation of regex is called alternation. Put two strings together with a vertical bar between them, like so: "cow|chickens", and the resulting regex accepts either element. In this case {cow, chickens}. Quick note: "cowchickens" isn't accepted. Alternation is exclusive.

It seems like this would get really tedious really quickly. So far, are we really any better off than listing the accepted strings? The next few regex operators have to do with repetition.

Follow a single symbol with a '?', and it becomes optional; it can occur zero or once.
example: a?b accepts {ab, b}.

Follow a single symbol with a '*', and it can occur any number of times, including none.
example: a*b accepts {b, ab, aab, aaab, ...} (the language is infinite.)

Following a single symbol with something like {n,m} means "this symbol can occur at least n times, and at most m times." {n} means "this symbol can occur exactly n times". {0,inf} (where by "inf" I mean the infinity symbol) is equivalent to *, and {0,1} is equivalent to ?.
example: a{3,4}b accepts {aaab, aaaab}.

The operator . stands for any symbol. We don't care what it is, just accept it.
example: .* accepts the language that consists of every possible string such that no string is a prefix of any other.

Last operator of importance: ( and ) are used for grouping. Surround anything with ( ) and it can be treated as a single symbol for purposes of the above operators.
example: (a|b)*(cow|chicken)?end. This accepts any string of a and b, possibly a "cow" or "chicken", so long as it ends with the three symbols e n d.

That being said, regex usually gets used to define languages, in the form "the language accepted by (foo|bar){2,17}a*b."
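If you want to play along at home, Python's re module implements essentially these operators, and re.fullmatch gives the anchored behavior this lesson assumes (the whole string has to be accepted, not just a piece of it):

```python
import re

tests = [
    (r'cow|chickens',  ['cow', 'chickens', 'cowchickens']),
    (r'a?b',           ['b', 'ab', 'aab']),
    (r'a*b',           ['b', 'aaab', 'ba']),
    (r'a{3,4}b',       ['aaab', 'aaaab', 'ab']),
    (r'(a|b)*(cow|chicken)?end', ['abbacowend', 'end', 'cowchickenend']),
]

for pattern, strings in tests:
    for s in strings:
        # fullmatch: the anchored, whole-string acceptance described above
        print(pattern, repr(s), bool(re.fullmatch(pattern, s)))
```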

Quick quiz: how many strings are in the language accepted by a*?

What we really want to accept languages with are State Machines.


Journal Journal: ToC: Lesson 1 Strings and Languages

The basis of one approach to the Theory of Computation is the idea of strings. Strings are nothing more or less than a series of symbols from a set of symbols called an alphabet. For example, it's common to use predefined alphabets, like the Latin or Greek alphabets, or sets of numbers, like the set of natural numbers less than 10.

Quick aside one: For the less mathed out there, a set is just any group without duplicates. Your intuitive understanding of an alphabet is probably enough to grasp this one firmly. Consider: how useful would an alphabet be if there was more than one "a" in it? Do you mean cat or cat?

Less quick aside two: Okay, what exactly is the point of a Theory of Computation? Why would anyone care? Ask yourself this question: are there any problems that a computer can't solve? Answer: yes. But it's not the "what's at the end of an infinite series of divisions" or anything so Doctor Who as that. There's math that solves those problems perfectly, and computers are quite good at math. So, what is there that a computer can't do? That's the question that a theory of computation should solve. Along the way, we also start to see hints of things that a computer can do, but not very well, and how to quantify how well a computer can solve a problem. Which is important, because how else do you improve your solution? And without that sort of reasoning, we would never, ever, be able to play Counterstrike (the other CS).

Okay, back to strings. A string is a series of symbols from an alphabet. Well and good. What can we do with strings? Well, mostly we assemble them into languages. Drop immediately the notion that I mean C++, Python, Java, or Befunge. A language is a set of strings with this requirement: if a string X is in a language, then no prefix of X is in the language. So, a "language" { a, ab } isn't really a language, because a is a prefix of ab. (On the other hand, { b, ab } is legal. Suffixes are okay, just not prefixes.)
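Here's that requirement as a quick Python check (the function name is mine):

```python
def is_language(strings):
    """A set of strings qualifies here only if no string is a proper prefix of another."""
    strings = set(strings)
    return not any(
        a != b and b.startswith(a)      # a is a proper prefix of b
        for a in strings for b in strings
    )

print(is_language({'a', 'ab'}))   # False: 'a' is a prefix of 'ab'
print(is_language({'b', 'ab'}))   # True: suffixes are fine
```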

Okay, so now we have languages. What do we do with languages? We accept them.

Actual accepting of languages we'll talk about later. But one might be asking "why?" at this point. Okay, alphabets make strings make languages, and that's great and all, but I'm sure I'm begging the question "Is this leading somewhere other than Nyarly getting his pedantic rocks off?" Well, it comes down to this: this guy Alan Turing suggested about sixty years ago that accepting languages was equivalent to solving problems. Or at least they were strongly related. It's complicated, and we've got a ways to go, but that's basically what he said, and why it's hard to screw up a theory of comp. course too much. Then the government for which he'd won a war prosecuted him for being a homosexual, but that's another, also fascinating, story.

That's all for this week. Comments welcome.


Journal Journal: Theory of Computation

As an Anonymous Coward replied to me, "NP completeness, Turing Machine, ect[sic]..." gets mentioned a lot on /. But what this coward was brave enough to admit was that he didn't really know anything about these things, although he was interested in CS matters.

Frankly, I think that many Slashdotters speak with a limited understanding of computer science, and to respond to that, I'd like to start using my own understanding as a springboard for a better understanding in general.

I'm leaving comments enabled so that criticism of my "lessons" might help to improve their content.

First lesson begins next journal entry.
