Bringing Back the PDP8
Anne Thwacks writes "Andrew Grillet has decided that the Digital PDP8, the first ever minicomputer, will rise from the dead.
He is calling it the PDQ8. Sure, others have done software emulations, and even hardware clones, but he is not just building a hardware clone: he is trying to revive the whole idea of 12-bit computers!"
TRS80 (Score:5, Funny)
Re:TRS80 (Score:2, Interesting)
what for (Score:3, Insightful)
Re:what for (Score:3, Informative)
Re:what for (Score:4, Informative)
Re:what for (Score:5, Funny)
KW == "kilowords" (Score:4, Informative)
Back then, the size of core memory was generally measured in machine words; thus, in the case of a 12-bit machine like the PDP-8 with 32 k-words, the core would be: 32k x 12 bits == 384 k-bits, or 48 k-bytes.
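The arithmetic above can be sketched in a few lines (a hypothetical helper, not anything from the original machine):

```python
def core_size_bytes(words: int, word_bits: int) -> float:
    """Convert a core memory size in machine words to 8-bit bytes."""
    total_bits = words * word_bits
    return total_bits / 8

# 32 kilowords of 12-bit words (with k = 1024):
# 32768 words * 12 bits = 393,216 bits = 384 k-bits = 48 k-bytes.
print(core_size_bytes(32 * 1024, 12))  # 49152.0 bytes, i.e. 48 k-bytes
```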
Re:what for (Score:4, Informative)
We never distributed it because paging performance without the hardware hack was very bad (every CDF instruction needed to be trapped and mapped in software) and the hardware hack was only developed for the 8/E and piggy-backed on one of the system boards destructively (i.e. once modified your 8/E wouldn't run without our hardware).
But it was used internally by the company I developed it for until about ten years ago.
Re:what for (Score:5, Funny)
In the bottom of a box somewhere in my basement, I've still got the BASIC source code for the Star Trek game we used to run on our high school's PDP-8. For each player's turn, it printed out the map of the current galactic sector along with any Klingon ships on the line printer.
It's funny, I remember when we played that game we felt like we had godlike control over a mysterious and powerful machine. Now when I play computer games, I mostly feel like a twitching moron.
Re:what for (Score:4, Funny)
I remember when we played that game we felt like we had godlike control over a mysterious and powerful machine.
That must've been a nice feeling, I bet. In these days, the mysterious and powerful machine has god-like control over you!
Six copies of Windows (Score:5, Funny)
Re:what for (Score:2)
I could of course be wrong, but I really think these PDPs ran UNIX. RT11 seems unlikely, since that was a PDP11 OS. Either some V[something] or a BSD Unix ran on it, I am almost positive.
Please do correct me if I'm wrong of course. It's been a while, ya know :)
Re:what for (Score:3, Informative)
The PDP-8 never ran anything remotely resembling Unix. The very first version of Unix ran on the PDP-1/7/9/15 18-bit family (a PDP-7 IIRC). The architecture of this family was similar in many respects to the PDP-8 and indeed preceded the PDP-5/8 family. You can think of the PDP-8 as being scaled down from the earlier family's 18 bits to 12 bits. To make it cheaper, of course.
The original Unix-written-in-C ran on the PDP-11 (the PDP 1/7/9/15 family version was written in assembly, IIRC). The first BSD version of Unix was written for the VAX family.
An idea (Score:3, Insightful)
Re:An idea (Score:3, Insightful)
A '53 vette can still drive on the roads, use normal gas from the pump, and hit speeds upwards of 100mph on the highway.
The PDP-8 simply isn't capable of competing with any current hardware; most Palms are more powerful.
ENIAC (Score:3, Funny)
Re:ENIAC (Score:2, Interesting)
I read this, then scrolled down to the bottom, and my quote of the moment said:
"Just think, with VLSI we can have 100 ENIACS on a chip!" -- Alan Perlis
Re:ENIAC (Score:3, Funny)
no, that idea sucks.
Re:ENIAC (Score:2)
no, that idea sucks.
Bag it already...
Re:ENIAC (Score:2)
has survived, to make a simulator for it?
ENIAC On A Chip (Score:2)
Steve
Calling all Electrical/Computer Engineers (Score:5, Informative)
Well, if you'd like, you can follow this design [jaywalk.co.uk] of an FPGA implementation of the original PDP-8 computer!
If you've used Verilog (a hardware design programming language), like I have, you can even download all the code [jaywalk.co.uk]!
12 bits (Score:5, Interesting)
I found that, where you are not primarily handling ASCII, 12 bits was a very good size.
Maybe someone would enlighten the rest of us on why a certain bit size is better than another, and why we currently use 8/16/32/64, instead of 12/24/48/96 ?
Re:12 bits (Score:5, Interesting)
>> and why we currently use 8/16/32/64, instead of 12/24/48/96 ?
Because powers of 2 are easier to work with in binary.
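To illustrate the point: with a power-of-two field width, extracting part of a word is one shift and one mask, and everything packs evenly into bytes. (Toy example, not from the thread.)

```python
word = 0xBEEF                       # a 16-bit word

# Power-of-two field widths mean extraction is one shift and one mask:
hi = (word >> 8) & 0xFF
lo = word & 0xFF
assert (hi, lo) == (0xBE, 0xEF)

# A 12-bit word also splits cleanly, into two 6-bit halves
# (octal notation lines up nicely with 3-bit groups):
w12 = 0o7654
assert ((w12 >> 6) & 0o77, w12 & 0o77) == (0o76, 0o54)

# The pain point: 12 is not a multiple of 8, so 12-bit data never
# packs evenly into byte-addressed memory.
assert 12 % 8 != 0
```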
It's like English Measurements (Score:2)
At the time when things pretty much fell into the currently accepted pattern, word sizes that were even powers of two happened to be convenient:
2^8: enough to hold the complete Latin alphabet
2^16: enough bits to handle the entire address space of a 1980s microcomputer
2^32: enough to handle almost any day-to-day integer calculation
It's like the inch-foot-yard-furlong-mile of English measurements. These are well suited to the kind of day-to-day measurements that people make, as inconvenient as they are for calculation. Eight, sixteen and thirty two bit words and addresses were big enough in the 1980s, but not so big as to be wasteful.
It's interesting to speculate how things might have been different had the industry settled on twelve and twenty-four bit word sizes. It may have been more convenient for people with non-Latin alphabets, although not as commodious as a sixteen-bit-per-character system. And a lot of effort was wasted in the nineties with the limitations of sixteen-bit address spaces (the segmented memory architecture, near and far pointers, etc.). The need for a larger, flat memory space might have been staved off for several years, until 16MB chunks were too small.
I don't know if that's a good thing or bad thing. We might be struggling with a 24-48 bit conversion today instead of happily using our P4s and Athlons and waiting for the high end users to hash out the 32-64 bit conversion.
Re:12 bits (Score:3, Informative)
Re:12 bits (Score:3, Insightful)
This DOES make a difference. More important than the space savings is the ability to know that a pattern of bits could not encode any illegal instructions that had to be tested for.
Re:12 bits (Score:3, Interesting)
1 2 4 8 16 32 64 128 256 512
So I suppose 12 could be
1 3 6 12 24
maybe
1 2 3 6 12 24
anyway
12 is good because
12/6 = 2
12/4 = 3
12/3 = 4
12/2 = 6
Re:12 bits (Score:2, Informative)
Character Codes (Score:3, Informative)
If you look at old assemblers and compilers, the limit on the length of a symbol/variable name is often the number of characters that could be squeezed into a single machine word.
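For instance, with a 6-bit character code a 36-bit word holds exactly six characters, which is why many old assemblers capped symbol names at six characters. A sketch of the packing (the 6-bit code here, A=1 through Z=26 with 0 as padding, is made up for illustration):

```python
def pack_sixbit(name: str, word_bits: int = 36) -> int:
    """Pack a symbol name into one machine word, 6 bits per character.
    Uses a made-up code: A=1 .. Z=26, 0 = padding (letters only)."""
    chars_per_word = word_bits // 6          # 6 chars fit in a 36-bit word
    if len(name) > chars_per_word:
        raise ValueError("symbol too long for one word")
    word = 0
    for ch in name.upper().ljust(chars_per_word, chr(64)):  # chr(64)='@' -> code 0
        word = (word << 6) | (ord(ch) - 64)
    return word

# 'A'=1 lands in the top six bits, 'B'=2 in the next six:
print(oct(pack_sixbit("AB")))  # 0o10200000000
```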
Because, as Tom Lehrer, formerly of MIT noted. . . (Score:3, Funny)
KFG
Because, as Tom Lehrer, formerly of Harvard, noted (Score:2)
Tom L. has always been a Haavaadite, not from MIT, as in the "Harvard Fight Song," and the lines, "These are the only ones of which the news has come to Haavaad/And there may be many others, but they haven't been discovaad." (Rhotic-lossy dialects bother me, since I speak one of the few English dialects that's fully rhotic.) I imagine it matters to some people (probably they go to Harvard).
Also, as far as I know, he's still there, although long since an Emeritus.
Re:Because, as Tom Lehrer, formerly of MIT noted. (Score:2)
Re:12 bits (Score:5, Informative)
This article [sigmaxi.org] explains why base-3 systems are actually a lot better than base-2 from a theoretical perspective, but that it was much easier to design hardware in base-2, so base-2 became the de-facto standard. Nowadays we could probably fab base-3 hardware fairly easily, but it's not worth doing so with all the base-2 hardware already in existence.
As for 16/32/64 instead of 12/24/48, it's just one of those things. IBM's earlier AS/400s ran on 48-bit processors (now they are 64-bit). 96-bit floating point is an IEEE standard. And do you know why file permissions in Unix are rwxrwxrwx? It's because they borrowed that idea from another operating system designed for 9-bit bytes and a 36-bit processor.
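That rwxrwxrwx layout is nine bits: three groups of three, one octal digit per owner/group/other triplet, which is exactly why Unix permissions are conventionally written in octal. A small decoder sketched from that description:

```python
def rwx(mode: int) -> str:
    """Render a 9-bit permission value (e.g. 0o754) as rwxrwxrwx."""
    out = []
    for shift in (6, 3, 0):                 # owner, group, other
        triplet = (mode >> shift) & 0o7     # one octal digit = one triplet
        out.append("r" if triplet & 4 else "-")
        out.append("w" if triplet & 2 else "-")
        out.append("x" if triplet & 1 else "-")
    return "".join(out)

print(rwx(0o754))  # rwxr-xr--
print(rwx(0o777))  # rwxrwxrwx
```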
Why we use base 2 instead of base 3 (Score:5, Informative)
The threshold voltage for transistors is somewhere under 0.2-0.3V usually (depending on the technology & lots of other parameters). So, you absolutely need a 0.6V supply. (0-0.3 = "0", 0.3-0.6 = "1".) Unfortunately, even with Vdd=1V, you'll get voltage drops happening throughout the chip ("IR drops" - as in I=current times R=resistance) so that the 1V may only look like 0.8V to some parts of your circuit.
From the above discussion, it should be obvious that there really isn't room to shoehorn in a third voltage level. Also, a nice feature of CMOS design is that when a gate is sitting in a "0" or a "1" state, it is drawing no (well, negligible) power. Power is only dissipated while a value is switching from 0 to 1 or vice versa. Off hand, I can't think of a way to do that with a third logic value. Consider drawing even a tiny amount of current while a gate is sitting at logic "2" (or whatever you want to call the 3rd value). 1mA (milliamp) times 1 million transistors on a chip = 1000 Amps. That chip's going to get a little hot!
Ok, so you've probably got at least two questions, which I will try to answer in advance. If you've got other questions - I'll just let someone else tackle those.
Q1) Why don't we just use a higher Vdd (supply voltage)?
A1) If you're using smaller transistor widths, you simply can't. When you use a really thin gate (i.e. 0.1um) on a transistor, the breakdown voltage of the gate is reduced. If you use a higher voltage, the transistor melts. (You could use larger transistors, but that kind of defeats the whole purpose! We make transistors smaller because we can fit more on a chip, and they operate faster and use less power.)
Q2) Can't we lower the threshold voltage?
A2) Yes, to some extent. (It's not always easy.) But we don't want to. Even when a transistor is "off", there is still a very small amount of leakage current flowing through it. If you reduce the Vth, you also increase the amount of leakage current. In older technologies, this hasn't been much of a problem, because the leakage current was so small in comparison to the dynamic power consumption. But as we are putting more and more transistors on a chip, the leakage power consumption in modern chips can easily add up to 30%-40% of the total power consumption. There's also another reason. If you did that, you would be lowering your noise margin. And you don't really have much control over the noise (which is why it's called that). If you reduce noise margins too much, you'll find it almost impossible to create a circuit that actually functions reliably.
Well, I hope that satisfies some of you (and doesn't get the rest of you too upset). VLSI circuit fabrication is a really neat field. Some of the tricks that are being used these days to fabricate that chip sitting in your computer and get it running at 2GHz (or aren't they up to 3GHz now?) are quite amazing - they're doing their best to cheat physics! Using a ternary counting system to build computers may have a lot of nice theoretical properties, but I can't see it displacing binary any time soon, except possibly in some really specialized applications. (There are always exceptions.)
That's my $0.03 worth. (Hey, I typed a lot. I think that's worth at least $0.01 extra. Maybe $0.025?.) Any errors in the above are mine, but I won't admit it.
Re:Why we use base 2 instead of base 3 (Score:4, Informative)
It could be accomplished (fully static CMOS, no steady-state current to maintain a 3rd logic level) with a second power supply, and circuitry designed to connect the output to either Vss, Vdd or Vmm (m for middle, for lack of any other name... hmm). Brian Hayes's flawed assumption [sigmaxi.org] is that circuit complexity increases linearly with the number of logic levels. He writes "An obvious strategy is to minimize the product of these two quantities", referring to the radix and number of symbols to represent a number... but he just pulled that out of a hat. The required circuit complexity is not a linear function of the radix, and a realistic model would quickly prove that binary is the most efficient. A fully static ternary output requires a minimum of four transistors, whereas binary requires only two.
That chip's going to get a little hot!
With a static CMOS circuit designed this way, power consumption would be approx 0.5 * C * f * V^2 (as it is in a normal binary circuit). C will probably increase somewhat, as nearly twice as many transistors would be needed per circuit, yet fewer trits than bits are needed to represent the same numerical range, so the increase in C probably wouldn't be by a factor of two. Presumably f (the clock frequency) would stay the same (well... I'll get to that...), and V stays the same (50% of transitions in binary are full supply voltage; in ternary, 33% are full voltage and 33% are half voltage). Power consumption would probably be similar.
Sadly, f probably won't stay the same. C gets larger on each signal, and when driving to half voltages, the transistors that connect to the Vmm supply get only half the effective gate voltage applied. So doubling the load and cutting the drive significantly is really going to hurt the circuit's speed.
Dynamic logic tricks (pre-charged busses) and bicmos circuits add another interesting dimension that's too complex to worry about, though it'd be important for any microprocessor.
But power consumption isn't likely to be a problem.
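The approximate dynamic-power formula quoted above, P = 0.5 * C * f * V^2, is easy to plug numbers into. The values here are purely illustrative, not measurements from any real chip:

```python
def dynamic_power(c_farads: float, f_hz: float, v_volts: float) -> float:
    """Approximate CMOS dynamic power: P = 0.5 * C * f * V^2."""
    return 0.5 * c_farads * f_hz * v_volts ** 2

# Illustrative: 1 nF of total switched capacitance at 2 GHz and 1.0 V
# gives roughly 1 watt of dynamic power.
p = dynamic_power(1e-9, 2e9, 1.0)
print(p, "watts")

# The V^2 term is why halving the supply cuts power by 4x:
assert dynamic_power(1e-9, 2e9, 0.5) == p / 4
```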
Getting back to the old PDP-8, as I recall it was a binary machine. The motivation behind 12 bits was that 6 bits was ideal to represent both upper and lower case characters and plenty of symbols, and 12 bits (two chars) was plenty for useful math. I don't recall the popularity of 6/12 bit systems having anything to do with base-3 signaling.
Re:Why we use base 2 instead of base 3 (Score:3, Informative)
In the paragraph just above figure 2:
Scroll back up to figure 1, and the third paragraph up is where the assumption is first made:
So what he's saying is that the "cost" of representing a number is the cost per digit multiplied by the number of digits required. But he makes the assumption that the cost of each digit is a linear function of the radix, which is simply not true in almost any system (certainly not in electronic circuitry, nor in telephone menu systems).
Speaking mathematically, r is the radix, and w is the number of digits (or symbols, words, or whatever you call them) required using that radix. The cost is F(r) * w, where F(r) is some model for the cost to implement that radix.
The words "An obvious strategy" are plain wrong. It's not obvious at all. It's simple-minded and ignorant. It's devoid of any analysis or thought about any real system. Even from a purely theoretical standpoint, it's academically dishonest to write "r * w" rather than "w * F(r)" and gloss over this critically important point without stating the assumption of a linearly increasing cost per digit as the radix changes.
Well, maybe that's a bit strong. Who am I to judge what's academically proper. But the paper clearly begins by saying:
The general argument that base-3 is actually superior for computer arithmetic is also quite evident in the "Trit by Trit by Trit" section (just below figure 1). I'll avoid quoting much of it, but at the conclusion he writes:
Now the rhetorical question "Why did base 3 fail to catch on?" is answered by postulating (not even any real knowledge) that way-back-then it was too tricky to design, and base-2 gained so much momentum and became so well established that base-3 never caught on. Notice how he concludes with the words "overwhelmed any small theoretical advantage of other bases", reaffirming once again the standpoint that base-3 has some advantage, if small, over base-2, theoretically speaking. He's clearly talking about the implementation of circuitry.
The ugly truth is that rolled up in the theoretical advantage of base-3 for circuitry is the assumption that the "cost" is "r times w" (r for radix, w for number of digits). Any engineer can tell you that cost has units of dollars, and r and w are both unitless quantities. To compute the cost of using a particular number system, you need to use a function (above I called it "F(r)") that transforms the abstract number "r" into the cost of implementing that radix. The unitless number of possible digits needs to be turned into a quantity in units of dollars (or some other currency) before it can be multiplied by "w" to obtain the cost of implementing that radix.
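The disputed quantity is easy to compute. Under the article's linear model, cost = r * w, where w digits of radix r are needed to distinguish N values; minimizing r * log_r(N) over the reals gives r = e ≈ 2.718, which is where the "base 3 is optimal" claim comes from. A sketch using that model (the very model the poster argues is unrealistic):

```python
def digits_needed(radix: int, n_values: int) -> int:
    """Smallest w with radix**w >= n_values (integer-exact, no floats)."""
    w, cap = 1, radix
    while cap < n_values:
        w += 1
        cap *= radix
    return w

def linear_cost(radix: int, n_values: int) -> int:
    """Cost = r * w: the linear per-digit cost model from the article."""
    return radix * digits_needed(radix, n_values)

# Cost to distinguish one million values under this model:
for r in (2, 3, 4, 10):
    print(r, linear_cost(r, 10**6))
# Base 3 comes out cheapest (39 vs 40 for base 2) -- the entire
# "theoretical advantage" rests on this linear cost assumption.
```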
DecMate word processor (Score:2)
The 3rd-party service man at a VAX site I worked at in the early '90s had a PDP-8 (with RK05s!) still under contract in a big milling machine at a local heavy manufacturer.
Re:12 bits (Score:2)
As to why 12 bits is good: simply, 8 bits is a bit small for an awful lot of cases, so if you use an 8-bit fundamental word length, you are often into doubleword operations. Practically, a 12-bit word length, with values of -2048 to +2047 signed or 0-4095 unsigned, seems to work out useful in a lot of control applications. When the extra bits were expensive, the saving between 12 and 16 was relevant.
First machine I worked on was 18 bits - all 8K words of it. Enough to run a very simple Fortran compiler. From paper tape, naturally.
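The ranges mentioned above (0-4095 unsigned, -2048 to +2047 in two's complement) fall straight out of the word width:

```python
def ranges(bits: int):
    """Unsigned and two's-complement ranges for an n-bit word."""
    unsigned = (0, 2**bits - 1)
    signed = (-(2**(bits - 1)), 2**(bits - 1) - 1)
    return unsigned, signed

print(ranges(12))  # ((0, 4095), (-2048, 2047))
print(ranges(8))   # ((0, 255), (-128, 127))
print(ranges(18))  # ((0, 262143), (-131072, 131071))
```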
Re:12 bits (Score:2, Interesting)
Back in the day there were two schools of thought: the 8 bit byte and the 9 bit byte. The 9 bit byte school represented the same data as the 8 bit byte school (0 to ff), and used the extra bit for parity.
8 bit bytes led to 16 bit words etc...
9 bit bytes led to 18 bit words etc...
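A parity bit is just a sum (mod 2) of the data bits; here is a sketch of how a 9th bit would check an 8-bit byte. Even parity is assumed here since the post doesn't say which convention was used:

```python
def even_parity_bit(byte: int) -> int:
    """Check bit that makes the total number of 1-bits even."""
    return bin(byte & 0xFF).count("1") % 2

# 0b1011 has three 1-bits, so the parity bit is 1 (four 1-bits total);
# 0b1001 has two 1-bits, so the parity bit is 0.
assert even_parity_bit(0b1011) == 1
assert even_parity_bit(0b1001) == 0

def with_parity(byte: int) -> int:
    """A 9-bit 'byte': the 8 data bits plus their check bit."""
    return (byte << 1) | even_parity_bit(byte)
```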
My reaction to the old timer was to ask WTH anyone would do this.
His response was to go into a discussion of 36 bit computing. "Ya see son. When you have 32 bits you can divide those bits evenly 6 ways:
32 single bits,
16 2-bit groups,
8 4-bit groups,
4 8-bit groups,
2 16-bit groups,
1 32-bit group.
With a 36 bit system you can divide those bits evenly 9 ways
36 single bits
18 2-bit groups
12 3-bit groups
9 4-bit groups
6 6-bit groups
4 9-bit groups
3 12-bit groups
2 18-bit groups
1 36 bit group
Using 36 bits gives you more flexibility in addressing."
He went on to tell me stories about how they exploited the advantages of 36-bit computing back when he had worked at Compuserve, and how sad he was that 36-bit systems had died.
It could very well be that 3 12-bit bytes are used to make up a 36 bit word...
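The divisibility argument is easy to check by just counting divisors:

```python
def divisors(n: int) -> list:
    """All positive divisors of n."""
    return [d for d in range(1, n + 1) if n % d == 0]

# A 32-bit word splits evenly 6 ways; a 36-bit word, 9 ways:
assert divisors(32) == [1, 2, 4, 8, 16, 32]
assert divisors(36) == [1, 2, 3, 4, 6, 9, 12, 18, 36]

# And 12 bits gives the 1/2/3/4/6/12 groupings praised elsewhere
# in this thread:
assert divisors(12) == [1, 2, 3, 4, 6, 12]
```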
Re:Maybe... (Score:2)
Well, with 6 bits you can have the capital letters and some punctuation.
With 12 bits, you can have all the European alphabets, and Russian and (some) Japanese.
With 24 bits, you can have Unicode. (No, you can't have Unicode in 16 bits. It's grown to 21 bits already (although only about 2^20 code positions are actually in the code).)
To conclude: 12 bits, 4096 characters, is a very good size for a rudimentary global character set. 24 bits lets you use all of Unicode (21 bits) and is a much better fit than 32 bits.
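The 21-bit figure can be checked against Unicode's actual code-point ceiling, U+10FFFF:

```python
# Unicode's highest code point is U+10FFFF (a limit fixed by the
# UTF-16 surrogate-pair design).
MAX_CODE_POINT = 0x10FFFF

# 20 bits are not enough; 21 bits are:
assert 2**20 <= MAX_CODE_POINT < 2**21   # 1,048,576 <= 1,114,111 < 2,097,152

# So a 24-bit unit holds any code point directly with room to spare,
# while a 16-bit unit needs surrogate pairs for anything past U+FFFF.
assert MAX_CODE_POINT < 2**24
assert MAX_CODE_POINT > 2**16
```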
This is great news :) (Score:4, Interesting)
On the subject of PDP8s, I was surprised to hear that they were used in communications in Hong Kong up until at least 1999 for a number of financial institutions. I worked with an old computer technician who earned a fortune maintaining these beasts. I wonder if they are still being used in HK after the Chinese reclamation?
Re:This is great news :) (Score:2)
...
I wonder if they are still being used in HK after the Chinese reclamation?
Possession of Hong Kong went back to mainland China in 1997, so I would say yes, they were in use for at least two years after the changeover. (I don't know if they are still in use, though.)
Slashdotted already (Score:2)
Why? (Score:5, Interesting)
So, after reading the article, I am still trying to figure out... Why revive the idea of 12-bit computers? Other than nostalgia (which is why people still drive Studebakers, old Ferraris and old Porsches, I suppose), what is the point?
Re:Why? (Score:5, Funny)
Re:Why? (Score:2)
Some people, (myself obviously included) think that the pre-gas shortage cars in the US were much better looking cars. The current models seem to still be recovering from the 80's and 90's ideas of "Box on Wheels" and "Wind Tunnel Styling". I wouldn't own a new Porsche.
I think the idea with the PDP however, is not nostalgia. It was (and is) a very stable platform. Many companies either just left them, or are still using them. We just left the VAX platform not too long ago, despite having used Alphas concurrently for ages.
After all, if it works, why upgrade?
-WS
Re:Why? (Score:4, Funny)
I'm rather sure it is easier to get laid in a vintage Studebaker [studebakercars.com] than a Ferrari [supercars.net].
More leg room and all.
Re:Why? (Score:2)
People care about even stranger things than that. Just yesterday, I saw a group of Edsel fans driving their cars in the Doodah Parade [pasadenadoodahparade.com]. When I thought that was goofy enough, the people standing next to me pulled out their pictures from a trip to a Corvair fan get together (The Great Western Fan Belt Toss & Swap Meet [hemmings.com]). ISTR that there are even Trabant fan clubs, of all things. Name a piece of obsolete technology, and there's a good chance that some people will be fanatically devoted to it.
Warning! Cheesy movie reference... (Score:3, Funny)
Paper Tape Reader! (Score:2, Interesting)
None of those airy-fairy magnetic tape drives in those days on a machine like that! The front panel switches were mighty handy too, for esoteric operations such as booting the thing.
replica computers ... its logical innit? (Score:3, Interesting)
all power to him!
Interesting (Score:3, Insightful)
It would be fun to play around with something cool like that, just for the sheer ability to say "Hey, y'all watch this!" (Oops, better watch that there accent, ya reckon?) It would be especially nice to have a C compiler or something to develop apps for it, again just for the coolness factor.
With a twelve-bit computer, what is the address space, anyway? Something like 4096 words? Surprisingly, you can actually do a lot with that if you code it tightly. No, you can't do weather map rendering too well or anything like that, but I bet you could pull off a stripped-down version of NetHack or something...
Coolness, regardless. :-)
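For what it's worth, a 12-bit address reaches 2^12 = 4096 words directly; IIRC the PDP-8's extended-memory option added a 3-bit field register on top of that, giving 8 fields of 4K words:

```python
ADDRESS_BITS = 12
direct_words = 2 ** ADDRESS_BITS        # words one 12-bit address can reach
print(direct_words)                     # 4096 words per field

# With a 3-bit memory-field select (as on the extended PDP-8, IIRC),
# you get 8 fields of 4K words:
fields = 2 ** 3
print(fields * direct_words)            # 32768 words (32K) total
```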
Address Space (Score:2)
Somewhere I have an old DEC PDP-8 handbook. They released a native FORTRAN compiler for the PDP-8. It just shows what you can do with clever coding and lots of overlays.
Re:Interesting (Score:2)
You speak heresy, Grasshopper (Score:3)
King Arthur: Noble FORTRAN compiler, although you are a dead language. .
FORTRAN compiler: I'm not dead yet sire.
King Arthur: Although you are a mortally wounded language. . .
FORTRAN compiler: Actually sire I'm feeling a bit of all right.
Again, C compiler indeed. Gag my PDP-8 with a spoon. (Actually, that would be 'anatomically' possible.)
Here's an interesting little page on the history of the PDP-8 OS's and languages:
http://www.cs.uiowa.edu/~jones/pdp8/history.htm
And here's an interesting computer history page with several FORTRAN links ( as well as UNIX and C links):
http://www.fortran-2000.com/ArnaudRecipes/CompM
C compiler. . . phbbbbbt!
KFG
Just how big *were* these things? (Score:4, Informative)
The interesting part is that they posted high resolution images of their setup, which includes PDP-8 microcomputers!
The image: http://www.ems-synthi.demon.co.uk/studiopz.gif [demon.co.uk]
The PDP-8s:
Left side - Teletype for PDP8
Left bay - PDP8/L Computer ("Leo") 4K x 12 bits (=6K bytes) 1.3 µs cycle (0.77MHz), 32K Hard Disk Store
Center left bay - PDP8/S Computer
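The figures in that caption are self-consistent: 4K twelve-bit words is 6K bytes, and the quoted 0.77 MHz corresponds to a 1.3 µs cycle time. A sketch of the arithmetic:

```python
# 4K twelve-bit words expressed in 8-bit bytes:
words, word_bits = 4 * 1024, 12
print(words * word_bits / 8)            # 6144.0 bytes, i.e. 6K

# 1.3 microseconds per cycle, expressed as a clock rate:
cycle_s = 1.3e-6
print(1 / cycle_s / 1e6)                # roughly 0.77 MHz
```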
12 bit is best for the US patriot (Score:5, Funny)
Think on it: powers of two are a far too simplistic and, dare I say it, European system for the patriotic American. In Europe they use metres, kilometres, grams and kilograms. All this regulation of structure around a number like 10 is typical of Europeans. Americans use sensible systems like 14 pounds (abbreviated sensibly to lbs, as pounds clearly contains the letter l) to a stone and 16 ounces (again with a sensible abbreviation of oz) to a pound. Who needs these ridiculous regimented European systems that dictate that everything must follow a sensible pattern?
Patriotic Americans arise. 12 bits to a byte, 7 bytes to a word, 13 words to a sentence and 1764 bits to a chain.
Re:12 bit is best for the US patriot (Score:2, Insightful)
Why do you think the Americans use imperial?
Because we used it first.
Re:12 bit is best for the US patriot (Score:2)
Re:12 bit is best for the US patriot (Score:2)
There's a site about it here [leo-computers.org.uk].
Re:12 bit is best for the US patriot (Score:3, Informative)
And of course we inherited the whole system. As I recall, the "lb." abbreviation has something to do with the French "livre", and also led to the "pound sterling" symbol, that fancy-schmancy "L" that featured so prominently on the Commodore keyboards of yore.
As for word width, well, there's nothing especially holy about multiples of 8. CDC used to make machines with a 60-bit word, because they mostly dealt with numbers, not text manipulation, and big fat words like that allow for big fat numbers, although storing an ASCII file in 60 bit words would be clumsy as hell (As a side note, I used to work with the CP1600, which was a real 16-bit machine. There was no way to address a byte, although there was an 8-bit shift so that you could pack ASCII into words to save space and slow down annoyingly fast programs.)
Re:12 bit is best for the US patriot (Score:3, Informative)
Which is interesting because the word Sterling comes from starling, which meant "small star" in mediaeval English - it was the symbol on the coin for the unit of currency. So the currency symbol should probably really be a *.
Lb is from "livre" (French for pound) and dollar comes from "taler", an old German currency.
taler (Score:2)
Re:Daler (Score:3, Informative)
Re:12 bit is best for the US patriot (Score:2)
I must respectfully disagree.
While a measurement system based on powers of 2 seems fairly straightforward and intuitive to your average techno-nerd, it is still incomprehensible to the average Patriotic American. In fact, I would venture to say that 1024 bytes in a kilobyte is every bit as opaque as our beloved, patriotic, English units.
What Gives? (Score:4, Funny)
On a related note, I'm going to be designing my own 32-bit system. It's going to be pretty cool, having an aesthetically pleasing case and running most all of the common software out there, but making the operating system run on top of BSD. Then I'll make really high-end systems, and education-type systems, and laptops.
Now I'm 95% of the way done with this whole project so I've hired an advertising firm to come up with some commercials. I figure I'll show joe average sixpack switching from the normal x86 windows machine, to my machine, I'll call them 'Switch-Ads'.
My proprietary systems will never run on anything else, and you will be forced^H^H^H^H^H^H encouraged to only buy via our website.
I'll call them MOC's ... and the company will be named Orange.
What for? (Score:3, Funny)
Good genes! (Score:2)
"My mother was a Fortran programmer using computers that looked like this [picture of an ancient IBM 608-series supercomputer]"
Text based games (Score:5, Interesting)
XYZZY (Score:4, Funny)
And of course
"with your bare hands?"
yes
you stand amazed as the dragon lies dead at your feet.
Bugger graphics; you can't beat a maze of twisty passages, all different... or was it a twisty maze of different passages?
W95 and DOS will not expire at the end of the year (Score:3, Interesting)
W95/98, on the other hand, will actually expire some years in the future. I discovered this on a reinstall that went bad. Windows simply refused to install. Having a Gateway at the time, I called tech support, and the issue was tracked down to a buggy BIOS (gotta watch for those updates) that had reset my system clock to a future time.
"Ah, there's your problem. Windows has a 30 year time bomb built in so it thinks it's expired."
Ummmmmm, good to know. I guess that's how long we've got to port all our favorite W95/98 games to Linux ( or maybe Plan 9).
KFG
Re:W95 and DOS will not expire at the end of the y (Score:2)
Much better things will be available by then.
Re:W95 and DOS will not expire at the end of the y (Score:2, Insightful)
a PDP8 was my home machine in 1976 (Score:3, Interesting)
It was fairly easy to program for - I wrote a simple cross-assembler on a DEC-10 that would print out my assembler source with machine code (in octal). For short programs, it was fairly quick to enter the programs in octal. Since the Intercept Jr. was all CMOS, the programs would stay in memory as long as I wanted without running down the batteries.
Really, it was very cool, and fun.
-Mark
What a memory... (Score:2, Interesting)
Why cling to the past? (Score:2, Insightful)
On the other hand. . . (Score:2)
If nothing else the guy is obviously having fun. Believe it or not when you get to be Eleventy years old your own hobby just might turn out to be keeping 16 bit Intel stuff alive.
I mean like, why ride skateboards when we have bicycles now? Why, because you *want* to.
KFG
Didn't these things have selectable word sizes? (Score:4, Interesting)
Re:Didn't these things have selectable word sizes? (Score:5, Informative)
It's a strange dream. The only 8's that had a knob on the front were PDP 8/e, 8/f, and 8/m and they all shared basically the same front panel design. The knob selected the register that would display on the front panel. It had no effect at all on the operational mode of the machine.
Only 12 bits? (Score:2)
DeCastro was right, this 12 bit nonsense will never go anywhere.
Resurecting old hardware designs (Score:4, Insightful)
Has anyone actually done that? Has anyone actually taken, say, a Tandy Color Computer 3's hardware and boosted it up to something approaching our current standards? I'm not talking emulation on an x86 platform. I mean fully working, with a processor with a native OS.
Those architectures are so simple, with kernels so small you could print the hex binary out on a couple of pages. Imagine how fast an accounting package would be at 1 GHz, or even 200 MHz.
I know this may be off topic, but if someone could resurrect a 12-bit system to a more modern standard, why not other systems? DOS [drdos.com] is still viable [freedos.org] in certain circumstances, why not these platforms?
Think about an 8-bit controller with a serial connection, flash memory, and an RCA video out jack that is based on a C64. There is a TON of documentation for programming on something like that. Linux gurus could use C/C++ and Windows users could use Commodore BASIC.
Oh well, that's just my ramblings.
Re:Resurecting old hardware designs (Score:2)
Then one day they replaced it with a PIII 700 system, 128MB RAM, the works. Still ran this IBM DOS OS, but the end-of-week run took seconds. Literally, we could no longer get away with a 4-hour break!
The same supermarket is still using this DOS OS on P4s now, so I wonder what the speed is like.
Well, not to put too fine a point on it, but . . . (Score:2)
On the other hand the venerable Z80 not only has never gone out of production but is being updated just as you suggest:
http://216.239.39.100/search?q=cache:vPeP4Ne7p1
There are an awful lot of uses for small, fast, cool running, general purpose and cheap as penny candy chips.
"Charles Luther" in "Runaway" understood this full well when he used 8088's to power his nefarious robotic killing machines.
KFG
Not Bad for its day (Score:2, Informative)
Why bring it back? Why not? It may not ever be used for much, but who says all the cool computers have to be uber-machines? This next comment isn't meant to start a flame war, but I'd like to see some of today's bloatware folks try and make a program of any substance work on one of those puppies. I've seen some code from folks used to huge addressable and virtual memories, and YIKES!
PDP8? How about a PDP11??? (Score:2, Interesting)
I just had to reboot one this morning...
Chris
PIC is a bit more practical (Score:4, Interesting)
Bruce
Reminds me of... (Score:3, Funny)
"I built the castle in the swamp. They said it would fall over but I did it anyways. Sure enough, it did. I built a second castle and that one fell into the swamp as well. But the third castle stayed."
Looks like he needs another iteration.
Re:Reminds me of... (Score:2)
Less rough, though still offtopic quote:
When I first came here, this was all swamp. Everyone said I was daft to build a castle on a swamp, but I built it all the same, just to show them. It sank into the swamp. So I built a second one. And that one sank into the swamp. So I built a third. That burned down, fell over, and then sank into the swamp. But the fourth one stayed up. And that's what you're going to get, Son, the strongest castle in all of England!
(No longer word-for-word:) But I just want to sing! ... And no singing!
I blew up a PDP8 - and survived (Score:3, Interesting)
I'm old enough to have done an electronics project building a joystick interface for a PDP-8 as an undergraduate. I spent ages soldering TTL chips, and after a few weeks plugged the card in, to a strong smell of fish and burning insulation. It wasn't my fault: the slot in the edge connector was too wide, and every single connector on the backplane had shorted to every other. It was going to be six months to get the machine repaired, so someone figured out they could take out the power transformer, scrape off the burnt mess, figure out how many primary and secondary turns were needed, then wind them on using a reel of wire and a lathe. They got the machine going, someone else filled the board slot with epoxy and cut a new slot, and my project was saved!

A few weeks later I reached round the back of a PDP-8 to unplug a power connector and grabbed the live pin, but was saved because my arm was earthed to the PDP-8 case. I love that machine; I still have the instruction set on a sheet of paper.
Just use the history simulator (Score:3, Informative)
I loved the PDP-8 but I'm not convinced... (Score:4, Interesting)
If C is "high-level assembly language," then the PDP-11 is "a computer that directly implements C."
To my surprise, though, I didn't really find that a lot was gained. Programming a PDP-11 didn't really FEEL much easier or more powerful than programming a PDP-8. And it was amazing how much every program expanded in size. It's been said that the PDP-8 instruction set was the most core-efficient ever devised, and I'd believe that.
On the other hand, when I tried programming a 6502, which on the face of it doesn't SEEM that much more restricted than a PDP-8, I just about went bananas.
Having said all that, I'm still not sure I see the point. The sweet design for a computer has to depend on the economics of the hardware around it. Who cares? Even IF the "core-efficiency" thing were true, and even IF you could use standard RAM with a 12-bit processor and not waste any bits, and even IF it turned out that the PDP-8 design were, say, 30% faster and used 30% less RAM for a given program than x86... how could it matter?
If the Alpha, which really WAS a superior design, wasn't superior enough to overcome Intel marketing, customer inertia, and only the normal amount of mismanagement, how can a PDP-8 be anything more than a curiosity?
Re:I loved the PDP-8 but I'm not convinced... (Score:4, Insightful)
Actually it's fair to say that C was developed as a "high-level assembly language" for the PDP-11; in other words, you've got it slightly backwards. The postfix "++" and prefix "--" operators correspond to the PDP-11's autoincrement and autodecrement addressing modes, and when applied to a dereferenced pointer map directly to "(Rn)+" (once the pointer's been moved to a register).
I doubt C would have these constructs if the PDP-11 didn't provide the corresponding addressing modes.
As far as the PDP-8 being perhaps the most core-efficient design ever, speaking as someone who once developed system software for the PDP-8 and afterwards compilers for the PDP-11, yes, I'd say you're right.
As long as you could fit program and data into 4096 12-bit words, that is. If your program fit into 4096 12-bit words, accessing data in the remaining 28 KW was relatively easy thanks to the semantics of the CDF instruction. But once your code itself outgrew the first 4096 words, things got bad in a hurry, because cross-bank subroutine calls using the CIF instruction were fairly expensive.
Gordon Bell designed both the PDP-8 and the PDP-11, and they were designed with different goals in mind. The PDP-8 was designed to be programmed in assembly code: the page and memory-bank addressing structure made the development of efficient compilers impossible (it's no accident that no system programming language like C was ever implemented for the PDP-8 architecture).
The PDP-11, on the other hand, was the first minicomputer designed with the compiler writer in mind. The instruction set was very easy to generate code for, much easier than for many mainframe machines that in those days still often had a single accumulator and some auxiliary special-purpose registers. The PDP-11's clean, general-purpose register design and (relatively) orthogonal instruction set made compiler writers like myself almost faint in anticipatory pleasure when the design was first announced.
While Gordon Bell designed the PDP-8 and PDP-11, the original engineering plans for the PDP-8 are signed by DeCastro, who did the implementation. He submitted a rival design for DEC's 16-bit minicomputer that was nowhere near as clean or compiler-writer-friendly as Bell's PDP-11 design.
When the PDP-11 design was chosen, DeCastro left and started Data General, and his 16-bit design became the oft-loathed Nova.
CDC's 12-bit PIC design was much inferior to the PDP-8's, IMO.
More power to 'im (Score:2)
I only mention this because I hope someone who does have the requisite electronics skills will email me so we can join forces.
As for the earlier post to the effect of "what is it good for?", I can only say that it's fun to do, and old computers are good for the same things they were good for when they were new. One may as well ask what a 1965 Mustang is good for.
PDP-8 was not the first minicomputer (Score:2)
The PDP-8's distinction was to be the first mass-produced minicomputer.
Call it the QED8 (Score:2)
Odd way to go about it (Score:2)
http://www.sparetimegizmos.com/Hardware/SBC6120
The only point I could see in using FPGAs would be if you were trying to recreate an early MSI PDP-8 (these things existed before the single-chip microprocessor).
PDP-8 microprocessors (Score:4, Interesting)
For some industrial control jobs, something like a PDP-8 or PDP-11 is in many ways ideal because you can see everything that goes on. It is actually possible for one person to understand the hardware, the microcode, and every single bit of the software. For me, that is the great pleasure of small embedded designs.

I really think it would be good to have a teaching tool for CS that let a student do a project and have a complete overview of the entire thing in this way. I'm far from knocking progress, but there are comments on this thread that touch on the kind of alienation we have now between hardware and software: most people have no real idea at all what the hardware does, and use terms like "cache" without even stopping to think about what is going on.

So yes, let's have someone build an understandable modern PDP-8. It's less weird than the RCA1802 and easier to get your head around than the 8080.
Re:But For What? (Score:5, Informative)
Re:But For What? (Score:2)
These PDP-8s are still in service, in Ontario, in nuclear power generation, and will be for some time to come.