By the late 1960s, the US panic over the Soviet Union's launch of Sputnik had resulted in a significant increase in investment in science education at all levels. In particular, an experimental program was started at the public Bronx High School of Science in New York City, with the goal of exploring whether high-school-age students could successfully program computers. (Computers often cost hundreds of thousands or millions of dollars, so this felt a bit like seeing whether students could fly a jetliner.) The school procured a number of small systems, including several of the legendary Olivetti Programma 101s.
The school's main machine, though, was an IBM 1620. It was a decimal (not binary) machine, and the high school's had the minimum 20K digits (not bytes!) of memory. Originally, input and output were only via punch cards and the built-in typewriter, but by 1968 or 1969 an IBM 1311 disk drive, itself the size of a small washing machine, added 2 million digits of persistent storage, and an IBM 1443 line printer was also added.
Students were taught, in this order: machine language, then FORTRAN II, then assembler. That's not a typo. Since the machine was decimal, it wasn't too hard to type raw machine code onto cards, which could be loaded directly into memory by a short self-booting loader program included on the front of each card deck. There was an assembler, but until the hard disk showed up, the assembler required that an intermediate deck be punched and loaded each time an assembly was attempted. So, for moderate-size programs, it was often easier to write machine code directly. This typically involved manually computing and setting the absolute target of each branch. Here's a picture of the cover of the textbook that students used to learn 1620 programming.
This experiment was, in my opinion, wildly successful. I'm aware of at least 4 quite well known computer scientists who started their programming careers in the 1960s on that machine at Bronx Science. In general, many programmers from that era learned on 1620s. Like PDP-11s and Apple IIs later, 1620s were machines that an individual could get easy access to, and could learn to program at both a low level (machine code or assembler) and a higher level (FORTRAN, LISP). Famously, the 1620 did not have a general-purpose adder implemented in hardware: the add instructions would not produce the usual results until software loaded a table of partial sums into a fixed location in memory. The machine's nickname was thus CADET: Can't Add, Doesn't Even Try.
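The table-lookup trick is easy to demonstrate. Here's a small Python sketch that adds two numbers digit by digit using only a precomputed sum table, in the spirit of the CADET design. The table layout, names, and carry handling are illustrative assumptions, not a reproduction of the 1620's actual memory format.

```python
# Illustrative sketch: decimal addition by table lookup, in the spirit of the
# 1620's "Can't Add, Doesn't Even Try" design. The real machine consulted a
# table of digit sums at a fixed memory location; this toy version does the
# same per digit. Structure and names here are hypothetical.

# Precompute the 10x10 table mapping a pair of digits to (digit, carry).
ADD_TABLE = {(a, b): ((a + b) % 10, (a + b) // 10)
             for a in range(10) for b in range(10)}

def table_add(x, y):
    """Add two non-negative integers using only table lookups per digit."""
    result, carry, shift = 0, 0, 1
    while x or y or carry:
        a, b = x % 10, y % 10
        digit, c1 = ADD_TABLE[(a, b)]
        # folding in the incoming carry may itself produce a carry
        digit, c2 = ADD_TABLE[(digit, carry)]
        result += digit * shift
        carry = c1 + c2  # never exceeds 1: if c1 == 1, digit <= 8, so c2 == 0
        x, y, shift = x // 10, y // 10, shift * 10
    return result
```

If the table is never loaded (or is loaded with garbage), addition produces garbage, which is exactly the failure mode the CADET nickname pokes fun at.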
According to today's actual posting from Nate Silver, the same data leads him to conclude an 86.3% chance of an Obama win in the electoral college. Still high, but it means your "Nate Silver's Numbers Project..." headline, while true if parsed carefully, is very misleading. If you want to say "I conclude from Nate Silver's numbers...", well, fine.
Silver's Nov. 4 post is at: http://fivethirtyeight.blogs.nytimes.com/2012/11/05/nov-4-did-hurricane-sandy-blow-romney-off-course/ (paywall)
For what it's worth, I was a student at MIT in the early 1970s. I recall hearing, in the summer of 1972, a story from other students that is surprisingly similar in general outline, but not in detail. Obviously, my memory from so long ago isn't perfect, what I heard at the time was a rumor anyway, and I haven't really tried to research anything that would corroborate it. That said...
The story was not about Apollo 13, but about another Apollo mission that had established orbit around the moon. Some sort of faulty sensor reading or stuck switch was preventing the system from preparing the necessary rocket firings to break the astronauts out of lunar orbit and send them home. According to these rumors, NASA identified the author of the control code as an MIT student working at the Charles Stark Draper laboratory, which is affiliated with MIT. An emergency call went out to find him, so that he could patch the code to ignore the faulty switch or sensor.
The claim is that the call was taken by friends, who were concerned by the fact that the student in question, whether long-haired or not, was either drunk or stoned out of his gourd at the time. Nonetheless, the student was alerted. He supposedly uttered the obvious "oh !$!$!" and stumbled off to Draper Lab, where in his reduced condition he patched the code and saved the astronauts.
Very much a rumor/urban legend, but suspiciously similar to the new story about Apollo 13. These certainly were the sorts of stories that floated around MIT at the time. I expect that at least a small percentage of them are true.
Describing Al Gore's role as a congressman, Vint Cerf and Bob Kahn, who played key roles in the development of the Internet and TCP/IP, write:
"He was the first elected official to grasp the potential of computer communications to have a broader impact than just improving the conduct of science and scholarship. Though easily forgotten, now, at the time this was an unproven and controversial concept. Our work on the Internet started in 1973 and was based on even earlier work that took place in the mid-late 1960s. But the Internet, as we know it today, was not deployed until 1983. When the Internet was still in the early stages of its deployment, Congressman Gore provided intellectual leadership by helping create the vision of the potential benefits of high speed computing and communication. As an example, he sponsored hearings on how advanced technologies might be put to use in areas like coordinating the response of government agencies to natural disasters and other crises."
They go on to discuss the important contributions he made as Senator and Vice President.
There's a lot of merit in this story, I think, but ultimately it muddies the waters. Certainly, its claim that government-funded research played a less-than-key role in the development of internetworking seems to be just plain false.
First of all, the work Xerox did that most resembles the Internet protocols was not Ethernet, but PARC Universal Packet (PUP), which is indeed quite directly comparable to the IP in TCP/IP. Ethernet, while a terrific piece of work, mostly served to facilitate networking within a single site.
The article also implies that the government-funded ARPANET wasn't really the precursor of the Internet. I think that's an oversimplification. The ARPANET wasn't the very first packet-switching network (see the work of Baran and Davies), and it certainly wasn't an internet (a network of networks), but it really was the direct antecedent of the Internet as we know it. The ARPANET connected universities and other research establishments. It proved the viability of a packet-switching network with all the application smarts at the periphery of the network. In almost all cases, what had been ARPANET connections among the early sites evolved (sometimes by way of NSFNET) into TCP/IP Internet connections, running essentially the same applications and services. So, in all those ways, the ARPANET was a crucial step on the way to our TCP/IP-based Internet, and of course, the ARPANET was government funded.
A much less sensationalist but much more balanced history of all this can be found at: http://www.nethistory.info/History%20of%20the%20Internet/origins.html . The record there strongly suggests that Bob Kahn and Vint Cerf were discussing approaches to internetworking (connecting networks) in the spring of 1973. Interestingly, the official PARC research report on PUP actually cites the Internet work of Cerf and Kahn, specifically their 1974 paper "A Protocol for Packet Network Intercommunication".
So, the government-funded work on internetworking seems to have started before the Xerox work, and the Xerox research team explicitly cited Cerf and Kahn as sources of inspiration for the Xerox work on internetworking. Wouldn't it have been nice if the WSJ article had made all that clear before everyone started using these oversimplifications to prove the futility of government-funded research?
Keep the number of passes in a compiler to a minimum. -- D. Gries