And progressive lenses for everything else.
Not a solution for everyone, perhaps, but they work for me.
It might be as simple as the AI saying, "Hey, here's a cool new device I think we should make." It could provide the schematics of a device that would seem to do one thing, but if we're incapable of understanding how the device works, it might serve some entirely different purpose.
Vernor Vinge dealt with this topic rather convincingly with the Blight in "A Fire Upon The Deep".
The great stumbling block to any such possibility (aside from the immense improbability of our being able to develop a self-aware machine in the first place) is that we haven't developed computing hardware capable of remaining operational for very long without ongoing maintenance and a reliable supply of electric power. Any AI dependent on these resources would be utterly dependent on human goodwill for its continued existence. Reboot the poor sod and "it's a whole new world for ducks every day." Even the hypothetical Trojan-horse devices suggested by a Blight intelligence would be subject to the same limitations. Not exactly global conquest material.
In 1977 or thereabouts, I was a co-op student at Xerox's Webster, NY Research Center. At lunchtime, I had access to an Alto, and spent far too much time playing MazeWar, a networked multi-player real-time 3D-perspective game wherein the players navigated a maze (displayed as wireframe 3D with an overhead map at the side), finding other players (who appeared as giant floating eyeballs) and zapping them. Once zapped, you respawned elsewhere in the maze and attempted to sneak up on your opponent and return the favor.
The graphics were extremely simple; there was no detail in the walls, just lines showing the edges, and player positions were limited to the center of each grid square; player movement was in discrete jumps. All of this was done to reduce the computational load for the graphics, of course. As a result, the system was very responsive, and the experience was quite immersive.
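If it helps to picture why that was so cheap to render, here's a little sketch (my own reconstruction in C, not anything from the Alto) of the discrete-state model: each player is just a grid cell plus one of four cardinal headings, so the renderer only ever has to draw from a small, fixed set of viewpoints.

    /* Hypothetical reconstruction of MazeWar-style discrete movement --
       not the original Alto code. Positions snap to grid cells and
       facing is one of four cardinals, so every possible view comes
       from a small, enumerable set of (cell, heading) states. */
    #include <stdio.h>

    enum facing { NORTH, EAST, SOUTH, WEST };

    struct player {
        int x, y;           /* grid cell, not pixels */
        enum facing dir;    /* one of four headings  */
    };

    /* One step forward: a discrete jump to the next cell. */
    static void step_forward(struct player *p) {
        static const int dx[] = { 0, 1, 0, -1 };  /* N, E, S, W */
        static const int dy[] = { -1, 0, 1, 0 };
        p->x += dx[p->dir];
        p->y += dy[p->dir];
    }

    int main(void) {
        struct player me = { 3, 3, EAST };
        step_forward(&me);                       /* a jump, not a smooth glide */
        printf("now at (%d,%d)\n", me.x, me.y);  /* prints: now at (4,3) */
        return 0;
    }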
I've read the Heise articles in the original German, and the GPUs were not faked; the cards were older-generation graphics cards (with roughly 10% of the graphics throughput of the claimed item) with the video BIOS hacked to zero out the card manufacturer ID, and the GPU type twiddled to fool the driver into thinking it was the newer card. According to the articles, NVidia is tracing the GPUs through the supply chain by their internal serial numbers.
I would speculate that someone bought up a truckload of obsolete cards, reflashed the BIOS images, and relabeled them with plausible product ID labels. Could have been the Chinese manufacturer, could have been someone elsewhere in the pipeline.
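For those wondering why a BIOS reflash is enough, here's a rough sketch (hypothetical code; the device IDs are made up, though 0x10DE really is NVidia's PCI vendor ID) of the usual pattern: a driver identifies a card by the vendor/device IDs the board itself reports, and checks nothing else.

    /* Hypothetical sketch of how a driver typically identifies a card:
       it trusts the vendor/device IDs reported by the board -- which is
       exactly what the reflashed BIOSes were forging. */
    #include <stdio.h>
    #include <stdint.h>

    struct pci_id { uint16_t vendor, device; const char *name; };

    /* Illustrative table; these device IDs are invented. */
    static const struct pci_id known_gpus[] = {
        { 0x10DE, 0x0001, "OldGPU-100" },
        { 0x10DE, 0x0002, "NewGPU-900" },
    };

    static const char *identify(uint16_t vendor, uint16_t device) {
        for (size_t i = 0; i < sizeof known_gpus / sizeof *known_gpus; i++)
            if (known_gpus[i].vendor == vendor && known_gpus[i].device == device)
                return known_gpus[i].name;
        return "unknown";
    }

    int main(void) {
        /* A reflashed BIOS reports the newer part's ID... */
        printf("%s\n", identify(0x10DE, 0x0002));  /* prints: NewGPU-900 */
        /* ...and the driver has no independent way to tell, which is
           why NVidia fell back to tracing internal serial numbers. */
        return 0;
    }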
As I read his analysis, OpenSSL relies on releasing a buffer, reallocating it, and getting the PREVIOUS contents of that buffer back -- or else it will abort the connection. (Search for the string "On line 1059, we find a call to ssl3_release_read_buffer after we have read the header, which will free the current buffer." in his article referenced by the parent post).
Now, IMO, this goes way beyond sloppy. Releasing a buffer before you're done with it, and relying on a wacky LIFO reallocation scheme giving you back that very same buffer so you can process it, is either 1) an utterly incompetent coding blunder that just happened to work when combined with an utterly terrible, insecure custom allocation scheme, or 2) specifically designed to ensure that this insecure combination is widely deployed to provide a custom-made back door, as it works only with the leaky custom allocator.
If 1), then I must agree with Theo that the OpenSSL team were indeed irresponsible, since at least one of these two cooperating blunders ought to have shown up in a decent security audit of the code, and any decent set of security-oriented coding standards would forbid them both.
If 2), then it was deliberate, and the tinfoil-hat crowd is right for once.
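To make the failure mode concrete, here's a toy sketch (hypothetical names, not OpenSSL's actual allocator) of such a LIFO freelist: freeing a buffer and immediately reallocating hands back the very same memory with its old contents intact, so code that releases a buffer early appears to work -- right up until it's paired with a normal allocator.

    /* Toy one-slot LIFO freelist, for illustration only. */
    #include <stdio.h>
    #include <string.h>
    #include <stdlib.h>

    static void *freelist_head = NULL;

    static void *freelist_alloc(size_t n) {
        if (freelist_head) {             /* reuse last-freed buffer, contents intact */
            void *p = freelist_head;
            freelist_head = NULL;
            return p;
        }
        return malloc(n);
    }

    static void freelist_free(void *p) {
        freelist_head = p;               /* never cleared, never returned to the OS */
    }

    int main(void) {
        char *buf = freelist_alloc(64);
        strcpy(buf, "record header");    /* data we are still logically using */
        freelist_free(buf);              /* released too early...               */
        char *buf2 = freelist_alloc(64); /* ...but the LIFO freelist hands it
                                            straight back                       */
        printf("%s\n", buf2);            /* prints "record header": the blunder
                                            "works"; with plain malloc/free, buf2
                                            could be fresh, uninitialized memory
                                            and this would break                */
        free(buf2);
        return 0;
    }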
In 1955, Philip K. Dick wrote a short story, "Autofac", about self-replicating machinery. Still a good read, IMO.
Been there, done that, wondered "What were we thinking?"
In selecting an instrumentation framework for a test system, we went through a careful process: defining what was important, listing the pros and cons of each competing option, and running tests to see whether both would run the instruments we needed.
Sometimes you just can't outwit Murphy.
Just look at this bitmap on my smartphone. (Ha! I just KNEW that QR codes were evil!)
Two important things are missed here:
1) Google mainly bought the patent portfolio for defensive purposes, not as revenue engines in themselves. The point of the suit is that MS wants to use the patents without paying for them. It's basically a move in the MS-vs-Android war.
2) The judgment doesn't pass the smell test. Read the articles over at Groklaw for the details, but the judge here is ruling that Motorola must accept patent-pool rates for a pool they don't belong to, rather than negotiate rates using the methods of the group they are a member of. The whole proceeding has been slanted toward the home team (MS); the judgment seems to be very much an overreach, and probably won't survive appeal.
Phaser on overload. (Depending on the short-circuit current capacity of the Kindle's battery and the resistance of the shorting bar, that is.)
I once was a sysop for a small company's Data General system, where large datasets were stored as TAR archives on nine-track tapes; some poor soul had copied TO the tape instead of FROM the tape, and desperately needed to recover a file that was still there on the part of the tape beyond the end of the inadvertent write. You could read up to the added end-of-tape marker, but the tape just wouldn't read any further. Screwed, yes? Well, not quite. I set the system to rereading the damaged tape, waited till just before it reached the offending end-of-tape marker, and briefly put my thumb on the roller that measured tape travel, causing the drive to jump the tape ahead ('cause the sensor said "the tape is not moving!") and right past the EOT marker. Voila! The system read out the rest of the files on the tape, fortunately including the one they really needed, and I was briefly a hero. Hero status never lasts, of course, but it was fun.
For me, as a space enthusiast and aerospace professional, the sad part is that *anyone* would get a shuttle orbiter project so close to operational that they could launch, orbit, and land a fully-automated prototype -- and then just lose that entire program. The physical remnant is, as you say, just "stuff," and not really important in itself. What I (and, I believe, others) mourn is the loss of a manned space-launch program that came THAT close to being operational, regardless of just whose program it was. I, for one, still believe that the more different parties we have with active space programs, the better it is for humanity as a whole; there's a big solar system out there, with both resources and hazards aplenty, and the long-term benefit of the species definitely includes being active in space.
I've read a few postings elsewhere complaining of poor thermal design, iffy build quality, and not-so-great software support (something about having to JTAG the beast to get it to run a software load), so this seems quite plausible. If you do away with the wall-wart form factor by extracting the power supply, you're in the same functional class as lots of other single-board systems (such as my current favorite, the BeagleBoard), many of which have quite mature software support and very decent I/O and expansion capabilities, for comparable cost. While I admit that the wall-wart idea is very appealing, I don't think it's quite there yet (which is rather a pity).
Trying to imply that this is some nonsense that should be dismissed just because you like Linux is like playing down and ridiculing the evidence in the murder of Hans Reiser's wife because you like ReiserFS. It's even sillier in some ways, because Linux isn't at stake in this case the way ReiserFS was. (An extreme analogy, I know, but valid.)
That's the kind of analogy that Hitler would have made.