Sci-Fi

William Gibson Gives Up on the Future

Tinkle writes "Sci-fi novelist William Gibson has given up trying to predict the future — because he says it's become far too difficult. In an interview with silicon.com, Gibson explains why his latest book is set in the recent past. 'We hit a point somewhere in the mid-18th century where we started doing what we think of as technology today and it started changing things for us, changing society. Since World War II it's going literally exponential and what we are experiencing now is the real vertigo of that — we have no idea at all now where we are going. Will global warming catch up with us? Is that irreparable? Will technological civilization collapse? There seems to be some possibility of that over the next 30 or 40 years, or will we do some Vernor Vinge singularity trick and suddenly become capable of everything, and everything will be cool and the geek rapture will arrive? That's a possibility too.'"
  • by eldavojohn ( 898314 ) * <eldavojohn@noSpAM.gmail.com> on Monday August 06, 2007 @04:50PM (#20134377) Journal
    There are two things I'd like to mention after reading this interview. First, let's give the original credit for the idea of a technology explosion or singularity to I. J. Good [wikipedia.org] and his quote:

    Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
    I think that predates Vernor Vinge, but Good certainly never built it into a story the way Vinge did.

    Second, I would like to point out that every work of fiction I have read and every movie I have watched requires some degree of suspension of disbelief. Whether I'm watching Remains of the Day or Demolition Man, I need to look past illogical or non-scientific aspects of the movies. Does this detract from the story? Some would say yes; I would say only a little bit. I am very forgiving in literature. I have read many old Stanislaw Lem novels, and the complex emotions the robots display are impossible--the physics of the robots is even more impossible. But Lem's stories are still great, given that I can get past a robot that survives millions of years in space with no energy input.

    So although I have not read William Gibson's works, I ask him not to give up on writing. You will have another good idea and you will write another book about it. Just wait for it to come.

    As for this idea of technology actually achieving this event horizon described by Good or Gibson or Vinge, I don't think it's achievable. I can't prove it won't happen, just as you can't prove it will. All I will say is that I don't even know where to begin. I would start by digesting the world wide web & developing a logic and reasoning engine to decide which statements are true, which are fact, and which are neither. Even when that was done, it might be 'more intelligent' than I am, but not 'more intelligent' than the sum of all human knowledge.

    I think there will always be a "???" in the game plan to make an artificially intelligent robot that functions intelligently on a human level or higher. I just don't see a way around it. That doesn't mean we should ever stop writing about it though.

    Sci-fi is fun, not something that has to be completely scientifically accurate--it's just a lot more fun when you explore the gray areas we don't understand or only theorize about. Enjoy it while you can!
  • by morgan_greywolf ( 835522 ) on Monday August 06, 2007 @04:57PM (#20134459) Homepage Journal
    Seriously. Were it not for willing suspension of disbelief, the entire genre of sci-fi would not even be viable. What's scientifically accurate about sci-fi universes like Star Trek, Star Wars, Stargate, B5, or even Eureka? Nothing. The point is, who cares? Sci-fi is about the story, not about the science.
  • Not so hard, really (Score:5, Interesting)

    by pieterh ( 196118 ) on Monday August 06, 2007 @04:59PM (#20134499) Homepage
    It's pretty easy to predict the future. The hard part is the timing.

    Anyhow, here goes:

    - most of the world gets online and fully integrated into the digital revolution
    - wireless networks everywhere
    - more and more services get online
    - large-screen video conferencing in every living room
    - digital glasses that overlay the real world with maps, wikipedia pages, everything
    - facial recognition for *everyone* you meet, pops up their wikipedia page
    - no more queues at the post office - every interaction with the state will go online
    - movies will eventually die and be replaced with something like scripted video games
    - virtual worlds will become a major front-end to the internet
    - rising energy costs will define how we use transport
    - poorer nations will be strongest adopters of ecological technologies
    - we'll see 'fabricators', able to make any product out of a digital design
    - the *AA will crack down on design sharers
    - cities will reject the automobile and become a lot nicer places to live in
    - pharmaceutics will go digital and we'll be exchanging digital drug designs
    - some bright kid will hack a drug fab to produce artificial life
    - the church and the *AA will crack down on DNA design sharers
    - the country as a notion will die and be replaced with the online community
    - big, big changes in political structures

    Etc.
  • by backslashdot ( 95548 ) on Monday August 06, 2007 @05:04PM (#20134539)
    You know a lot of people in the world live as though airplanes, cars, televisions, and the light bulb were not even invented yet. So even if someday someone invents cool stuff, there will always be a segment of the world to which those things may as well have never been invented. The computer I am typing this to you on is science fiction to them.

    So, can we use our existing technology to provide decent preventative health care, transportation, and clean water for everyone? It requires no inventing. No new technology. Their governments just need to allow entrepreneurs to build a bunch of solar or nuclear power plants to desalinate the water and power heavy construction equipment (currently most third-world governments don't allow entrepreneurs to compete against the corrupt state-owned utility companies).
  • always be a "???" (Score:5, Interesting)

    by wurp ( 51446 ) on Monday August 06, 2007 @05:07PM (#20134583) Homepage
    1. Use a combination of surgical examination, dissection of dead tissue, and MRI and other dynamic techniques to produce a model of the physics of a human brain
    2. Wait until Moore's law puts a computer within your price range that is capable of running that model at faster than 1 model second per real second
    3. Implement it

    You now have a machine that is slightly more intelligent than a human. Add in the fact that you can fully oxygenate all tissues, remove waste products, control neurochemicals, and dissipate (virtual) heat with no regard for physical laws, and I'd say it's quite a bit beyond human intelligence.
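
    For a rough sense of the scale step 2 is waiting on, here is a back-of-envelope sketch; every number in it is a ballpark guess for illustration, not a figure from any actual brain model:

        # Rough estimate of the compute needed to run a whole-brain model at
        # 1 model second per real second. All figures are illustrative guesses.
        NEURONS = 8.6e10               # ~86 billion neurons
        SYNAPSES_PER_NEURON = 1e4      # ~10,000 synapses per neuron
        AVG_FIRING_RATE_HZ = 10        # average spikes per neuron per second
        OPS_PER_SYNAPTIC_EVENT = 100   # assumed ops to update one synapse per spike

        ops_per_second = (NEURONS * SYNAPSES_PER_NEURON
                          * AVG_FIRING_RATE_HZ * OPS_PER_SYNAPTIC_EVENT)
        print(f"~{ops_per_second:.1e} ops/sec for real-time simulation")
        # ~8.6e17 ops/sec -- on the order of an exaflop, which is exactly why
        # step 2 amounts to waiting on Moore's law.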
  • by scribblej ( 195445 ) on Monday August 06, 2007 @05:12PM (#20134627)
    Your post was thoughtful and well-written, as well as insightful. I'm almost embarrassed to be replying with humor.

    So although I have not read William Gibson's works, I ask him not to give up on writing. You will have another good idea and you will write another book about it. Just wait for it to come.

    I'd like to suggest that if you HAD read his books, you'd ask him to please put down the pen and do something else.

    He had one great idea, and when he was younger, his writing style was beautiful and articulate, like some crazy poetry. But as time has worn on, he has moved further from brilliant concepts and fantastic conceptualizations, and closer to being "just another sci-fi author."

    Neuromancer was an excellent read. The stories in Burning Chrome, genius. I'd even give him points on Count Zero and Mona Lisa Overdrive.

    After that, he went to crap. I still give him credit for being a brilliant man, a good writer, whom a lot of people enjoy. But I don't think that anyone, even his current fans, would dispute that after his first set of books, "something changed."

  • by bobetov ( 448774 ) on Monday August 06, 2007 @05:31PM (#20134851) Homepage
    In regards to your skepticism regarding the singularity, I'd like to point out that it doesn't require super-smart machines to happen.

    The requirement for the singularity is simply that we reach a point where we can achieve, in some manner, an intelligence of 1.01 times the human norm, and that that intelligence can repeat the trick. Certainly, machine intelligences should allow this, but it is also possible we will devise ways to improve our own mental functioning, or a way to aggregate normal human intelligence such that the total is greater than any one mind could comprehend.
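
    To put a number on how quickly that compounds (the 1.01 factor is from the paragraph above; the cycle counts are just arithmetic):

        import math

        # If each generation of minds is 1% smarter than the one that designed it,
        # the improvement compounds like interest.
        factor = 1.01
        print(math.log(2) / math.log(factor))     # ~70 cycles to double
        print(math.log(1000) / math.log(factor))  # ~694 cycles to reach 1000x the norm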

    There are, in short, a number of paths to exponentiating intelligence. To argue that such is impossible is not supportable - we have only one example of a human-caliber mind, and all indications are that we are not in any way an end point of evolution. If mother nature can get to homo sapiens through genetic darts and dice, it seems decidedly improbable that we won't be able to do better with a guided approach, once we master the required genetics and so forth.

    Now, I have major doubts about the *pace* of this change, and of when it will kick in, but it seems unlikely that anything short of a planet-wide catastrophe could stop it from happening *eventually*.
  • by Lemmy Caution ( 8378 ) on Monday August 06, 2007 @05:44PM (#20134993) Homepage
    In that case, it could be said that hard science fiction has become almost impossible. Conjectures about future technologies are as hard as WG says, and any given writer is going to have to face the likelihood that their conjectures get shown as flawed very quickly. Scientific accuracy is hard enough for scientists now: a physicist will probably not have the ability to recognize biological impossibilities; a geneticist will botch sociology and economics. Yet a compelling story will have value even if the science is flawed.
  • by Dr. Spork ( 142693 ) on Monday August 06, 2007 @05:59PM (#20135143)
    It's silly of someone so smart to claim that technology has been driving social change since the mid-18th century, because it was less than a century later that Marx put forward the view that technology is the only driver of ideology and social change, ever. He didn't call it "technology," he called it "means of production," but we recognize what it is. Seemed pretty radical to some people then; funny he now seems so right.

    Of course, Marx was different in this way: he did make one prediction about a future whose means of production were unknown to him. He thought there would be a people's revolution in which people would take control of the technology developed in the capitalist era, because of the artificial scarcity that the capitalist system would increasingly have to resort to. Scarcity will need to be artificial because technology will be able to meet all the basic and many of the advanced needs of everyone in the world. Capitalism doesn't work in situations of plenitude, which is why there is no market for breathable air (yet). So the artificial scarcity that Capitalists will need to create will eventually get so ridiculous that people will just depose them. As far as futurism goes, I think this outline is aging rather well.

    And by the way, this is much closer to what Marx actually said than what most "Communists" claim he said. The Marx I read never advocated a revolution, resource distribution, or any of that other socialist stuff. He was a dialectician who thought that history has an inner logic and moves forward inevitably. Pleading with people doesn't move history; technology moves history. He argued pretty forcefully that Capitalism isn't the final system, but not because he was trying to stir up a revolution. It was just to convince people that it can't last, that, like every earlier technological/ideological era, it will be undone by the tools it eventually creates. So if Capitalism creates automatic strawberry harvesters because Mexicans get too expensive, and intelligent robots and fusion power plants and workerless factories, it will eventually make the gear of its own demise. Marx repeatedly extolled Capitalism for being so damn good at producing new technology in the most efficient way possible. It was Lenin, not Marx, who thought that a society could leap past all the stages of industrial and post-industrial capitalism and start a revolution with just an ideological vanguard. Obviously, that didn't work out. Marx was clear that technology drives ideology and not the other way around.

  • Re:always be a "???" (Score:5, Interesting)

    by Surt ( 22457 ) on Monday August 06, 2007 @06:08PM (#20135241) Homepage Journal
    My estimate is based on direct experience using Neuron:
    http://neuron.duke.edu/ [duke.edu]

    And attempting to model everything we know about the chemical processes. That said, there are 2 dimensions of performance issues:

    1) Neuron is not as fast as it could be, because a lot of the work being done is at an interpretive level.
    2) It's likely we don't know all we need to about the chemistry.

    I assume those two issues are roughly a draw: in order to eventually simulate a human brain, there will be improvements in the simulator software, but those will trade off against the need for more detailed simulations.

    In any case, 50 years for the computer power to simulate a human brain is a decent bet.
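
    For what it's worth, a straight Moore's-law extrapolation lands in the same ballpark; the required and available throughput figures below are illustrative assumptions on my part, not numbers from the simulations above:

        import math

        # How long until hardware can run a real-time brain model, assuming a
        # fixed doubling period? Both throughput figures are rough assumptions.
        required_ops_per_sec = 1e18   # assumed cost of a real-time brain model
        current_ops_per_sec = 1e11    # assumed 2007-era desktop throughput
        doubling_period_years = 1.5   # classic Moore's-law doubling period

        doublings = math.log2(required_ops_per_sec / current_ops_per_sec)
        print(doublings * doubling_period_years)  # ~35 years at this pace

    Slow the doubling period or raise the required fidelity and you land right around the 50-year mark.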
  • by Doc Ruby ( 173196 ) on Monday August 06, 2007 @07:08PM (#20135979) Homepage Journal
    Gibson rewrote SF future with his revolutionary _Neuromancer_. But each subsequent book shone a little less intensely, and all in the reflected brightness of Neuromancer. _Mona Lisa Overdrive_ is really recommendable only to fans of _Neuromancer_, and _Virtual Light_ is often best left unrecommended, so as not to spoil the "trilogy". Even _Idoru_, which was good, was just an overlong novella, like part of a "Director's Cut" of _Neuromancer_.

    I've enjoyed Gibson's books since they were first published. And I've enjoyed asking him questions when he's given readings. But I haven't considered Gibson an expert on "the future", even his own that he writes about, in almost 20 years. That's a lot of past to make up for a futurist.

    Now Neal Stephenson, Gibson's literary heir: he's still got a plausible future machine running upstairs.
  • by reverseengineer ( 580922 ) on Monday August 06, 2007 @07:35PM (#20136273)
    Gibson and his predictions fare a lot better in the more recent Pattern Recognition. (I personally think his writing style has actually improved over time as well.) There's a lot he gets right about marketing and media in the near future (which would be around now, I guess), and for a book where the September 11th attacks are critical to the plot, the narrative has held up pretty well, particularly in comparison to certain Big Important Novels which tried to make them the framing device for this generation's White Noise or The Tin Drum.

    Of course, comparing Pattern Recognition to something like Neuromancer is really the key to what Gibson is arguing about science fiction. Being speculative about technology far ahead of the present is naturally a recipe for failure. I didn't start reading books like Neuromancer and Snow Crash until about 2000 or so, and while I enjoyed them immensely, most of their predictions had long since become laughable. The authors of cyberpunk novels in the 1980s and early 1990s correctly sensed that the relationship between humans and computers was on the cusp of major change, but virtually all of them put their money down on sophisticated AIs and immersive virtual realities which haven't come to pass. As Gibson notes in his interview, "If I were a smart 12-year-old picking up Neuromancer for the first time today I'd get about 20 pages in and I'd think 'Ahhaa I've got it - what happened to all the cell phones? This is a high-tech future in which cellular telephony has been banned'."

    Now, some of this, I think, just happened to be bad timing -- no one writing in 1987 could be expected to accurately forecast 2007. However, rather than being outstripped by a vertical asymptote of progress as the technological singularity idea suggests, the collapse of the Soviet bloc and the creation of the Web in particular represent "jump discontinuities" in the timeline. Earlier today, I was reading about Arthur C. Clarke's Space Odyssey series on Wikipedia. The political and technological changes which occurred between the releases of the novels in 1968, 1982, 1987, and 1997 were so great as to cause Clarke to state that each work in the series is on a separate timeline (2061 still has the USSR around in its title year, while in 3001 it fell back in 1991).

    I think that even if we don't have a Singularity, we will still have events of such significance every few years which alter the course of history in ways that will only be obvious in hindsight and which will make speculation further than a couple of years ahead very difficult indeed. And I suppose if we truly are on the run up to a Singularity, it won't be too long before predicting further than a couple of days into the future becomes a fool's errand. So, Mr. Gibson has a point. However, I'd suggest that's just part of the fun of science fiction -- books from the '60s suggesting we'd be living in space in the year 2000 but using computers the size of houses, books from the early 1990s about computer hackers of the early 21st century as virtual reality ninjas. In the best examples of these, the story is entertaining enough that it doesn't matter that the visions of the future (now the present) didn't pan out.

  • Drugs? (Score:3, Interesting)

    by PhoenixOne ( 674466 ) on Monday August 06, 2007 @07:36PM (#20136285)

    Having heard Gibson talk about his past, I get the feeling that the reason his writing style has changed so much since Neuromancer is that his life got better. It's harder to write about how completely shitty the world is when you can't truly believe it.

    While I miss reading the old Gibson, I wouldn't want him to go back to that place.

  • by tmortn ( 630092 ) on Monday August 06, 2007 @08:08PM (#20136553) Homepage
    I have to disagree about cities and cars. For the most part you would not have to raze them. Simply getting cars off the streets leaves you a very nice, seriously over-engineered infrastructure of rights-of-way (over-engineered when used for pedestrian traffic) to re-purpose. You would still need some sort of delivery system, or perhaps shunt truck traffic into the wee hours via a core set of routes (a lot of it already is anyway) and develop some kind of pedestrian-friendly mass transit solution like a hop-on/hop-off light rail/streetcar concept... perhaps even a Heinleinish moving-sidewalk kind of system.

    In such a system, with roads available in large part for pedestrian traffic, a Segway-style device might actually have some of the impact it was hyped to be capable of providing. A 20-mile-range Segway and weather-shielded roadways not crowded with cell-phone-chatting soccer moms in SUVs could be pretty slick for an alternative city transit system. Hell, just ditching full-sized cars for golf carts (max) would do a lot.

    The hard part about replacing cars isn't current infrastructure, if you ask me. It is convincing people to give up a well-sheltered, door-to-door, load-carrying conveyance that works on their schedule. You have to maintain the same freedom of travel for a similar cost... be it through rentals for distance driving or better long-distance travel options that are not insanely expensive when compared to a car. The more expensive owning and operating a car is, the more likely this is to occur. Look at cities like New York and London. They have high-use mass transit systems because it is insanely expensive to operate a car there for very little gain over using the mass transit options. Parking alone can cost more than car ownership in many other locales.
     
  • Re:Well, crap! (Score:3, Interesting)

    by Surt ( 22457 ) on Monday August 06, 2007 @08:17PM (#20136639) Homepage Journal
    Assuming we cannot exit the universe, or alter its physical laws.
  • by Jeremy_Bee ( 1064620 ) on Monday August 06, 2007 @08:53PM (#20136889)

    In that case, it could be said that hard science fiction has become almost impossible. Conjectures about future technologies are as hard as (William Gibson) says, and any given writer is going to have to face the likelihood that their conjectures get shown as flawed very quickly.
    No offense but this sounds like nonsense to me.

    Science fiction is no more impossible by these standards than it ever was. If you read sci-fi from the 50's and 60's they got some of it right and huge amounts of it completely wrong. I would venture to guess that science fiction today will have about the same ratio of accuracy some 50 or 60 years hence.

    Also, despite his fame and fortune, William Gibson is one of the last people who should be talking about predicting the future. Anyone really familiar with science fiction and Gibson's novels can tell you that other than a few buzzwords and the general tone of his one and only original novel, nothing Gibson has written about has actually come true. The metaphorical "cyberspace" (there's the buzzword [smirk]) in his first novel is not really anything like what actually became cyberspace, except in very general, symbolic outlines. And all of his further novels are just regurgitations of the same stuff.

    "Real" science fiction, (the original science fiction), is about science and the future in a concrete sense and it's based in social and historical themes. The idea is to base a story in a "real" or possible future society. The "other" kind of sci-fi, the stuff that has been popular since about 1980 or so and has become mainstream in our culture, has nothing to do with the future or with science. Despite the trappings of ray-guns and spaceships for instance, Star Wars is essentially a medieval drama about empire and heroic rebellion. Same goes for the vast majority of TV sci-fi.

    These are not science fiction stories, they are War stories (now called "action" movies), romantic dramas, and sitcoms that just happen to take place in some cheesy spaceship. Gibson actually wrote some real science fiction with that first book, but it's been severely overplayed and overexposed.

    He has been trading on its success ever since, IMO.
  • by fyngyrz ( 762201 ) * on Monday August 06, 2007 @09:21PM (#20137091) Homepage Journal

    The premise here is wrong. Hard SF is not limited to technology that *will* come; it is about technology that *could* come because the science, at the time it is written (and that is a very important issue), is plausible as far as is known. It has nothing to do with the ideas "coming true," though that's not to say they could not.

    Suspension of disbelief is easier in stories written this way; and contrary to the above assertion, in good hard SF, the technology doesn't serve the role of the main story, carrying the characters as an incidental; the technology can almost fade away, leaving the story to be the main theme because the technology isn't so crazy.

    Can there be good, accurate ideas in hard SF? Sure. We have seen them over and over. Frederik Pohl predicted today's convergence of cell phone, PDA, browser and so on with a great deal of accuracy in "The Age of the Pussyfoot." Niven and Pournelle did a great "asteroid hits earth" novel; Gibson himself did some very intriguing speculation along the lines of interfaces, scientifically plausible but requiring considerably more horsepower than was available at the time of his writing (but not now). Gregory Benford, James P. Hogan, Asimov, Blish, Clarke, and a host of others have all dipped their hands into the "hard" SF bowl and pulled out shining fruits no one had ever thought of before, all while writing great, engaging stories about a huge variety of things.

    I read both types with equal, but different, pleasure. I enjoy the flight of fancy that comes with the idea of FTL drive; I also enjoy the tweak I get from a lesser technology that I actually might live to see if things go that way. But if the story doesn't bring interesting plot lines, significant character development, thought-provoking social comment, reasons for the major technological developments being posited... odds are I'll put it down and never pick it up again.

    The idea that an SF story would be devalued if the predicted technology didn't materialize or if later science narrows the hard SF window such that it could not materialize is ludicrous; on the contrary, an honest window into what people really thought was possible at any point in time has its own magnificent charm.

  • by Brickwall ( 985910 ) on Monday August 06, 2007 @10:57PM (#20137899)
    If you want to see Gibson's roots, read Dashiell Hammett. Gibson is like an eerie echo of him, and I say that as a Gibson fan.

    I've read pretty much everything written by both authors, and love them both, but this is not a comparison I would have made. I would be sincerely interested if you would elaborate.

    My dad was a big sci-fi fan, and I read his back copies of "Analog" and "Astounding" pulps in the early '60s. My mom worked as a librarian, and so we got advance access to all the new, good SF as it came out. I especially enjoyed Judith Merril's "The Year's Best SF" anthologies, which introduced me to authors such as Fred Pohl, Philip K. Dick, and Fritz Leiber. (I was also lucky to attend the engineering school at the University of Toronto; directly across from the engineering building was Merril's "Spaced Out Library", which was the most complete selection of SF works I had ever seen. Many a happy lunch hour was spent there!)

    I like Gibson, not because he's some techno-visionary, but because he's an exquisite writer. Fritz Leiber's "Gonna Roll Some Bones" is about a boy whose co-ordination is so good, he can throw rock chips back into place in the rock - there's some serious suspension of disbelief required here! - but the beauty of the story is in Leiber's prose, not the premise. Virtually everything Philip K. Dick wrote seemed completely implausible 40 years ago, but the stories were still fascinating reads. (When you consider that "Blade Runner", "Total Recall" and "Minority Report" were all based on Dick's works, it appears that Hollywood can better transform his stories to the screen than those of other SF writers. I offer the movie versions of "Neuromancer" and "Starship Troopers" as evidence.)

    I also find it interesting that Neal Stephenson has also gone back in time, with the "Diamond Age", and his "Baroque Cycle" (which I'm plowing through at the moment; Mom's passed away, and I'm too cheap to buy the hardcovers). I'm half expecting him to do some novels based on the Renaissance next.

  • by Jonathan ( 5011 ) on Monday August 06, 2007 @11:08PM (#20137989) Homepage
    If you have magical hacking tools that let you visualize hacking as manipulating a physical object, then you're wasting time with an interface that spends time interpreting data in a human-recognizable way that could've been spent just handling the intrusion. It's a waste of cycles that could be used to do something useful.

    In the early 1990s, that's what they said about object-oriented programming -- that it was a cute idea, but any real world problem would be better solved using efficient C (not C++) programming. And even that was an advance from the 1980s, when even C was seen as a waste and programs were often written in assembly language. The point is, as computers get more powerful, it's okay to waste some cycles on the human.
  • by fyngyrz ( 762201 ) * on Monday August 06, 2007 @11:56PM (#20138261) Homepage Journal
    I say that it does. The social sciences are important too.

    Not to hackers; not to technologists; not to users. It's an abstraction, and an expensive one (look how crappy the Windows UI is from trying to be everything to everyone; look how crippled Linux is by being unwilling to create a standard GUI; look how crippled OS X was by pretending mice only needed one button). Complexity and abstraction aren't bad things and can be done very well.

    If you have magical hacking tools that let you visualize hacking as manipulating a physical object, then you're wasting time with an interface that spends time interpreting data in a human-recognizable way that could've been spent just handling the intrusion. It's a waste of cycles that could be used to do something useful.

    Nonsense. The more dimensions you can manipulate at once, the more complex a user input you can provide. Up to the limits of your ability to handle complex motions. As a musician and a programmer for over four decades, I didn't perceive Gibson's ideas as unlikely or overwhelming or impossible at all. Raising the level of art required? Plausible. The next generation would simply rise to meet the challenge. Watch them learn video games if you don't know what I mean.

    For instance, the Mac gives you one mouse button. You can, while doing graphics, move the mouse XY and press the button: -a, +a. A better mouse gives you two buttons. Now you can move the mouse and provide four different modifiers: -a-b, -a+b, +a-b, +a+b. Take a tablet with a couple of buttons: now we have motion, -a-b, -a+b, +a-b, +a+b, and pressure. Now take an interface that gives you visual objects to manipulate in the air, à la Gibson's speculation: you can move your left hand XYZ, going from a square space to a cubic one, you can move your right hand XYZ, doubling your cubed space, and because you now have Z, the number of "buttons" you can create with stabbing motions, not to mention the sweeps and other motions you can make, has multiplied hugely. Create graphics metaphors for things to manipulate that use models of geometrics or anything else you like, and you are way into interface excellence. You can't seem to see this, but that doesn't degrade the idea at all.
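
    A toy way to make that counting concrete (the device descriptions are illustrative guesses, not anything Gibson specified):

        # Compare input 'bandwidth': continuous axes plus the number of distinct
        # chord states you get from n binary buttons (2**n).
        devices = {
            "one-button mouse":   {"axes": 2, "buttons": 1},
            "two-button mouse":   {"axes": 2, "buttons": 2},
            "tablet + 2 buttons": {"axes": 3, "buttons": 2},   # x, y, pressure
            "two hands in 3D":    {"axes": 6, "buttons": 10},  # x, y, z per hand plus stabs
        }
        for name, d in devices.items():
            print(f"{name}: {d['axes']} continuous axes, {2 ** d['buttons']} chord states")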

    And who in their right minds wouldn't put safety locks on mind-machine interfaces to prevent any sort of direct damage? Doesn't the military specialize in built-in deadly force from claymores to infrared sighting technologies and stand off weapons? Aren't they using radar to backtrack incoming mortar rounds? Why would you NOT want these things if you have something to protect? And if the world is on the net, from the military to the governments to the corporations, then you DO have something to protect. Sure, there will be the same mommy-madness to protect you from yourself, force you to wear seat belts, take away your right to use a full power deck, but that doesn't mean there wouldn't or couldn't be such things. It is science fiction, not social fiction.

    And what military or government or corporation would not want serious deterrents to entry when the world is virtual? The only reason my own home's entries are not actual man-traps is the law that says I can't protect my own property with deadly force. Otherwise, as a programmer and an engineer, I'd have something quite clever — and quite deadly. After having had a couple of vehicles stolen, I'm all for deadly force there, too. Scientifically, it's all good. Socially - yes, mommies rule. For now.

    If an invention requires a complete suspension of disbelief about human nature to be plausible, then it's fundamentally illogical and thus bad science.

    Yeah, but if something requires YOU to suspend, but not ME to suspend, then it's just you with the problem. :-)

    Methinks you would read better literature if you didn't discount the human element entirely in your favored stories.

    Right, right. :-)

  • by Elemenope ( 905108 ) on Tuesday August 07, 2007 @08:42AM (#20140583)

    Well, you make an interesting point. I think what has changed most is the rate of social acceptance and civilization-wide implementation of new technologies, and the attendant acceleration of social, legal, political, and psychological changes. From the Watt steam engine (the first engine concept robust enough to pull significant loads) to Blenkinsop's steam rail car was forty years, and significant commercial implementation was another twenty-five. Even with that amount of lead time, the impacts on society and government were immense. Now compare the time lapse from the first practical personal computer to commercial implementation of the Internet, and the gap is one-quarter the size. And the Internet is shaping up to be as transformative, if not more so.

  • by EgoWumpus ( 638704 ) on Tuesday August 07, 2007 @09:28AM (#20140989)

    There are a number of true future predictions you can make. For instance "The future will be dissimilar to some significant number of predictions we make." It's simply a matter of having a prediction whose verbiage is inclusive enough.

    But that aside, they are doing amazing things with longevity these days; I think that betting your money on everyone dying is about as wise as deciding, in the days of Columbus, that the Atlantic Ocean would never be crossed. Physically speaking, there is little known reason for people to die. Why can't they replace their bodies forever? It looks more and more like we are biologically built to die - because evolution 'designed' us, and evolution is notoriously defective. Until we can scientifically show there is good cause to believe people have to eventually die, from a biophysical standpoint, I think that the prediction "we'll all die" holds as much water as "we'll never fly."

  • by fyngyrz ( 762201 ) * on Tuesday August 07, 2007 @04:34PM (#20146939) Homepage Journal

    Except that it's utterly bat**** insane to provide a handy GUI for disabling your own security device to outside users -- especially a handy GUI for authentication or debugging.

    What? You think targets provide the interface to hack them? That's not how it works, not even today. Programs are compact bundles of executable code and data. Sometimes encrypted, usually not. Programs are the ultimate models of terseness, because each machine instruction represents an action by the processor. There is no "interface" to the code provided in the program or data itself. Interfaces for hacking -- for instance a debugger/disassembler -- are separate things, created by people who understand completely that the goal is to get into the code, and therefore they provide the graphic and other UI elements you need to do that in the most efficacious manner the authors of the debugger/disassembler can come up with. It has nothing to do with what the authors of the program being attacked had in mind, planned for, or provided, except that whatever anti-hacking measures they might have put in, the hacking software needs a counter for. If that interface took on a 3D metaphor, that's just a detail, though an interesting one and an efficiency issue for the hacker. You're completely confused about the demarcations between the roles of who is providing what interface, what code, what data, what functionality - that's why you can't understand what is being described. If the target was a corporation's site, the hacking interface wouldn't be provided by them; it'd be provided by your deck, even if the corporation defined the "normal" interface for end users. So a hacking deck, or a deck running hacking software, could easily have any interface imaginable, whatever seemed to work. This is why your objections are pointless.

    Hacking doesn't work that way.

    Wrong. Hacking works any way that it works, from the most simplistic approach (futzing with a URL or entering data and/or command strings not specified as valid) to actually hacking the binary of the software with complete control over what machine instructions are changing, and how, and taking into account any self-validation / checksum type protection as you work. UI, again, is a matter of approach, not a matter of results. Any tool that increases the speed of visualization of the task at hand and your ability to get in there and make changes is feasible, presuming you have the computer power to pull it off. What do you think a progress bar is? It's an abstraction of a lot of things going on, letting you know things are running, how much has been done, and giving you a quick visual estimate of how much there is yet to go. This is an extreme abstraction of, for instance, how far through a dictionary attack one may have progressed. Other abstractions that could work rather than a bar might be size, shape, color, words, animations of other processes that go from start to finish (eating a sandwich, filling a bucket, hammering a nail) and so on. A 16-sided ball could be a tool for hex digit input. A 20-sided ball might be useful in duodecimal work. Etc.
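
    As a toy illustration of that separation between the work and its visual abstraction (the wordlist and target here are made up for the example; the rendering line is the only part a bar, a bucket, or a 3D metaphor would replace):

        import hashlib

        # A toy dictionary attack that only reports a progress fraction; how that
        # fraction gets drawn is a separate, purely cosmetic decision.
        def dictionary_attack(target_hash, wordlist):
            for i, word in enumerate(wordlist, 1):
                hit = hashlib.sha256(word.encode()).hexdigest() == target_hash
                yield i / len(wordlist), word if hit else None
                if hit:
                    return

        words = ["password", "letmein", "hunter2", "trustno1"]
        target = hashlib.sha256(b"hunter2").hexdigest()
        for progress, found in dictionary_attack(target, words):
            print(f"[{'#' * int(progress * 10):<10}] {progress:4.0%}", found or "")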

    Well, yes, but that's not how hacking in cyberspace works in the cyberpunk genre. It's always presented as being more like lock-picking than being a script kiddie.

    If the full solution to a problem is known to be available in canned form, the smart thing is to use it. You may have been the "canner", or you may not. That doesn't make you a script kiddie; that makes you competent. If the lock needs picking, then you pick. If picking doesn't work, you may want to get out the C4 or simply abscond with the entire dataset in unbroken form so as to approach it at your leisure. Every time you presume that things work "just this way" you miss the entire point of hacking. I write a program, I create X to attempt to make it secure; the hacker approaches, and comes up with Y to defeat my X.
