
Comment Anyone read any of Keith Laumer's Bolo books? (Score 1) 317

To give a quick rundown, Keith Laumer came up with the idea of "Bolos", which are essentially oversized, insanely-armed sentient tanks. There are a number of novels and short stories (I recommend Road to Damascus, which I think might even be available for free on the web), all of varying quality.

The running theme through the novels is the machines exemplifying military virtue, while the humans often screw things up and generally fear their own creations. I find them refreshing because it's nice to read a book where our creations aren't interested in stomping our faces with a metal-shod boot.

The series suggests some interesting ideas on the autonomous battle-robot theme.

  - A war-bot is not really capable of mercy, which I think is the main point a lot of people are making so far.

  - A war-bot is not really capable of war-CRIMES, either. It's not going to care that you blew up its buddy.

  - The lack of self-preservation instinct can prevent rash action. You can program a robot to TAKE a shot or two before reacting.

  - The other side of the coin is that once the robot decides to engage you, it's on. If you pull a gun in front of a squad of soldiers, accidentally or deliberately, they MIGHT give you a second or two to reconsider your actions and surrender. With a robot, you're paste.

  - You have to consider the difference between TRULY autonomous robots and robots working in conjunction with humans. I'm talking about the difference between a squad of bots operating alone and bots assisting/protecting human soldiers. I'd call the latter far more dangerous, and not because I'm worried about friendly fire. A squad of humans and their bots patrolling a nasty neighbourhood in Baghdad or Kandahar is going to WANT their metal buddies dialed up to "Paranoid" so that they can react to suicide bombers fast enough to avoid getting hurt themselves. That combination can do far more damage than a squad of humans or of robots deployed alone. You get all the bad decision-making of humans combined with the threat-reaction and firepower of machines.

It should probably be mentioned that we already have an example of a semi-autonomous battle robot: the Phalanx CIWS. Flaws with it aside, it's a machine that decides when and what to kill. I even got the impression from a Navy guy I knew that the people it protected were sometimes a bit scared of it... if you wandered into its engagement envelope you were dead, no ifs or maybes. A Slashdotter familiar with the thing might have some interesting perspective.

I apologize for the semi-rambling of my post. It's early.
