We need Ender to fly a group of the UAVs, without him it just won't be right. When will Locke and Demosthenes start blogging this?
Ender is old tech: human (even if child) commanders (remotely) controlling human armies. The new tech is semi-autonomous: humans remotely providing guidance to semi-autonomous swarms of unmanned systems. I think Ender's Game would have made a lot more sense if they had been remotely providing commands to teams of manned and unmanned systems. The lack of robotics in Orson Scott Card's Sci-Fi series is puzzling. BTW, Ender's Game was "required reading" for a DARPA program I was on at one time.
While useful, isn't this just a larger drone with its parts connected by signals rather than wires? Sure, it's got ablative resilience (one of three drones can go boom and you still have the rest of the formation) and more payload (more drones to carry stuff), but there doesn't seem to be any capacity for communication beyond holding formation and relaying orders from the human controller.
To answer this question adequately would take a much longer lesson in systems theory than I have time for. You touch on some of the aspects of why multiple systems are better than a single system. Swarms don't just "hold formation"; they self-organize in the most efficient manner to accomplish complex functions with whatever resources are available. But there is plenty of literature about why cooperating systems are always greater than the sum of their parts. May I suggest a good introductory video lecture by Scott Page on the Logic of Diversity as a taste of what is possible: http://fora.tv/2007/03/14/Power_of_Diversity
...
Remember the parable in Terminator:
The Terminator: The Skynet Funding Bill is passed. The system goes on-line August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.
Sarah Connor: Skynet fights back.
Step one, remove humans.
Step two, machine learns.
Step three, it does something we didn't expect it to do...
Obviously we don't go from one to three without some magic. That being said, why are we even considering approaching step one? And in this case, I think this is exactly what we're doing. We're inching slowly towards allowing the machine to pull the trigger.
I'm afraid the Terminator hype actually detracts from an important point that you are making. If you knew anything about the state of computer science today, you would feel quite secure in knowing that "step 2" ain't going to happen for a very long time (step 3 always happens, even today).
Still, there are a number of very important people who are concerned about how much autonomy we might eventually give to a robotic entity with lethal capabilities, and there are legitimate scientific discussions happening today. Asimov's Robotic Laws seemed to make a lot of sense before we had armed Reapers roaming the skies. Since I work in the defense industry I am less concerned about what our own government will do in this regard than I am with rogue nations and terrorist groups. People who don't work in the defense industry have no clue how conservative the military is when it comes to giving up human control! Although reading all the FUD posts on this article gives me a chance to chuckle, I recognize that this technology can be used for causing harm. The swarming technology described is not easy to replicate, but is ultimately based on simple, cheap platforms running what are basically simple programs. Thus while UBL is unlikely to ever replicate even a relatively simple Predator, he COULD develop unmanned swarming capability.
Personally, I feel that it is nearly inevitable that this technology will fall into the hands of the wrong people. When that day comes, I would much rather have a military that has experimented with, understands, and has prepared defenses against such a technology. That's why we need to approach step one and learn.
It takes three people to remotely pilot a Predator. There are never enough Predators or Global Hawks in the sky for all the intelligence we would like to gather. We don't have enough people, platforms, and dollars to buy, launch, pilot, and support all the reconnaissance we would like. And while the imaging capabilities on the big unmanned platforms are impressive, they still can't see through mountain ridges or down deep urban canyons. For that you need something that can fly right overhead and get close enough without being seen or heard, and that requires lots of small UASs. But the only way we can get enough of those into the air is to have some way for a single person to manage two or a hundred platforms just as easily as one.
Swarm may be an unfortunate term, since it can evoke the image of a killer swarm of bees - hence we naturally think of swarms as lethal attack technology. In fact, unmanned attack swarms are still science fiction. The swarming research that is going on (and demonstrated in the article) is all about surveillance and reconnaissance. Even if we get to the point of arming the individual swarming platforms, there will always be a human in the loop making the final decision to fire a weapon. Don't kid yourself: even with all the new technology, the decision to engage has only gotten more difficult over time, not easier. Ask those who do this for a living about the hoops they have to jump through before they can fire a weapon from a Reaper.
The reason they are calling these UAVs "swarms" is that they are using Particle Swarm Optimization to determine their flight paths and schedules.
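For context, Particle Swarm Optimization is a generic optimization technique where candidate solutions ("particles") move through a search space, each pulled toward its own best-seen point and the swarm's best-seen point. Here is a textbook-style toy sketch minimizing a simple test function; all parameter values are the usual defaults from the literature, not anything from actual UAV software:

```python
import random

# Toy Particle Swarm Optimization minimizing f(x, y) = x^2 + y^2.
# A generic textbook sketch, not code from any UAV program.

def pso(f, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]          # each particle's best-seen point
    gbest = min(pbest, key=f)[:]         # swarm-wide best-seen point
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # velocity = inertia + pull toward personal best + pull toward global best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

sphere = lambda p: sum(x * x for x in p)
best = pso(sphere)
print(best, sphere(best))
```

Note that this searches a fixed objective function; turning it into real-time path planning for physical vehicles is a much harder problem.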
No, it's not based on Particle Swarm Optimization (PSO). See the linked article in Defense Industry Daily. The swarm is controlled by digital pheromones, a very different technology from PSO. Nor is it centralized. In fact, the algorithm is completely decentralized: no central computation, no global knowledge. Of course, the operator provides the guidance, objectives, rules of engagement, and constraints, and the swarm figures out the rest.
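To give a flavor of how digital pheromones allow coordination with no central computation: values deposited on a shared map evaporate and diffuse over time, and each agent reacts only to the values in its immediate neighborhood. The sketch below is a minimal illustration of that idea under assumed parameters (the grid, the evaporation/diffusion rates, and the greedy movement rule are all my own simplifications, not the fielded algorithm); a single operator-marked deposit attracts an agent purely through local sensing:

```python
import random

# Minimal digital-pheromone sketch: an illustrative simplification,
# not the actual swarm-control algorithm.

EVAPORATION = 0.1   # fraction of pheromone lost each tick
DIFFUSION = 0.2     # fraction spread evenly toward the 4 neighbors

def step_field(field):
    """Evaporate and diffuse pheromone on a 2-D grid (one tick)."""
    rows, cols = len(field), len(field[0])
    new = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            amount = field[r][c] * (1.0 - EVAPORATION)
            share = amount * DIFFUSION / 4.0
            new[r][c] += amount * (1.0 - DIFFUSION)
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    new[nr][nc] += share
    return new

def move_agent(pos, field, rng):
    """Purely local rule: climb the pheromone gradient; no global state."""
    r, c = pos
    rows, cols = len(field), len(field[0])
    options = [(r, c)] + [(r + dr, c + dc)
                          for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                          if 0 <= r + dr < rows and 0 <= c + dc < cols]
    best = max(field[rr][cc] for rr, cc in options)
    return rng.choice([p for p in options if field[p[0]][p[1]] == best])

# One deposit (e.g. an operator-marked objective) draws the agent in,
# even though the agent never sees the whole map.
field = [[0.0] * 5 for _ in range(5)]
field[4][4] = 100.0
agent = (0, 0)
rng = random.Random(0)
for _ in range(20):
    field = step_field(field)
    agent = move_agent(agent, field, rng)
print(agent)
```

The point of the toy: nothing ever computes a global plan. The coordination emerges from the shared, decaying field, which is what makes the approach robust to losing individual platforms.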
"If people are good only because they fear punishment, and hope for reward, then we are a sorry lot indeed." -- Albert Einstein