Comment Re:Prime Directive (Score 1) 686
You're thinking colonization. I'm thinking studying - knowledge. I believe that any starfaring species will also be infovores.
I don't know that we are in the sticks - we might be in an ideal spot, or at least at the ideal distance from the galactic core. There might be a "galactic Goldilocks zone." Too far in and stellar life is too interesting, in terms of supernovas, gamma-ray bursts, etc. Too far out and stellar life may be too boring, as in not enough generations to create enough of the heavier elements.
As for FTL, I seriously don't expect Star Trek or Star Wars. I expect robot probes, and the question becomes whether they're AIs, uploads, mixes, hybrids, or whatever. Once you're talking robot probes, time, as we see it, drops out of the equation.
And you're right, in that there is no need for intelligent life to exist. It's just that the galaxy is a more interesting place if it does. As I said, if it doesn't, then maybe the Earth really IS the center of the universe, at least in the philosophical sense. Once you've accepted that, you can easily plummet down your navel into the Dark Ages again. That is, from a species perspective - or you could embrace your status as Progenitors and grow into the role.
First off, forget Hitler's Munich Olympics broadcast, that's way too new. The most interesting thing about Earth is roughly half a billion years old, and that's its "unnatural" atmosphere. Our atmosphere shouts, "Life!" like nothing else. The stuff in our air just doesn't coexist through ordinary chemical processes - it has to be maintained. Not as old, but still older than Hitler's broadcast, is the sustained presence of pollutants in the atmosphere. This might suggest "intelligent, if immature/foolhardy, life."
We can almost see this kind of stuff with Kepler, though to get to this level of detail we use several instruments in parallel - Kepler is the first-stage weeder. We're nowhere near having interstellar technology, so any race that does will likely have commensurate technologies in other areas as well. Most notably, if you're going to travel far, you want to know which direction to go, and as much about your destination as you can. They would have tools that make Kepler look like a child's toy. They would know how interesting Earth is. Where that ranks us with respect to other planets is another question, but I'll bet it's not as bleak a prospect as some say.
Personally I think our presence on Earth has to do with its "sufficiently interesting history," including the collision that formed the moon, several asteroid/comet strikes like the dinosaur killer, etc. Not to mention plate tectonics, the magnetic field that keeps the solar wind from blowing our atmosphere away, etc. Like I said, I think Earth would be on the short list.
By the same token, I also think they would observe. Our society and existence are fragile enough, one big kick could easily topple the whole mess. Imagine a preemptive strike by one power to prevent another power from getting "the advantages of alien technology," etc. We're also pretty darned "memetically susceptible," and even allowing an alien idea to reach us might upset the apple cart.
Or as an alternative, perhaps the Catholic Church was right, and Galileo (and Copernicus) were wrong. If not the physical center of the universe, if we're all there is, perhaps the Earth is the philosophical center of the universe.
So:
1 - We're all there is, perhaps to become the Progenitors, perhaps not.
2 - There is other life, hasn't gotten here yet, may not bother, may not be able.
3 - There is other life, observing us, careful to remain unknown - the Prime Directive.
4 - There is other life, getting ready to invade/destroy us.
5 - There is other life, in contact only with the Illuminati and Club of Rome.
Personally I'd prefer option 3. Option 2 is equally likely. Option 1 is rather sad. Options 4 and 5 are IMHO silly.
In Vermont we have a variation. You pick up a Republican ballot or a Democratic ballot. It was upheld in court, that you were a Republican or Democrat - even for that one day. This is of course for the purposes of primaries.
The limitation is that once you've picked up the Republican ballot, none of the Democratic choices are available for your consideration, and vice-versa. In other words, you can't cast your primary vote to choose a particular Republican House candidate and a particular Democratic Senate candidate. Once you've chosen Democrat or Republican, that's it - for that one day.
Doesn't stop voting-to-spoil, but it makes you throw away all of your own party choices when you do so. Is it any more broken than the rest of our balloting scheme? Trigger the IRV vs Condorcet vs whatever voting scheme debates...
Well of course it can be done.
You just might not like the price tag.
Good point on latency, I forgot about that. The thing is, streaming media can readily compensate for latency, as long as it's reasonably consistent. On the other hand, I work from home a fair amount, sometimes with vnc, sometimes with remote X, and there I'm a heck of a lot more sensitive to latency.
But even if you regulate Netflix like a content provider, it still leaves Comcast jealous, because none of the effects of that regulation wind up in Comcast's pockets. The reality is that Comcast doesn't want to be an ISP, they want to be a content provider, because that's where they're from. Being an ISP was just an "opportunity" they were well poised to take advantage of, with their infrastructure. Only problem is that in the real world, unlike in Comcast's dream world, the "opportunity adder" is bigger than their core business. They're working really hard to impose their dream on reality, and since they own the pipes, they're getting away with it.
Once upon a time there was talk of Internet2 - I believe it hooks some universities, national labs, and businesses together. Part of me wonders if at some point corporate US really will manage to turn the internet into a series of walled gardens, and we'll be back to the days of modems, bang-paths, and line-of-sight hacks.
Go ahead and choose your walled garden, I won't stop you.
But from where I sit, it looks like everything that connects to the home is going to walled gardens, and open as an option is fading away.
Serious proposal: Allow a "fast lane" by any/all ISPs. They've got such a hard-on for a fast lane that they're going to keep buying legislators until they get one. Then place a limit on it. The fast lane can only be X times faster than the "neutral net lane", and NO traffic shaping or limits are allowed on that lane, other than being 1/X the speed of the fast lane. Plus X needs to be a legally asserted and testable value.
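The "legally asserted and testable value" part of the proposal above could be checked mechanically. Here's a minimal sketch, assuming nothing beyond the proposal itself: the function name, units, and example numbers are all hypothetical, chosen just to illustrate the 1/X rule.

```python
# Sketch of the proposed fast-lane rule: the fast lane may be at most
# X times the speed of the neutral lane, i.e. the neutral lane must be
# at least 1/X of the fast lane. All names and figures are illustrative.

def lanes_compliant(fast_mbps: float, neutral_mbps: float, x: float) -> bool:
    """Return True if measured speeds satisfy the proposed 1/X rule."""
    if neutral_mbps <= 0 or x <= 0:
        return False
    return fast_mbps <= x * neutral_mbps

# Example: with X = 4, a 100 Mbps fast lane requires >= 25 Mbps neutral.
print(lanes_compliant(100.0, 25.0, 4.0))   # True
print(lanes_compliant(100.0, 10.0, 4.0))   # False
```

Because X would be a published legal value, anyone with two speed tests could run this check, which is the "testable" part of the proposal.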
Balderdash, Apples to Apples, Pit all work for larger groups.
For slightly smaller groups, Dominion
For 3 or 4, Settlers or Dominoes
And of course there is a whole range of games for all group sizes using regular playing cards.
Issues like this are why Asimov sold a lot of books, and why the Three Laws come up whenever robots are discussed. He came up with a reasonable, minimal code of conduct, and then explored what could possibly go wrong.
I don't remember him writing about your type of situation, which is rather odd when you think about it, because that scenario is rather obvious. But his stories often lived in the cracks where it was really hard to apply the Three Laws. Two examples off the top of my head are:
1 - The Powell and Donovan story at hyper-base, where the act of going through hyperspace "temporarily" sort-of killed the passengers, causing the robot directing the ship all sorts of distress and neuroses.
2 - The robots who were taught the idea of preferentially applying the First Law in favor of the "best humans", and going on to logically decide that they were indeed the "best humans", and therefore to be favored above those organic beings that created them.
Since it's all conjecture, really fiction, let's drop back to Asimov for a moment.
1 - A robot may not harm a human being, or through inaction allow a human being to come to harm.
What is a "human being"? Is it a torso with 2 arms, 2 legs, and a head? How do you differentiate that from a manniquin, a crash-test dummy, or a "terrorist decoy"? What about an amputee missing one or more of those limbs? So maybe we're down to the torso and head?? What about one of those neck-injury patients with a halo supporting their skull? Does that still pass visual muster as a "head"? What about a dead body then, that has a head, 2 arms, and 2 legs? Or if you've included temperature sensing, the dead body of a sick person who had a fever and is, some time later, still passing through the normal human temperature range.
Silly, yes. Absurd, yes. But before you can consider any code of conduct with respect to a human being, you have to first identify that human being AS a human being.
Pretend we get past that, then we can start talking about "harm", and trying to algorithmically define that.
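To make the identification problem concrete, here's a deliberately naive sketch of the kind of rule-based check the paragraphs above are poking at. Every field, threshold, and name here is an assumption invented for illustration; the point is how each edge case from the text defeats it.

```python
# A naive "is this a human?" heuristic built from visible limb counts
# and body temperature - exactly the kind of check the text argues is
# doomed. All fields and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ObservedFigure:
    arms: int
    legs: int
    has_head: bool
    temp_celsius: float

def naive_is_human(fig: ObservedFigure) -> bool:
    """Shape-and-temperature test; fails on amputees, halo braces,
    warm mannequins, and recently deceased fever patients."""
    right_shape = fig.arms == 2 and fig.legs == 2 and fig.has_head
    warm = 35.0 <= fig.temp_celsius <= 40.0
    return right_shape and warm

# An amputee is human but fails the shape test; a heated mannequin
# (or the cooling body of a fever patient) can pass it.
amputee = ObservedFigure(arms=1, legs=2, has_head=True, temp_celsius=37.0)
heated_mannequin = ObservedFigure(arms=2, legs=2, has_head=True, temp_celsius=37.0)
print(naive_is_human(amputee))           # False - wrongly rejected
print(naive_is_human(heated_mannequin))  # True - wrongly accepted
```

And that's before you even start on "harm," which is harder still to pin down in rules like these.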
These are all things we take for granted, having been born as human beings, raised by human beings, and having spent years at it. In most parts of the world it takes something like 18 years of experience to quit being a "child," an apprentice human being, and be considered autonomous in your own right. In that time, we have all both harmed and been harmed by other human beings, though thankfully generally on a lesser scale.
Each of us represents a lot of training and experience, which we frequently neglect, often calling it "common sense", sometimes making the observation that common sense is in fact uncommon. At some point we set about contemplating matters of (at some level) philosophy, such as this one.
But it takes us something approaching 18 years to learn the technical aspects. I know we can program machines and give them some amount of information "at birth", but I think we are underestimating the difficulty and value of those 18 years and overestimating our technical prowess. We're a long way from teaching machines philosophy.
Perhaps the best thing about arming drones now is that in a way it's like arming young children, and they generally try to do what their parents tell them to do. If machines became moral, and could decide what to do for themselves, we might not like those decisions. Forget the nightmare scenarios, think of the benign scenario taken to the nightmare, like "With Folded Hands."
Final thought... At one point, Asimov suggested that the 3 Laws were actually pretty decent conduct suggestions, even for people. (I would certainly question the relative priority of #2 and #3 in general life for real people, of course.)
An Ada exception is when a routine gets in trouble and says 'Beam me up, Scotty'.