Comment Re:WIMPs (Score 4, Interesting) 236
That's the thing about dark matter... it has a perfectly reasonable explanation (WIMPs). It's not that weird of a "thing".
Dark energy on the other hand, that's just WEIRD
Isn't this what one would expect if dark matter is WIMPs?
Sure there is: add this to the CPDLC standard and make all of the hardware modifications needed to support it:
----
Message type: Revert flight plan and lock
Message arguments: TIME: the time of the flight plan to use
Message description: Revert to the flight plan that was active at TIME that had been approved by both ground control and the pilot; engage autopilot; and disable all pilot / copilot access to all systems. If there is no approved flight plan then the flight plan is to return to the nearest suitable airport in the most direct route possible.
----
Additional modifications: Make sure that the pilot can never disable datalink communications with ground by any means that ground wouldn't have time to respond to.
Result: Nobody is ever "remote controlling" the plane from the ground. A murderous / terrorist ground controller can't crash the plane, only make it autopilot itself on a previously approved or otherwise reasonable flight plan. A pilot behaving suspiciously can't crash the plane, as ground control will just engage the autopilot and lock them out. To abuse the system both ground and the pilot would have to agree on a suicidal flight plan.
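The proposed message could be sketched roughly as follows. This is a hypothetical model for illustration only: the class names, fields, and handler logic are invented here and are not part of the actual CPDLC standard.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class FlightPlan:
    waypoints: list            # simplified: just a list of waypoint names
    approved_by_ground: bool
    approved_by_pilot: bool
    active_at: datetime        # when this plan became the active plan

class Aircraft:
    def __init__(self, plan_history, nearest_airport_plan):
        self.plan_history = plan_history              # past flight plans
        self.nearest_airport_plan = nearest_airport_plan
        self.autopilot_engaged = False
        self.cockpit_locked_out = False
        self.active_plan = None

    def revert_and_lock(self, time: datetime):
        """Hypothetical handler for the 'Revert flight plan and lock' message."""
        # Find plans active at or before TIME that both parties approved
        candidates = [p for p in self.plan_history
                      if p.active_at <= time
                      and p.approved_by_ground and p.approved_by_pilot]
        if candidates:
            # Most recent mutually-approved plan wins
            plan = max(candidates, key=lambda p: p.active_at)
        else:
            # No approved plan: divert to nearest suitable airport
            plan = self.nearest_airport_plan
        self.active_plan = plan
        self.autopilot_engaged = True
        self.cockpit_locked_out = True   # disable all pilot/copilot access
        return plan
```

The key property of the design is visible in the handler: the ground side can only ever select among plans the pilot already agreed to (or the diversion fallback), so neither party alone can steer the aircraft somewhere novel.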
Yeah, the suggested method for generating passwords generates needlessly long passwords. The total entropy is good, but the entropy per character is pretty poor. You get much better entropy per character with abbreviation passwords, where you have a sentence or group of random words and you use the first letter from each, or second, or last, or alternating, or whatever suits you. It's still not as much entropy per character as a random pattern, but it's much better than writing out full words - and pops into your head just as fast (because it is, in essence, the same).
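The entropy-per-character comparison is easy to check. This is a rough sketch; the 7776-entry word list and the average word length are assumptions chosen for illustration, not figures from the article.

```python
import math

LIST_SIZE = 7776     # diceware-style word list size (assumption)
AVG_WORD_LEN = 4.2   # average word length in characters (assumption)

# Full-word passphrase: each word carries log2(7776) ~= 12.9 bits,
# but spends ~4.2 characters plus a separator to do it.
full_per_char = math.log2(LIST_SIZE) / (AVG_WORD_LEN + 1)   # ~2.5 bits/char

# Random lowercase characters, for comparison: log2(26) ~= 4.7 bits/char.
random_per_char = math.log2(26)

# Abbreviation password: one character per word. Its per-character
# entropy is capped by the alphabet (log2(26) for uniform letters),
# and the skewed first-letter frequencies of real words pull it
# somewhat below that cap -- but it still beats writing words in full.
abbrev_cap_per_char = math.log2(26)

assert full_per_char < abbrev_cap_per_char <= random_per_char
```

So the ordering the comment describes holds: full words are worst per character, abbreviations are better, and truly random characters are the ceiling.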
Agreed!
Are people really going to miss yet another totally fake show pretending to be reality? Is it just because this one combined cars and Daily Mail-style politics?
Sorry, but I have no sympathy for a primadonna who curses at an employee for 20 minutes and then physically assaults him for half a minute (without any resistance from his victim) before someone pulls him off, all because Clarkson's food wasn't warm. And this is hardly the first time Clarkson has behaved like this; he was already on "final warning" after a string of other incidents. What befalls him is his own bloody fault. And all of the abuse that the victim got over this whole thing... my favorite tweet on the subject was:
"Man assaults another man and victim receives abuse because people can’t watch a TV show about cars. Bravo society. "
Anyway, about bloody time that this happened. I'm surprised that he hasn't already blamed his firing on a conspiracy of liberal enviro-wackos and brown people.
What I want to know is, is this going to apply to just the EU, or will it affect the EFTA too?
Seems that everyone blocks access to bloody everything here in Iceland.
.... but their beaches, usually not so much. So hopefully this won't be too much of an eyesore. Japan is usually pretty good about trying to fit human-made structures into the landscape; my friends and I had a running joke when we were there: "They have the prettiest drainage ditches here!"
I'm rather curious about what kind of concrete they're going to use. Japan has been a pioneer in the use of fiber-reinforced concrete; I wonder if they'll use that in lieu of steel, which may need cathodic protection in such a high-salt environment?
Nature is one step ahead of you.
Well, not exactly. The answer to the question of how the immune system can defeat a foe that is mutating and evolving so quickly is "it also is mutating and evolving quickly". Immunoglobulin genes in B cells mutate very rapidly. Those whose antibodies bind best to an invader's antigens are stimulated to reproduce (and evolve more), ultimately differentiating into plasma B cells (whose job it is to mass-produce antibodies) and memory B cells (which stay alive for long periods of time, allowing the body to "remember" how to fight off an invader that it fought off in the past).
That said, this only applies to genes responsible for antibody production, and only in B cells.
Where do you get that? Wikipedia says that the human genome is 3.23473 billion base pairs. I mean, you could compress that to fit on a CD, but it won't fit at one byte per BP. It won't even fit at 2 bits per BP.
And if we want to think of a BP like a letter in a piece of code, with an average programming code line length of say 15 non-whitespace characters, that corresponds to a program 216 million lines long. That'd be no little program...
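The back-of-the-envelope numbers check out. A quick sketch, assuming a 700 MB CD:

```python
genome_bp = 3.23473e9          # base pairs (figure from the comment above)
cd_bytes = 700 * 1024**2       # 700 MB CD capacity, ~734 million bytes

# One byte per base pair: ~3.2 GB, several CDs' worth
assert genome_bp > cd_bytes

# Two bits per base pair (A/C/G/T each fit in 2 bits): still too big
packed_bytes = genome_bp * 2 / 8        # ~809 million bytes
assert packed_bytes > cd_bytes

# Treating base pairs as code characters at 15 per line:
lines = genome_bp / 15
assert round(lines / 1e6) == 216        # ~216 million lines
```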
Of course, only a tiny fraction of our DNA codes for what we would consider to be the "interesting stuff".
Part of the premise of the problem is that you know it will work. If you'd rather, you can look at the scenario of a doctor with several dying patients who need transplants deciding to kill one of his other patients to save the lives of all of the others. It's a question of where the boundary of sacrificing one to save many becomes troubling to people. Knowing how to define these boundaries is critical to being able to program acceptable "morality" into robots.
... to even understand why we consider certain judgements to be moral or immoral, I'm not sure how we're supposed to convey that to robots.
The classic example would be the Trolley Problem: there's an out-of-control trolley racing toward four strangers on a track. You're too far away to warn them, but you're close to a diversion switch - you'd save the four people, but the one stranger standing on the diversion track would die instead. Would you do it, sacrificing the one to save the four?
Most people say "yes", that that's the moral decision.
Okay, so you're not next to the switch, you're on a bridge over the track. You still have no way to warn the people on the track. But there's a very fat man standing on the bridge next to you, and if you pushed him off to his death on the track below, it'd stop the trolley. Do you do it?
Most people say "no", and even most of those who say yes seem to struggle with it.
Just what the difference between these two scenarios is that flips the perceived morality has long been debated, with all sorts of variants of the problem proposed to try to elucidate it: a circular track where the fat man is going to get hit either way but doesn't know it, situations where you know negative things about the fat man, and so forth. And it's no small issue that any "intelligent robots" in our midst get morality right! Most of us would want the robot to throw the switch, but not to start pushing people off bridges for the greater good. You don't want a robot doctor that, upon discovering during a checkup that a patient has organs that could save the lives of several of its other patients, decides to kill and cut up that patient, sacrificing one to save several, for example.
At least, most people wouldn't want that!
Anyone can make an omelet with eggs. The trick is to make one with none.