That's the exact opposite of a battlefield, which is not a known environment (act like it is and the enemy will use that assumption against you), which offers a very large number of possible actions, and in which being predictable can quickly turn into being dead.
You're delusional. The poker robots already exceed expert human players in precisely calibrating their lack of predictability.
In early iterations, ALPHA easily beat other AI opponents. Lee repeatedly attempted to score a kill against more mature versions of ALPHA. However, the artificial intelligence combat simulator shot Lee out of the air every time during protracted engagements. ALPHA has bested Lee and other field experts.
"I was surprised at how aware and reactive it was," said Lee. "It seemed to be aware of my intentions and reacting instantly to my changes in flight and my missile deployment. It knew how to defeat the shot I was taking. It moved instantly between defensive and offensive actions as needed."
Lee has trained with thousands of U.S. Air Force pilots, flown in several fighter aircraft and graduated from the U.S. Fighter Weapons School, yet when Lee flies against ALPHA in hours-long sessions that mimic real missions, "I go home feeling washed out. I'm tired, drained and mentally exhausted. This may be artificial intelligence, but it represents a real challenge."
Presently, combat AI is a saber-toothed tribble-tiger confined to a small box. That box is heading for puberty real darn soon.
If you can't reproduce it, it's either fake or you were just being sloppy. Either way, it's no wonder ordinary civilians have doubts.
As I type, that remark is moderated +5. Houston, we have an insightfulness crisis.
"Just" sloppy, you say? Because any group of 50 undergraduates is as close to the Platonic ideal as any group of 50 trillion electrons?
I've been monitoring carefully for the past year. I'm pretty sure there's no word in the English language that precedes a sloppy thought more reliably than that potent little trigger word "just".
Good night, sleep tight, loony dimsight.
Especially when it would literally cost them nothing to get a lawyer to take this on contingency.
Your shallow grasp of the cost function of suing a big, madhouse employer (while you're quietly vesting, among other things) leaves pretty much the whole of human history unexplored.
Of course, if you have no supportive social network within your professional niche worth two nickels to rub together, this is an easy trap to fall into.
"Oh, the gap in my resume circa 2017? That's when I took off an entire year to sue my former employer for a HUGE punitive settlement over a toxic, offhand comment by a testosterone-fuelled, bottom-line-driven corporate executive during a late-night outing at some drunken corporate retreat."
But then, you're probably much better at explaining things than I am. After you explain it, the response would probably be, "well, son, that's exactly how we roll around here: zero tolerance. We like your spunk. Welcome on board. You start tomorrow."
Just guessing, there. IANALC, I could be wrong.
[*] I Am Not A Life Coach
What does "I myself" add which just saying "I" lacks?
I spend a little money on apps but I think the point with Android is you don't have to. I don't know what Apple owners are spending money on but I think it's great you can get a reasonably cheap Android device and can go months/years without having to buy an app; it means the OS and free apps/games are doing the job. Why spend money on stuff you don't need when you can save it for something you can justify? And in case you were wondering, Android is still miles ahead of Apple in terms of market share, so it's not as if people are going to suddenly stop developing for Android (it's always been this way).
This is all very natural and good
Nice frame jump. What's natural and good in human culture (exodus from Eden being at the outset nasty, brutish, and short) is to get as far away as humanly possible from what's natural and good in nature (red in tooth and claw).
So I call SB.
[*] strange bedfellows
We used to call them "hot grits". It was an everyday topic of discussion.
What you need are citations to trustworthy sources and to be reviewed by trustworthy peers.
You've already lost the fight: no human system outperforms its incentive structure.
Peer review is hopelessly ensnared by academic advancement culture. Entire disciplines can end up publishing bunk, if that becomes the tenure-track fashion of the decade. Tulip bubbles are not restricted to the business cycle. Even hard sciences have been hit pretty bad. Et tu, string theory?
The fundamental theorem of peer review is due to Max Planck:
Science advances one funeral at a time.
The zone of convergence of peer review involves the passing of interested parties. In most of the hard sciences, fifty years pretty much weeds out the crap.
However, if you take a field such as nutrition science, I dare say it's still inadvisable to take fresh "peer review" at face value. John Yudkin was on the right track in 1958. Fifty years downstream, the truth is out there, but it's still far from evenly distributed in the public imagination.
Nutrition science was subverted by a white coat army of industry apparatchiks. These studies are expensive and, oh yeah, replication crisis.
Most human systems can be trusted some of the time. The real art of bullshit detection is figuring which times are those times. Even the best human systems are bullshit on the margin.
What you need to understand here is that the journalist impulse to publish is directly proportional to the tenuousness of the result in question.
Well, if the speed of light falls derp derp wormholes derp derp Stargate derp derp dusty von Daniken booster spice derp derp human immortality derp derp Omni Magazine alternate-reality cum shot. Well, you got your $4 worth, didn't you?
There's an enormous term in the human condition centered around escape from reality. This makes sense to some degree, because human reality usually contains a giant heel spur of oppression of the downtrodden masses (success has a habit of being highly asymmetric).
Trump's monosyllabic barrage becomes tremendously more convincing if you want to believe the underlying message.
Somehow, one supposes, being suckers for false hope must be evolutionarily adaptive (who, after all, is qualified to challenge the modern evolutionary synthesis?)
And when you get right down to it, the anchor tenants of modern bullshit culture are the major religions (being largely incompatible, at most 1 of N could be anywhere close to broadly correct). Because, you know, life without bullshit would be empty and meaningless.
Deep down, most of us don't really want to drain the bullshit pond. And it's not just one pond. It's pond after pond. Never get comfortable.
The fundamental theorem of bullshit busting is due to Richard Feynman:
The first principle is that you must not fool yourself and you are the easiest person to fool.
Evolution took a long look at Hamlet, and came up with satisficing.
Make happy assumptions that are compatible with medium-term survival (generally best obtained from proven survivors—aka your parents and select community), then behave with the efficiency of assuming their truth, until the shit really hits the fan; then sit back, renounce, regroup, and repeat.
Dawkins pretty much feels about religion the way Einstein felt about cosmic expansion and quantum indeterminacy. Right model, wrong hope, long painful row to hoe. Even when our best minds get something right, they're often left wishing they hadn't.
So there's this unhappy observation about the human condition, meanwhile the creationists are still stuck on our too-close-for-comfort family resemblance to the other apes (none of whom are paragons of family values).
For some reason that I'm still striving to properly elucidate, bullshit is a prized lubricant of human culture.
For forty years I've lived a Mertonesque credo that "godlessness is next to cleanliness" and so I've managed to winnow the standard-issue pint ("but trailing clouds of gesta do we come") down to about a teaspoon of personal bullshit lubricant.
[*] our heritage of gesta (deeds) soon turns to egesta (bodily waste), which completely explains e-gesta (deeds on the internet)
Not long ago, I was reading Amos Tversky on the perils of metaphor, and now suddenly the scales fall from my eyes.
For 99% of the population, bullshit is sugar sweet. And even among the recalcitrant 1%, no-one ever sheds their very last sweet tooth.
The article has a poor to false understanding of how blue light interacts with DLMO (dim light melatonin onset).
I'm pretty sure the entrainment effect of blue light is via direct neuronal connection to the SCN, and I doubt it involves melatonin, except indirectly.
The homeostatic sleep pressure signal builds up (more or less linearly) for as long as you're awake. On its own, this would mean that you taper into drowsiness all day long. So the sleep system has another mechanism that suppresses response to the sleep pressure signal. I vaguely recall that what happens with DLMO is that melatonin onset signals the body to turn off the suppression switch, so that the body begins to notice the homeostatic sleep pressure signal.
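A toy sketch of the two-signal picture described above. Every name and constant here is mine and purely illustrative, not fitted to any real data; it just makes the "pressure builds, gate masks it, DLMO releases the gate" logic concrete:

```python
# Toy model: homeostatic sleep pressure S rises roughly linearly while
# awake; a suppression "gate" masks perceived sleepiness until melatonin
# onset (DLMO) turns the suppression off. All constants are illustrative.

def perceived_sleepiness(hours_awake, past_dlmo):
    S = 0.1 * hours_awake                 # pressure builds with time awake
    gate = 1.0 if past_dlmo else 0.2      # suppression masks most of S pre-DLMO
    return S * gate

# Awake 15 hours but before DLMO: pressure mostly masked (~0.3).
print(perceived_sleepiness(15, past_dlmo=False))
# One hour later, after DLMO: the full signal gets through (~1.6).
print(perceived_sleepiness(16, past_dlmo=True))
```

Note that the big jump comes from the gate flipping, not from the extra hour of pressure, which matches the "turn off the suppression switch" description.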
DLMO, however, is easily inhibited by exposure to blue light at a point in time approximately an hour before bedtime. If you're outdoors hunting moose in the bright light of late-evening arctic summer, this is a useful adaptation.
You'll get to bed later, which means you'll sleep a bit later (but not much) and then you will get less blue light early the next morning, which will affect your entrainment, gradually, on the slow-drip program.
As a rough, empirical ratio, for every extra hour you stay up, you'll sleep about twenty minutes later the next morning. It's not uncommon to stay up for an extra two hours, then barely sleep in for an extra half hour. (We need to ignore here that modern society tends to run a massive, permanent sleep deficit, which can suddenly turn into sleeping four to six hours late at the first opportunity that allows this to happen. That's a different beast entirely.)
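That rule of thumb as a trivial sketch (the 20-minutes-per-hour slope is just the rough empirical ratio stated above, not a validated model):

```python
# Rough empirical carry-over: each extra hour of staying up shifts the
# next morning's natural wake time by about 20 minutes.
def wake_shift_minutes(extra_hours_awake):
    return 20 * extra_hours_awake

# Stay up two extra hours, sleep in "barely half an hour":
print(wake_shift_minutes(2))  # 40 minutes
```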
I have a circadian rhythm disorder, and I know from decades of sleep tracking that morning wake-up time is about three times more reliable in estimating my sleep phase than time of retirement.
This is a worthwhile paper from the top of my notes, but it's hard to wade through:
I like this paper because it shows how social convention (adolescent schooling) also influences DLMO phase.
The sleep pressure signal eventually overwhelms the suppression of this signal, regardless of the DLMO mechanism.
James Maas is a good representative of the modern sleep science orthodoxy:
I just love the page break at the end of page 6. But then I'm really into microscopic moments of small page-formatting humour. (It's probably not unrelated to all those long, lonely nights, before I found a viable treatment.)
Here's a good summary I just found for the first time.
The reason I only vaguely remember this mechanism is that all the phase response curves in the literature are dose dependent.
There is no PRC I've ever seen that computes the phase response differential to endogenous melatonin levels. No, what you do is administer some dose/formulation (which can include sustained-release components) at staggered times over several weeks, and then you plot the graph averaged over your test population (which thus includes all the metabolic uptake and clearance variability).
There was a time I desperately wanted to consult one of these curves and then to declare "I am here", but it never happened. These are, in effect, better regarded as qualitative curves than quantitative curves.
The model was never predictive enough to be worth memorizing exactly. And thus I remain slightly dim on DLMO when I really shouldn't be after all these years.
You're assuming that it's a one-time operation. Here's the long-term problem: at best, 5% of a rocket's launch mass reaches LEO as payload. At best. For a manned Mars mission, the vast majority of the launch fuel goes into getting the vehicle to LEO, so the vehicle will most likely have to be refueled in orbit. Again, 5% of an entire launch just to refuel a craft.
Certainly it can be done short term for a few flights. However, a long-term Mars mission might be better off with Moon refueling, where gravity is 1/6 that of Earth. Will it be easy to set up a Moon base to serve as a refueling point? No. But in the long run it will be better.
aka, for every 10 kg you launch, you get 1 kg of payload to your destination
That's factually not true.
Falcon Heavy: weight 1,420,788 kg, LEO payload 54,400 kg (3.83%)
Ariane 5: weight 777,000 kg, LEO payload 16,000 kg (2.06%)
Atlas V: weight 345,000 kg, LEO payload 18,810 kg (5.45%)
At best it is 5%. At best. Many orbital launch systems deliver less than 5%.
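For what it's worth, recomputing the fractions from the figures quoted above (a quick sanity-check script, nothing more; the numbers are the ones given in this thread, not official specs):

```python
# Payload fraction to LEO for the launchers quoted above:
# (gross launch mass in kg, LEO payload in kg)
launchers = {
    "Falcon Heavy": (1_420_788, 54_400),
    "Ariane 5": (777_000, 16_000),
    "Atlas V": (345_000, 18_810),
}

for name, (gross_kg, leo_kg) in launchers.items():
    print(f"{name}: {100 * leo_kg / gross_kg:.2f}% of gross mass to LEO")
```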
Just 3000 m/s is a nearly 3:1 ratio.
Someone is forgetting basic physics. Kinetic energy is E = (1/2)mv^2, so the ratio is not 3:1. The ratio is (10^2)/(3^2), or about 11:1.
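A quick sketch of both readings of the 10 km/s vs 3 km/s comparison. The exhaust velocity in the rocket-equation line is my assumption (roughly typical for kerolox), not a figure from this thread:

```python
import math

# Reading 1: kinetic-energy ratio, E = (1/2) m v^2 scales with v squared.
energy_ratio = (10.0 / 3.0) ** 2
print(round(energy_ratio, 1))  # ~11.1, the "11:1" figure

# Reading 2: Tsiolkovsky rocket equation, mass ratio m0/mf = exp(dv/ve).
# Assuming effective exhaust velocity ve ~ 3000 m/s:
mass_ratio = math.exp(3000.0 / 3000.0)
print(round(mass_ratio, 2))  # ~2.72, i.e. "nearly 3:1"
```

So both numbers can be right; they're just answering different questions (energy delivered vs propellant mass required).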
Because they're... probes? Most of them weigh so little and travel on such energy-efficient orbits that there's no point.
If their orbits are so efficient, why do many of them make flybys for gravity assists? These are also unmanned probes, for which time is not as important a factor as it is for a manned mission.
Your typical probe is maybe a ton, the Curiosity mission was a real heavyweight at almost four tons total - of which the rover itself was around one, but still something a regular Falcon 9, Atlas V or Delta IV could deliver to Mars. There's still room for bigger missions on a Delta IV Heavy, even before the Falcon Heavy flies. We don't do it because there's no point in adding that complexity and the extra expense doesn't give any payback in science. It's better science to send two small probes than one big one.
Falcon Heavy's max payload to Mars is 13,000 kg, which is about the size of one ISS module. The Orion spacecraft is estimated at 25,000 kg, with 9,000 kg of fuel. Bigger rockets will be necessary, or refueling in space has to be an option.