Having no government interdiction at all would mean the rich would take directly from the middle class until all that was left was a class of extreme wealth at the top and a massive class of abject poverty below.
How does that differ from our present system of government?
And I did RTFA. Limiting travel doesn't have to mean their interpretation of it (forbidding flights from/to certain places).
Uh huh. And when you eliminate most of the travelers, what do you think the airlines are going to do? Maintain their flight schedules with empty planes? Partial restrictions have much larger effects than just eliminating the restricted travelers.
For starters, I'd restrict visas for non-US nationals from those places, regardless of where their particular flight originated.
WTF? Why would you restrict people who haven't even been in the region? I'm resisting the urge to throw down the race card, but it's hard.
Finally, asking people who have been in close contact with Ebola patients to quarantine if at all symptomatic isn't unreasonable.
I don't agree, but even supposing I did have you thought about the effects of doing that? Like, for example, discouraging doctors and nurses from traveling to West Africa to help out? The biggest problem with controlling the epidemic there is the lack of healthcare workers, and unnecessary mandatory quarantines are going to reduce the number of health care workers willing to go.
Don't believe me? It's already happening, even with the limited state-based quarantine requirements. If the quarantines were actually necessary or useful, that cost would be unavoidable, but they're not, so they're actively damaging the fight against the disease in the place where it's most needed, in order to assuage the groundless fears of people who have zero chance of ever contracting the disease.
People, not to put too fine a point on it, like you.
"The goal of low-cost sub-orbital manned flights"
Expand your goals! That one is a small one. We have a long way to go.
No argument. I was just restricting it to what SpaceShipTwo was working towards. It's intended to be sub-orbital only, AFAIK.
To satisfy the investor class, you need to generate an ever increasing stock price.
This is incorrect.
To satisfy investors, you have to give them a return on their investment. This doesn't require an ever-increasing stock price, or an ever-increasing revenue or profit stream.
The nominal value of a stock is the net present value of its future dividend stream. A company (like Microsoft) that pays regular dividends then merely needs to generate a sustained profit and distribute it via dividends. As long as that profit, and hence dividend, stream is high enough, the stock price will stay at a given level, based on how much that dividend stream is worth.
Those are the basics. I'll leave figuring out how this applies to companies whose current stock price exceeds the NPV of their future dividend stream (perhaps because they don't issue dividends) based on investor expectations of future growth as an exercise for the reader.
The next release of Android will effectively remove that feature, because it will enable encryption by default. You will need special software on the computer and a key (which is buried in the phone somehow) to view the files.
This is incorrect.
It would be correct if Android still supported USB Mass Storage, but thanks to the switch from UMS to MTP back in JellyBean days (IIRC), you're now relying on the operating system to read the file system, and it knows how to transparently decrypt things.
What was the last version of Android that actually let you do that?
Some of them become martyrs for the knowledge needed to achieve the goal.
No offense, but "the goal" was achieved decades ago.
Nonsense. The goal of low-cost sub-orbital manned flights with completely reusable spacecraft has not been achieved. The fact that sub-orbital space flight was achieved decades ago, at massive expense and with single-use craft (or craft that have to be overhauled after every flight), isn't relevant. Achieving regular manned commercial space travel is also worthwhile, and also unachieved. What Virgin Galactic is trying to do is new, and worthwhile, in several ways. And even if all of the above had been done, that still wouldn't make it useless to design and test new spacecraft designs... and that's still an inherently dangerous process. Test pilots still die from time to time in aircraft, and we've been doing that for a half century longer yet.
I realize that you just wanted a chance to poke at your favorite strawman, but that just increases the ridiculousness of your statement.
I agree, but people have been working on ideas for changing the planetary climates independent of carbon sequestration or lack thereof.
Ideas, yes, but AFAICT no one is talking even remotely seriously about implementing any of them. It may be that implementation is premature, of course, that the ideas aren't sufficiently well-developed and tested, but I think we should at least be talking about the ideas in public fora.
It would be real useful to be able to stabilize it at some point well suited for the current human civilization.
Exactly. Though "stabilize" is the wrong word, I think, because I doubt it will ever be "stable". Instead, what we should be able to someday achieve is a sort of dynamic stability, via active management, so whenever it starts to drift out of our preferred "Goldilocks Zone", we nudge it back.
Sure, there's a difference between the explanation and the predictions. Perhaps we're just disagreeing on whether a new explanation that better fits predictions to observations makes the new theory "right" and the old one "wrong", or whether it's just an increase in completeness. There's a rational basis for both positions. I prefer the latter (as did Asimov in his essay) for several reasons, not least because it doesn't imply that the new theory is "right".
I recognize there's a danger in the "increasing completeness" perspective of falling into the empiricism trap of believing that theories are only about predictions and not about explaining the underlying reality. I believe that the goal of theories is to explain the underlying reality, while keeping in mind that it may be the case that our theories never actually explain the truth.
This position is something of a leap of faith, because it's impossible to distinguish between two explanations that have exactly the same predictive power and exactly the same predictions. This is why the Copenhagen and many-worlds interpretations of quantum mechanics continue to co-exist, and may potentially always coexist, even though they provide radically different explanations for the observed randomness of the quantum world. In spite of that, I persist in believing that scientific theories attempt to describe underlying reality and aren't merely convenient predictive models.
Just to put this in perspective: Newtonian mechanics is a radically different view of the world than relativistic mechanics, yet it is still overwhelmingly used for almost all calculations in engineering.
This is exactly Asimov's point, though he doesn't use this example. He argues that at any given time science's view of the world isn't so much wrong as it is incomplete. That's definitely the relationship between classical and relativistic mechanics; they have radically different explanations, but the latter implies the former and clarifies under what circumstances the classical computations are correct and to what degree. He uses the example of flat vs spherical vs oblate spheroid conceptions of the shape of the world. To a considerable degree the view that Earth is flat isn't wrong, particularly if your scope of operation is limited by the speeds and distances available to people on foot, but it is definitely incomplete. A spherical view is more complete, and more correct and dramatically more useful, but still incorrect, still incomplete, and flat wrong if you want to map the globe in detail and coordinate it with satellite-based positioning (for that matter, Newtonian mechanics isn't sufficiently correct to run an accurate global positioning system). And so on.
I suspect the same will hold true whenever a viable formulation of quantum gravity is made.
I'd be shocked if that weren't the case. I think the only way it would be untrue is if the quantum gravitic approach yielded dramatically simpler computations, which seems highly unlikely given that Newton's equations are so concise and elegant.
Maybe there isn't anything big left to discover and we know it all, though I wouldn't bet on that.
I'd go further and say that the claim is laughable on its face given the huge amount we know we don't know, plus the fact that there is almost certainly a lot more that we don't know we don't know. The lack of a theory integrating quantum mechanics and gravitation, the wild profusion of seemingly-random subatomic particles and their bewilderingly varied interactions, the big holes in cosmology around dark matter and dark energy and the early moments of the big bang, our lack of understanding of many emergent properties of the chemical processes of biology, including such crucial matters as our ignorance of what intelligence is (an area in which everything we learn is still serving mostly to illuminate the depth and breadth of our cluelessness), our limited understanding of our planet's climate... I could go on and on, and I'm sure you could as well.
With so many big, obvious holes and even outright contradictions, it seems clear to me that there MUST be lots of fundamental discoveries yet to be made. Many of them likely hiding in the areas we don't know we don't know, just as relativity was hiding in the difficulty of measuring the luminiferous aether.
It's a marvellous time to be alive
Not to mention crashes caused by rare, hard-to-reproduce race conditions.
Indeed. That is one sub-category of the obscure software defects I mentioned. It's probably the best example, actually.
It's worth describing the approach used by many systems at Google (where I work; I'm now in Android but used to work on Google servers): a common pattern there is to crash immediately upon detection of any error. This is just a logical extrapolation of Google's long-standing approach of building reliable systems on top of cheap, unreliable commodity hardware, applied to individual software components. System designs assume that anything can fail at any time, and are built to recover gracefully, possibly with some degradation. So there is extensive infrastructure to fail a request over to another process instance and to automatically restart any failed process. And of course there is extensive, detailed monitoring, with statistical analysis of failure modes, charting of everything so patterns can be recognized, and various forms of alerting, ranging from automated bug filings to e-mails to pages delivered to on-call engineers.
Given that approach to fault tolerance, it's often very reasonable, at least for non-Java processes, to simply abort/crash whenever anything goes wrong. Restarts are automated and fast (for non-Java processes; JVMs are a bit slower to start) and monitoring and alerting take care of letting people know what happened and how often. This includes both hardware and software-related failures. Monitoring also pays special attention to processes that fail repeatedly (called "flapping") upon restart and generates high-priority alerts. The restart infrastructure will also slow and even stop the restarts of flapping processes.
Anyone who's used the googletest unit test framework for C++ may have wondered about the extensive support for and documentation of so-called "death tests", which allow you to verify that your code crashes when it's supposed to, and in the right way. This is a consequence of this particular approach to fault tolerance; if crashing is part of your reliability plan, you need to test that your code crashes when and how it's supposed to.
None of this has anything to do with systemd, of course. The fact that a strategy is effective in Google's environment is utterly meaningless in single-server contexts. In this case, though, auto-restart plus monitoring and flapping control seems like something that could usefully work in many contexts, perhaps even as the default.
Also, not coincidentally, quietly allowing hardware problems to persist until data structures and the filesystem are corrupted before anybody notices.
Hence the importance of monitoring so that the failure is not quiet, as I already pointed out. Please try reading and fully understanding posts before responding.
In which case a nanny process restart is useless. Thanks for making my point, idiot.
Many hardware failures are transient, and a process restart is a very effective fix, at least in the short term. In the longer term, you'd better have monitoring in place so you know that the restarts are happening and can decide when to fix the hardware.
In addition, many process crashes are caused not by hardware failures, but by obscure software defects, and a process restart is not only effective at getting the production system back online, but arguably is a complete solution to the problem if the defect is sufficiently obscure that it's very rarely triggered, and hence not worth the large amount of effort required to identify and fix it.