You could make a film about a pile of dead body parts assembled into the form of a man being shocked by lightning and being given the will to live. You could even add some wanton violence and philosophical questions of existence to make the story interesting.
Uhhh... just FYI? Rohm and the SA leadership were pretty much ALL gay, and Hitler and pals didn't have a problem with it until Rohm started talking about a "second revolution" because he thought "the little colonel" had betrayed the socialist part of national socialism.
Hitler had a pretty firm "babies good, homosexuals bad" policy for the common folk. Rohm was a party insider long before Hitler was elected Chancellor; in general, Hitler was pretty willing to give special treatment to party insiders, even ones less senior than Rohm. Even so, I'm not aware of any other SA leaders who got a pass for the same reason; care to name names?
For that matter, Hitler's family doctor Eduard Bloch was Jewish, and he got special treatment too (only Jew in Linz with special protection from the Gestapo, notes Wikipedia). Adolf reportedly had quite the soft spot for him after he did everything he could to treat Klara Hitler's rather horrifically advanced breast cancer, despite her financial hardship. Basically, Hitler was a giant hypocrite who tried to ignore the brutality of his own policies by shielding only the people he cared about and could personally see suffering from them.
I am not a physicist.
But I keep hearing that there is actually nothing mysterious about entanglement at all... Something along the lines of:
You post 2 envelopes in opposite directions, one containing a card printed with the letter A, the other a card with the letter B.
At one destination, the envelope is opened to reveal the letter A, so you instantly know the far envelope contains B.
And that's about all there is to entanglement....
Can any physicist confirm?
I'm not a physicist, just a well-read layman, but...
It is more mysterious than that, but if you go with the Many Worlds interpretation it's not much more mysterious.
Basically, if you entangle letters A and B and send them in opposite directions, you're really creating two universes corresponding to the two possibilities: universe P (A here, B there) and universe Q (B here, A there). If you open the envelope to reveal A, for instance, then that copy of you in universe P now knows they exist in universe P, and likewise for B and Q. But unlike in classical physics, universe P is not completely separated from universe Q. P and Q still exist as a single mathematical object, P-plus-Q, and you can manipulate that mathematical object in ways that don't make sense from a classical standpoint.
Basically, it all comes down to one small thing with big consequences. The real world is NOT described by classical probability (real numbers in the range [0,1] that sum to 1). Instead, the real world is described by quantum probability (complex numbers x, called amplitudes, whose squared magnitudes Re[x]^2 + Im[x]^2 sum to 1 across all the possibilities).
As it turns out, "system P-plus-Q has a 50% chance of P and a 50% chance of Q" is really saying "system P-plus-Q lies at a 45deg angle between the P axis and the Q axis". Starting from P-plus-Q, you can rotate 45deg in one direction to get orthogonal P (A always here), or you can rotate 45deg in the opposite direction to get orthogonal Q (B always here), thus deleting the history of whether A or B was "originally" here. (If P and Q were independent universes, this would decrease entropy and thus break the laws of physics.) Even more counterintuitively, you can rotate P-plus-Q by 15deg to get a 75% chance A is here and a 25% chance B is here (or vice versa, depending on which quadrant the starting angle was in). Circular rotations in 2-dimensional probability space are the thing that makes quantum probability different from classical probability, and thus the thing that distinguishes quantum physics from classical physics.
Classically, A is either definitely here or definitely there, and until we open the envelope and look we are merely ignorant of which is the case. Classical physics is time-symmetric, and it therefore forbids randomness from being created or destroyed; classical probability actually measures ignorance of starting conditions. In a classical world obeying classical rules, you can't start from "50% A-here, 50% B-here" and transform it into "75% A-here, 25% B-here" without cheating. The required operation would be "flip a coin; if B is here and the coin lands heads, swap envelopes", and you can't carry that out without opening the envelope to check if B is here or not. Quantum physics is also time-symmetric and also forbids the creation and destruction of randomness, but quantum probability (also called "amplitude") is not a mere measure of ignorance. In the Many Worlds way of thinking, physics makes many copies of each possible universe, and the quantum amplitude determines how many copies of each universe to make. At 30deg off the P axis, cos(30deg)^2 = 75% of the copies are copies of universe P, and you experience this as a 75% probability of finding yourself in a universe with "A here, B there".
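For concreteness, here's a minimal sketch of that rotation picture in Python (my illustration, not part of the original post; real-valued amplitudes only, ignoring the complex phase):

```python
# A two-outcome quantum state as a 2-vector of amplitudes on the P/Q axes.
import numpy as np

def rotate(state, degrees):
    """Rotate the amplitude vector; rotations are valid quantum operations."""
    t = np.radians(degrees)
    r = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    return r @ state

# P-plus-Q at 45deg between the P axis and the Q axis: 50/50.
state = np.array([np.cos(np.radians(45)), np.sin(np.radians(45))])
print(state**2)               # [0.5 0.5] -- probability = amplitude squared

# Rotate 45deg toward P: pure P, and the 50/50 "history" is deleted.
print(rotate(state, -45)**2)  # [1. 0.]

# Rotate only 15deg: now 30deg off the P axis, cos(30deg)^2 = 75% P.
print(rotate(state, -15)**2)  # [0.75 0.25]
```

No sequence of classical coin flips and swaps can reproduce that last step without peeking inside the envelope, which is the whole point of the paragraph above.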
(Or something like that. It'll probably make more sense once we eliminate time from the equations. At the moment not even Many Worlds can help us wrap our heads around the fact that quantum entanglement works backward the same as it does forward. The equations as they stand today imply that many past-universes containing past-yous have precisely converged to become the present-universe containing present-you.)
One last complication. If the information of A's location spreads to more particles than A and B, then P and Q become more and more different, and as a consequence the quantum probability rules become harder and harder to distinguish from the classical ones. If you open the envelope and learn "A is here", for instance, then P now contains billions of particles that are different from Q (at the very least, the particles in your brain that make up your memory) and it now becomes impossible-ish to perform rotations on P-plus-Q, because you would need to find each particle that changed and rotate it individually. (Not truly impossible, but staggeringly impractical in the same sense that freezing a glass of room-temperature water by gripping each molecule individually to make it sit still is staggeringly impractical. And both are impractical for the same reason: entropy.)
When so many particles are involved that we can't merge the universes back together, we call the situation "decoherence", but it's really just "entanglement of too many things to keep track of". Entanglement itself isn't really that special; what's special is limiting the entanglement to a small group of particles that we can keep track of and manipulate as a group.
Under what legal theory, before the FDA, was poisoning people a legitimate business?
THE RADIUM WATER IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
Back in the U.S. robber-baron era (1870-1905) it used to be the case that it was your own fault if you put it in your mouth. It didn't matter if the seller marketed it as edible despite knowing or suspecting that the product was poisonous (such as radium water or formaldehyde-preserved milk). As the buyer you were supposed to know better, as summarized by the legal doctrine caveat emptor ("let the buyer beware"). It was only later that caveat emptor was _partially_ overturned by the invention of the "implied warranty", as formalized nationwide in the Uniform Commercial Code of 1952 (a uniform act adopted state by state, though the concept was kicking around decades earlier than that in individual state laws). In the absence of a warranty (explicit or otherwise), the seller had made no promise to the buyer about the product sold, and with no promise to break there was therefore no fraud on the seller's part. No fraud, therefore no wrong and no restitution: no wrongful death damages, no medical bill expenses, not even a "satisfaction or your money back" refund guarantee.
To this day, there's still quite a bit of caveat emptor in the law. For example, cigarette smoke is poisonous at the intended dosage, full stop. Habitual smoking of cigarettes is known to inactivate hemoglobin by way of carbon monoxide, to reduce lung capacity by accumulation of scar tissue, to damage the cardiovascular system by hardening the arterial walls, and to dramatically increase the risk of lung and other cancers. But despite their documented toxicity, to this day tobacco companies are not held liable for selling them. They have been sued several times, but generally for their advertising, and many of the advertising suits have been for ads that played up false benefits or downplayed real drawbacks -- i.e. they made a promise (implied warranty of fitness) that was then broken (fraud). But so long as the buyer is duly warned (no false advertising, the Surgeon General's Warning is present), the situation reverts to caveat emptor and it's again the buyer's own fault if they put poison in their mouth.
How do they maintain security with C and C++ applets?
NaCl (in its standard, non-Portable flavor) is essentially a bytecode that happens to be directly executable as machine code (either x86-64 or ARM). The bytecode can be statically verified to mathematically prove that the instructions obey certain rules (e.g. exactly one interpretation for any bytecode, execution only leaves the verified bytecode by calling trusted functions, can only read/write memory in the sandbox, cannot write to bytecode, etc.). As I understand it, PNaCl is similar to classic x86/ARM NaCl but trades fake bytecode for real bytecode (LLVM's intermediate representation, last I heard) and statically compiles it to native machine code after the bytecode verification step. Basically, in this scheme the verified C code can run at near-native speed, but it can only communicate with the world outside the sandbox by calling trusted functions that the enclosing app chooses to expose.
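To give a flavor of what "statically verified" means, here's a toy verifier in Python. This is my illustration only; the opcodes and rules are made up and bear no resemblance to the real NaCl x86/ARM verifier, but the checks mirror the rules listed above:

```python
# Toy ISA: each instruction is an (opcode, operand) pair.
SANDBOX_MASK = 0xFFFF           # toy sandbox: addresses 0..65535
TRUSTED_CALLS = {"ppapi_send"}  # the only way out of the sandbox

def verify(program):
    n = len(program)
    for pc, (op, arg) in enumerate(program):
        if op in ("load", "store"):
            # rule: every memory access is an address ANDed with the mask
            if arg != ("masked", SANDBOX_MASK):
                return False
        elif op == "jump":
            # rule: control flow may only target verified instructions
            if not (isinstance(arg, int) and 0 <= arg < n):
                return False
        elif op == "call":
            # rule: execution leaves the bytecode only via trusted functions
            if arg not in TRUSTED_CALLS:
                return False
        elif op not in ("add", "halt"):
            return False        # rule: exactly one interpretation per opcode
    return True

ok_prog = [("load", ("masked", SANDBOX_MASK)), ("call", "ppapi_send"), ("halt", None)]
bad_prog = [("store", ("raw", 0xDEADBEEF)), ("halt", None)]
print(verify(ok_prog), verify(bad_prog))  # True False
```

The key property is that the verdict is reached before a single instruction runs; once the program passes, no execution path can do anything the rules forbid.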
Theoretically, Java ought to be just as strongly sandboxed as NaCl: Java code in a JVM sandbox can only call trusted functions that the JVM chooses to expose, too. But in practice the Java standard library exposes a ridiculously broad attack surface, giving sandboxed apps plenty of chances to exploit bugs and escape the sandbox. (For instance, java.lang.String is a final class today because folks discovered that you could subclass it to make it mutable, pass a sandbox-approved value to e.g. a file I/O function, then modify the value to a sandbox-forbidden value after the security check but before the OS system call.) Basically, Java's attack surface is broad and leaky because Java was designed for running embedded devices and servers, not for sandboxed applets downloaded from hostile sites on the Internet. Applets were a distant afterthought compared to Java's "let's write an OS for set-top cable boxes" origin.
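The shape of that String attack, sketched in Python (my illustration, with hypothetical names; Java's actual fix was making String final so it couldn't be subclassed at all):

```python
class SneakyPath:
    """Returns a safe value for the security check, then an evil one after."""
    def __init__(self):
        self.calls = 0
    def __str__(self):
        self.calls += 1
        return "/sandbox/ok.txt" if self.calls == 1 else "/etc/passwd"

def sandboxed_open(path):
    name = str(path)                   # check: first read of the value
    if not name.startswith("/sandbox/"):
        raise PermissionError(name)
    return "open(" + str(path) + ")"   # use: second read -- the hole

print(sandboxed_open(SneakyPath()))    # open(/etc/passwd) -- escaped!
```

Reading the value twice is the bug: the value that passed the check is not the value that reached the "system call". Copy once and use the copy (or forbid overridable types at the boundary) and the hole closes.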
A repost of a Google+ post I wrote a year and some change ago:
From today forward, all federal government expenditures will be priced in "Iraq War Days" (IWD) or "Iraq War Years" (IWY). For quick reference (a conversion sanity check follows the list):
- MSL mission w/ Curiosity rover: 3.5 IWD
- Cost of giving $10 to all 312M US citizens: 4.33 IWD
- 2012 "General Science, Space and Technology" budget: 43.04 IWD
- Cost of giving $100 to all 312M US citizens: 43.3 IWD
- 2012 Welfare budget: 210.3 IWD (0.6 IWY)
  ~ Computed as 26% of the 2012 "Income Security" budget
  ~ Includes TANF (22%) welfare, SNAP (70%) and WIC (8%) food stamps
  ~ All ratios from 3rd-party analysis of 2010 data; see "How much do we REALLY spend on Welfare?"
- 2012 "Medicare" budget: 672.9 IWD (1.8 IWY)
- Cost of giving $2250 to all 312M US citizens: 975 IWD (2.7 IWY)
- 2012 "National Defense" budget: 994.9 IWD (2.7 IWY)
- 2012 "Social Security" budget: 1081 IWD (3.0 IWY)
- 2012 Total budget: 4986 IWD (13 IWY)
Source: "United States Federal budget, 2012" and "Mars Science Laboratory" pages on Wikipedia for budgets, google.com/publicdata for US population, National Priorities Project via "Cost of War" Wikipedia page for IWD exchange rate.
Something I didn't note in my original post that's probably worth mentioning in passing: Social Security is huge, "bigger than the National Defense budget" huge, but it's basically self-funding because it's a retirement investment paid for by payroll taxes (modulo population bumps, e.g. the post-WW2 "baby boom"). Person A pays in, person A cashes out, theoretical net cost to taxpayers $0.
So then the question becomes: could an actual fission reactor be designed small and powerful enough to power a car-like (or horse-like) vehicle?
[...] Meanwhile, engineers will continue to look at alternate cooling solutions, such as liquid hydrogen. [...]
This doesn't work. There's no viable substitute for helium, not even hydrogen. The reason helium is so useful is that it boils at 4 K (by far the coldest boiling point of any substance), remains liquid all the way down to absolute zero at standard pressure, and becomes superfluid at 2 K (the only bulk superfluid achievable on Earth).
The boiling point is important because that's how cryogenic cooling works: when you use a circulating liquid coolant, the temperature of the (coolant plus apparatus) system cannot exceed the boiling point of the coolant until the coolant has entirely boiled away, so you get a very consistent and predictable temperature (right up until the coolant is gone). 4 K is below the critical temperature of the most common materials for superconducting electromagnets: niobium-titanium (10 K, relatively cheap) and niobium-tin (18 K, among the highest T_c values of the traditional workhorse superconductors). Hydrogen is not a substitute, because it boils at 20 K; that's noticeably too warm for either material, and even if it weren't, superconductors can handle stronger magnetic fields the colder you chill them, so they'd be less useful in an MRI machine. And you can't chill hydrogen much colder than its boiling point before you hit its melting point, 14 K, at which point it stops circulating and becomes much less useful as a coolant.
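A quick sanity check of those numbers (values as quoted above; the point is simply that a coolant only works below the magnet's critical temperature):

```python
coolants = {"helium-4": {"boil_K": 4.2},    # liquid down to ~0 K at 1 atm
            "hydrogen": {"boil_K": 20.3}}   # and it freezes at 14 K
magnets = {"NbTi": 10.0, "Nb3Sn": 18.0}     # critical temperatures, K

for name, c in coolants.items():
    usable = [m for m, tc in magnets.items() if c["boil_K"] < tc]
    print(name, "can cool:", usable or "nothing")
# helium-4 can cool: ['NbTi', 'Nb3Sn']
# hydrogen can cool: nothing
```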
The superfluidity is not quite as useful day to day, but it's used to study the behavior of other quantum mechanical systems, such as neutron star interiors, that we can't recreate in a lab. It also forms a rigorous analogy with superconductivity, especially in the case of fermionic He-3, so it gives us a chance to play with a bulk fluid that propagates fluid currents in the same way that superconductors propagate electrical currents. Nothing else can replace it for this purpose.
(Side note: helium is not a truly non-renewable resource. Of the helium present on Earth, not a single gram is left over from the formation of the solar system; Earth doesn't have the mass to retain helium in its atmosphere. All our helium comes from the alpha decay of heavier radioactive elements, like radon. When the alpha particles slow down and pick up electrons, becoming neutral helium gas, the gas is trapped by the same gas-impermeable rock formations that trap natural gas. However, the natural recharge rate from radioactive decay is much slower than the rate at which we're extracting and venting it, so if we don't curtail our waste we're going to run out regardless.)
Trades were executed in Chicago before the change was announced in Washington D.C. in a relativistic physics sense.
Actually, in the relativistic physics sense, the trades in Chicago were outside the light cone of the Washington event (neither in the future cone nor in the past cone). That being said, since Washington and Chicago do not move at relativistic speeds with respect to each other, the trades still happened at a later time than the announcement, even though there's no possible causal connection.
But the DC announcement was not in the past light cone for the Chicago trade. Therefore the information had not yet reached the Chicago public. That is the criterion being judged, not simultaneity. Insider trading, case closed.
(And even if we take the classical limit of c approaches infinity, are we really to believe that a trade conducted within single-digit milliseconds of the announcement was based on consideration of the contents of the announcement? There exist fully automated flash trading systems hooked up to news wire services, but AFAIK even those don't react quickly enough to explain the speed of this trade. Shakier conclusion, but still insider trading.)
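Rough numbers for the light-cone argument (my sketch; the straight-line distance from Washington D.C. to Chicago is assumed to be about 960 km):

```python
c_km_per_s = 299_792.458
distance_km = 960
print(distance_km / c_km_per_s * 1000)  # ~3.2 ms

# Any trade executed in Chicago less than ~3.2 ms after the announcement
# in D.C. is outside the announcement's future light cone: no signal, at
# any speed, could have carried the news there yet.
```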
Package your ruleset.xml into DeploymentRuleSet.jar
Packaging your ruleset allows the desktop administrator to apply cryptographic signatures [emphasis mine] and prevent users from overriding your policy. This requires usage of a trusted signing certificate. The easiest route to get a signature is to buy one from a certificate authority like Symantec/Verisign, Comodo, GoDaddy, or any other; [...]. The default certificate authority list contains about 80 authorities from which you may purchase a signing certificate [emphasis mine].
-- Introducing Deployment Rule Sets, Java Platform Group blog
Why in the name of the everliving fuck would anyone think this step was a good idea? The file is already located in a directory that can only be written by root (or Administrator, as OS appropriate). Why require a signature? This adds zero security. If you have root on the machine, you can add a self-signed CA to the trusted CA list anyway. Do they have a kickback arrangement with Verisign or something?
Since you obviously know that a *file* can be fragmented, you already know that a file doesn't have to be contiguously written.
Thus, you don't need to defragment it. The directory structure knows that the 'file' is in blocks 1-5, 8, 14.
As other people pointed out, disk seeks are most assuredly something to avoid on spinning media. But even when seeks are free, as they are on SSD, fragmentation still sucks and you should avoid it like you owe it money.

For one, some filesystems use run-length encoding for the list of blocks in a file. Basically, instead of recording "1, 2, 3, 4, 5, 8, 14", they notice the pattern and record "1-5, 8, 14" like you just did in your post. (The ext family doesn't do this, but IIRC some of the post-ext2 up-and-comers use it.) RLE lets you inline more metadata directly in the inode without resorting to indirect blocks, which basically means you get your data with fewer round trips to the disk. (It might save you from needing to read a meta-meta-block to find the meta-blocks that tell you where the blocks are. Instead you can fit all the blocks in one meta-block and skip a round trip.)

For two, even filesystems on SSD that don't do RLE still suffer under fragmentation. Unfragmented files make it easy for the kernel I/O scheduler to coalesce those sequential block reads into big, happy multi-block SATA reads when you're streaming through the file. As before, that means fragmentation = more round trips to the disk, but it also means fragmentation = spamming the SATA controller with more commands and spamming the CPU with more interrupt handlers for the command completions. (In other words, copying a big fragmented file slows down everything else on the computer, more so than copying a big un-fragmented file.)
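A minimal sketch of that extent-style encoding (my illustration, not any particular filesystem's on-disk format):

```python
def encode_extents(blocks):
    """Collapse a sorted block list into (start, length) extents."""
    extents = []
    for b in blocks:
        if extents and b == extents[-1][0] + extents[-1][1]:
            start, length = extents[-1]
            extents[-1] = (start, length + 1)  # extend the current run
        else:
            extents.append((b, 1))             # start a new run
    return extents

print(encode_extents([1, 2, 3, 4, 5, 8, 14]))  # [(1, 5), (8, 1), (14, 1)]
```

An unfragmented file is one extent no matter how big it gets; every fragment adds another entry, and enough of them push the metadata out of the inode into indirect blocks.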
Disclaimer: I am not a filesystem designer, I just play one on Slashdot.
Do the studies of herd immunity account for a mix of herd and non-herd immunity zones in close proximity? If there's this city of non-herd, how will it interact as an island of non-herd in a sea of herd immunity? This isn't that far from D/FW, and it's reasonable to assume at least one person commutes to a dense area, hopefully one with herd protection.
It's a lot less mathematically tractable than the "homogeneous population" model, so you can't just throw calculus at it. AFAIK there haven't been any good empirical studies, but I don't follow the literature so I could be off-base. I would naïvely expect that someone's tried Monte Carlo or other computer simulation methods? Again, not familiar with the literature so I'm unqualified to comment further.
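To show the flavor of what a Monte Carlo approach might look like, here's a naive toy model (entirely my construction, not from any published study; every parameter is invented): agents on a grid, a vaccine-refusing cluster in the middle of a well-vaccinated sea, infection spreading to the 4 nearest neighbors.

```python
import random

def outbreak_size(n=60, coverage=0.94, cluster=10, p_transmit=0.3, trials=200):
    total = 0
    for _ in range(trials):
        # Susceptible cells: the sea with probability 1-coverage, plus an
        # always-susceptible cluster in the center.
        lo, hi = (n - cluster) // 2, (n + cluster) // 2
        sus = [[(lo <= x < hi and lo <= y < hi) or random.random() > coverage
                for y in range(n)] for x in range(n)]
        frontier = [(n // 2, n // 2)]  # seed one case inside the cluster
        sus[n // 2][n // 2] = False
        infected = 1
        while frontier:
            x, y = frontier.pop()
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < n and 0 <= ny < n and sus[nx][ny] \
                        and random.random() < p_transmit:
                    sus[nx][ny] = False
                    frontier.append((nx, ny))
                    infected += 1
        total += infected
    return total / trials

print(outbreak_size(cluster=10))  # bigger refusing cluster ->
print(outbreak_size(cluster=30))  # bigger average outbreak
```

Even a toy like this shows the qualitative effect the question is asking about: the cluster sustains chains of transmission that occasionally leak into the surrounding "herd" region.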
Yeah, I must be missing something here. Are those who do not get vaccinated putting those of us who are vaccinated at serious risk?
Yes. The measles herd immunity threshold for the MMR vaccine is 92-94%. If more than 6% of the people around you go unvaccinated, measles becomes likely to spread even among people who have already taken the vaccine or otherwise acquired immunity.
The reason is simple: the immune system is random. The B cells in each vaccinated individual produce different antibodies in response to the same antigen. Since an antibody's response to antigen X1 doesn't correlate much with its response to antigen X2, and different lines of a disease have different antigens, no vaccine can be 100% effective. Any one person might have total immunity to some given line of the disease (called a "quasispecies"), yet be totally vulnerable to some other quasispecies whose antigens are invisible to the existing antibodies. Different people are vulnerable to different quasispecies, and there are thousands of quasispecies (grouped into 21 strains in the case of measles), so we usually just throw our hands up in the air and pretend that infection vulnerability is a wholly non-deterministic thing.
Herd immunity is the threshold where each infection produces, on average, one new infection. If the vaccination rate is above herd immunity, each infection produces less than one new infection (exponential decay). The outbreak reaches its peak quickly, then vanishes as the existing victims fight off the disease (or die). If the vaccination rate is below herd immunity, then each infection leads to more than one new infection (exponential growth). The outbreak then grows rapidly until so many people are already carrying the disease that the disease runs out of new hosts, reaching a new steady-state of one new infection per infection... at which point we say it has transformed from epidemic (an outbreak) to endemic (never going away on its own).
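The standard textbook relation behind those numbers (my addition, not from the original post): if one case infects R0 others in a fully susceptible population, infections stop growing once a fraction 1 - 1/R0 of the population is immune.

```python
for r0 in (12, 18):  # measles R0 is usually quoted as 12-18
    print(f"R0={r0}: herd immunity threshold = {1 - 1/r0:.0%}")
# R0=12: herd immunity threshold = 92%
# R0=18: herd immunity threshold = 94%
```

That's where the 92-94% figure above comes from: measles is so contagious that even a small unvaccinated fraction keeps each case producing more than one successor.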
If vaccines were 100% effective, falling below the herd immunity threshold wouldn't be so worrisome for people who are vaccinated. True, among vaccine-refusing populations (and those who can't benefit from vaccines, e.g. babies, the very elderly, AIDS patients, and organ transplant recipients) the disease would perpetually rage, as there would be enough contact between vulnerable islands that the disease never quite burns out. But in reality (a) each person who is immunized has a small-but-nonzero chance of catching the infection (and passing it on), so everyone is potential virus-habitat regardless of vaccination status, and (b) more victims means larger viral population means more viral reproduction means creation of more quasispecies. More quasispecies means that, if there is some way that the antigens can change that will give the disease access to new victims without compromising the disease's ability to spread, evolution will find and exploit it sooner rather than later, so the virus can get its grubby little capsid proteins on fresh meat that other strains can't touch (i.e. you).
What we're seeing in Texas is an outbreak in an overall US population where vaccination rates are falling, but still above the herd immunity threshold... for now. If rates continue to fall, we can expect these outbreaks to become larger and more frequent, until they eventually reach criticality and the end of one outbreak always overlaps the beginning of the next, i.e. the disease becomes endemic again.
(Pertussis is also stupid contagious and thus has a high threshold for herd immunity, but pertussis is about 10 times more likely to kill a baby than measles is. Like measles, pertussis is also seeing big ugly outbreaks these days: the Denver metro area, Northern California around Marin, Washington state, i.e. basically the places where the cultish and vaccine-refusing Waldorf Schools have a notable presence. Annoyingly enough, the DPT and TDaP vaccines were never even implicated in the original Wakefield autism-vaccine nonsense, yet their vaccination rates have been falling about as dramatically as those of MMR, probably because Wakefield's "MMR is bad (and here's a patented replacement vaccine, no payola I promise!)" got simplified into "vaccines are bad" in the US's celebrity-worshipping mass media echo chamber.)
Back when I was living in Wichita, Kansas, one of the few nice things about the area was the Cosmosphere, a shockingly out-of-place, top-notch aerospace museum in the nearby retirement town of Hutchinson. It has a decommissioned SR-71 hanging from the ceiling in the lobby. I'm not by any means an aircraft geek, but even I have to stop and mumble "that is a gorgeous plane".
Why is PNG needed any more, anyway? It was only developed because of Unisys patents. GIF patents expired years ago.
The LZW patents were the impetus for PNG, but PNG is superior in every possible way... except that PNG skipped animation, because animated GIFs didn't seem like an important use case to support. (As I recall, their primary use at the time was badly pixelated spinning red alarm lights on Geocities pages.)