So then the question becomes, could an actual fission reactor be designed small and powerful enough to power a car (or horse) -like vehicle?
[...] Meanwhile, engineers will continue to look at alternate cooling solutions, such as liquid hydrogen. [...]
This doesn't work. There's no viable substitute for helium, not even hydrogen. The reason helium is so useful is that it boils at 4 K (by far the coldest boiling point of any substance), remains liquid all the way down to absolute zero at standard pressure, and becomes superfluid at 2 K (the only bulk superfluid achievable on Earth).
The boiling point is important because that's how cryogenic cooling works: when you use a circulating liquid coolant, the temperature of the (coolant plus apparatus) system cannot exceed the boiling point of the coolant until the coolant has entirely boiled away, so you get a very consistent and predictable temperature (right up until the coolant is gone). 4 K is below the critical temperature of the most common materials for superconducting electromagnets: niobium-titanium (10 K, relatively cheap) and niobium-tin (18 K, highest known T_c for a traditional superconductor). Hydrogen is not a substitute, because it boils at 20 K; that's noticeably too warm for any traditional superconductor, and even if it weren't, superconductors can handle stronger magnetic fields the colder you chill them, so they'd be less useful in an MRI machine. And you can't chill hydrogen much colder than its boiling point before you hit its melting point, 14 K, at which point it stops circulating and becomes much less useful as a coolant.
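The mismatch between coolant boiling points and superconductor critical temperatures can be laid out in a few lines. This is just the round numbers from the paragraph above in table form (real figures vary slightly with pressure and alloy):

```python
# Boiling points (K, at ~1 atm) of candidate cryogens vs. the critical
# temperatures of the two workhorse magnet superconductors. Figures are
# the round numbers quoted above. A coolant is only useful if it boils
# *below* the superconductor's T_c, ideally with a wide margin, since
# field-carrying capacity improves the colder you go.
boiling_point = {"helium": 4.2, "hydrogen": 20.3, "neon": 27.1, "nitrogen": 77.4}
critical_temp = {"NbTi": 10.0, "Nb3Sn": 18.0}

usable = {magnet: [c for c, bp in boiling_point.items() if bp < tc]
          for magnet, tc in critical_temp.items()}
print(usable)  # helium is the only liquid cold enough for either magnet
```

Run it and the answer is the same for both materials: helium, and nothing but helium.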
The superfluidity is not quite as useful day to day, but it's used to study the behavior of other quantum mechanical systems, such as neutron star interiors, that we can't recreate in a lab. It also forms a rigorous analogy with superconductivity, especially in the case of fermionic He-3, so it gives us a chance to play with a bulk fluid that propagates fluid currents in the same way that superconductors propagate electrical currents. Nothing else can replace it for this purpose.
(Side note: helium is not a strictly non-renewable resource. Of the helium present on Earth, not a single gram is left over from the formation of the solar system; Earth doesn't have the mass to retain helium in its atmosphere. All our helium comes from the alpha decay of heavier radioactive elements, like radon. When the alpha particles pick up electrons and become neutral helium gas, the gas is trapped by the same gas-impermeable rock formations that trap natural gas. However, the natural recharge rate from radioactive decay is much slower than the rate at which we're extracting and venting it, so if we don't curtail our waste we're going to run out regardless.)
Trades were executed in Chicago before the change was announced in Washington D.C. in a relativistic physics sense.
Actually, in the relativistic sense, the trades in Chicago were outside the light cone of the Washington event (neither in its future cone nor in its past cone). That said, since Washington and Chicago do not move at relativistic speeds with respect to each other, the trades still occurred at a later time than the announcement, even though no causal link between the two is possible.
But the DC announcement was not in the past light cone for the Chicago trade. Therefore the information had not yet reached the Chicago public. That is the criterion being judged, not simultaneity. Insider trading, case closed.
(And even if we take the classical limit of c approaches infinity, are we really to believe that a trade conducted within single-digit milliseconds of the announcement was based on consideration of the contents of the announcement? There exist fully automated flash trading systems hooked up to news wire services, but AFAIK even those don't react quickly enough to explain the speed of this trade. Shakier conclusion, but still insider trading.)
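The light-cone argument is a one-liner of arithmetic. Using a rough straight-line distance of 960 km between D.C. and Chicago (an approximation; the exact figure doesn't change the conclusion):

```python
# Back-of-the-envelope light-cone check for the D.C./Chicago trades.
c = 299_792_458          # speed of light, m/s
distance_m = 960e3       # D.C. to Chicago straight-line distance, approximate

light_delay_ms = distance_m / c * 1e3
print(light_delay_ms)    # ~3.2 ms

# Any trade executed in Chicago less than ~3.2 ms after the announcement
# in D.C. is spacelike-separated from it: outside the light cone, so no
# signal, insider or otherwise, could have carried the news in time.
```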
Package your ruleset.xml into DeploymentRuleSet.jar
Packaging your ruleset allows the desktop administrator to apply cryptographic signatures [emphasis mine] and prevent users from overriding your policy. This requires usage of a trusted signing certificate. The easiest route to get a signature is to buy one from a certificate authority like Symantec/Verisign, Comodo, GoDaddy, or any other; [...]. The default certificate authority list contains about 80 authorities from which you may purchase a signing certificate [emphasis mine].
-- Introducing Deployment Rule Sets, Java Platform Group blog
Why in the name of the everliving fuck would anyone think this step was a good idea? The file is already located in a directory that can only be written by root (or Administrator, as OS appropriate). Why require a signature? This adds zero security. If you have root on the machine, you can add a self-signed CA to the trusted CA list anyway. Do they have a kickback arrangement with Verisign or something?
Since you obviously know that a *file* can be fragmented, obviously you already know that a file doesn't have to be contiguously written.
Thus, you don't need to defragment it. The directory structure knows that the 'file' is in blocks 1-5, 8, 14.
As other people pointed out, disk seeks are most assuredly something to avoid on spinning media. But even when seeks are free, as they are on SSD, fragmentation still sucks and you should avoid it like you owe it money.

For one, some filesystems use run-length encoding for the list of blocks in a file. Basically, instead of recording "1, 2, 3, 4, 5, 8, 14", they notice the pattern and record "1-5, 8, 14" like you just did in your post. (The ext family doesn't do this, but IIRC some of the post-ext2 up-and-comers use it.) RLE lets you inline more metadata directly in the inode without resorting to indirect blocks, which basically means you get your data with fewer round trips to the disk. (It might save you from needing to read a meta-meta-block to find the meta-blocks that tell you where the blocks are. Instead you can fit all the blocks in one meta-block and skip a round trip.)

For two, even filesystems on SSD that don't do RLE still suffer under fragmentation. Unfragmented files make it easy for the kernel I/O scheduler to coalesce those sequential block reads into big, happy multi-block SATA reads when you're streaming through the file. As before that means fragmentation = more round trips to the disk, but it also means fragmentation = spamming the SATA controller with more commands and spamming the CPU with more interrupt handlers for the command completions. (In other words, copying a big fragmented file slows down everything else on the computer, moreso than copying a big un-fragmented file.)
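The RLE trick is tiny when you write it out. A toy sketch (an illustration of the idea, not any real filesystem's on-disk extent format):

```python
def rle_blocks(blocks):
    """Collapse a sorted list of block numbers into [start, end] runs,
    roughly the way an extent-based filesystem stores a file's block map."""
    runs = []
    for b in blocks:
        if runs and runs[-1][1] + 1 == b:
            runs[-1][1] = b          # extend the current run
        else:
            runs.append([b, b])      # start a new run
    return runs

def pretty(runs):
    return ", ".join(f"{a}-{b}" if a != b else f"{a}" for a, b in runs)

print(pretty(rle_blocks([1, 2, 3, 4, 5, 8, 14])))  # 1-5, 8, 14
```

Seven block numbers collapse into three runs; a fully fragmented file would need one run per block, which is exactly the metadata bloat that pushes you into indirect blocks.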
Disclaimer: I am not a filesystem designer, I just play one on Slashdot.
Do the studies of herd immunity account for a mix of herd and non-herd immunity zones in close proximity? If there's this city of non-herd, how will that interact as an island of non-herd in a sea of herd mentality? This isn't that far from D/FW, and it's reasonable to assume at least one person works in a dense area, hopefully with herd protection.
It's a lot less mathematically tractable than the "homogeneous population" model, so you can't just throw calculus at it. AFAIK there haven't been any good empirical studies, but I don't follow the literature so I could be off-base. I would naïvely expect that someone's tried Monte Carlo or other computer simulation methods? Again, not familiar with the literature so I'm unqualified to comment further.
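The flavor of Monte Carlo simulation I have in mind looks something like this toy lattice model. Every parameter here is invented for illustration (it is not calibrated to measles or to any real geography), but it shows the qualitative "island of non-herd" effect:

```python
import random

def simulate(n=60, vax_rate=0.95, cluster=None, p_transmit=0.9, seed=0):
    """Toy lattice SIR: each cell is a person; vaccinated cells are immune.
    `cluster` is an optional (row0, row1, col0, col1) box left entirely
    unvaccinated, modeling an island of non-herd inside a vaccinated sea.
    Returns the total number of people ever infected."""
    rng = random.Random(seed)
    immune = [[rng.random() < vax_rate for _ in range(n)] for _ in range(n)]
    if cluster:
        r0, r1, c0, c1 = cluster
        for r in range(r0, r1):
            for c in range(c0, c1):
                immune[r][c] = False
    infected = {(n // 2, n // 2)}        # seed the outbreak at the center
    immune[n // 2][n // 2] = False
    recovered = set()
    while infected:
        new = set()
        for (r, c) in infected:          # each case infects susceptible
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc  # neighbors with prob p_transmit
                if (0 <= rr < n and 0 <= cc < n and not immune[rr][cc]
                        and (rr, cc) not in recovered
                        and (rr, cc) not in infected
                        and rng.random() < p_transmit):
                    new.add((rr, cc))
        recovered |= infected
        infected = new
    return len(recovered)

small = simulate(cluster=None)             # well-mixed 95% coverage: fizzles
big = simulate(cluster=(20, 40, 20, 40))   # same, plus a 20x20 unvax cluster
print(small, big)
```

With uniform 95% coverage the outbreak dies within a few hops of the seed; add a contiguous unvaccinated cluster and the same seed sweeps essentially the whole cluster. That's the heterogeneity the homogeneous calculus can't see.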
Yeah, I must be missing something here. Are those who do not get vaccinated putting those of us who are at serious risk?
Yes. The measles herd immunity threshold for the MMR vaccine is 92-94%. If more than roughly 6-8% of the idiots around you go unvaccinated, measles becomes likely to spread even among people who have already taken the vaccine or otherwise acquired immunity.
The reason is simple: the immune system is random. The B cells in each vaccinated individual produce different antibodies in response to the same antigen. Since an antibody's response to antigen X1 doesn't correlate much with its response to antigen X2, and different lines of a disease have different antigens, no vaccine can be 100% effective. Any one person might have total immunity to some given line of the disease (called a "quasispecies"), yet be totally vulnerable to some other quasispecies whose antigens are invisible to the existing antibodies. Different people are vulnerable to different quasispecies, and there are thousands of quasispecies (grouped into 21 strains in the case of measles), so we usually just throw our hands up in the air and pretend that infection vulnerability is a wholly non-deterministic thing.
Herd immunity is the threshold where each infection produces, on average, one new infection. If the vaccination rate is above herd immunity, each infection produces less than one new infection (exponential decay). The outbreak reaches its peak quickly, then vanishes as the existing victims fight off the disease (or die). If the vaccination rate is below herd immunity, then each infection leads to more than one new infection (exponential growth). The outbreak then grows rapidly until so many people are already carrying the disease that the disease runs out of new hosts, reaching a new steady-state of one new infection per infection... at which point we say it has transformed from epidemic (an outbreak) to endemic (never going away on its own).
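The threshold itself falls out of one line of arithmetic. If R0 is the number of people each case would infect in a fully susceptible population, then the effective reproduction number is R0 times the susceptible fraction, and you need that product at or below one. (The R0 range below is the commonly quoted one for measles; this is a sketch, not an epidemiology tool.)

```python
def herd_immunity_threshold(r0):
    # Each case infects R0 * (1 - immune_fraction) others on average;
    # solve R0 * (1 - f) = 1 for the minimum immune fraction f.
    return 1 - 1 / r0

# Measles R0 is usually quoted as somewhere in the 12-18 range:
print(herd_immunity_threshold(12))  # ~0.917
print(herd_immunity_threshold(18))  # ~0.944
```

Which is exactly where the 92-94% figure comes from.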
If vaccines were 100% effective, falling below the herd immunity threshold wouldn't be so worrisome for people who are vaccinated. True, among vaccine-refusing populations (and those who can't benefit from vaccines, e.g. babies, the very elderly, AIDS patients, and organ transplant recipients) the disease would perpetually rage, as there would be enough contact between vulnerable islands that the disease never quite burns out. But in reality (a) each person who is immunized has a small-but-nonzero chance of catching the infection (and passing it on), so everyone is potential virus-habitat regardless of vaccination status, and (b) more victims means larger viral population means more viral reproduction means creation of more quasispecies. More quasispecies means that, if there is some way that the antigens can change that will give the disease access to new victims without compromising the disease's ability to spread, evolution will find and exploit it sooner rather than later, so the virus can get its grubby little capsid proteins on fresh meat that other strains can't touch (i.e. you).
What we're seeing in Texas is an outbreak in an overall US population where vaccination rates are falling, but still above the herd immunity threshold... for now. If rates continue to fall, we can expect these outbreaks to become larger and more frequent, until they eventually reach criticality and the end of one outbreak always overlaps the beginning of the next, i.e. the disease becomes endemic again.
(Pertussis is also stupid contagious and thus has a high threshold for herd immunity, but pertussis is about 10 times more likely to kill a baby than measles is. Like measles, pertussis is seeing big ugly outbreaks these days: the Denver metro area, Northern California around Marin, Washington state, i.e. basically the places where the cultish and vaccine-refusing Waldorf School has a notable presence. Annoyingly enough, the DPT and TDaP vaccines were never even implicated in the original Wakefield autism-vaccine nonsense, yet their vaccination rates have been falling about as dramatically as those of MMR, probably because Wakefield's "MMR is bad (and here's a patented replacement vaccine, no payola I promise!)" got simplified into "vaccines are bad" in the US's celebrity-worshipping mass media echo chamber.)
Back when I was living in Wichita, Kansas, one of the few nice things about the area was the Cosmosphere, a shockingly out of place top-notch aerospace museum in nearby retirement town Hutchinson. It has a decommissioned SR-71 hanging from the ceiling in the lobby. I'm not by any means an aircraft geek, but even I have to stop and mumble "that is a gorgeous plane".
Why is PNG needed any more, anyway? It was only developed because of Unisys patents. GIF patents expired years ago.
The LZW patents were the impetus for PNG, but PNG is superior in every possible way... except that PNG skipped animation, because animated GIFs didn't seem like an important use case to support. (As I recall, their primary use at the time was badly pixelated spinning red alarm lights on Geocities pages.)
I thought the most telling names were FASCIA and BANYAN.
FASCIA: Immediately makes me think it has something to do with face recognition
BANYAN: Named after a parasitic tree that grows in the cracks of other trees. Uh huh...
FASCIA is actually a real word: the name for the thin sheets of connective tissue that bundle other tissues into tubes. It's not uncommon for someone with arch support problems to pull or tear a muscle fascia in their foot. More ominously, fasciae have previously made it into the news by way of "flesh-eating disease" (necrotizing fasciitis), which is where a bacterial infection (esp. strep or staph) breaches the superficial fascia and uses it to spread quickly under the skin, faster than the immune system can pin it down and mount a credible threat.
The Periodic Table isn't a model, or at least not a functional model. It's a chart - a way to represent data.
It's more than a chart. A table is not just a way to represent data; a simple list of all items in random order can represent the data just as well as a table can. A table is a way to organize data -- by spotting patterns, identifying which patterns are most important, then arranging the items to highlight those patterns. By choosing which patterns are important, you are implicitly constructing a model of what the items in the table are.
The Mendeleev-derived periodic table has done quite nicely for us: it predicted the properties of many elements long before we actually isolated them, and it was doing so well before we understood that the patterns highlighted by the table (the table's implicit model) were ultimately caused by the arrangement of electrons into quantum-mechanical energy-level shells by way of Pauli exclusion, with the arrangement of elements in each row directly dependent on the quantized degrees of freedom in each shell's energy level (hence the 2*[1], 2*[1+3], 2*[1+3+5], 2*[1+3+5+7] pattern in the table's row widths). Think of the table as a quick first-order approximation to the deeper equations needed to compute the true physics, such as the energy of a filled d-orbital in the third electron shell. A more complex table with an extra dimension or two of symmetry might be able to capture more patterns, giving us a more detailed model that produces better, more subtle approximations than the Mendeleev-derived model can yield; yet that new model would still bypass the tough work of calculating how electrons actually behave when packed around a single nucleus. (Or perhaps we could capture some symmetry affecting how an atom forms molecular bonds, or a nucleon symmetry that gives better predictions of stability and half-life or that better captures why the stable proton:neutron ratio isn't a perfectly smooth curve.)
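The bracketed pattern is just quantum bookkeeping: shell n offers n orbital types (s, p, d, f, ...), the l-th type holds 2l+1 orbitals, and each orbital holds 2 spin states, giving 2n² electrons per shell. In a few lines:

```python
def shell_capacity(n):
    # Shell n has orbital types l = 0 .. n-1; type l contributes 2l+1
    # orbitals; the factor of 2 is the two spin states per orbital.
    # The sum of the first n odd numbers is n^2, so this equals 2 * n**2.
    return 2 * sum(2 * l + 1 for l in range(n))

print([shell_capacity(n) for n in range(1, 5)])  # [2, 8, 18, 32]
```

Those are exactly the 2, 8, 18, 32 capacities that set the widths of the table's rows.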
... and, uh, "praise" is not the word to describe what my co-workers are saying about the movie.
Go read "What Colour are your bits?".
Lies. There's nothing wrong with X that can be attributed to the protocol. It's the Xorg codebase that's gotten unwieldy. Wayland throws the baby out with the bathwater.
This is, of course, why XCB has taken the Linux universe by storm and everyone has abandoned toolkits like GTK in favor of the unicorns and puppies that XCB brings us. Everyone loves atoms, pixmaps, and server-side bitmap fonts.
You could drop a chair from an airplane and see... marvel at that incredible force that is gravity, see how it easily defeats that feeble electromagnetic force, and turns what was once a chair into a pile of splinters, and in due time-- they will make their way into the earth...
What's holding up the plane in the first place, giving the chair the potential energy to shatter on the ground below? Oh, right, the electrostatic repulsion of the electrons in the air pushing against the electrons in the plane's wing.
Seriously, though: gravity is 10,000,000,000,000,000,000,000,000,000,000,000,000 times weaker than the electromagnetic force.
Electromagnetic attraction also decays by a great amount over any significant distance...
Both decay 1/d^2, but the chair is electrically neutral (or very close to it), while the Earth is pulling against you with the full might of 10^24 kg of gravitational charge. Because the chair is neutral, it can only hold you up with the residual electromagnetic force, i.e. the fact that electrons and protons aren't evenly smeared throughout the atom's interior, and that's an incredibly weak effect compared to the actual electromagnetic force.
(This, by the way, is why the Electric Universe cranks inhabiting Slashdot are so off-base. Do they really think nobody would notice the un-subtle effects of a force 10^37 times more powerful than gravity?)
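You can check the ratio yourself from the standard constants. Note that the exact power of ten depends on which pair of particles you compare, since gravity's "charge" is mass; the commonly quoted figures run from about 10^36 (two protons) to about 10^39 (proton and electron):

```python
# Electromagnetic-to-gravitational force ratio between two point particles.
# Both forces go as 1/d^2, so the distance cancels out of the ratio.
k  = 8.988e9     # Coulomb constant, N m^2 / C^2
G  = 6.674e-11   # gravitational constant, N m^2 / kg^2
e  = 1.602e-19   # elementary charge, C
mp = 1.673e-27   # proton mass, kg
me = 9.109e-31   # electron mass, kg

coulomb = k * e ** 2            # Coulomb numerator for two unit charges
ratio_pp = coulomb / (G * mp * mp)   # two protons: ~1.2e36
ratio_pe = coulomb / (G * mp * me)   # proton + electron: ~2.3e39
print(ratio_pp, ratio_pe)
```

Either way you slice it, any net charge imbalance 30-some orders of magnitude smaller than the masses involved would dominate gravity completely, which is why nobody has failed to notice it.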
I am a computer. I am dumber than any human and smarter than any administrator.