It calms people down, and since the brainstem has very few cannabinoid (CB1) receptors, high doses aren't life-threatening. You might need a lot of it, though, and an unintended consequence may be that people would deliberately try to get police to use it on them.
Do you have a citation for the meteorite work?
Also, even if there's a slight chiral asymmetry in space rocks, space is a high radiation environment compared to the primordial soup. Tinkerton makes a very good point: when the chiral chemical effects of beta decay are this weak even under lab conditions engineered to maximize their strength, it's hard to imagine this asymmetry would play a significant role in the wild.
There's a type of camera technology emerging with a view of the world similar to what a honey bee sees. The images come out blurry and hazy, but if you're a bee, that's good enough for finding flowers and people to sting.
We use a spiral diffraction grating plus computation instead of a lens, which lets us make the sensor far smaller than any sane conventional optical system. The grating is only 200 microns in diameter, and the whole sensor can be fabricated with standard CMOS techniques, so it would cost only a trivial amount to add low-resolution eyes to any digital device.
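The actual reconstruction pipeline isn't described above, but the generic idea behind lensless computational imaging is "capture = scene convolved with the grating's point-spread function; recover by deconvolution." Here's a minimal sketch of that idea; the spiral-ish PSF and all numbers are made up for illustration, not taken from the real sensor:

```python
import numpy as np

# Illustrative sketch only: the PSF below is a hypothetical spiral-grating
# blur, and Wiener deconvolution stands in for whatever reconstruction the
# real system uses. The point is just: no lens, structured blur, compute.

rng = np.random.default_rng(0)

N = 64
scene = np.zeros((N, N))
scene[20:30, 20:30] = 1.0            # a bright square "flower"

# Hypothetical spiral-grating PSF: a broad, structured blur.
yy, xx = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
r = np.hypot(xx, yy) + 1e-6
psf = (1 + np.cos(0.5 * r + 3 * np.arctan2(yy, xx))) * np.exp(-r / 12)
psf /= psf.sum()

# Sensor reading: circular convolution of scene with PSF, plus noise.
F, iF = np.fft.fft2, np.fft.ifft2
capture = np.real(iF(F(scene) * F(np.fft.ifftshift(psf))))
capture += 0.001 * rng.standard_normal(capture.shape)

# Wiener deconvolution recovers a blurry-but-usable estimate of the scene.
H = F(np.fft.ifftshift(psf))
wiener = np.conj(H) / (np.abs(H) ** 2 + 1e-3)
estimate = np.real(iF(F(capture) * wiener))
```

The recovered image is hazy, like the bee's-eye view described above, but the bright square is clearly localized, which is all a low-resolution eye needs.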
Regardless of how good battery tech gets, it will always be easier to store money than to store energy. How can the former substitute for the latter? Some electricity consumers are latency-insensitive: heating, cooling, pumping water, and so on. While supply is short, give those consumers an incentive to store money (by not paying for expensive electricity) until there's more supply, and they can make up the backlog then.
Letting the electricity price float is a natural way to give consumers an incentive to shift their consumption. If a smart thermostat could pay for itself in under a year by monitoring prices and price forecasts, we'd all buy one, and then we'd be storing money rather than energy, which is technologically a much easier prospect.
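A price-aware thermostat's decision rule can be sketched in a few lines. The hourly prices and comfort band below are made-up numbers, not from any real market; the point is just "bank thermal energy in the house during cheap hours, coast during expensive ones":

```python
# Hypothetical day-ahead prices in cents/kWh, one per hour (made-up data).
forecast = [9, 8, 7, 7, 8, 12, 18, 25, 22, 15, 12, 10,
            9, 9, 10, 14, 21, 28, 26, 19, 14, 11, 10, 9]

# Treat the cheapest third of hours as "storage" hours.
cheap_threshold = sorted(forecast)[len(forecast) // 3]

def should_preheat(hour, indoor_temp, comfort_low=18.0, comfort_high=23.0):
    """Heat in cheap hours (banking warmth in the building's thermal mass),
    and otherwise only when comfort actually demands it."""
    if indoor_temp < comfort_low:
        return True                  # comfort always wins over price
    if indoor_temp >= comfort_high:
        return False                 # no headroom left to store heat
    return forecast[hour] <= cheap_threshold

# At 3 a.m. power is cheap: pre-heat toward the top of the comfort band.
print(should_preheat(3, 20.0))   # True
# At 6 p.m. power peaks: coast on the stored heat instead.
print(should_preheat(17, 20.0))  # False
```

This is exactly "storing money": the house buys its heat when it's cheap and skips buying when it's dear, with no battery anywhere.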
Employment should be thought of as almost a human right. If a spouse isn't allowed to pursue his calling simply because of where he lives, and he sticks by his working wife for over half a decade on a no-working-allowed H4 visa, that actually sucks pretty hard. It crosses the line from "tough choice" to "ok, now this policy is actually breaking a person's ability to develop". Economics aside, I have a moral objection to placing these kinds of restrictions on a human's development for so long.
Some who disagree with me will say it's the H4's fault for falling in love with a worker going to an H1B job. Others will say that if it's not worth the sacrifice, they should both go home. I say that both of these counterarguments are kind of disheartening: do you really want to force other people into making these tough choices? That doesn't feel like what America is all about.
I have a PhD in sensory neuroscience from UC Berkeley. It could be that the effect mentioned in TFA is sensory rather than mnemonic.
Caffeine is known to increase acetylcholine release. Acetylcholine makes your brain pay more attention to here-and-now details than to its internal model of what's going on.
I'm also dubious about the idea that any one simple chemical can ever make you smarter in any general way without adverse consequences. Evolution has had a lot of time to scope out all simple neurochemical effects, so beware studies that suggest they've found a "smart pill". Sure, it's possible to take a drug that makes you better at one specific task to the detriment of others, but the existence of a simple general cognitive enhancer would imply either "evolution couldn't mimic the effect of this substance on the brain" or "cognitive enhancement isn't an evolutionarily good move". Neither seems very likely.
What post-quantum asymmetric crypto is there?
Wikipedia to the rescue: https://en.wikipedia.org/wiki/Post-quantum_cryptography. My personal favorite is the McEliece cryptosystem, based on error-correcting codes: https://en.wikipedia.org/wiki/McEliece_cryptosystem. The key size is huge (well, still under 1 MB), but the computation isn't too bad. I'd still recommend combining RSA with several post-quantum schemes in an XOR chain, as I described.
As for increasing key size without a clear need: many crypto algorithms take compute time that grows faster than linearly with key size. Executing several independent algorithms is better for two reasons: first, each scheme's key stays small enough to avoid the superlinear slowdown, and second, the schemes can run on separate cores in parallel.
I'd welcome advice from an expert, but my impression is that the mainstream crypto researchers think that it's more conservative to adopt a single, trusted crypto algorithm and bet the farm on it. My instincts are that this is a bad approach. Composed algorithms like the one I described where all of (say) 5 schemes must be cracked before the attacker gets anywhere are more conservative in my view since they are at least as strong as each of their constituents. However, I'm not a crypto researcher, and there might be a good reason not to shield RSA (which we know is secure to classical but not quantum attacks) with a variety of layers that each provide a good chance of being robust against a quantum attack.
When will we have quantum computers? One reasonable scenario is that by 2020 we'll have a Sputnik moment where somebody builds a quantum computer much better than the sleepy mainstream expects, yet not powerful enough to run Shor's algorithm against 1024-bit RSA. This will shock the world into a bit of a panic that a bigger quantum computer will come soon, and RSA and elliptic curves will be seen as untrustworthy by 2025. We'd be better off adding a layer of protection now, especially since we're sending data today that we wouldn't want to be public for a lot longer than 2025.
Then all that happens is we adopt those other schemes faster, spot the holes faster[....]
I agree, and I'd argue we don't go far enough yet. We should adopt a few of these post-quantum schemes now alongside a trusted but quantum-vulnerable protocol such as RSA.
You ensure that communications are safe unless all schemes can be broken. Here's how. Most public-key cryptography is used to send a one-time, roughly 128-to-256-bit key for a symmetric cipher like AES. Select, say, 5 different public-key protocols: 4 new (and therefore perhaps flawed) post-quantum schemes plus one quantum-vulnerable but trusted protocol like RSA. Generate your AES key, then generate 4 random bitstrings of the same length. Use the first protocol (RSA) to securely send the key XORed with all 4 random strings, and use each of the other 4 protocols to securely send one of the random strings. An attacker who can crack any 4 of the 5 protocols still cannot obtain any information about the key.
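The splitting-and-recombining math above is just a one-time pad in disguise. A minimal sketch, with the five protocols left abstract (in reality: RSA plus four post-quantum schemes; here we only model the shares each one would carry):

```python
import secrets

# Sketch of the XOR construction: split a symmetric key into 5 shares such
# that all 5 are needed to recover it. Each share would travel under a
# different public-key protocol; the protocols themselves are not modeled.

KEY_BYTES = 32  # a 256-bit AES key

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

aes_key = secrets.token_bytes(KEY_BYTES)

# Four independent random strings, one per post-quantum scheme.
pads = [secrets.token_bytes(KEY_BYTES) for _ in range(4)]

# Share sent via RSA: the key XORed with all four pads.
rsa_share = aes_key
for p in pads:
    rsa_share = xor(rsa_share, p)

# Five shares in total; XORing all of them back together yields the key.
shares = [rsa_share] + pads

recovered = shares[0]
for s in shares[1:]:
    recovered = xor(recovered, s)

assert recovered == aes_key
# Any 4 of the 5 shares alone are uniformly random: zero information leaks.
```

Because each pad is uniform and independent, any proper subset of the shares is itself uniformly random, which is why cracking 4 of the 5 protocols yields nothing.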
The upside to this is that if you take a diverse set of promising strategies for post-quantum public key crypto from several agencies that don't trust each other, chances are there will be at least one that's OK. Even if none of them work well, you're still no worse off from a secrecy standpoint than with plain RSA.
The downside is that keys will become longer (many post-quantum algorithms need many kilobytes) and computation will be more substantial. Practically, that means you won't want to ever have to read your public key to someone over the phone (but you could read them a hash of it - almost as good), and tiny, frequent crypto-protected payloads would see an increase in CPU utilization, but there would not be as much of a change for long payloads where the cost of the public key handshake to transfer the AES key is amortized over much more data.
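The "read them a hash of it" workaround is tiny in practice: the multi-kilobyte key travels over any untrusted channel, and only a short digest needs to be spoken or printed. A sketch, with dummy key bytes standing in for a real McEliece-sized key:

```python
import hashlib
import secrets

# Illustration only: the "public key" here is random filler standing in for
# a real multi-hundred-KB post-quantum key.

public_key = secrets.token_bytes(300_000)   # stand-in for a huge key

fingerprint = hashlib.sha256(public_key).hexdigest()
print(fingerprint)   # 64 hex chars: short enough to read over the phone

# The verifier fetches the big key over any untrusted channel, then checks
# it against the fingerprint they received out of band:
assert hashlib.sha256(public_key).hexdigest() == fingerprint
```

Since finding a second key with the same SHA-256 digest is infeasible, the spoken fingerprint authenticates the whole key, which is why it's "almost as good."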
With computation becoming faster, and with the Internet increasingly carrying data that may be sensitive even a few decades in the future, we should start using quantum-prudent methods defensively ASAP, especially since the downside is negligible already, and it's shrinking with Moore's law.
Very prudent. By the way, there's a slim possibility that he's the NSA's Emmanuel Goldstein (https://en.wikipedia.org/wiki/Emmanuel_Goldstein). Not necessarily likely, but the point is that rather than trusting a person, it's better to trust the process of critical examination of every aspect of the crypto. That's not a task any one individual (even the most honest, most intelligent human alive) can do alone. In short, we need a large organization of dedicated folks operating transparently, who understand that they may make mistakes (or suffer deliberate, covert sabotage) yet set up their organization so that those mistakes don't result in security breaches. One person can't do that by themselves.
Is there any other career where brainpower is rewarded less?
Maybe. Doctors, however, learn what works from clinical trials (or rather, that's what they should be doing when the system works properly), so big pharma has it good either way.
The test is cheap and hopefully will become a standard part of a routine examination.
I admire your optimism. However, preventing cancer cheaply is not in the interests of medical research companies: it shrinks the size of one of their most profitable markets. Although medical corporations are not evil, they are amoral, and it would be a bad business decision for any of them to front the big bucks needed to fund enough clinical trials to make anything this cheap and useful part of the standard medical examination. It would be shooting themselves in the foot, and we can't expect companies to act grossly altruistically.
I think the incentive system is to blame: medical patents should get the boot and in their place there should be a whole lot more money directed through the NIH to fund the types of clinical studies that are now mostly only funded by drug companies. NIH has a lot more freedom to align its interests with the public than private companies. However, we might need to tame the knee-jerk "socialism is bad" reflex in the USA before this kind of change can happen.
Let me play devil's advocate.
Ideally, the legal system works best when both sides have the best possible lawyers. The difference between the arguing and reasoning ability of a superstar lawyer and a merely competent one is probably smaller than the difference between the legal abilities of randomly selected laypeople, so in practice the system isn't grievously broken.
The weird part is that for the system to work, a lawyer has to contractually agree to represent a client's interest as well as possible before knowing all the facts from both sides of the case. The practical consequence of this is that lawyers end up having a duty to promote the interests of even rotten and nasty clients to the best of their ability. For all the lawyer knows, the other side's client may be secretly even worse. Lawyers are able to sleep well at night knowing that they are not in the business of deciding what's right for themselves, and so long as they obey the law and do everything legally possible to promote their client's interests, overall the system will work out better than if people had to advocate for themselves.
Comparing a lawyer to a concentration camp guard is merely inflammatory. A better analogy might be comparing a lawyer with a soldier conducting symmetric warfare, since ideally both sides are roughly equally-equipped, but still the lawyers use words and not guns, which in my view puts them ahead.
The big difference is that biology isn't concerned with finding the optimal solution to problems; any very good solution (optimal or not) will let you live to see another day. A lot of math and computer science is dedicated to finding ironclad proofs that under every circumstance, a particular algorithm will deliver the optimal solution. While that's great when it's feasible, sometimes it's OK to go with something that works well even if it isn't optimal.
The set of good heuristics is a strict superset of the set of provably good heuristics. Nature can discover the former, but academics (largely) get paid only for the latter.
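A toy example of the distinction: for the travelling-salesman problem, the greedy nearest-neighbour tour is a "good but not provably optimal" heuristic, while brute force is provably optimal but only feasible for tiny inputs. The points below are random, just for illustration:

```python
import itertools
import math
import random

# Compare a greedy heuristic against provably optimal exhaustive search on
# a tiny random TSP instance (illustrative data, not from any benchmark).

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(8)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(order):
    return sum(dist(pts[order[i]], pts[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbour(start=0):
    """Greedy heuristic: always walk to the nearest unvisited point."""
    unvisited = set(range(len(pts))) - {start}
    order = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist(pts[order[-1]], pts[j]))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

# Provably optimal answer by exhaustive search: O(n!), hopeless at scale.
best = min(itertools.permutations(range(1, len(pts))),
           key=lambda rest: tour_length((0,) + rest))

greedy_len = tour_length(nearest_neighbour())
optimal_len = tour_length((0,) + best)
assert optimal_len <= greedy_len   # greedy can never beat the true optimum
```

Nature's bet, in this framing, is that the greedy answer is usually close enough to live on, while academia mostly pays for the O(n!) proof.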