This is incorrect. You can't build a passive device that is 100% transmissive from one side and 100% reflective on the other side.
You can, however (in theory), build a device with these properties:
- Light from side A is transmitted and comes out side B
- Light from side B is absorbed and turns into heat
- Absorbed heat is re-radiated, mostly from side A
This thing will be non-reciprocal. I've never heard of anyone building one that looks like a sheet of glass, but they're common for microwaves -- they're called circulators. There's a reason that this new contraption is called an acoustic circulator. The only misleading part is that one-way glass is reciprocal -- this new thing is much more impressive than one-way glass.
Here is what I personally don't get, and since I'm not a crypto guy maybe I'm missing something, but here goes... it looks like all these attacks come from using an RNG that has been rigged to be less than random, but why use their RNG when there are so many sources of randomness in the world?
There is the background radiation of the universe for starters, and how many webcams are freely accessible in heavily trafficked public places? It shouldn't be hard to write a program that does a quick head count and multiplies that by the dollar amount of the biggest box office draw last week. How many letters are in the headlines of the top 60 newspapers on the planet? Multiply that by the temperature readings from 30 weather stations and divide by the number of folks who went to see the fourth most popular movie yesterday, squared by the ratings of the most popular reality show.
You can do things like this (with a little bit more care) to generate numbers that can't be predicted in advance. Unfortunately, that's not the point. Web servers need random numbers that can't be guessed or manipulated by anyone, and they need to generate lots of them. If everyone generated the same random numbers (because they looked at the same webcams), then those numbers aren't useful for cryptography.
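A toy Java sketch of why shared public observations fail as key material: anyone deriving "randomness" from the same public inputs gets the same bytes. The seed arithmetic below is made up purely for illustration; the OS-entropy `SecureRandom` at the end is the standard fix.

```java
import java.security.SecureRandom;
import java.util.Random;

public class RandomDemo {
    public static void main(String[] args) {
        // Hypothetical "public" seed: headline letter count times a
        // temperature reading. Both numbers are invented for this sketch.
        long publicSeed = 184L * 23L;

        // Two independent parties deriving keys from the same public data
        // get the *same* "random" bytes -- useless as a secret.
        Random alice = new Random(publicSeed);
        Random bob = new Random(publicSeed);
        System.out.println(alice.nextLong() == bob.nextLong()); // prints true

        // A CSPRNG seeded from OS entropy gives each party its own
        // unpredictable, unguessable output.
        SecureRandom sr = new SecureRandom();
        byte[] key = new byte[32];
        sr.nextBytes(key);
    }
}
```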
Except that RSA destroyed their whole business a couple of years ago when it was found that they'd left the root keys for their SecurID tokens on an unsecured, network-connected machine. After that, no one could trust them again.
RSA lost my trust when I found out that root keys or critical secrets for the SecurID system existed in the first place.
If they had designed the system well, they would make physical tokens and deliver the tokens and the keys for those tokens to the clients. They would not keep any record whatsoever of those keys, nor would they have a way to reconstruct them.
As a physical analogy, a locksmith making safety deposit box keys should cut a random key and not write down the key code.
Or we could finally fix the law and declare EULAs to be unenforceable. Unilateral contracts like EULAs are out of control.
Sometimes I think that things like this should be felonies, though. Criminal offense or not, in a sensible world this would put alphanetworks out of business.
These people should be using a real-time Java VM and an ahead-of-time compiler for those parts.
It means you can skip the warm-up. The only problem with it is you can't hire regular Java guys -- real-time Java is a little bit special, and you can't use regular Java collections and other things in the ways you'd expect.
That's not the only problem. Doing this switches you from using a well-standardized, well-supported language to a nonstandard subset. This removes a lot of the advantages of Java.
One of the most technically competent exchanges I work with is trying out Azul, which is (AFAIK) the top-of-the-line low-latency JVM. I was a bit surprised to learn that their software didn't Just Work. In fact, it didn't work at all. So now they have to decide which AOT-compiling JVM to run and figure out how to support it. I suppose that Excelsior and gcj (eek!) would be other options.
My main point here is that, when low latency is involved, a lot of people cite the existence of high-quality real-time JVMs as evidence that Java is a good choice. These JVMs exist, but they are by no means panaceas, and users of Java for real-time or low-latency work will need to make a real technical and monetary investment in dealing with them. Hotspot is not (yet?) appropriate for this kind of thing.
I'm not a Java user, so I've never directly tuned for things like GC, nor do I interact with it directly. Warm up is a different story.
I interact with quite a few exchanges (over all kinds of protocols). Most are, unsurprisingly, written in Java. Almost all of them perform terribly at the beginning of the week. The issue is a standard one: the JVM hasn't JITted important code paths, and it won't until several thousand requests come in. For a standard throughput-oriented program, this doesn't matter -- the total time wasted running interpreted code is small. For a low-latency network service, it's a different story entirely: all of this wasted time happens exactly when it matters.
The standard fix seems to be to write apps to exercise their own critical paths at startup. This is *hard* when dealing with front-end code on the edge of the system you control. Even when it's easy, it's still something you have to do in Java that is entirely irrelevant in compiled languages.
If JVMs some day allow you to export a profile of what code is hot and re-import it at startup, this problem may be solved. Until then, low-latency programmers should weigh the faster development time of Java against the time spent trying to solve annoying problems like warm-up.
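A minimal sketch of the warm-up idiom: exercise your hot path with dummy inputs at startup so HotSpot has JIT-compiled it before real traffic arrives. The `parsePrice` method and the iteration count are invented for illustration; the count just needs to comfortably exceed typical compile thresholds.

```java
public class WarmUp {
    // Hypothetical hot path: parse a fixed-point price like "123.45"
    // into an integer number of cents.
    static long parsePrice(String s) {
        int dot = s.indexOf('.');
        long whole = Long.parseLong(s.substring(0, dot));
        long frac = Long.parseLong(s.substring(dot + 1));
        return whole * 100 + frac;
    }

    public static void main(String[] args) {
        // Run the critical path thousands of times with dummy data before
        // going live, so the JIT has already compiled it.
        long sink = 0;
        for (int i = 0; i < 20_000; i++) {
            sink += parsePrice("123.45");
        }
        // Print the accumulator so the loop can't be dead-code eliminated.
        System.out.println(sink);
    }
}
```

This only works when you can synthesize realistic inputs; as noted above, that's hard for code on the edge of the system you control.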
No, because the summary is (as usual) thoroughly overstated. This experiment, like any other form of quantum state tomography, lets you take a lot of identical quantum systems and characterize them. For it to work, you need a source of identical quantum states.
As a really simple example, take a polarized light source and a polarizer (e.g. a good pair of sunglasses). Rotate the polarizer and you can easily figure out which way the light is polarized. This is neither surprising nor a big deal -- there are lots of identically polarized photons, so the usual uncertainty constraints don't apply.
The whole point of QKD (the BB84 and similar protocols) is that you send exactly one photon with the relevant state. One copy = no tomography.
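The sunglasses experiment above is just Malus's law: transmitted intensity is I = I0 cos²θ, so sweeping the polarizer angle and finding the maximum recovers the polarization axis. A quick sketch:

```java
public class Malus {
    // Malus's law: intensity transmitted through an ideal polarizer
    // oriented at angle theta (radians) to the light's polarization axis.
    static double transmitted(double i0, double theta) {
        double c = Math.cos(theta);
        return i0 * c * c;
    }

    public static void main(String[] args) {
        System.out.println(transmitted(1.0, 0.0));         // aligned: 1.0
        System.out.println(transmitted(1.0, Math.PI / 2)); // crossed: ~0
        // This works only because many identically polarized photons are
        // averaged -- with a single photon you get one binary click,
        // which is exactly why one copy means no tomography.
    }
}
```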
First, keeping the voltage on requires power, which drives up the energy consumption of the microchip.
Barely. Almost every digital chip out there uses CMOS logic. The whole point of CMOS logic is that, when the gates aren't switching, no current flows. That means that no power is drawn. In practice, a little bit of current leaks, but this is a small effect at all but the smallest process sizes.
It's not at all clear from the abstract how the authors expect to maintain a magnetic field without any static power consumption. Perhaps using ferromagnets, but I wouldn't hold my breath -- MRAM still hasn't happened.
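To put numbers on the CMOS point: dynamic power scales as αCV²f, where α is the activity factor (fraction of gates switching per cycle), so zero switching means zero dynamic power and only leakage remains. The example values below are illustrative only.

```java
public class CmosPower {
    // Dynamic (switching) power of a CMOS circuit: P = alpha * C * V^2 * f.
    static double dynamicPower(double alpha, double c, double v, double f) {
        return alpha * c * v * v * f;
    }

    public static void main(String[] args) {
        double alpha = 0.1;  // activity factor (illustrative)
        double c = 1e-9;     // total switched capacitance, farads (illustrative)
        double v = 1.0;      // supply voltage, volts
        double f = 1e9;      // clock frequency, Hz
        System.out.println(dynamicPower(alpha, c, v, f)); // ~0.1 W

        // alpha = 0: nothing switches, dynamic power is zero.
        System.out.println(dynamicPower(0.0, c, v, f));   // prints 0.0
    }
}
```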
If you ignore all the weird DRM-ish uses (which are basically unsupported for now anyway), the TPM makes a nice cryptographic token. Unfortunately, TPM v1.1 hard-coded the OAEP label to "TPM", which made it incompatible with everything. TPM v2.0 fixes this -- the label is now user-specified. That means that you can use it for modern hardware crypto (sadly, using SHA-1, which should be phased out).
For meaningful DRM, you need an endorsed TPM, which most vendors don't provide. See http://www.privacyca.com/ekcred.html
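To make the label incompatibility concrete, here is RSA-OAEP with an explicit label via the standard JCE API; a sketch only, with an arbitrary key size and message. OAEP decryption fails unless both sides use the same label, which is why a hard-coded "TPM" label broke interop with stacks that default to an empty label.

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.spec.MGF1ParameterSpec;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.spec.OAEPParameterSpec;
import javax.crypto.spec.PSource;

public class OaepLabel {
    // Round-trip a message through RSA-OAEP with the given label
    // ("encoding parameters"); returns true iff decryption succeeds.
    static boolean roundTrip(byte[] label, byte[] msg) {
        try {
            KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
            kpg.initialize(2048);
            KeyPair kp = kpg.generateKeyPair();

            // The PSource is the OAEP label that TPM 1.1 fixed to "TPM".
            OAEPParameterSpec spec = new OAEPParameterSpec(
                    "SHA-1", "MGF1", MGF1ParameterSpec.SHA1,
                    new PSource.PSpecified(label));

            Cipher enc = Cipher.getInstance("RSA/ECB/OAEPPadding");
            enc.init(Cipher.ENCRYPT_MODE, kp.getPublic(), spec);
            byte[] ct = enc.doFinal(msg);

            // Decryption must use the *same* label, or it throws.
            Cipher dec = Cipher.getInstance("RSA/ECB/OAEPPadding");
            dec.init(Cipher.DECRYPT_MODE, kp.getPrivate(), spec);
            return Arrays.equals(dec.doFinal(ct), msg);
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        byte[] label = "TPM".getBytes(StandardCharsets.US_ASCII);
        System.out.println(roundTrip(label, "secret".getBytes(StandardCharsets.US_ASCII)));
    }
}
```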
We assume that there is are [sic] random erasures within in the network.
802.11n aggregates frames. This means that (at least for throughput-limited flows) multiple packets will be transmitted at once, and they're likely to all be lost together. This won't help. In any case, it makes sense for inherently lossy links to do their own FEC -- the end-to-end principle is not a good way to maximize capacity. But hacking around TCP like this seems silly -- just fix the problem at the link layer.
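A minimal illustration of why FEC helps with independent erasures but not correlated ones: a single XOR parity packet lets the receiver rebuild any one lost packet in a group, but recovers nothing if the whole aggregate is dropped together. A toy sketch with made-up packet contents:

```java
public class XorParity {
    // XOR parity over a group of equal-length packets. Sending this
    // alongside the group allows recovery of any ONE missing packet
    // (the same function, applied to survivors + parity, rebuilds it).
    static byte[] parity(byte[][] packets) {
        byte[] p = new byte[packets[0].length];
        for (byte[] pkt : packets)
            for (int i = 0; i < p.length; i++)
                p[i] ^= pkt[i];
        return p;
    }

    public static void main(String[] args) {
        byte[][] group = { {1, 2}, {3, 4}, {5, 6} };
        byte[] p = parity(group);

        // Suppose packet 1 ({3, 4}) is erased in flight: XOR-ing the
        // survivors with the parity packet reconstructs it.
        byte[][] survivorsPlusParity = { group[0], group[2], p };
        byte[] recovered = parity(survivorsPlusParity);
        System.out.println(recovered[0] + "," + recovered[1]); // prints 3,4
    }
}
```

If aggregation means all three packets ride in one frame, losing the frame loses the parity too, and this buys nothing, which is the point above.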
Considering a thermoelectric device with a cold-side temperature of 350K and a hot-side temperature of 950K, respective waste-heat conversion efficiencies of ~16.5% and ~20% are predicted.
For a hot-side temperature of 950 K and a cold-side temperature of 350 K, the Carnot efficiency (i.e. the maximum possible efficiency of any device) is ~63%. So this is somewhere between 1/4 and 1/3 as efficient as it could possibly be. Large generators, such as combined cycle gas turbines, are considerably more efficient, but these devices are small and silent. In other words: not bad.
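Checking the arithmetic with the Carnot formula η = 1 − Tc/Th:

```java
public class Carnot {
    // Carnot limit for a heat engine between hot and cold reservoirs.
    static double carnotEfficiency(double tc, double th) {
        return 1.0 - tc / th;
    }

    public static void main(String[] args) {
        double eta = carnotEfficiency(350.0, 950.0);
        System.out.printf("Carnot limit: %.1f%%%n", eta * 100); // ~63.2%

        // The quoted device efficiencies as fractions of that limit:
        System.out.printf("%.2f%n", 0.165 / eta); // ~0.26, about 1/4
        System.out.printf("%.2f%n", 0.200 / eta); // ~0.32, about 1/3
    }
}
```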