Moving faster causes time to slow down (special relativity), but so does being deeper in a gravitational well (general relativity). As you move away from the Earth, the two effects act in opposite directions, but with unequal magnitude. I'm too lazy to do the math right now, but here's a walkthrough (for the case of GPS satellites, but the same equations hold; you just need the distance from Earth's center to Death Valley and to Mount Everest, and to work out their linear velocity from that).
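Since I mentioned being too lazy to do the math: here's a rough weak-field sketch of the Death Valley vs. Everest comparison. The elevations, latitudes, and constants are my own round numbers, not from the linked walkthrough.

```python
import math

# Weak-field approximation of the fractional clock-rate offset for a point
# co-rotating with Earth: gravitational term -GM/(r c^2) plus
# kinematic term -v^2/(2 c^2). All inputs are assumed round numbers.
G = 6.674e-11                # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24                 # Earth mass, kg
c = 2.998e8                  # speed of light, m/s
R = 6.371e6                  # mean Earth radius, m
OMEGA = 2 * math.pi / 86164  # Earth's sidereal rotation rate, rad/s

def rate_offset(elevation_m, lat_deg):
    r = R + elevation_m
    v = OMEGA * r * math.cos(math.radians(lat_deg))  # linear velocity
    return -G * M / (r * c**2) - v**2 / (2 * c**2)

# Death Valley (~ -86 m, lat ~36.2) vs. Mount Everest (~8849 m, lat ~28)
diff = rate_offset(8849, 28.0) - rate_offset(-86, 36.2)
print(f"Everest clocks run faster by ~{diff:.1e} (fractional rate)")
```

At surface rotation speeds the gravitational term dominates the kinematic one, so the higher clock wins; for GPS satellites the same two terms produce the well-known ~38 microseconds/day net offset.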
Um, no, x86 CPUs are nothing like ARM and I'm not aware of any commercial x86 CPU with an ARM backend. Yes, modern x86 cores use a RISC-ish micro-op backend behind an x86 decoder frontend, but that doesn't say anything in favor of ARM. All it means is that the industry has collectively agreed that CISC as a microarchitecture is a stupid idea - not necessarily as an instruction set.
I'm not a fan of x86 myself, and I think it's a stupid design with a vast amount of baggage causing a significant power/performance impact when designing an x86 CPU (that Intel can get away with because they're a generation or two ahead of everyone else in silicon tech), but then again ARM isn't the pinnacle of RISC either (though I do think it's better than x86).
Me, I'll take whatever architecture gets the best performance per watt at whatever TDP is relevant. If Intel can pull that off with x86-64, by all means. If ARM AArch64 ends up ahead, awesome. If both are about equal, I'll take whatever's more practical based on other factors.
And since this is a camera passthrough, not an optical overlay, that's a glaring implementation flaw. Properly aligning the head tracking framerate, camera framerate, and rendering would let them render the virtual objects in lockstep with the physical ones (at least at speeds where motion blur isn't a significant issue; you can fake that by minimizing motion blur in the real image by using a short shutter time on the cameras).
So, they're locking out things that can brick the card (flash ROM/fuses, screw up thermal sensors) and apparently a hint of OS security (the Falcons that respond to userspace commands can no longer access physical memory, only virtual memory). The latter sounds somewhat bizarre, considering the firmware should be fully under the control of the driver, not userspace (I guess/hope?), but not unreasonable. Maybe there are software security reasons for this.
Nouveau is free to continue using its own free blobs or to switch to nvidia's. If they start adding restrictions that actively cripple useful features or are DRM nonsense, then I would start complaining, but so far it sounds like an attempt at protecting the hardware while maintaining manufacturing flexibility for nvidia. This isn't much different from devices that leave the factory with thermal parameters fused in and some units disabled; the only difference is that here firmware is involved.
NV seem to be turning friendlier towards nouveau, so I'd give them the benefit of the doubt. If they wanted to be evil, they would've just required signed firmware for the card to function at all. The fact that they're bothering to have non-secure modes and are only locking out very specific features suggests they're actively trying to play nicely with open source software.
Not 2^16 (Unicode already has way over 2^16 codepoints assigned). The maximum Unicode codepoint value is 1114111, which is somewhat over 2^20 (and happens to be the highest codepoint encodable in UTF-16).
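A quick Python check of those numbers (the surrogate-pair arithmetic is why the limit sits exactly where it does):

```python
# 0x10FFFF is the Unicode ceiling, and it's exactly what a UTF-16
# surrogate pair can address: the BMP top (0xFFFF) plus the 2^20
# supplementary codepoints reachable via two 10-bit surrogate payloads.
MAX_CP = 0x10FFFF
print(MAX_CP)                    # 1114111
print(MAX_CP == 0xFFFF + 2**20)  # True

# The top codepoint encodes as the very last surrogate pair:
print(chr(MAX_CP).encode("utf-16-be").hex())  # dbffdfff
```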
It's 2Ah, so 240A.
Now, it could be that their battery runs at a higher voltage (and thus not really 2Ah, but they're using that figure as a 3.7V li-ion equivalent capacity), or that there is a power converter built into the battery pack (unlikely for a prototype, though). Still, even for a 37V battery (vs. 3.7V for a normal Li-Ion cell), we're talking 24A. That cord didn't look like a 24A cord, and I highly doubt they were using a voltage higher than 37V to charge (especially not with exposed banana jacks like that).
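The arithmetic checks out like this (the 30-second charge time is the figure implied by 2Ah at 240A; the 37V pack is my hypothetical):

```python
# 2 Ah delivered in 30 seconds -> current in amps
capacity_ah = 2.0
charge_time_h = 30 / 3600           # 30 seconds in hours
current_a = capacity_ah / charge_time_h
print(current_a)                    # ~240 A at nominal cell voltage

# The same stored energy at a hypothetical 37 V pack (10x a 3.7 V cell)
# needs one tenth the charge, hence one tenth the current:
energy_wh = capacity_ah * 3.7
current_37v = energy_wh / 37.0 / charge_time_h
print(current_37v)                  # ~24 A
```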
I call the demo highly dubious if not an outright fake/mock.
Sorry for the threadjack, but this is yet another case of horrible security reporting.
From watching the video, what seems to have happened here is that eBay chose phpBB for their community forum, but did not integrate its authentication system directly with eBay's on the server side. Instead, the site was set up as a standalone system, and whoever implemented the integration had the bright idea of hardcoding the forum password for everyone as username+123456, and then just having the eBay login page issue a hidden POST request behind the scenes to authenticate users to the community forum section.
Thus, this allows anyone to trivially impersonate anyone else on the forum. It shouldn't have anything to do with the rest of the site, though. Nor does this have anything to do with initial passwords, salts, or any of the other terms that have been thrown around.
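The broken scheme is trivial to express (the function name and example username are hypothetical; the point is that the "secret" derives entirely from public data):

```python
# Every forum password is the public username plus a fixed suffix,
# so knowing someone's username *is* knowing their forum password.
def forum_password(username: str) -> str:
    return username + "123456"

print(forum_password("alice"))  # alice123456
```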
A case of absolutely braindead login integration for the community site, but not something that would allow people to take over others' main eBay account. What this says about the people running eBay is another matter entirely...
Did you actually read that article? It clearly describes exactly what I said: they use resistors on the data pins to signal the available current. There is no bidirectional negotiation going on. There are no extra pins or wires. The charger just has 4 resistors to create two voltage dividers for the D- and D+ pins.
This is incorrect. There is no bidirectional negotiation between chargers and devices, nor are there any magic extra pins (at least for pretty much all Android and Apple products - dunno about Zune).
What there is, is a single USB charging standard, which basically says one thing that matters: if the data pins are shorted together (but otherwise not connected to anything), then the port is a Dedicated Charging Port. A DCP must meet certain voltage/current curve ranges and may be engineered to supply anywhere from 500mA to 1.5A (or more), with the voltage dropping as the device exceeds the charger's maximum. Devices are simply supposed to ramp their current draw upwards until the voltage drops below a threshold, which indicates the charger's capability. No digital negotiation takes place. Devices are limited to 1.5A of charging current, which is quite typical for modern devices (and significantly better than the 500mA of a non-charging port).
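That ramp-until-sag behavior can be sketched like this (the cutoff voltage and the toy charger model are made-up numbers for illustration, not values from the spec):

```python
# A device raises its draw in steps and keeps the highest current at which
# the bus voltage stayed above a cutoff. Steps are integer tenths of an amp
# to avoid float drift in the loop.
def find_max_current(charger_voltage_at, v_min=4.4):
    best = 0.0
    for tenths in range(5, 16):        # try 0.5 A .. 1.5 A
        i = tenths / 10
        if charger_voltage_at(i) >= v_min:
            best = i
        else:
            break                      # voltage sagged: back off, stop here
    return best

# Toy charger: holds 5 V up to 1 A, then sags 0.8 V per extra amp.
toy = lambda i: 5.0 if i <= 1.0 else 5.0 - 0.8 * (i - 1.0)
print(find_max_current(toy))           # 1.5
```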
There is a much more recent USB Power Delivery specification that supports higher power levels and probably uses more complex negotiation (I haven't read it), but nothing implements it yet.
Then there's what Apple does - they have an incompatible implementation that uses resistors on the data pins in the charger to signal its current capability. Different resulting voltages mean different current levels. This is completely incompatible with the USB charging standard. Recent Apple devices (since the iPhone 3G or so) do support DCP chargers (to some extent - some charge more slowly, and I'm not sure about the larger iPads), but non-Apple devices will only charge at 500mA or worse from Apple chargers.
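The divider arrangement looks roughly like this (the resistor values are illustrative guesses on my part, not from any Apple schematic):

```python
# The charger strings two resistors from 5 V to ground on each data pin;
# the device samples the midpoint voltage of each divider and maps the
# D+/D- voltage pair to a current limit.
def divider(v_bus, r_top, r_bottom):
    return v_bus * r_bottom / (r_top + r_bottom)

# e.g. 75k over 49.9k puts roughly 2.0 V on a data pin:
print(round(divider(5.0, 75e3, 49.9e3), 2))  # 2.0
```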
This is false. Decoding for modern video formats is strictly defined, and all decoders must produce bit-perfect output. You can add as many filters as you want after that, but that's a postprocessing step in the video player and has nothing to do with the decoder. Things like in-loop filters are strictly defined as part of the decoding process and must be there for the decoder to be considered correct.
Nope, they just crash, lag, or play it with severe artifacts (the latter happens with some hardware codecs and 10bit files).
Basically no modern video codecs are designed to gracefully degrade given limited decoder features, because they rely on bit-perfect output to be used as a reference for future frames. Any error accumulates in the decoding loop and becomes significant artifacting until the next I frame.
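A toy illustration of that accumulation (completely synthetic numbers, nothing codec-specific):

```python
# In predictive coding, P-frames are reconstructed from the previous frame,
# so a single decoder error propagates until the next I-frame resets the
# prediction chain. I-frames carry absolute values, P-frames carry deltas.
stream = [("I", 10), ("P", 2), ("P", 3), ("P", -4), ("I", 20), ("P", 2)]

def decode(stream, error_at=None):
    out, cur = [], 0
    for n, (kind, val) in enumerate(stream):
        cur = val if kind == "I" else cur + val
        if n == error_at:
            cur += 1                   # simulate a tiny non-bit-exact error
        out.append(cur)
    return out

print(decode(stream))                  # [10, 12, 15, 11, 20, 22]
print(decode(stream, error_at=1))      # [10, 13, 16, 12, 20, 22]
```

Note how the error introduced at frame 1 contaminates every following P-frame and only disappears at the I-frame, exactly the "artifacting until the next I frame" behavior.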