With hardware support in the CPU this can be done properly.
A CPU-unique public/private key pair is generated by the manufacturer, with the public key signed by the manufacturer's private key. To install a program, the CPU's public key is validated, the program is encrypted with a unique key, that unique key is encrypted with the CPU's public key, and the program and encrypted key are sent to the customer.
The CPU would then be given the execution key, which it decrypts internally with its private key and saves securely (no access via JTAG, no instructions to access it in any way). Instructions are then decrypted on-the-fly into an internal secure instruction cache. You could do the same thing with data, with specific instructions to read/write unencrypted (after all, you do have to get the results out somehow), using a random key internally generated by the CPU. That key could be read/stored, but only encrypted with the instruction key (and changing the instruction key would wipe the data key).
Encryption key for each block would include the location of that block (e.g. take decrypted key and hash with location, then use that as the key for the block). A final step could be to have a block of (encrypted) hashes of each block that would be verified as each block is decrypted (with immediate wipe of decryption keys and cached code if it fails).
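A minimal sketch of that per-block scheme (the function names and the choice of SHA-256 are my own assumptions for illustration, not anything a real CPU implements):

```python
import hashlib

# Hypothetical sketch: derive each block's key by hashing the decrypted master
# key with the block's location, and verify each decrypted block against a
# stored hash. On a verification failure, the CPU would wipe the decryption
# keys and the cached code.
def block_key(master_key: bytes, block_addr: int) -> bytes:
    """Location-dependent key: H(master_key || block_address)."""
    return hashlib.sha256(master_key + block_addr.to_bytes(8, "little")).digest()

def verify_block(plaintext: bytes, expected_hash: bytes) -> bool:
    """Check a decrypted block against its entry in the (encrypted) hash table."""
    return hashlib.sha256(plaintext).digest() == expected_hash
```

Because the address is mixed into the key, identical plaintext blocks at different locations encrypt differently, and a block can't be replayed at another address.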
Breaking the private key of an individual CPU would, of course, allow you to emulate such a processor and break any program that's been keyed to it, but if such a CPU also required booting into encrypted firmware it could be very difficult to do (assuming the hardware is properly hardened), with the only practical attack being to break it using the public key. If you could do that, there are much better targets to go after than to get a free copy of some expensive program.
Bacteria aren't going to be an issue with this, not at 1000 degrees C. It doesn't take a specialist to understand that.
That's a terrible solution. It simply guarantees that there will be even more significant problems when you do trigger that leap minute. Having it occur every year or two means you have an incentive to handle it correctly. Having it occur once every 60-100 years means that no one will bother handling it correctly, or will implement the handling incorrectly.
Think of a critical system that hangs for a minute rather than a second. The results would be much more damaging.
That's like fixing a memory leak by adding more memory to your system. You're just pushing problems down the line and making them more significant.
Exactly. The system clock should be uniform and continuous down to the resolution of the system/hardware. All conversions to/from wall time (including time zones, DST, and leap seconds) should be done separately. The tz database/library is already capable of supporting that mode.
I think one of the biggest mistakes in time processing was having NTP adjust the system clock on a leap second. Instead, have NTP include the current offset, and even have something that automatically updates the leap-second history file when NTP indicates a pending leap second (or shows a different offset from the current database, which would indicate that a database update is needed, say for a system that's been turned off or disconnected for a long time - not perfect, but close).
This could be phased in in several ways - perhaps just changing it outright and patching the few programs that would break (perhaps with a per-process flag to modify the kernel calls that return the time, which the tz library could take into account).
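As a toy illustration of that split between a uniform clock and wall time (the table values here are made up, NOT the real leap-second history - a real implementation would load them from the tz/leap-seconds data):

```python
import bisect

# Sketch: the system clock counts uniform, never-stepped seconds; UTC is
# derived on demand by subtracting the accumulated leap-second offset in
# effect at that instant, looked up in a table that NTP could keep current.
LEAP_TABLE = [
    (0, 10),       # at uniform second 0, accumulated offset is 10 s (made up)
    (1000, 11),    # a leap second inserted at uniform second 1000 (made up)
    (5000, 12),    # and another at uniform second 5000 (made up)
]

def uniform_to_utc(t_uniform):
    """Map a reading of the uniform clock to UTC seconds."""
    i = bisect.bisect_right([entry[0] for entry in LEAP_TABLE], t_uniform) - 1
    offset = LEAP_TABLE[i][1]
    return t_uniform - offset
```

The point is that the clock itself never jumps; only the conversion table changes, exactly as it already does for time zones and DST.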
PLATO Plasma panel terminals (1973 or so) had the same thing. It was only 16x16, and wasn't "multi-touch", but worked well.
So, basically 40 year old tech.
Champaign and Urbana are on the same system, which also works with the University of Illinois.
They have the core network in place - city, schools, some businesses, and some under-served neighborhoods (using a federal grant) - but progress in connecting other neighborhoods has been very slow. They're now working with another area company to build out neighborhoods, but there's no good indication of how fast that will go. They've made some commitments, but only if enough houses in each neighborhood sign up.
The biggest problem I've seen is getting a competent company to do the work, and keeping people informed. I'm still hopeful, I want to get away from AT&T. The City/University group has been turned into a non-profit, and they've pledged that the network will be open to ISPs on an equal basis (though I assume that the company building out the home connections will get a chunk of any revenue for some time until they've recouped their investment).
Yeah, I really like the idea of setting up a bug tracking system for your competitor that all their customers can contribute to.
One of the biggest turn-offs to me is a company that doesn't have any good way to report bugs or to request changes. The ideal company for me would be one where every bug or suggestion either generates a new tracking entry or is assigned to an existing one, and that tracking ID is sent to me as a response.
Now I can see what's happening with an issue that affects me, I can provide further details when I see that no one else has pointed something out (or not create redundant reports when they have) - such a system should have a "me too" capability for tracking how many people have that issue without them all needing to take up support time by reporting it. It doesn't need to show all the developer notes on progress or specifics about internals, but it really isn't that hard to give a status update that's useful to the customer, or an explanation of why something isn't going to be done, work-arounds, etc.
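A minimal sketch of that behavior (all names here are hypothetical - this is just the dedup-plus-"me too" idea, not any real tracker's API):

```python
import itertools

# Sketch: every report either opens a new issue or attaches to an existing
# one, and the reporter always gets a tracking ID back. Duplicate reports
# just bump a "me too" counter instead of taking up support time.
class BugTracker:
    def __init__(self):
        self._ids = itertools.count(1)
        self.issues = {}      # id -> {"title", "me_too", "status"}
        self._by_title = {}   # crude duplicate detection by exact title

    def report(self, title):
        """Return the tracking ID for this report, new or existing."""
        if title in self._by_title:
            issue_id = self._by_title[title]
            self.issues[issue_id]["me_too"] += 1
        else:
            issue_id = next(self._ids)
            self.issues[issue_id] = {"title": title, "me_too": 1, "status": "open"}
            self._by_title[title] = issue_id
        return issue_id
```

Matching by exact title is obviously too crude for real use; the point is only that the customer-visible part (stable ID, visible status, "me too" count) is a small amount of machinery.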
Make it easy for your customer to find out the issues and you won't have as much of a problem with wild rumors and complaints and mobs with pitchforks.
Yes, security-related issues should be redacted. No big deal.
Shouldn't be any problem to restrict it to customers who request it, at least for non-consumer products, as long as there's a simple process for giving a prospect access as well - but I really don't think it's worth the hassle of keeping access restricted. It would be interesting to see the sales/marketing response after seeing how many of their sales are contingent on getting access to the bug tracking system.
I'm really looking forward to seeing how the Rift and the Glyph compare. They both seem to be converging from different sides to be very similar, but with the delivery tech being quite different. I'm excited about the form factor of the Glyph and the emphasis on audio. The video doesn't have the resolution of the Rift yet, but it sounds like it is still very good.
It would be really interesting to see innovations from both put together. I really like the idea of using micro-mirror arrays to create the virtual image, and I really like that the Glyph can be used without corrective lenses.
If the two companies could have merged and joined the best of both, that would have been really excellent.
It's not like it's a surprise that there's a lot of Netflix traffic. I could forgive an ISP for not having the connections in place to handle that amount of traffic if all of a sudden it sprang up, but they should be able to handle it by now.
Customers are paying for that level of service. If most of their traffic is coming from Netflix, that's because THAT'S what's driving their customers to pay more for higher speed service. That means that they're getting more money, but most of the capacity increase for their network can be concentrated on serving the Netflix traffic. That's probably less expensive than building out the capacity to handle all those high-bandwidth customers spreading it around more.
It would actually be fairly easy to show that it isn't traffic-analysis throttling going on - set up a server somewhere that you can stream from at 5 Mbps, and that can also get 3 Mbps from Netflix, then route your Netflix traffic through it with an unencrypted port forward. Given that Verizon and Level 3 have both shown that it's a bottleneck at their interconnect point, I'd expect that method to get you a full-speed Netflix stream with no problem.
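A toy sketch of that relay idea (everything here - the one-connection limit, the addresses - is a simplification; a real test would run this on your own well-connected server and point it at the video host):

```python
import socket
import threading

# Sketch of the unencrypted port-forward test described above: a relay on a
# well-connected host just copies bytes between the viewer and the video
# server, so the ISP's congested interconnect is taken out of the path.
def _pipe(src, dst):
    """Copy bytes from src to dst until src closes its sending side."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def serve_relay(listen_sock, target_addr):
    """Accept one connection on listen_sock and forward it, both ways, to target_addr."""
    client, _ = listen_sock.accept()
    upstream = socket.create_connection(target_addr)
    threads = [
        threading.Thread(target=_pipe, args=(client, upstream)),
        threading.Thread(target=_pipe, args=(upstream, client)),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    client.close()
    upstream.close()
```

If the stream comes through at full speed this way, the slowdown was the interconnect, not anything about the Netflix traffic itself.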
Now, that wouldn't necessarily be a real solution - the route you're getting would probably also be overwhelmed by the traffic if a large number of people were all routing traffic through it. What it does show is that Verizon needs to fix the bottleneck. That's what they're being paid for by their customers. The providers Netflix is using can handle the load, and they clearly have no incentive to not build out their networks in whatever way is needed to handle it properly.
If 90% of Verizon's traffic ends up coming from Netflix, so what? That means they only need 10% of their network for everything else. Their customers are already paying to receive that data, why should Netflix pay again?
The people talking about "unbalanced data flows" are missing the point. It wouldn't make things better if Netflix changed the protocol to require that customers send them as much data as they receive. Bits aren't a resource, nor are they toxic waste, the country won't start to tilt if Netflix sends too many bits in one direction without accepting the same number in return.
If that's the way it worked, then Netflix could simply set up a Cloud backup service.
My parents have an apple tree growing in the front that has apples that don't brown at all. They taste pretty good as well, and don't seem to have much of a problem with insects. I have no idea if the tree was grown from a random seed from an apple or what its lineage is; I don't think it's been grafted. Does that mean it's potentially worth something?
BTW, regarding the article - that's Urbana, Ohio. There's more than one Urbana (e.g. Urbana, Illinois, with the University of Illinois, not Urbana University). That confused me briefly!
Google contributes quite a bit - just because it's software doesn't mean it's not creative.
I'd be willing to bet that he uses free software all the time. Why doesn't he think that's a worthwhile contribution?
You forgot Jony Ive and iOS 7. He did fairly well with some earlier stuff, but ugh.
If A = B and B = C, then A = C, except where void or prohibited by law. -- Roy Santoro