Comment Re:The Rules (Score 1) 347
That's not the legal definition of "broadband" in the context of this rule. Read the rule. It's right there in the definitions section. Pretty much any Internet connection except dialup is "broadband".
It isn't 400 pages of regulation, it's about 8.5 pages of (new/modified) regulation, including all the definitions, procedures for filing complaints, etc.
The other 391 pages are commentary, explaining the rationale, the legal authority, discussing the public comments and rebuttals, talking about the implementation and implications, and so on.
Saying this is 400 pages of regulation is totally false. The 400 pages are in fact going to be published, and can be used by courts when deciding cases influenced by the new regulations, but they are not themselves regulations.
Most of the 400 pages are commentary on the rules - justification, clarification, intent, responding to comments, legal authority, possible legal challenges, implications, etc.
I don't know about the "305 words" bit; the actual rule (the part that follows "amend this part to read") runs longer than that once you count the definitions and procedures.
However, the heart of it is contained in 3 short sections, about 1200 characters depending on encoding and whether you include the editing directives:
8.5 No blocking.
A person engaged in the provision of broadband Internet access service, insofar as such person is so engaged, shall not block lawful content, applications, services, or non-harmful devices, subject to reasonable network management.
8.7 No throttling.
A person engaged in the provision of broadband Internet access service, insofar as such person is so engaged, shall not impair or degrade lawful Internet traffic on the basis of Internet content, application, or service, or use of a non-harmful device, subject to reasonable network management.
8.9 No paid prioritization.
(a) A person engaged in the provision of broadband Internet access service, insofar as such person is so engaged, shall not engage in paid prioritization.
(b) “Paid prioritization” refers to the management of a broadband provider’s network to directly or indirectly favor some traffic over other traffic, including through use of techniques such as traffic shaping, prioritization, resource reservation, or other forms of preferential traffic management, either (a) in exchange for consideration (monetary or otherwise) from a third party, or (b) to benefit an affiliated entity.
The definition in the rule makes no such reference to speed:
8.2 Definitions.
(a) Broadband Internet access service. A mass-market retail service by wire or radio that provides the capability to transmit data to and receive data from all or substantially all Internet endpoints, including any capabilities that are incidental to and enable the operation of the communications service, but excluding dial-up Internet access service. This term also encompasses any service that the Commission finds to be providing a functional equivalent of the service described in the previous sentence, or that is used to evade the protections set forth in this Part.
I'd be curious to know what problems would have been found AT THE TIME (not now, a few years later), with the e-mail server itself (not web front-ends other than as actual vectors to compromise the system, not just an individual connection; is there any indication Clinton ever used a web front-end?), and compare that with the state.gov e-mail server (also at the same time).
Comparing this to someone using a gmail account is irrelevant. The biggest threat to security is probably going to be the people at a commercial business.
The distinction between "personal" or "private" or "government" e-mail systems is sort of dumb when she's using a specific system AS a "government" e-mail system. Perhaps she even had it authorized through whatever route that might take, maybe having State IT people take a look at it.
What were the data retention policies for the state.gov e-mail server at the time? Did they retain every single piece of mail, so you could ask now to see how many Viagra spams she received while in office? If she deleted a message, was it archived, or is it gone now? Would outgoing messages be retained? What if an e-mail client was configured to send outgoing e-mail directly to the recipient's server? (I realize that's becoming harder to do now, as more and more servers are set up to require relaying through an official authenticated server via DNS records, but what was the situation then?)
The people to put on the stand here are the IT people responsible for the state.gov e-mail servers and the IT people that Clinton used to set up her server.
With hardware support in the CPU this can be done properly.
CPU-unique public/private key pair generated by the manufacturer. Public key signed by manufacturer's private key. To install program, CPU public key is validated, program is encrypted with unique key, unique key is encrypted with CPU public key, program and encrypted key is sent to customer.
The CPU would then be given the execution key, which it decrypts internally with its private key and saves securely (no access via JTAG, no instructions to access it in any way). Instructions are then decrypted on-the-fly into an internal secure instruction cache. You could do the same thing with data, with specific instructions to read/write unencrypted (after all, you do have to get the results out somehow), using a random key internally generated by the CPU. That key could be read/stored, but only encrypted with the instruction key (and changing the instruction key would wipe the data key).
Encryption key for each block would include the location of that block (e.g. take decrypted key and hash with location, then use that as the key for the block). A final step could be to have a block of (encrypted) hashes of each block that would be verified as each block is decrypted (with immediate wipe of decryption keys and cached code if it fails).
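The location-dependent key derivation and per-block verification described above can be sketched in Python. This is a toy model, not real hardware crypto: the SHA-256-based keystream stands in for whatever cipher the silicon would actually use, and all function names are made up for illustration.

```python
import hashlib

def block_key(program_key: bytes, block_addr: int) -> bytes:
    # Derive a per-block key by hashing the decrypted program key
    # together with the block's location, as described above.
    return hashlib.sha256(program_key + block_addr.to_bytes(8, "big")).digest()

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy keystream built from SHA-256; a real CPU would use a proper cipher.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def encrypt_block(program_key: bytes, block_addr: int, plaintext: bytes):
    # Returns ciphertext plus a hash the CPU can check before executing.
    key = block_key(program_key, block_addr)
    return xor_stream(key, plaintext), hashlib.sha256(plaintext).digest()

def decrypt_block(program_key: bytes, block_addr: int,
                  ciphertext: bytes, expected_hash: bytes) -> bytes:
    plaintext = xor_stream(block_key(program_key, block_addr), ciphertext)
    if hashlib.sha256(plaintext).digest() != expected_hash:
        # On real hardware: immediately wipe decryption keys and cached code.
        raise ValueError("block verification failed")
    return plaintext
```

Because the block's address is mixed into its key, a block moved or replayed at a different location decrypts to garbage and fails the hash check, which is the point of the scheme.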
Breaking the private key of an individual CPU would, of course, allow you to emulate such a processor and break any program that's been keyed to it, but if such a CPU also required booting into encrypted firmware it could be very difficult to do (assuming the hardware is properly hardened), with the only practical attack being to break it using the public key. If you could do that, there are much better targets to go after than to get a free copy of some expensive program.
Bacteria aren't going to be an issue with this, not at 1000 degrees C. It doesn't take a specialist to understand that.
That's a terrible solution. It simply guarantees that there will be even more significant problems when you do trigger that Leap Minute. Having this occur every year or two means you have an incentive to handle it correctly. Having it occur once every 60-100 years means that no one will bother to handle it correctly, or will implement the handling incorrectly.
Think of a critical system that hangs for a minute rather than a second. The results would be much more damaging.
That's like fixing a memory leak by adding more memory to your system. You're just pushing problems down the line and making them more significant.
Exactly. The system clock should be uniform and continuous down to the resolution of the system/hardware. All conversions to/from wall time (including time zones, DST, and leap seconds) should be done separately. The tz database/library is already capable of supporting that mode.
I think it was one of the biggest mistakes in time processing to have NTP adjust the system clock on a leap second. Have NTP include the current offset, even have something that automatically updates the leap second history file when NTP indicates a pending leap second (or is showing a different offset from the current database, which would indicate that a database update is needed, say for a system that's been turned off or disconnected for a long time - not perfect, but close).
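The separation argued for above can be sketched like this: the kernel clock ticks uniformly, and the conversion library subtracts the accumulated leap-second offset when producing wall time. The table below is hypothetical (real systems would load it from the tz database's leap-seconds file, and the times and offsets here are made up).

```python
# Hypothetical leap-second table: (uniform_time, cumulative_offset) pairs,
# meaning that from uniform_time onward, wall time lags the uniform clock
# by cumulative_offset seconds.
LEAP_TABLE = [
    (0, 0),            # epoch: no offset yet
    (1_000_000, 1),    # one leap second inserted at uniform time 1,000,000
    (2_000_000, 2),    # another at 2,000,000
]

def utc_from_uniform(t: float) -> float:
    """Map a uniform, never-stepped clock reading to UTC-style seconds."""
    offset = 0
    for start, cumulative in LEAP_TABLE:
        if t >= start:
            offset = cumulative
        else:
            break
    return t - offset
```

The system clock never jumps; only the conversion changes as entries are appended to the table, which is exactly how time zone and DST conversions already work.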
This could be phased in in several ways, perhaps just changing it and overriding the few programs that would break (perhaps with a per-process flag to modify the kernel calls to get the time, which the tz library could take into account).
PLATO Plasma panel terminals (1973 or so) had the same thing. It was only 16x16, and wasn't "multi-touch", but worked well.
So, basically 40 year old tech.
Champaign and Urbana are on the same system, which also works with the University of Illinois.
They have the core network in place, City, schools, some businesses, and some under-served neighborhoods (using a federal grant), but progress in connecting other neighborhoods has been very slow. They're now working with another area company to install neighborhoods, but no good indication of how fast it will go. They've made some commitments, but only if enough houses in each neighborhood sign up.
The biggest problem I've seen is getting a competent company to do the work, and keeping people informed. I'm still hopeful, I want to get away from AT&T. The City/University group has been turned into a non-profit, and they've pledged that the network will be open to ISPs on an equal basis (though I assume that the company building out the home connections will get a chunk of any revenue for some time until they've recouped their investment).
Yeah, I really like the idea of setting up a bug tracking system for your competitor that all their customers can contribute to.
One of the biggest turn-offs to me is a company that doesn't have any good way to report bugs or to request changes. The ideal company for me would be one where every bug or suggestion either generates a new tracking entry or is assigned to an existing one, and that tracking ID is sent to me as a response.
Now I can see what's happening with an issue that affects me, I can provide further details when I see that no one else has pointed something out (or not create redundant reports when they have) - such a system should have a "me too" capability for tracking how many people have that issue without them all needing to take up support time by reporting it. It doesn't need to show all the developer notes on progress or specifics about internals, but it really isn't that hard to give a status update that's useful to the customer, or an explanation of why something isn't going to be done, work-arounds, etc.
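The "me too" idea boils down to a small data structure: a new report either opens an issue or bumps an existing one's affected-user count. A minimal sketch (the class and its naive title-matching are hypothetical; a real tracker would match duplicates far more intelligently):

```python
class Tracker:
    """Toy issue tracker with a 'me too' count per issue."""

    def __init__(self):
        self.issues = {}   # id -> {"title": str, "me_too": int, "status": str}
        self.next_id = 1

    def report(self, title: str) -> int:
        # Naive dedup by exact title; stands in for real duplicate detection.
        for issue_id, issue in self.issues.items():
            if issue["title"] == title:
                issue["me_too"] += 1
                return issue_id
        issue_id = self.next_id
        self.next_id += 1
        self.issues[issue_id] = {"title": title, "me_too": 1, "status": "open"}
        return issue_id
```

Every reporter gets back a tracking ID they can follow, and support sees how many people are affected without fielding the same call repeatedly.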
Make it easy for your customer to find out the issues and you won't have as much of a problem with wild rumors and complaints and mobs with pitchforks.
Yes, security-related issues should be redacted. No big deal.
Shouldn't be any problem to restrict it to customers who request it, at least for non-consumer products, as long as there's a simple process for giving prospects access as well, but I really don't think it's worth the hassle of keeping access restricted. It would be interesting to see the sales/marketing response after seeing how many of their sales are contingent upon getting access to the bug tracking system.
I'm really looking forward to seeing how the Rift and the Glyph compare. They both seem to be converging from different sides to be very similar, but with the delivery tech being quite different. I'm excited about the form factor of the Glyph and the emphasis on audio. The video doesn't have the resolution of the Rift yet, but it sounds like it is still very good.
It would be really interesting to see innovations from both put together. I really like the idea of using micro-mirror arrays to create the virtual image, and I really like that the Glyph can be used without corrective lenses.
If the two companies could have merged and joined the best of both, that would have been really excellent.
Anything free is worth what you pay for it.