Comment Re:It's in the wording, I think.... (Score 2, Interesting) 172
I'm not even sure that matters. It's like saying "go rob a bank, but make sure you do it legally."
Whether the data is in the cloud makes no difference with respect to discovery requests. If you are served a discovery subpoena, you have to turn over the data whether it's in the cloud or not.
The difference is that under the Stored Communications Act, the provider can turn it over to the Government without notifying you. That's what has most data security experts nervous about cloud storage.
I agree, and this is why I have nothing but contempt for typical "best provider performance" conclusions that are driven solely by single-connection TCP transfer tests (e.g. speedtest.net).
In most cases, latency matters more than bandwidth (at least when the bandwidths on offer are within an order of magnitude of each other). This is why there's such a strong correlation between the provider with the lowest measured latency and the provider with the lowest page-retrieval time. In the end, real-world page loading is precisely what we use smartphones for, so what we need to know is how that workload performs, not what the raw transfer rates are.
I still think the Gizmodo tests are deficient, though, as they are unclear as to whether they repeated the tests at regular intervals over a 24-hour period. Network congestion varies throughout the day, and at any given moment one path may be more congested than another. A valid test, IMO, would take the average (or median) of each metric over a 24-hour period (or even longer, covering both a weekday and a weekend, since usage varies among them).
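The summarizing scheme described above is simple enough to sketch; the sample timings and function name here are invented for illustration:

```python
from statistics import mean, median

def summarize_runs(samples):
    """Collapse repeated measurements of one metric, taken at regular
    intervals over a 24-hour period, into an average and a median."""
    return {"mean": mean(samples), "median": median(samples)}

# Hypothetical page-load times (seconds) for one provider,
# sampled every four hours over a day.
page_load_s = [1.8, 2.1, 3.9, 2.0, 1.7, 2.2]
summary = summarize_runs(page_load_s)
```

The median is the more robust choice here, since a single congested sample (like the 3.9 above) drags an average up but barely moves the median.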
But if you are concerned about performance, and you are already running your RDBMS servers at their limits, then you also already know way, way too much about the internal RDBMS structure, how tables are split, where they are split, and so on.
At some point the comparative cost of doing your own joins is less than tweaking your RDBMS to scale. However, this point is rarely reached in most organizations.
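"Doing your own joins" once tables are split across servers usually means a hash join in application code: index one result set by the join key, then probe it with the other. A minimal sketch, with made-up table contents:

```python
def app_side_join(users, orders, key="user_id"):
    """Hash join performed in the application instead of the RDBMS:
    build a lookup table on one side, probe it with the other."""
    by_key = {u[key]: u for u in users}
    return [{**by_key[o[key]], **o} for o in orders if o[key] in by_key]

# Rows fetched separately, possibly from different database servers.
users = [{"user_id": 1, "name": "alice"}, {"user_id": 2, "name": "bob"}]
orders = [{"user_id": 1, "total": 30}, {"user_id": 3, "total": 5}]
joined = app_side_join(users, orders)  # only orders with a matching user
```

The cost is two round trips and some memory for the lookup table, which is the trade-off being weighed against tweaking the RDBMS to scale.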
What if Google had won the auction?
Enron's marketplace concerned long-haul pipes. The market for last-mile connections is quite different, and is where most of the congestion is, because telcos are cheap and laying new fiber to leaf nodes (i.e., homes and small businesses) is expensive.
I think you mistook my comment as suggesting a high per-bit price. There are lots of ways to charge for bandwidth utilization; 95th-percentile billing is one of them.
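For the curious, 95th-percentile billing works roughly like this (the usage samples below are invented; a real billing period would have thousands of 5-minute samples):

```python
def percentile_95(samples_mbps):
    """Burstable ("95%ile") billing: sort the 5-minute usage samples,
    discard the top 5%, and bill at the highest remaining sample."""
    ordered = sorted(samples_mbps)
    cut = int(len(ordered) * 0.95) - 1  # index of the 95th-percentile sample
    return ordered[max(cut, 0)]

# A tiny stand-in for a month of 5-minute samples (Mbps):
samples = [10, 12, 11, 95, 13, 12, 11, 10, 14, 12,
           11, 13, 12, 11, 10, 12, 11, 13, 12, 10]
billable = percentile_95(samples)  # the 95 Mbps spike is discarded
```

The point of the scheme is that short bursts don't raise the bill; sustained usage does.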
Or, they could just charge by the bit, like every other utility (water, gas, electricity).
The FCC redacted that part, not Google, presumably on Google's behalf, because the Apple Developer Agreement makes your communications with Apple confidential (subject to law-enforcement inquiries). The FCC *does* possess the redacted parts of Google's response.
Almost all of their tests involve working sets smaller than RAM (the installed RAM size is 4GB, but the working sets are 2GB). Are they testing the filesystems or the buffer cache? I don't see any indication that any of these filesystems are mounted with the "sync" flag.
I wonder whether pouring coffee on them would be just as effective.
The article claims that both AT&T and Verizon will be moving to LTE in the future. If this ever comes to pass, and Apple releases an LTE-compatible iPhone, the technology roadblock should vanish.
It doesn't if you're merely using it to unwrap the SSL from the connection and proxy it to another TCP port.
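A minimal sketch of that kind of TLS-unwrapping proxy, assuming placeholder ports and certificate paths (a production setup would use stunnel or similar rather than hand-rolled code):

```python
import socket
import ssl
import threading

def pump(src, dst):
    """Copy bytes one way until EOF, then close the destination."""
    while data := src.recv(4096):
        dst.sendall(data)
    dst.close()

def terminate_tls(listen_port, backend_port, certfile, keyfile):
    """Accept TLS connections, strip the encryption, and forward the
    plaintext stream to a local TCP port (and relay replies back)."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile, keyfile)  # paths are placeholders
    with socket.create_server(("", listen_port)) as srv:
        while True:
            conn, _addr = srv.accept()
            tls = ctx.wrap_socket(conn, server_side=True)
            back = socket.create_connection(("127.0.0.1", backend_port))
            threading.Thread(target=pump, args=(tls, back), daemon=True).start()
            threading.Thread(target=pump, args=(back, tls), daemon=True).start()
```

The backend on `backend_port` sees an ordinary plaintext TCP stream, which is exactly the "unwrap and proxy" arrangement described.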
The other 20%, and your time, is what makes it worth $25k.
These days I'm writing exclusively in Ruby and it is "fast enough" (even with 1.8.X).
I suspect that's because your website doesn't receive thousands of dynamic requests per second.
HELP!!!! I'm being held prisoner in /usr/games/lib!