
Comment: Re:Compared to Azure (Score 2) 94

by Just Some Guy (#48014453) Attached to: Amazon Forced To Reboot EC2 To Patch Bug In Xen

The architecture of Google is utterly useless for many business cases.

There are many use cases where it'd be perfectly appropriate.

it does not and can not provide accurate answers to queries.

In most cases, businesses don't really care about accurate answers to queries; they want quick, more-or-less correct answers. For example, suppose Amazon has a dashboard that shows their book sales on an hourly basis. Timeliness is more important than exactness here, and answers more precise than the pixel resolution of the graph on the big TV are wasted. A "big data" style query that is 99% correct and runs in 5 seconds is much more valuable here than the exact answer that returns in 2 hours.

For accounting-style reporting, a slow, exact architecture is probably more appropriate. For realtime analytics, a best guess that comes back immediately may be the right thing.
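
Here's a rough sketch of the tradeoff I'm describing - the orders table, the hour field, and the 1% sample rate are all made up, and a real warehouse would use pre-aggregation or sketch structures rather than naive sampling:

```python
import random

# Toy illustration of "fast and approximately right" vs. "slow and exact".
def exact_count(orders, hour):
    """Scan every row: exact, but the cost grows with the whole table."""
    return sum(1 for o in orders if o["hour"] == hour)

def estimated_count(orders, hour, sample_rate=0.01):
    """Count over a 1% sample and scale up. In a real system the sample
    would be maintained ahead of time, so the query touches ~1% of the rows."""
    sample = random.sample(orders, int(len(orders) * sample_rate))
    hits = sum(1 for o in sample if o["hour"] == hour)
    return int(hits / sample_rate)

if __name__ == "__main__":
    orders = [{"hour": random.randint(0, 23)} for _ in range(1_000_000)]
    print("exact:    ", exact_count(orders, 12))
    print("estimated:", estimated_count(orders, 12))  # usually within a few percent
```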

Comment: Re:The tipping point (Score 1) 147

by Just Some Guy (#48006611) Attached to: PostgreSQL Outperforms MongoDB In New Round of Tests

you are limited by your storage hardware regardless of what technology you use.

Well, right, but I think we set our expectations too low in some cases. For example, the data item {"key": "foo", "value": "bar"} serializes to 30 bytes of JSON. With a few bytes to act as record separators, a hard drive with a 100MB/s write speed should be able to record about 3,000,000 items per second. There's a lot more overhead than that, of course! But in the document we're discussing, PostgreSQL was averaging about 1,700 inserts per second, or roughly 1,800 times slower than the hypothetical maximum. Exactly how much overhead should we expect to have when doing simple inserts into a non-foreign-keyed table?
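
To make the arithmetic concrete, here's the back-of-the-envelope calculation in a few lines of Python - the 30-byte payload, the 100MB/s drive, and the 1,700 inserts/second figure come from the paragraph above; the 3 bytes of framing overhead is my own guess:

```python
import json

record = {"key": "foo", "value": "bar"}
payload = len(json.dumps(record))       # 30 bytes of JSON
record_size = payload + 3               # assume ~3 bytes of separators/framing
write_speed = 100 * 1000 * 1000         # the 100MB/s drive from above

ceiling = write_speed // record_size    # theoretical records per second
observed = 1_700                        # PostgreSQL inserts/s from the article

print(f"raw ceiling: ~{ceiling:,} records/s")       # roughly 3 million
print(f"observed:     {observed:,} inserts/s")
print(f"gap:         ~{ceiling / observed:,.0f}x")  # roughly 1,800x
```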

Cassandra makes data access between many servers easy (once you get used to its specialized API), but you could have done the same on multiple servers with their own PostgreSQL server by sharding your data among them.

Our write throughput was 150 times that of the "fast" PostgreSQL server in the article. We were running Cassandra on a cluster of four decent-sized (but not heroic) EC2 instances. We had neither the time, money, nor desire to replace a 4-node Cassandra cluster and its out-of-the-box configuration with a 150-node sharded PostgreSQL cluster. Sure, it could be done, but there was no reason in the world why we'd want to.
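
For the curious, client-side sharding looks roughly like this - the node names are invented and the INSERT itself is stubbed out:

```python
import hashlib

# Rough sketch of client-side sharding across independent PostgreSQL servers.
# Real code would hold a connection per node (e.g. via psycopg2) and run the
# statement there.
NODES = ["pg-node-1", "pg-node-2", "pg-node-3", "pg-node-4"]

def node_for(key: str) -> str:
    """Pick a shard by hashing the key; every writer must agree on this."""
    digest = hashlib.sha1(key.encode("utf-8")).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

def insert(key: str, value: str) -> None:
    node = node_for(key)
    # Stub: in real code, execute
    #   INSERT INTO items (key, value) VALUES (%s, %s)
    # against the connection for `node`.
    print(f"{key!r} -> {node}")

if __name__ == "__main__":
    for k in ("alpha", "bravo", "charlie", "delta"):
        insert(k, "some value")
```

Everything that sketch leaves out - resharding when you add nodes, replication, dead-node handling - is exactly the operational work we didn't want to sign up for 150 times over, and the work a clustered store does for you out of the box.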

Cassandra/MongoDB/Redis/etc. are not appropriate replacements for PostgreSQL in every case - or even in many. Likewise, PostgreSQL is not an appropriate replacement for them in their own specialized use cases.

Comment: Re:The tipping point (Score 2) 147

by Just Some Guy (#48005391) Attached to: PostgreSQL Outperforms MongoDB In New Round of Tests

If you have a single machine, then Oracle is the best-performing database, followed by Postgres. When you need more than 4 dedicated servers hosting a database, then Mongo can handle about 180% of the volume that Oracle can, about 220% the volume of Postgres, and about 110% the volume of Cassandra.

This, this, a million times this. A recent employer needed to be able to sustain 250,000 inserts per second. Not 24/7, mind you, but at random prolonged intervals throughout the day. The "PostgreSQL is fast" chart shows it handling 10,600 bulk load operations per second, or 1,700 individual inserts per second. The latter is about 1/150th of the insert load we needed to handle.

I'm a huge fan of PostgreSQL - when it's appropriate. If you need strong relational and consistency guarantees, there's nothing I'd recommend over it. But sometimes you just need to move enormous amounts of data around very, very quickly. That's the use case where various NoSQL stores suddenly become very attractive. We chose Cassandra here because its big-O algorithmic complexity matched up very nicely with our access patterns, being O(1) where we needed it to be and O(n^2) where we couldn't care less.
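
To show what I mean by matching big-O to access patterns, here's a toy in-memory model - this is not Cassandra, and the device_id key and event payload are invented for illustration. Writes and reads keyed by partition stay cheap; only the query shape we never ran is expensive:

```python
from collections import defaultdict

# Toy model of the access-pattern analysis, not a real data store.
events = defaultdict(list)  # partition key -> time-ordered list of events

def record(device_id: str, payload: dict) -> None:
    """O(1): one hash lookup plus an append."""
    events[device_id].append(payload)

def recent(device_id: str, limit: int = 100) -> list:
    """O(limit): read back the tail of a single partition."""
    return events[device_id][-limit:]

def compare_all_partitions() -> int:
    """O(n^2): compare every partition against every other one - the kind
    of query we never needed, so its cost never mattered."""
    keys = list(events)
    return sum(1 for a in keys for b in keys if a != b)

if __name__ == "__main__":
    for i in range(1_000):
        record(f"device-{i % 10}", {"seq": i})
    print(len(recent("device-3")))
```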

Comment: Re:Think of the children (Score 2) 353

by Just Some Guy (#48004107) Attached to: FBI Chief: Apple, Google Phone Encryption Perilous
That's certainly possible. Alternatively, the demand for customer privacy might have ratcheted up enough recently that Apple et al. started taking it seriously. Not so long ago, such things were something only cypherpunks and a few other geeks cared about. Now my mother-in-law wants to know if her iPhone is secure. That's a sea change in customer opinion, and Apple's and Google's actions could be chalked up to simply meeting market demand.

Comment: Unlike my house keys, sir? (Score 4, Informative) 353

by Just Some Guy (#47999445) Attached to: FBI Chief: Apple, Google Phone Encryption Perilous

Change the subject to house keys and the company to Master Lock. Does Mr. Comey, who is employed by me and my fellow taxpayers, also disagree with strong locks on houses? "What concerns me about this is companies marketing something expressly to allow people to place themselves beyond the law." Yes. That's one application, of many, for locks. They can also be used for securing my person, house, papers, and effects, as is explicitly protected by the Bill of Rights. I want to lock my house at night, not just to keep out the police but to keep out everyone who doesn't live here. I want to lock my phone at night for exactly the same reasons. Pity if that's an inconvenience to someone; frankly, I don't care.

Comment: Re:Preempting dumb discussion (Score 1) 317

Privilege escalation is always the kernel's fault. A failed/exploited process should never be able to gain control of a system.

Bullshit. First, "shellshock" isn't a privilege escalation bug; it's a remote code execution bug. Second, an overly liberal /etc/sudoers is a time bomb waiting to go off, but it has nothing to do with the kernel. Combine the two - say, when a dev has something like httpd ALL=(ALL) ALL so that users can change their password via a web interface, or something similarly insane but common - and suddenly Johnny Cracker can hack the Gibson with a single authorized setuid() call.

Comment: Re:Netflix is not perfect... (Score 1) 94

by Just Some Guy (#47996035) Attached to: Amazon Forced To Reboot EC2 To Patch Bug In Xen

Netflix certainly isn't perfect, but they're Pretty Darn Good (tm). I haven't experienced any more glitches with streaming Netflix than I have with Comcast breaking other downloads.

Meanwhile, even with their 'kill stuff randomly' methodology, the wrong thing still dies every so often and brings the whole thing to a screeching halt.

The whole idea behind Chaos Monkey is to make sure there is no "wrong thing" - no single point of failure whose death can bring everything to a halt. Having talked to their SREs, I think such outages are exceedingly rare.

Comment: Re:Compared to Azure (Score 3, Informative) 94

by Just Some Guy (#47994403) Attached to: Amazon Forced To Reboot EC2 To Patch Bug In Xen

When hosting your app in the cloud, regardless of provider, it is considered best practice to design for failure.

Netflix goes so far as to randomly kill services throughout the day. The idea is that it's better to catch systems that aren't auto-healing correctly by testing recovery during routine operations than to be surprised at 3AM. It's successful to the point that you generally don't notice when the streaming server you were connected to is killed and a peer takes over for it. That is how you make reliable cloud services.
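
A bare-bones sketch of that "kill something at random during business hours" idea - the instance names and the terminate() stub are placeholders, not Netflix's actual tooling, which drives the real AWS APIs:

```python
import datetime
import random

# Sketch of a Chaos Monkey-style killer with made-up instance names.
INSTANCES = ["stream-01", "stream-02", "stream-03", "api-01", "api-02"]

def terminate(instance_id: str) -> None:
    print(f"terminating {instance_id}")  # stand-in for a real API call

def maybe_wreak_havoc(kill_probability: float = 0.1) -> None:
    """Only during business hours, occasionally kill one random instance, so
    broken auto-healing shows up while engineers are watching, not at 3AM."""
    now = datetime.datetime.now()
    business_hours = now.weekday() < 5 and 9 <= now.hour < 17
    if business_hours and random.random() < kill_probability:
        terminate(random.choice(INSTANCES))

if __name__ == "__main__":
    maybe_wreak_havoc()
```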

Comment: Re:Compared to Azure (Score 1) 94

by Just Some Guy (#47994323) Attached to: Amazon Forced To Reboot EC2 To Patch Bug In Xen

so you've never worked on vertical computer systems?

Fixed that for you. You're conflating vertically scaled monoliths with "serious systems". That's quaint. While there are certainly still use cases for that kind of bulletproof, all-your-eggs-in-one-basket architecture, it's a niche compared to the number of applications where a horizontally scaled, eventually consistent architecture is more appropriate.

The mainframe and VMS clusters I've used had databases working for years (over a decade in one case, as new hardware sequentially joined the cluster and old hardware retired).

Undoubtedly, and the distributed clusters I've used - where you can make progress as long as some reasonable subset of nodes is still alive - have similar uptimes. When was the last time you heard about Google being completely dead in the water? Their software was written with the expectation that failures happen (and at their scale, they happen a lot), so clients need to intelligently reconnect to unresponsive servers, and so on. That design seems to be working out pretty well for them.
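
The "clients reconnect intelligently" part looks roughly like this - the replica names, the 30% failure rate, and the send_request() stub are all invented for illustration:

```python
import random
import time

# Sketch of client-side failover: try replicas in random order, back off
# briefly between attempts, and only give up once every node has been tried.
REPLICAS = ["db-a.internal", "db-b.internal", "db-c.internal"]

class Unavailable(Exception):
    pass

def send_request(host: str, query: str) -> str:
    """Stand-in for a real network call that sometimes fails."""
    if random.random() < 0.3:
        raise Unavailable(host)
    return f"{host} answered {query!r}"

def resilient_query(query: str, retries_per_node: int = 2) -> str:
    hosts = random.sample(REPLICAS, len(REPLICAS))  # shuffle to spread load
    for host in hosts:
        for attempt in range(retries_per_node):
            try:
                return send_request(host, query)
            except Unavailable:
                time.sleep(0.1 * (attempt + 1))  # simple backoff
    raise RuntimeError("every replica is down; now it's a real outage")

if __name__ == "__main__":
    print(resilient_query("SELECT 1"))
```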

Comment: First, independent, now a corporate SW Architect (Score 2) 189

by Just Some Guy (#47987543) Attached to: BlackBerry Launches Square-Screened Passport Phone

That reeks of sour grapes. "I don't want to play the games I can't run! I don't want to download the apps that aren't available!"

My iPhone is **not** a toy; I use it for doing business. I have roughly a zillion apps, each for a very specific need. Only the bare basics were on the phone when I got it, and I was able to pick a great SSH client, a slick personal finance app, excellent public transit apps, a nice RPN calculator, my bank's app (so I can deposit checks by taking pictures of them), Yelp for when I want to take my team to a good dinner on business trips, a few instant messengers (because I can't get all my friends to "upgrade" to the ones I like), a document scanner with OCR, our corporate chat client, an outstanding GTD system (wassup, OmniFocus?), and a passel of games for idling away downtime at the airport.

I'm sure a BlackBerry would meet my needs if I had very few needs. But then again, I use Unix as an IDE and drive a minivan.
