Amazon's Werner Vogels on Large Scale Systems 49

ChelleChelle writes "When it comes to managing and deploying large scale systems and networks, discipline and focus matter more than specific technologies. In a conversation with ACM Queuecast host Mike Vizard, Amazon CTO Werner Vogels says the key to success is to have a 'relentless commitment to a modular computer architecture that makes it possible for the people who build the applications to also be responsible for running and deploying those systems within a common IT framework.'"
This discussion has been archived. No new comments can be posted.

Comments Filter:
  • scale? (Score:5, Informative)

    by lecithin ( 745575 ) on Monday July 24, 2006 @11:38PM (#15773790)

    "When it comes to managing and deploying large scale systems and networks, discipline and focus matter more than specific technologies."

    How about:

    When it comes to DOING ANYTHING, discipline and focus matter more than specific technologies.

    If you are at a 'small scale' environment and are limited to specific technologies, discipline and focus matter even more. Your choice is less with technologies and more with how you use them.

    "the key to success is to have a 'relentless commitment to a modular computer architecture that makes it possible for the people who build the applications to also be responsible for running and deploying those systems within a common IT framework.'"

    We have a BINGO!!!!!

  • by AuMatar ( 183847 ) on Tuesday July 25, 2006 @01:58AM (#15774182)
    I work at Amazon. While we'd certainly be allowed to use SML, Lisp, etc, nobody does. 99%+ of development is in Perl (or Mason), C++, and Java. Probably at least one extra 9 there. If someone did write a production service in something other than C++ or Java, they'd probably see a push to rewrite it immediately in something more maintainable -- something that more than a tiny percent of our devs know. What little might be done in odd languages like that is probably one-off or personal scripts used by devs and not in production. Anyone who suggested that our success is from procedural or niche languages has no idea what's really going on at the company.
  • by Anonymous Coward on Tuesday July 25, 2006 @02:18AM (#15774233)
    Under normal circumstances, at Amazon you'll have to support what you wrote. That means if your code crashes all the time, you'll get paged in the middle of the night.

    Now, even if you get rid of some incompetent programmer (say by moving him to another team), the rest of the team will still get bogged down with supporting the code he wrote. And since engineers now have to do support for the other teams using their service, their productivity eventually grinds to a halt and new development becomes extremely hard. Things will also stick around forever.

    Posted amazonymously.

  • by Stu Charlton ( 1311 ) on Tuesday July 25, 2006 @08:45AM (#15775313) Homepage
    The best resource (though getting dated) on the origins and meaning of shared-nothing vs. shared-something architecture is Greg Pfister's In Search of Clusters, 2nd ed. [amazon.com].

    There's "degenerate" shared nothing, which is what I find most people referring to today -- you have a web server farm and you don't store session state, or if you do, you "pin" it to a particular server. Or you just rely on the database. It's degenerate because, sure, it's scalable (memory isn't as directly linked to concurrent users), but it really just shifts the burden to the database, which tends to be 1 big box.
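    A toy sketch of that "pinning" idea -- route each session to one fixed web server by hashing its ID (server names and session IDs here are made up for illustration):

```python
import hashlib

def pick_server(session_id: str, servers: list) -> str:
    """Pin a session to one server by hashing its ID (illustrative sketch)."""
    digest = hashlib.md5(session_id.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

servers = ["web-01", "web-02", "web-03"]

# The same session always routes to the same box, so its state
# only needs to live on that one server -- no shared session store.
assert pick_server("sess-abc", servers) == pick_server("sess-abc", servers)
```

    Note the downside this sketch makes visible: if "web-02" dies, every session pinned to it is lost, which is exactly why people fall back on the database.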

    So the question becomes, how do you scale the database horizontally?

    In the database world, the term has become somewhat overloaded. Originally it meant physically shared disks and/or memory vs. using network interconnectivity. But with the rise of I/O shipping technologies over networks (iSCSI, high speed NFS/NAS, SAN fibre-channel), this isn't really true anymore. So now, it comes down to how your data is partitioned and how you ship a read/write function to that node. Does a node "own" its data (or a replica)? Or can any node touch any data? That's the debate.
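    A minimal sketch of the "node owns its data" side of that debate -- the partitioning key decides which node a read or write is shipped to, and only that node ever touches the row (class and key names are hypothetical):

```python
import hashlib

class Node:
    """One shared-nothing node that 'owns' a slice of the data."""
    def __init__(self, name: str):
        self.name = name
        self.rows = {}

def owner(key: str, nodes: list) -> Node:
    # Route by hashing the partitioning key: exactly one node owns each key.
    h = int(hashlib.sha1(key.encode()).hexdigest(), 16)
    return nodes[h % len(nodes)]

def write(key: str, value, nodes: list) -> None:
    owner(key, nodes).rows[key] = value  # the write "ships" to the owning node

def read(key: str, nodes: list):
    return owner(key, nodes).rows.get(key)  # reads ship to the same owner

nodes = [Node("node-%d" % i) for i in range(4)]
write("customer:42", {"name": "Ada"}, nodes)
assert read("customer:42", nodes) == {"name": "Ada"}
```

    The shared-disk alternative would let any node touch any row, trading this routing step for cross-node cache coherency traffic instead.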

    In short, it works well in some cases: read-mostly parallel queries and/or search, which is why Google's using it, or why you see it with data warehouses (Teradata, DB2 UDB). It works OK if you mostly have transactional data updates within a well-defined partitionable set of data (such as the TPC-C benchmark). It works less well when dealing with transactional updates spread across the entire data set (assuming a normal distribution), as you'll need to update replicas with a two-phase commit. The load balancing of your data across nodes also requires care in picking the appropriate partitioning key: sometimes a hash works well, sometimes range-values work well. If you need to re-partition your data for whatever reason, it's going to be a big job.
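    The hash-vs-range trade-off above can be sketched in a few lines (the split points are invented for illustration): hash spreads load evenly but scatters range scans across every node, while range keeps scans local but can concentrate hot key ranges on one node.

```python
import bisect
import hashlib

def hash_partition(key: str, n_parts: int) -> int:
    """Hash partitioning: even spread, but a range scan must hit all partitions."""
    h = int(hashlib.sha1(key.encode()).hexdigest(), 16)
    return h % n_parts

def range_partition(key: str, boundaries: list) -> int:
    """Range partitioning: a key range maps to one partition, but load can skew."""
    return bisect.bisect_right(boundaries, key)

# Hypothetical split points: partition 0 gets keys < "h",
# partition 1 gets keys < "p", partition 2 gets the rest.
boundaries = ["h", "p"]
assert range_partition("apple", boundaries) == 0
assert range_partition("melon", boundaries) == 1
assert range_partition("zebra", boundaries) == 2
assert 0 <= hash_partition("apple", 3) < 3
```

    Re-partitioning is the big job either way: changing n_parts or the boundary list moves rows between owners, which is why the key choice deserves care up front.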

    Commercially, Oracle 10g's Real Application Clusters is an example of a shared disk database, though they use an interconnect between nodes for cache coherency. Microsoft SQL Server, DB2, Teradata, MySQL, etc. are all "shared nothing".
