Comment Re:Iris (Score 1) 800
What if I said "i need to find a cheap glockenspiel"
I just tried this - it performed the right search for me - neat!
I have experience working in both academia and the private sector... I don't think it's a matter of competition; universities simply aren't pushing students: everything is spoon-fed, and there are very few lecturers who would say "go learn about X".
With an academic hat on I can see the advantage of staying with theoretical topics: teach the basics well and they apply to any language or environment. But universities are struggling to stay relevant (and afloat, in our budget-constrained times). With corporate research outstripping university research because of the decreasing academic appetite for risk, universities need to be moving with the times, not retreating into maximising student throughput and grant money - teaching essential programming job skills doesn't have to be mutually exclusive with computer science theory.
There's this bizarre focus on single languages - previously Java and now C# - and on spending a lot of time teaching them to students. That runs the risk of students learning only skin-deep how principles are applied in Java or C#. It's not really fair to compare MIT to another random university, but watching their OpenCourseWare videos it's clear how much those students are expected to figure out on their own - all universities should expect that from their students (because if you're not good at programming you shouldn't be in a programming degree, and you *definitely* shouldn't be passing it).
Students should be pushed to learn languages on their own, not spend an entire course learning a single one - by the time they graduate they'd better be familiar with a range of languages that force them to think differently about their solutions.
My primary concern is that there's very little focus on letting students gain wisdom about refactoring, good design, and so on, because they never live with their code: they don't have to deal with the crappy code they wrote six months ago, so they don't learn the benefits of doing it right the first time (or of realising you made a mistake and rewriting and refactoring it).
While that would be nice to know I don't think it's relevant to a postmortem: they described the architectural elements which encountered the failures.
FYI, though, based on what they've said today and in the past: it seems that they are using regular servers with local disks rather than a SAN, and I believe they use GNBD (rather than iSCSI) to connect the EBS server disk images to EC2 instances.
Amazon have complete isolation between Regions and good isolation between Availability Zones.
At work we'd recommend people use two cloud providers for their important services (which could be two Amazon regions, or Amazon and Rackspace) to prevent this sort of failure taking your business offline. You can't rely on any particular cloud provider to be reliable, but it's a reasonably safe bet that a selection of cloud providers won't have significant overlapping downtime.
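A minimal sketch of the failover idea above: keep an ordered list of providers and route to the first healthy one. The provider names and health flags here are illustrative only - this is not a real AWS or Rackspace API, just the decision logic.

```python
# Hypothetical failover selection: given providers in priority order,
# pick the first one that is currently healthy. In practice the health
# flags would come from monitoring checks against each provider.

def choose_provider(health):
    """health: list of (provider_name, is_healthy) tuples in priority order."""
    for name, healthy in health:
        if healthy:
            return name
    return None  # total outage across all providers

# Example: primary Amazon region is down, so the standby takes over.
status = [("aws-us-east-1", False), ("rackspace-ord", True)]
print(choose_provider(status))  # -> rackspace-ord
```

The point isn't the code, which is trivial - it's that the standby has to be on infrastructure that doesn't share the primary's failure modes.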
It's also worth pointing out that all cloud SLAs are basically useless: if Amazon falls below their advertised uptime they'll refund you some of your charges, but they'll never refund more than what you've paid them - they don't compensate you for all the money you're losing (and the AWS charges are likely pocket change compared to that).
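To make the asymmetry concrete, here's the arithmetic with made-up numbers (these are not Amazon's actual SLA terms or anyone's real bill):

```python
# Illustrative SLA-credit arithmetic with invented figures:
# the credit is capped by what you pay the provider, while
# the business loss from downtime has no such cap.

monthly_aws_bill = 2_000.00        # what you pay the provider per month
sla_credit_rate = 0.10             # assumed 10% service credit for missed uptime
revenue_lost_per_hour = 5_000.00   # assumed cost of downtime to the business
outage_hours = 8

credit = monthly_aws_bill * sla_credit_rate            # 200.0
business_loss = revenue_lost_per_hour * outage_hours   # 40000.0

print(credit, business_loss)  # the credit covers 0.5% of the loss
```

Scale the numbers however you like; as long as the credit is capped at your bill and your losses aren't, the SLA payout stays a rounding error.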
I agree completely, I was just making the point that it is a bit more complicated than X forwarding. As I said, I don't think this should be patentable -- specifically, I think it doesn't go into enough detail on the methods used to ameliorate latency and bandwidth issues.
Scientists will study your brain to learn more about your distant cousin, Man.