
Microsoft's Azure Cloud Suffers Major Downtime

New submitter dcraid writes with a quote from El Reg: "Microsoft's cloudy platform, Windows Azure, is experiencing a major outage: at the time of writing, its service management system had been down for about seven hours worldwide. A customer described the problem to The Register as an 'admin nightmare' and said they couldn't understand how such an important system could go down. 'This should never happen,' said our source. 'The system should be redundant and outages should be confined to some data centres only.'" The Azure service dashboard has regular updates on the situation. According to their update feed the situation should have been resolved a few hours ago but has instead gotten worse: "We continue to work through the issues that are blocking the restoration of service management for some customers in North Central US, South Central US and North Europe sub-regions. Further updates will be published to keep you apprised of the situation. We apologize for any inconvenience this causes our customers." To be fair, other cloud providers have had similar issues before.
This discussion has been archived. No new comments can be posted.

  • But Remember - (Score:5, Insightful)

    by Ralph Spoilsport ( 673134 ) on Wednesday February 29, 2012 @11:52AM (#39197901) Journal
    Your data's safe in the Cloud.

    Until it isn't.

    • by Anonymous Coward on Wednesday February 29, 2012 @11:53AM (#39197919)

      It's very safe though - just so safe no one can get access to it! :)

    • by tnk1 ( 899206 )

      Oh their data is safe. They just can't get to it or use it in any way. :)

      • by geekoid ( 135745 )

        Yes, they can. It's service management that's down, not data.
        Users can still access data.

    • by tripleevenfall ( 1990004 ) on Wednesday February 29, 2012 @11:53AM (#39197925)

      Nonsense, Microsoft is the name you can trust for security.

    • by masternerdguy ( 2468142 ) on Wednesday February 29, 2012 @11:58AM (#39197985)
      Also remember the cloud is just the 21st century spin of the dummy terminal-mainframe model.
      • by Barsteward ( 969998 ) on Wednesday February 29, 2012 @12:15PM (#39198183)
        Stop talking sense, it's no use here on /.
      • by icebraining ( 1313345 ) on Wednesday February 29, 2012 @12:45PM (#39198573) Homepage

        Except those dumb terminals were, well, dumb, while nowadays the "terminals" are essentially the same as the "mainframe", only slower. So you can have hybrid configurations where a dedicated machine handles the base load and spins up remote resources on demand to handle peaks. If those resources are unavailable, the dedicated machine can still do the job, just with some performance degradation.

        A good example would be a script on your laptop that started an EC2 instance running distcc to reduce your compilation time from hours to minutes. If the instance can't be loaded, you could still compile, it just takes more time.
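That degrade-gracefully pattern is easy to sketch; below is a minimal Python version with the EC2 call stubbed out (the function name and command strings are illustrative placeholders, not a real API):

```python
def build_command(start_remote, local_cmd="make -j4",
                  remote_cmd="make -j32 CC=distcc"):
    """Pick the build command, bursting to the cloud when possible.

    start_remote: callable that tries to bring up the cloud instance
    and returns True on success (in real life, an EC2 API call).
    When the instance can't be started, fall back to the slower
    local-only build instead of failing outright.
    """
    return remote_cmd if start_remote() else local_cmd

# Instance unavailable: we still build, just without distcc.
print(build_command(lambda: False))  # make -j4
```

The point is that the cloud resource is an accelerator, not a dependency: the fallback path is chosen automatically and the job still completes.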

      • by dave420 ( 699308 ) on Wednesday February 29, 2012 @12:57PM (#39198769)
        Except this time you can add as many mainframes as you want, dynamically. And access them over the internet. And serve content to millions of people over said internet. That wasn't possible with this clichéd "mainframes!!!!!1" nonsense. Yes, you are using a remote computer. That's the only similarity. The current terminals are far from dumb, and the server being connected to is vastly different to the mainframes of old.
        • Re:But Remember - (Score:5, Interesting)

          by hawguy ( 1600213 ) on Wednesday February 29, 2012 @02:12PM (#39199855)

          Except this time you can add as many mainframes as you want, dynamically. And access them over the internet. And serve content to millions of people over said internet. That wasn't possible with this clichéd "mainframes!!!!!1" nonsense. Yes, you are using a remote computer. That's the only similarity. The current terminals are far from dumb, and the server being connected to is vastly different to the mainframes of old.

          I wonder how old you are? The current "Web 2.0" paradigm reminds me very much of the old 3270 style mainframe environment.

          The 3270 terminal (well, the controller) was not exactly "dumb" - it had some base level of intelligence, it knew how to display forms, it could do input validation, etc but it didn't really do much with the data beyond sending it up to the mainframe. The mainframe on the backend took the data and actually did something with it. This is pretty much exactly how "Web 2.0" works, except instead of a 3270 terminal communicating to the mainframe over SNA, you have web browsers calling back to the web server over HTTP using Javascript.

          Yes, both the endpoints and servers have become more capable, but there are still many similarities to the old style model.

        • by lgw ( 121541 )

          Everything old is new again. The only real advance over the mainframe model here is AJAX - and until HTML 5 really matures it's still a half-assed solution, but better than a dumb terminal.

          Of course mainframes could add compute power to particular customers dynamically. Of course they could serve content to millions of customers (Visa used mainframes for transaction processing until quite recently). "Normal" HTML pages and forms are *very* similar to terminal-mainframe interaction. The older web server

        • by s.petry ( 762400 )

          The same scaling problems we have now existed back in the mainframe days, and the same solutions were present. No matter how many CPUs you throw at a developer, the applications must be developed to scale with them.

          Do some digging on DMP (Dynamic Multi-Processor) and SMP (Symmetric Multi-Processor) architectures and you will probably be amazed at how long ago these methods were being used.

          This to me is the hilarity of the people that push "Cloud". They say things like Microsoft did in their "Yeah Cloud" comm

      • Cloud services are great for hundreds of thousands of small businesses which are big enough to need centralized computing and file-storage services, but not big enough to have full-time IT staff to support it. You basically outsource your email server support to Google, your file server maintenance to Amazon, etc. and pay by the account or GB. The analogy to the mainframe/dummy terminal doesn't work for these companies because back in the day they never could've afforded a mainframe much less an IT staff t
    • Hey! After rain comes sunshine. Now they'll just have to wait for cloud formations again...

    • Re:But Remember - (Score:4, Interesting)

      by poetmatt ( 793785 ) on Wednesday February 29, 2012 @12:23PM (#39198261) Journal

      When you rely on a 3rd party for cloud storage and that 3rd party has a basically nonexistent SLA for an under 30 day outage, it becomes your own fault for making a horrible business decision.

      When you take a 3rd party cloud storage solution and implement it yourself for your enterprise, guess what? It works. And if there are issues, you know who's to blame. [] - this is but one example of many.

    • Or, as I like to phrase it.

      If all your data is in the Cloud, what happens when it rains?

  • Eggs? (Score:5, Insightful)

    by OzPeter ( 195038 ) on Wednesday February 29, 2012 @11:53AM (#39197917)


    Or how about "Never outsource your core functionality"?

    • Never outsource your core functionality

      Or more specifically, don't cloud your reasons for using it. Know what you are getting before you go there.

    • Re:Eggs? (Score:5, Insightful)

      by Sir_Sri ( 199544 ) on Wednesday February 29, 2012 @12:35PM (#39198423)

      Ah, so there's the question. How much would it cost for you to run a system with 'no' downtime? I'm at a university, some of our labs (not so much in comp sci but generally) have fairly specific requirements about say not losing power, because it would damage/destroy equipment or running experiments.

      But IT is more than just power. In almost 4 years here, we've had several days of downtime every year on our main undergraduate server (the one undergrads are supposed to use for various things, and that handles their logins and file storage), and several on the separate but arguably more important staff server, which does the same thing but also holds all of our grant applications.

      Causes of our server outages (I'm not an IT guy, this is just what they've told us that I can remember):

      Power failures. Yes, we have battery backups, but they're only good for so long, and since none of our equipment suffers permanent damage without power this isn't high priority.

      Networking. We only have two redundant pipes. That, for home use or for most businesses, is pretty good. One of our pipes goes to a host to the west, one to the east. I'm not specifically familiar with what failed that took our networking offline for 7 or 8 hours, but it affected both pipes.

      Storage: a bad RAID controller on the main fileserver. This has a few cascading effects. If you don't realize it's garbling data, it ends up distributing that garble to the backups or clones. When it crashes (which doesn't take long after the controller starts getting messy) you may have several backups that need to be repaired. We can't do much with the file system while it's being repaired or rebuilt (which, afaik, you should be able to do on most professional-grade setups, but for whatever reason our linux guys can't get it to behave). Added fun: when the system comes back up, if you tried to access your e-mail while the file system was garbled you probably still can't. And you get no error message about it. It just spits back nothing, as though you have no new mail. The system is 'up' but doesn't work, and you have to go into your directory and delete some files that most people have never heard of. It's not hard to do, but because you have no idea that there's a problem, the less technically inclined (or just ESL) people in a building full of computer scientists don't always fix it immediately. The net effect is that if the storage controller gets messed up, we're down for 3 or 4 days if not longer.

      And that's just one university department. We have a relatively decent amount of money, and several full time staff for these things. But we probably can't match any cloud service's uptime, even with 7 or 8 hours of downtime regularly, not even close. It's not a trivial calculation; even a 50 or 60 employee outfit will probably have trouble matching Amazon or Azure uptime with a full time IT guy. There's probably a crossover point where you have enough employees to support big enterprise IT infrastructure and manpower, but only support it badly (there's not enough money for proper replication or whatever), and then eventually you get big enough that you just run everything in house anyway, because there's no longer any cost advantage to hiring someone else to do it. For us, I think we have 5 or 6 IT staff; if we could toss 3 of them, plus all of their equipment, you're looking at somewhere around 350-400k/year to spend on a support contract. I'm guessing, but don't know, whether you can get a cloud service for ~20 TB of reasonably reliable file and e-mail storage for less than 350k/year from these guys.

      The big place I see people using cloud services right now (as a sort of flavour of the month) is as an augment for burst capacity needs. That's a whole other analysis.

      • Re: (Score:2, Informative)

        What ghetto assed university do you go to where they cannot get their server straight?


      • by durdur ( 252098 )

        Good points. Near-100% uptime is intrinsically hard. And if you think your admins can do it better than those at a dedicated cloud hosting provider... well, maybe they can, but there's a good chance they can't. Get big enough and you can invest in the hardware, network and support resources to do it right, but that's not cheap.

        • There are 8,760 hours in a year.
          Four nines (99.99%) of uptime is 8,759.124 hours.

          That means to achieve four nines you can only have < 1 hour (about 53 minutes) of downtime per year. This is possible.

          Microsoft being out for 7+ hours is a nightmare.
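The arithmetic above generalizes to any number of nines; a quick sanity check (plain Python, the helper name is my own):

```python
# Allowed downtime per year for N "nines" of availability.
HOURS_PER_YEAR = 8760  # non-leap year

def allowed_downtime_minutes(nines: int) -> float:
    """Minutes of downtime per year permitted at, e.g., 99.99% (nines=4)."""
    unavailability = 10 ** -nines
    return HOURS_PER_YEAR * 60 * unavailability

for n in (3, 4, 5):
    print(f"{n} nines: {allowed_downtime_minutes(n):.2f} min/year")
# 3 nines: 525.60 min/year
# 4 nines: 52.56 min/year
# 5 nines: 5.26 min/year
```

A 7-hour outage (420 minutes) burns roughly eight years' worth of a four-nines downtime budget in one go.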
    • Fortunately, there is a solution. You can have your own personal cloud []. The best part: "access speed is just as fast as a local hard drive."
    • by gweihir ( 88907 )


      Or how about "Never outsource your core functionality"?

      That would be a good engineering practice. A good business practice is to show your initiative by outsourcing to the cloud and then hope to be promoted away before anything bad happens. It really is time for managers to be liable for the mistakes they make in long-term decisions.

    • by sjames ( 1099 )

      I can see clearly now that the rain is gone...

  • One of the worst things about the cloud is that it can go wrong when someone else screws up, so you get the blame for their mistakes.
    • by gral ( 697468 ) <[kscarr73] [at] []> on Wednesday February 29, 2012 @11:56AM (#39197965) Homepage
      The companies I deal with tend to say things like, we want to go with a company like this so we can get "Support". Which usually means: so we can blame them if something goes wrong.
    • This may depend on your specific company or situation. I get the impression our upper management likes the cloud so when things go wrong they can blame someone else (even if only partially). When we were doing things in house, it didn't matter who actually screwed up, ultimately management took the blame. With the cloud, they can now point fingers at someone else and hold up a contract stating this wouldn't happen. We're a company that's still just small enough that we are pretty much always understaffed an
  • One of the selling points of using cloud services was that it would be more reliable than managing your own hardware/software. But to date, every single big player has suffered major downtime. I would be hesitant to believe the sales pitch.
    • by characterZer0 ( 138196 ) on Wednesday February 29, 2012 @12:01PM (#39198025)

      Cluster at the application level and have nodes at different providers. If your volume is too high for that, you are big enough to host your own stuff.

    • by timeOday ( 582209 ) on Wednesday February 29, 2012 @12:06PM (#39198081)
      I agree, I have nothing against the idea of cloud services, but they do need to work and reputations are based on events like this. After an outage this long, it takes a LOOONG time to earn your way back to five nines (which works out to 5.5 minutes of downtime per year).
      • by vlm ( 69642 ) on Wednesday February 29, 2012 @12:23PM (#39198263)

        After an outage this long, it takes a LOOONG time to earn your way back to five nines (which works out to 5.5 minutes of downtime per year).

        Only 84 years per the article, and growing at a rate of a year every 5 minutes.

        That's probably about how long it would take me to trust MS in an enterprise environment.

      • After an outage this long, it takes a LOOONG time to earn your way back to five nines (which works out to 5.5 minutes of downtime per year).

        I'd be surprised if Microsoft (or anybody) is actually offering five nines for uptime.

        The fine print often says "well, we don't actually promise anything, and any outage and loss is your problem".

    • by hawguy ( 1600213 ) on Wednesday February 29, 2012 @12:26PM (#39198297)

      One of the selling points of using cloud services was that it would be more reliable than managing your own hardware/software. But to date, every single big player has suffered major downtime. I would be hesitant to believe the sales pitch.

      But still, for most companies that are good candidates for cloud offerings, even 8 hours of downtime once a year is probably better than they can guarantee using their own infrastructure. Companies in this range tend to not have redundant servers, offsite backups, disaster recovery sites, etc. Larger companies that can build redundant infrastructure (and staff it properly) are probably better off staying away from the cloud since they can guarantee any level of uptime and redundancy they want to pay for.

      Of course, when a small company Admin spills a cup of coffee in the Exchange server and they are down for 5 days while building a replacement server, it doesn't make the news so you never hear about it...while when a large cloud provider has a 2 hour outage, it's all over the news.

    • by dave420 ( 699308 )
      And there's a very good chance your own hardware/software would also suffer downtime in the same period.
  • ...the British Government's Cloud service suffers the inevitable Microsoft kiss of death [].

  • 2/29/2012 (Score:5, Interesting)

    by MacBrave ( 247640 ) on Wednesday February 29, 2012 @11:57AM (#39197977) Journal

    Leap year strikes again?

    • That was my first thought.

    • Re:2/29/2012 (Score:5, Informative)

      by the_other_chewey ( 1119125 ) on Wednesday February 29, 2012 @12:50PM (#39198637)
      From the service dashboard:

      "4:00 AM UTC We have identified the root cause of this incident. It has been traced back to a cert issue triggered on 2/29/2012 GMT."

      So yeah, a leap day bug sounds probable.
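Microsoft's note above only says "a cert issue triggered on 2/29/2012"; purely as an illustration of the kind of date arithmetic that trips on leap day (not the confirmed root cause), here is the classic naive "valid for one year" computation in Python:

```python
from datetime import datetime, timedelta

issued = datetime(2012, 2, 29)

# Naively bumping the year field fails, because 2013-02-29 doesn't exist.
try:
    expiry = issued.replace(year=issued.year + 1)
except ValueError as e:
    print("naive year+1 fails:", e)  # day is out of range for month

# Adding 365 days "works" but silently lands on a different calendar day.
print(issued + timedelta(days=365))  # 2013-02-28 00:00:00
```

Either behavior - an exception on Feb 29 or a silently shifted date - is exactly the sort of once-every-four-years bug that survives testing.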
  • by Pollux ( 102520 ) <speter&tedata,net,eg> on Wednesday February 29, 2012 @11:59AM (#39198001) Journal

    Yay, cloud!

  • by Sqr(twg) ( 2126054 ) on Wednesday February 29, 2012 @12:05PM (#39198069)

    This is not helping, guys!

  • by afidel ( 530433 )
    Wait, so Azure isn't down, just the admin functionality? Who gives a crap. Man, I can't spin up a new VM for 8 hours, boo hoo. This isn't an admin nightmare; the VMs being down for 8 hours would absolutely be a nightmare, but the only admins this is a nightmare for are the poor guys working for MS trying to fix whatever the code monkeys screwed up =)
    • Given that one of the major selling points of 'cloud' is the ability to swiftly spin up(and down) instances as you do or don't require them, that's a bigger deal than it might otherwise be.

      If you are doing a BYO Server thing, or a conventional static-sized hosting package, and buying to fit largely static demand, you may never have touched the power button after you first shoved it in the rack and fired it up. However, if you are doing the cloud thing and not spinning stuff up and down pretty frequently,
    • We have a vendor that provides software distribution through Azure. It is completely down; no software and not even the web-based administration panel.

      So it isn't just the ability to fire up new VMs, but (from my experience) seems to be a complete platform failure for some customers.

    • I concur with what others have said. There are numerous services provided by Azure that are completely unreachable, and have been for longer than seven hours.

  • last time (Score:5, Informative)

    by phantomfive ( 622387 ) on Wednesday February 29, 2012 @12:06PM (#39198075) Journal
    Last time a Microsoft cloud product went down, users sustained real data loss. Of course, Microsoft claimed it couldn't happen with Azure [].
  • Like Google does when something goes wrong: just explain how you're going to change things and why it happened, and it will all be OK

  • Credibility (Score:2, Interesting)

    by hism ( 561757 )

    At this point, the best way to keep their credibility from further deteriorating is to provide good reports on what is going on. E.g., not like PSN, more like Amazon []. Currently that Azure dashboard doesn't even load for me... has it been slashdotted or something?

    As an aside: whenever a cloud system goes down, people come out to rag on the reliability of the cloud. While I'm also annoyed by the marketing guys throwing around "just put it in the cloud!!" as much as anyone else, and agree some applications

  • It seems like even the biggest guys can't make it work reliably, and presumably given the high profile of these services, they're not afraid to throw money and smart people at these problems.

    • by medcalf ( 68293 )
      Well, the real problem is that you can never eliminate human error. When combined with the difficulties and costs of maintaining a proper test environment (full duplicate of production, essentially), the odds of something going wrong are always going to be non-zero. Then when you add the interconnectivity that clouds require on top of that, the odds that that something that goes wrong will make everything go wrong all at once becomes non-zero as well. So failure modes for well-designed cloud services tend t
  • by fuzzyfuzzyfungus ( 1223518 ) on Wednesday February 29, 2012 @12:11PM (#39198133) Journal
    Since the image that "Azure" and "Cloud" conjures up is more "sky" than "cloud", it would be my suggestion that Microsoft simply register and set up an Azure service status monitor/report page there.

    They could have an adorable cartoon chicken that, when the system is working normally, runs around scratching and pecking (speed dependent on load). When downtime occurs, it would begin squawking about how the sky is falling. What could make failure more endearing?

    Just to add that Microsoft touch, they could do the entire thing as a Microsoft Agent ActiveX control []!
  • by Howard Beale ( 92386 ) on Wednesday February 29, 2012 @12:13PM (#39198155)
    Well...maybe not right now...
  • It's the Blue Cloud of Death!
  • When it rains, it pours...
  • Ah, the cloud... (Score:5, Insightful)

    by ErichTheRed ( 39327 ) on Wednesday February 29, 2012 @12:25PM (#39198289)

    It's funny how those of us who bring up issues of data security and service resiliency are dismissed as just trying to protect our jobs.

    Like so many other things, the actual technical underpinnings of "the cloud" are great, and have been standard fare for years. Virtual machines + flexible networking are a godsend for systems guys tasked with getting capacity for a new project up and going yesterday. I love being able to build and rip down entire test environments just to try something out...that used to mean a rack of physical servers, switchgear, etc. tied up while it was being used. That's why everyone's slowly coming around to the "private/hybrid cloud" model, which is really just code for "VMs + network capacity + something to tie it all together + maybe some external hosting".

    The problem is that "the cloud" is very badly misunderstood. As soon as a CIO sees "virtual, on-demand capacity without those pesky physical on-site machines and IT staff, for a fixed cost per compute-hour", everything else takes a back seat. Then, it's "why do we need IT staff on-site, everything's being taken care of in the cloud." Public clouds like Amazon or Azure are great for startups who can't really afford their own data centers, or even for bigger businesses to offload some of the nonessential stuff. When you start looking at hosting everything, though, the marketing hype of the cloud sometimes distracts people from realities that they have to contend with.

    Also, I'm not saying that businesses who go the private cloud or traditional hosting/outsourcing route won't have downtime -- they will. However, having onsite staff and infrastructure means you can work those staff until they fix the problem, and you have control over them. Most sane outsourcing contracts have SLAs in them stating that the vendor will expend X amount of effort to fix your problems. Cloud provider agreements, unless specifically mentioned otherwise, are "as is, where is, best effort restoration with no warranty." OK, maybe some providers will give you an SLA, but all that does is buy you free service at a later date if they violate it; it doesn't bring your application back online. You still have no choice but to sit and wait around for the provider to fix whatever's wrong...just ask Amazon EC2 customers about what happened during their last outage...

    Companies need to draw sane boundaries around hosted systems, and decide what is critical and what can be offloaded. Do I care about a set of development/test machines that get used once a month? Probably a lot less than the critical database/application servers that run my core business. Comfort level, cost per minute of downtime vs. cost of dedicated resources and other factors need to be carefully considered before jumping into the cloud with both feet.

    • by geekoid ( 135745 )

      Just so you know, the data is still accessible in Azure; it's the management console that's down. That's still bad, but let's deal with the actual facts.

      A) the cloud doesn't need to mean offsite. It often is, but the philosophy can be brought in house.
      B) redundancy.

      Companies should completely adopt the cloud philosophy, but keep onsite system redundancy, which is still cheaper and easier than current non-cloud solutions.

      The desktops should just be cloud machines. Note, I don't say dumb terminals bacau

  • Advice (Score:5, Informative)

    by DickBreath ( 207180 ) on Wednesday February 29, 2012 @12:26PM (#39198301) Homepage
    Use the MCSE mantra:
    1. Perform virus scan.
    2. If that doesn't work, find a different program that will display a reassuring green graphic.
    3. If that doesn't work, reboot.
    4. If that doesn't work, reformat, reinstall.
    5. If that doesn't work, GOTO 1.

    Microsoft wouldn't know anything about data center running if it were chase aftering them at full speedo.

    Google this: "Microsoft Sidekick / Danger" [] [] []
  • if so, that's the breaks. If not, then there should be contractual SLAs and penalties involved.

    • ...there should be contractual SLAs and penalties involved

      Do you really think Microsoft would put a gun to their own head like that, assuming they learned from their past?

      I think they provide the service "As-is and with best-effort service recovery". Read the fine print; I'm sure you'll find something like that.

  • I had an outage on Salesforce for 1 week and they did absolutely nothing regarding giving me any free account time or anything except "Sorry".
    Their explanation was that a massive multi-terabyte log file had to be processed, since the corruption they had extended to their backups.
    Shouldn't ever happen.
    This was last Autumn.
    All boy scouts should take away this: Cloud promises are made to be broken.

    • by gweihir ( 88907 )

      Given how boastful and grand these claims are, this really is not a surprise to anybody competent. Complex systems fail. They fail in complex ways. Redundancy helps in some ways, but makes things worse in others, by increasing complexity.

      Also keep in mind that when outsourcing IT, the IT people suddenly have different business goals than you do. As long as they stay afloat, they do not really care whether you go under. In-house IT is different. They are sitting in the same boat. And any sane management wil

  • by gmuslera ( 3436 ) * on Wednesday February 29, 2012 @12:32PM (#39198375) Homepage Journal
    Put your servers in the Azure cloud to have an uptime of 9.999999999%
  • Clouds are, in a sense, all about using tight control to gain efficiency. Control requires centralization. But this introduces failure modes that are catastrophic: rather than degrading performance overall or seeing point failures, everything is perfect until everything is gone. Resiliency — the ability to survive failures and still function to some degree — requires decentralization both of infrastructure and of decision making power. So attempts to become more efficient, past a certain point,
  • Cloud ain't so bad (Score:5, Insightful)

    by Martz ( 861209 ) on Wednesday February 29, 2012 @12:36PM (#39198431)

    I wrote a comment on slashdot a while back which questioned the sensibleness of running services in the cloud. I used to be a sceptic.

    Since then I've used Rackspace Cloud and found that it's actually a very good idea, for certain things.

    The benefits of using a cloud system are scalability and no commitment - it's not about reliability or higher availability - but you do get a little win in those areas.

    To give some examples, I was recently able to play around with mysql clustering. I followed a mysql clustering howto [] and played around with it, and set up a mysql cluster with load balancers. Once I was finished geeking about, I saved the VMs to the file storage and deleted the cloud instances. Total cost: £/$2-3 maximum. I hadn't previously been able to do this; I would have had to rent a dedicated server, which would serve websites, email etc. I couldn't really use the dedicated server to play with new technology in case it had a negative impact on the live systems. I did have a development box for a while, but it essentially doubled my costs without making any more money, just offering some protection.

    Now I have staging/development instances in the cloud - and no commitments to them - I don't have to worry about a £250 monthly bill or sign a 12 month contract to get my own box. I can fire up some resources, use them, and throw it away when I'm done.

    The upshot is that I can play around with other people's cool open source software without risk of buggering something up on my live box, and the costs are insignificant since I'm only renting per hour. I can try something new; if it works, great - it might go/stay in production. If not, delete it and move on to the next cool thing.

    If I need high availability, I would use Rackspace, Amazon, Azure, and I'd ensure that I have a plan to deal with a major outage with any of the providers. Each have APIs, so in theory I could create new instances automagically and fail over between different cloud providers with a quick DNS change, while keeping costs low.
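The failover idea sketched here - health-check each provider, then repoint DNS at a survivor - might look something like this (Python; the provider names and the `healthy` callable are stand-ins, and the actual DNS update is provider-specific, so it's omitted):

```python
def pick_provider(providers, healthy):
    """Return the first provider whose health check passes, else None.

    providers: ordered preference list of provider names.
    healthy: callable mapping a name to True/False (in practice, an
    HTTP check against a heartbeat URL at each provider).
    The caller would then update DNS to point at the chosen provider.
    """
    for name in providers:
        if healthy(name):
            return name
    return None

# Stubbed health checks: Azure down, the others up.
status = {"azure": False, "rackspace": True, "ec2": True}
print(pick_provider(["azure", "rackspace", "ec2"], status.get))  # rackspace
```

Note that DNS TTLs put a floor on how fast this failover actually propagates to clients, which is worth factoring into any outage plan.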

    To recap, the cloud isn't all about high availability - no matter what the marketing says. It's about scaling systems and running resources for small amounts of time, and is perfectly suited to services which have peak demand (ticket sales for example).

    • by PPH ( 736903 )

      Correct about the availability. Cloud services are a way to rent processing/storage resources really cheaply.

      If I need high availability, I would use Rackspace, Amazon, Azure, and I'd ensure that I have a plan to deal with a major outage with any of the providers. Each have APIs,

      Unless the API is proprietary (or just non standard) and the cloud operator introduces some systemic fault* into their services. What then?

      Apps built to target LAMP services (for example) don't necessarily suffer from these problems, because not every provider installs the same patches at the same time (or even runs the same configuration). So you gain reliability from a sort of ge

  • I wonder if this is what is causing the Daily Show to post a maintenance sign on login?
  • It's called Fog.

  • by gweihir ( 88907 ) on Wednesday February 29, 2012 @01:06PM (#39198897)

    People who believe the cloud is not at risk of downtime are just stupid and deserve exactly what they get. The cloud not only has the normal risks any comparable infrastructure has, but also suffers additional risks from complex network connectivity, complex usage patterns and untried system administration patterns.

    People that still think this now are not only stupid but unwilling to learn, as the Amazon outage last year clearly showed the risks. In addition, Amazon is very likely more competent than Microsoft at this by any sane metric.

  • I could fix this with a $35 payment to someone?
