
Red Hat Suffers Massive Data Center Network Outage

An anonymous reader writes: According to multiple reports on Twitter, the Fedora Infrastructure Status page, and the #fedora-admin Freenode IRC channel, Red Hat is suffering a massive network outage at their primary data center. Details are sketchy at this point, but it looks to be impacting the Red Hat Customer Portal, as well as all their repositories (including Fedora, EPEL, and Copr); their public build system, Koji; and a whole host of other popular services. There is no ETA for restoration of services at this point.
This discussion has been archived. No new comments can be posted.

  • by TechyImmigrant ( 175943 ) on Friday April 21, 2017 @12:27PM (#54276989) Homepage Journal

    'Nuff said.

    • by Anonymous Coward on Friday April 21, 2017 @12:34PM (#54277041)

      That's what you get from running systemd in production.

      • by Anonymous Coward

        German agent Lennart Poettering

      • by JWW ( 79176 )

        Damn, I wish I had some mod points!!

        Mod this up!!!!

      • Re: (Score:3, Insightful)

        I don't understand all this systemd bashing. I've been working with it for a few months on CentOS 7, and all I can say is that it is easy to work with and until now, it has proven to be very stable. Never had a crash related to systemd.
        • I don't understand all this systemd bashing.

          A lot of people don't like change.

          There is one gripe, though, that I can sympathize with, and that's how systemd is expanding to encompass much more than is readily understandable. There may be perfectly good reasons for the expansion, but they're not readily apparent.

          That said, I've been using systemd ever since Kubuntu switched to it, and I haven't had any problems with it. But then, I haven't tried a recursive rm recently.

          • Re:DeadHat !! (Score:4, Insightful)

            by CanadianMacFan ( 1900244 ) on Friday April 21, 2017 @01:26PM (#54277527)

            The problem is that systemd keeps on expanding, and that goes against the UNIX/Linux philosophy where each piece is kept small in scope and does one thing well. systemd keeps integrating applications that have worked perfectly well for a long time, all to suit the philosophy of one person who isn't really well respected in some areas of Linux development.

            • Re:DeadHat !! (Score:5, Insightful)

              by F.Ultra ( 1673484 ) on Friday April 21, 2017 @01:32PM (#54277583)
              Not exactly true. The systemd you are talking about (the one encompassing applications) is systemd the project, not systemd the PID 1 (init) application. Each new "application integration" is done via a separate application, so the UNIX philosophy still stands. And these are not done to match the philosophy of one person (the systemd project has lots of developers these days) but to present a common plumbing layer, mostly aimed at container developers at the moment, i.e. a common set of tools that work and look the same everywhere.
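
              For what it's worth, you can see that split on any systemd-based box: the helpers ship as separate executables and run as separate processes, not inside PID 1. A rough sketch (paths are the usual Fedora/RHEL ones; adjust for your distro, and not every helper is installed or enabled everywhere):

                ps -o pid,comm -p 1                       # PID 1 is just the service manager
                ls /usr/lib/systemd/systemd-*             # logind, journald, udevd, ... shipped as separate binaries
                systemctl status systemd-logind.service   # each one runs as its own process under its own unit
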
              • by tepples ( 727027 )

                But do these "separate application[s]" break if pid 1 is something other than systemd?

                • But do these "separate application[s]" break if pid 1 is something other than systemd?

                  Depends on the applications.

                  Boot loader? NTP clients? These aren't deeply interdependent.
                  You could very well use those and then run OpenRC if you want.
                  They are only "handled by systemd" in the sense that they are programs now developed by people on the systemd *project* team.
                  At most, systemd might leverage the boot loader in the sense that it can more easily send parameters to it for the next boot.

                  Other daemons, though, might be much more tightly interlinked with the job of systemd itself (the daemon running as PID 1).
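
                  A small, concrete cousin of that boot-loader point, hedged a bit (it needs an EFI system and cooperating firmware; the "parameter" is just an EFI variable set for the next boot):

                    bootctl status                        # inspect the installed boot loader and the loader-related EFI variables
                    systemctl reboot --firmware-setup     # ask the firmware to drop into its setup screen on the next boot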

                  • Linux kernel, for example, offers cgroup isolation, namespaces, etc. [...] The older bash-script-based mess of code that used to predate systemd has absolutely zero ability to leverage them.

                    This is ignorant at best. Both cgroups and containers can be created and manipulated from the command line. Nobody bothered to do this as a PoC before going off on a tear and creating systemd.

                    • Both cgroups and containers can be created and manipulated from the command line. Nobody bothered to do this as a PoC before going off on a tear and creating systemd.

                      There were command-line demos of cgroups/namespaces (there's a video of devs launching "make -j 255" while the desktop stays responsive).

                      What nobody bothered to do is rewrite the mass of bash scripts to take advantage of it.
                      (Especially since nearly every distro seems to write its own script madness to handle starting/stopping jobs.)

                      You would either need every single distro to rewrite piles of in-house shell code to leverage the newer kernel functionality, or you need a few standard tools that do it once for everyone.
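
                      To be fair to the GP, the raw interfaces really are usable by hand; the pain is doing that consistently for every service on every distro. A minimal sketch, assuming a cgroup v2 mount at /sys/fs/cgroup with the cpu controller enabled, util-linux's unshare, and a root shell:

                        mkdir /sys/fs/cgroup/demo
                        echo "50000 100000" > /sys/fs/cgroup/demo/cpu.max   # cap the group at half a CPU
                        echo $$ > /sys/fs/cgroup/demo/cgroup.procs          # move this shell into the group
                        unshare --pid --fork --mount-proc /bin/bash         # throwaway PID + mount namespace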

                • Of course not. There might, however, be some that require a process on the other side of a D-Bus interface speaking the same events and namespace, but that is the hardest dependency any of these applications might have.
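
                  If you want to see what that kind of dependency looks like in practice: the consumer just talks to a well-known bus name, so anything implementing the same interface would satisfy it. A rough sketch using hostnamed as the example (busctl ships with systemd, but plain dbus-send would work just as well):

                    busctl introspect org.freedesktop.hostname1 /org/freedesktop/hostname1
                    busctl get-property org.freedesktop.hostname1 /org/freedesktop/hostname1 org.freedesktop.hostname1 Hostname
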
            • Yeah, right, the Linux philosophy. For which Linux itself is a prime example, of course, since it is kept so small in scale and fulfills only its core task of being a kernel (which is elementary resource management, in case you were wondering).
        • A few whole months, eh? Mind if we call you grandpa?

          • Yeah well, more like nearly a year, running 24/7 infrastructure on it, and it has been as reliable as a Swiss clock.

            Maybe I should call you grandpa for refusing to move on from init.d, which dates back to the 70s.
        • Re: (Score:1, Insightful)

          by Anonymous Coward

          Systemd is great, right up until something goes wrong or you find something that doesn't fit the World According to Poettering. You'll never be able to figure out why things aren't working, and you'll never be able to integrate whatever it is you want to do into the new systemd world. Ever wonder why they keep adding more and more junk to systemd? Because it doesn't play nice with anything that isn't itself, so they can't use things that already exist and already work. Instead everything has to be thrown into systemd itself.

        • It isn't just about stability. I run systemd in production on hundreds of servers, and have no stability or reliability problems with it.

          That doesn't mean I actually like it.

          The command naming is terrible; "systemctl" and "journalctl" do not roll off the keyboard for a touch typist. The logging is a mess. I could go on...

          ...and it just happened one day. There was little visible discussion, and little benefit outside the desktop.
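
          (The naming gripe at least has a trivial workaround in your shell rc, for whatever that's worth:)

            alias sc='systemctl'
            alias jc='journalctl'
            # e.g.  sc status sshd    /    jc -u sshd -b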

        • by sjames ( 1099 )

          This is a YMMV situation. Systemd seems to work OK these days as long as you aren't doing anything unusual. If you are, it can be damned near impossible to get it to do the right thing.

          • I don't know about that; what I see is a lot of hand-waving.
            How about a for-instance? Provide examples.

            • by sjames ( 1099 )

              It absolutely positively will not mount btrfs in degraded mode. It drops to the emergency shell.

              Same deal for RAID.
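
              For anyone hitting the same wall, the usual workaround (not a defence of the behaviour) is to pass the degraded option explicitly so the mount stops waiting for the missing device. A sketch, assuming the filesystem lives on /dev/sdb1; details vary by distro, and a degraded root may still mean fighting the initramfs:

                mount -o degraded /dev/sdb1 /mnt     # one-off, e.g. from the emergency shell
                # for the root fs: add "degraded" to its options in /etc/fstab
                # and rootflags=degraded to the kernel command line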

              • It absolutely positively will not mount btrfs in degraded mode. It drops to the emergency shell.

                That's because btrfs has some self-respect.

        • Keep in mind that Slashdot is no longer a place where the majority are competent. Over the past decade or more it has increasingly become a place for those who think they are experts because they have fixed all their friends' computers (each time reinstalling Windows without having a clue as to what the real problem was or how to fix it). For example, there was a guy recently who was convinced that, since the systemd logs said "initiating shutdown", it was systemd that was causing his system to shut down.
      • Re: (Score:2, Funny)

        by MSG ( 12810 )

        Who rated this insightful? Humorous, maybe, but insightful? No. Come on, moderators. Unless AC is a Red Hat employee and knows what caused the outage, that's not what "insightful" means.

      • Openstack, more probably.

    • if they were running distributed... oh, wait.

      guess it was a Beowulf cluster of Clusters.

    • It's DeadRat. You must be one of those multiply-pierced twats who thinks booting off an umbongo live CD makes you some kind of guru.

      • It's DeadRat. You must be one of those multiply-pierced twats who thinks booting off an umbongo live CD makes you some kind of guru.

        It's my domain name you insensitive clod!

  • But seriously, we need to find out what happened. I hope it's a hardware issue and not software.

    • by Anonymous Coward

      One of the read heads on the primary tape feeder wasn't properly bathed in mineral oils this morn.

    • Just like the recent Amazon crash, I would bet the root cause is human.
  • $BadThing happened to $company||$person I like, therefore $conspiracy!!
  • They should have used Ubuntu.
  • by fahrbot-bot ( 874524 ) on Friday April 21, 2017 @01:24PM (#54277501)

    ... systemctl restart datacenter

    (Okay, maybe only if systemd ran as PID 2 ...)

  • Uh oh... sounds like the datacenter team accidentally deployed the systemd package we cooked up for the North Korean missile program.
  • Yes, I agree, it is BIG NEWS when a Linux distro has any kind of problem because it almost never happens. MS outages, not so much.

    San Francisco has a massive network outage - I must've missed your post about that.

    Come back after you grow up @msmash.

    • Yes, I agree, it is BIG NEWS when a Linux distro has any kind of problem because it almost never happens. MS outages, not so much.

      San Francisco has a massive network outage - I must've missed your post about that.

      Come back after you grow up @msmash.

      https://www.google.com/amp/s/h... [google.com]

    • Well, Red Hat is buying into a lot of crap that is only marginally Linux, and is far from what you could call obviously reliable. Like OpenStack, Ceph, Gluster, that kind of thing. When you build your house high enough on a foundation of shit, it eventually sinks into it. Like, OpenStack actually depends on MySQL for distributed consistency; how far do you think that frisbee will fly?

  • They should have been running Oracle Unbreakable Linux and none of this would have happened!

    *snicker*
