Apparently (and this is my understanding with no inside knowledge, so take it with a grain of salt), they don't have live video telemetry from the stage during descent. They have a variety of engineering data, but to get decent video, they need to get the stage back. Given that it blew up, I'm guessing that's unlikely. Last time, they had some spotty video relayed off a tracking aircraft, but they had to wait for the aircraft to land before anyone saw it. Maybe the same will happen here? Also, as a company, I have a suspicion that they aren't thrilled with releasing videos of their rockets exploding. While a lot of people here understand that that's likely inevitable, given how complex a task they're trying to achieve, the general public probably won't....
What server do you have that makes it through the BIOS so fast that the difference between systemd and SysV is meaningful?
And if your server is so critical that the ~2 minute difference (on a good day) in boot times is a serious business issue, you should really consider running redundant servers anyway since there are a variety of other failures that a fast boot time isn't going to help....
Partially guilty. I certainly believe that a 2 minute boot time is acceptable *to me*, but I'm not irritated by people who want fast boot. I'm irritated that their zeal for fast boot resulted in an extremely poorly engineered piece of software that breaks the ability of some of my machines to boot *at all*.
- Binary Logs: Sorry, but there is no advantage to not being able to easily look at a log file.
Technically, there is an easy way to look at the logs, it just requires a utility that is not cat or grep. But I get your meaning. Binary logs are certainly not my preference, but for tools that need to interact with and parse the logs, it is a necessity. There are ways to get your text logs back, though, and some distributions configure it this way by default.
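For what it's worth, the usual way distributions keep flat text logs around is to have journald forward everything to a classic syslog daemon. A minimal sketch (assuming rsyslog or similar is actually installed and running):

```ini
# /etc/systemd/journald.conf -- hand every message to the local syslog
# daemon so the familiar text files in /var/log keep getting written.
[Journal]
ForwardToSyslog=yes
```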
...which will almost certainly break if the logs get corrupted at all. The great thing about text logs is that my brain can figure out what values are probably garbage and which ones are the remnants of the (now corrupt) log a lot better than a computer can. There are certainly ways of hardening binary logs, but why? syslog is a PITA to parse by machine not because it's text-only, but because the formatting is from the 1960's. I'm not arguing it has to be syslog format, just text.
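To be concrete about why the old format is painful for machines: the classic timestamp carries no year and no timezone, and everything after the tag is freeform. A rough sketch of parsing one traditional syslog line (the regex is my own approximation, not a spec-complete parser):

```python
import re

# Traditional syslog line: "MMM DD HH:MM:SS host tag[pid]: message"
# Note what's missing: the year and the timezone -- a parser has to guess both.
SYSLOG_RE = re.compile(
    r"(?P<month>\w{3})\s+(?P<day>\d{1,2})\s+(?P<time>[\d:]{8})\s+"
    r"(?P<host>\S+)\s+(?P<tag>[\w./-]+)(?:\[(?P<pid>\d+)\])?:\s*(?P<msg>.*)"
)

def parse_syslog_line(line):
    """Return a dict of fields, or None if the line doesn't match."""
    m = SYSLOG_RE.match(line)
    return m.groupdict() if m else None

fields = parse_syslog_line("Jan  5 04:12:33 myhost sshd[2211]: Failed password for root")
```

The nice part, as the parent says, is that even when a line is half-garbage, a human can still eyeball it; the parser just returns None and moves on.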
- Failure to Log to the Console:
It is in the log.
...until you don't GET the log because it wasn't *WRITTEN*, which probably hasn't happened to you because you're probably booting a fairly standard configuration. But it took me 3 f'ing hours to debug the fact that my permissions were wrong on my NFS server when I was net-booting because I got no shell and no log (because the permissions were wrong). That would've taken 20 seconds with logging to the console, and knowing more about systemd wouldn't have fixed that.
- Failure to Drop to a Shell When It Breaks:
Once again, it is in the log. I'm not sure why being dropped to a busybox shell gives you a much better way to debug than just reading the log messages. If you want to test individual systemd services, you can do that with systemctl start/stop/etc.
Because sometimes it's a lot nicer to try to reproduce the failure right then and not have to reboot the system in order to get to the log. Remember, no shell means you can't read the log *at all* without rebooting.
However, if you use that command as root, it tells you not to run it as root. If you do it as a normal user, it doesn't have permission to read all the files to tell you what it's doing.
If that is true, it is a bug.
Maybe, but systemd --test --system prints "Don't run test mode as root" on OpenSuSE 12.3 and Fedora 20.... While trying a few other distros to post this comment, I ran across "systemd-analyze --order --system plot" (which wasn't in the original distro that I was debugging, but it was ARM so maybe it was weird), which appears to be able to generate a graph in SVG, which, while huge, appears to help me a lot.
- Races: I no longer have any idea what order things are starting in.
If a dependency issue is causing your boot to randomly fail, then your systemd configuration is horribly broken.
...until something doesn't work. Yeah, the config I've got *IS* horribly broken. I'd like there to be some reasonable way for me to figure out how to fix it. And sometimes I don't *PREFER* an order, sometimes I *REQUIRE* an order. On my desktop? Couldn't care less because it "Just Works(tm)" and I rarely change the distribution's defaults. On my servers, it's more complex. Maybe "prefer" in this case means it will always honor that?
Have you ever really had to deal with the complexities of the SysV init system? It is not pretty. "Jiggling the cord" until it works is about all you can reasonably do with SysV when you have a complex init.
Yes. I obviously have no idea what your experience is, and maybe you had a horrible one with SysV. I've had my share of battles with it and I'd never argue it's perfect, but it's a lot easier to debug than systemd. When everything starts in serial order....
It doesn't "mount filesystems" at a single time during boot the way sysV does. You can move service files to other filesystems, but then you need to tell the service config that you did this so that it can bring up that filesystem before starting the service. This is actually a lot easier to do with systemd than with sysV.
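For instance, a unit can declare the mounts it needs directly and systemd will order things for you. A sketch (the service and path names here are made up for illustration):

```ini
# /etc/systemd/system/myapp.service (hypothetical service)
[Unit]
Description=Example app whose data lives on a separate filesystem
# Automatically pulls in and orders this unit after the mount for /srv/appdata:
RequiresMountsFor=/srv/appdata

[Service]
ExecStart=/usr/bin/myapp --data /srv/appdata
```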
In SysV, almost *nothing* happened before filesystems, so I rarely had to touch *anything* in
Well, this is more of a rant than a series of legitimate complaints. You can't expect the systemd people to understand your issues if you haven't even taken the time to learn how systemd works first.
Yeah, it *IS* a rant. But that doesn't mean that the stuff I'm complaining about isn't a real problem. I've always hated the "let's make it complicated and blame the user if they can't figure it out" philosophy. I will waste hours figuring it out. Why is this a waste? Because in the end, I've gained *no* capability that I didn't already have except faster boots, and since I boot my servers less than once a month, that gains me nothing. It's awesome that Linux is gaining share on the desktop. udev, for all its warts, was *sorely* needed since I don't wanna run "mkdev" every time I plug in a USB device (etc). However, until systemd, the changes for the desktop played nicely with (and in most cases added capability to) the server use cases. systemd just doesn't.
Oh, I knew I'd think of something else. systemd even managed to break shutdown on a netboot'd system since, apparently, its weird set of dependencies doesn't check to see if the root filesystem is mounted over the network before shutting down the network.... Again, obviously distribution dependent, so if you know of one that can actually deal correctly with an NFS root, let me know...
What's wrong with it? Here's my starting list and I'm sure I'll think of more....
- Binary Logs: Sorry, but there is no advantage to not being able to easily look at a log file.
- Failure to Log to the Console: There is nothing more frustrating than watching 5 screens of "Failed, use journalctl to blah, blah, blah..." come by when you know that your root filesystem isn't mounted read/write. There went *ALL* your debug information.
- Failure to Drop to a Shell When It Breaks: If my boot is broken, I want a shell. Not a hang. There's a way to force it to go to a shell, but that's before it does *anything*, so you don't get to debug the failure, you get to guess what the failure might be and see if you can debug *that*.
- No way to see WTF it's doing: There's supposedly a command to make it tell you what order (and presumably what'll happen in parallel) things are going to start in. However, if you use that command as root, it tells you not to run it as root. If you do it as a normal user, it doesn't have permission to read all the files to tell you what it's doing.
- Races: I no longer have any idea what order things are starting in. I've had a cluster where everything worked fine. Until a week and a few reboots later, when it occasionally failed. Don't even start to tell me that "I must have my dependencies wrong". I *KNOW* they're wrong. But I have no tools to help me figure out what "right" is. Plus, have you looked at how many unit files systemd starts on a normal system? I can't hold that much of a graph in my head. With SysV init, unless I turned on some weird parallel mode, everything starts in the same order every time.
- Complexity: I'm not a professional sysadmin. I'm a developer who has to maintain development systems (as well as personal systems) part time. If I worked with systemd every day, I'd probably be able to figure out ways to make it work for me. But I don't. SysV is just shell scripts. I *DO* deal with *those* every day so it's pretty easy to debug.
- Complexity, Part 2: The previous version of init essentially had no bugs. Ok, I'm sure that's not really true, but they sure didn't surface very often. Since the results of your Process #1 dumping core are catastrophic (ie, a kernel panic), ideally that process should do as little as possible. That is *CLEARLY* not the design philosophy of systemd. Further, it consumes a decent amount of RAM, and the more RAM you consume, the more likely (statistically) you are to hit a memory error.
- YACL (Yet Another Config Language): Ok, so this is really a minor complaint but I get to learn yet another way of writing config files.
- Filesystems: SysV init tended to mount local filesystems *very* early in boot (some of that broke when udev got involved, but you could usually hack around that) and network filesystems not long after. I'm not entirely sure where systemd mounts filesystems, but it breaks *HORRIBLY* if you move some of the files needed by a service onto a filesystem that's not a "normal" filesystem. I'm sure there's some way to set all the dependencies to make that work, too, but see above, I have no f'ing way of figuring out what should depend on what.
From all outward appearances, the developers have *no* interest in fixing much of any of those complaints. The whole "debug on the kernel command line" fiasco is a pretty clear indication that they "don't play well with others". In the end, I'll see what Slackware has or maybe move (back) to the BSDs.
The "great job" depends on whether you have Business or Residential service. Apparently, they're doing well on the Residential side. On the Business side (which I have), I just called to see when I can get IPv6 and their answer was "when we run out of IPv4, all our new customers will get IPv6 and the old customers will be on IPv4". Um, gee, thanks.... I'm assuming this person was misinformed, but the fact remains that my neighbor with residential service can get IPv6 and I, with business service paying quite a bit more, can't..... I hope they get their act together soon!
I think the difference is that the $10k/pound is likely the cost for launching a satellite. The 5000 pounds that NASA is launching is inside a pressurized container (according to Wikipedia, the dry mass of a Dragon is roughly 9300 pounds) so the total mass that NASA is paying for is probably closer to 15,000 pounds per launch. Plus they're getting back about 3500 pounds from orbit, which is also good because it allows for return of experiments (Soyuz can return a little, but not anywhere near that much). Also, I seem to remember that the $10k/pound figure was for the Space Shuttle, not Falcon and that article probably hasn't been updated in awhile.
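The arithmetic is easy to sanity-check (all figures are the approximate ones cited in the comment, not sourced prices):

```python
# Approximate figures from the comment above (Dragon dry mass per Wikipedia):
dragon_dry_lb = 9300    # Dragon capsule dry mass
cargo_up_lb = 5000      # pressurized cargo NASA pays to launch
cargo_down_lb = 3500    # downmass returned from orbit

# Mass actually flown uphill -- the basis for "closer to 15,000 pounds per launch"
total_up_lb = dragon_dry_lb + cargo_up_lb
```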
In the end, by the time you include the various payload prep and recovery services, NASA is probably getting quite a good deal from SpaceX. The reverse is also true since NASA signing the contract gave other SpaceX customers confidence in their ability to get the job done and gave SpaceX an assured funding source to continue development. These are all good things!
It gets even more "fun" if you're trying to netboot since you never get to see any of the output. When I whined about this problem on Slashdot before, someone suggested adding a parameter to drop to a shell. Which is great, only then systemd didn't get far enough to actually *hit* the problem so I could debug it. So then I tried the flag to systemd that is supposed to get it to tell you what order stuff starts in, but it won't let you run that as root.... Googling got me nowhere. Eventually, I discovered that DBus (another solution in search of a problem, IMO) wasn't functioning correctly because somehow the DHCP server had the wrong MAC address for the host so the network didn't come up right (why isn't DBus talking over 127.0.0.1!!??!).
In short, systemd has me looking into how quickly I can switch to NetBSD. Although I should investigate Slackware as well.
In the article it barely mentions the issue that causes the 6 figures of expense, which is earthquakes. The museum exhibit has to be certified as safe in an earthquake (since it's in LA). Presumably, there is *TONS* of data explaining the exact forces that the Shuttle stack will stand up to using all original parts. If the parts are replicas, you'd need to certify that the replica wouldn't fail in an earthquake, which would involve quite a lot of engineering work.
The distros are going with it presumably because they think they need it to turn Linux into a desktop or notebook OS. However, they seem to be ignoring the issues it presents for servers. Let's take my *THREE HOUR* debugging session on systemd yesterday. I had a netboot system up and running. Client boots from Server and mounts root filesystem from Server. I changed from Server A to Server B. Due to an NFSv4 vs. NFSv3 issue, Client could no longer mount the root filesystem read/write. Simple, right? It would've been with SysV init because the errors during the mount would've been spewed to the console and I would've seen them. What *actually* happened is that a bunch of services failed to start. Instead of spewing the error message, systemd "helpfully" told me to run "systemctl status" on the service to see the error message. Except that I never got to a login prompt due to the errors. And I couldn't mount the filesystem read/write so it lost the logs.
Two+ hours later, I managed to disable enough stuff to get to a login prompt where I was finally able to figure out what was going on (never did get systemctl to show me the logs, probably because they couldn't be written to disk and it doesn't seem to hold them in RAM).
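For what it's worth, systemd does accept kernel command-line switches that raise its log level and aim its own messages at the console, which is exactly the information that got lost here; whether they work depends on the systemd version your distribution ships:

```
systemd.log_level=debug systemd.log_target=console
```

Appended to the kernel command line in the bootloader, that makes PID 1 write its own progress to the console instead of only to the journal.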
Please explain to me what the advantage of systemd is again? Because I'm *REALLY* not seeing it. It took something that was trivial to figure out and made it astronomically difficult. I no longer have any idea what order my services start in or whether that order is repeatable. Yes, SysV init scripts were really long. But once you learned them, you realized that you only had to modify 5 or 6 lines of them to get a new service going. I have yet to figure out how to even create a service with systemd or how I figure out what I'm depending on.
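For reference, the moral equivalent of those 5 or 6 lines in a unit file looks like this (a sketch; the service and binary names are hypothetical):

```ini
# /etc/systemd/system/mydaemon.service (hypothetical)
[Unit]
Description=My daemon
# Ordering/dependency hints go here:
After=network.target

[Service]
ExecStart=/usr/local/bin/mydaemon
Restart=on-failure

[Install]
WantedBy=multi-user.target
```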
In short, for a server, I have yet to see a single advantage of systemd over SysV init. Maybe I'm missing something and someone will enlighten me, but I'm extremely skeptical.
Am I just resistant to learning new things? Maybe, but learning stuff takes time and my time is money for my employer. So if I'm not getting a return on my investment of time (in new capabilities or reduced debugging time or *something*), why would I invest the time to become an expert on systemd?
Plus, Radio Thermostat has a fully published API to program it, query it, operate it, etc., so if you don't like their app, or they go belly up, the thing is still useful (assuming you, or some open source project, can write the code). It's a pretty simple Web API with JSON. I think the term is RESTful, but I've never been clear on exactly what makes an API "RESTful" vs. just sending JSON to a URL....
In any event, documentation can be found here: http://www.radiothermostat.com/latestnews.html#advanced
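A sketch of what talking to it looks like; the /tstat endpoint and field names are what I recall from the published docs, so treat them as assumptions and check the linked spec:

```python
import json
import urllib.request

def read_thermostat(ip):
    """Query the thermostat's JSON status endpoint.

    The /tstat path is from Radio Thermostat's published API docs
    (as I remember them) -- verify against the current spec.
    """
    with urllib.request.urlopen("http://%s/tstat" % ip, timeout=5) as resp:
        return json.loads(resp.read().decode())

# The response is plain JSON; a sample of roughly what comes back
# (made-up values, not live data):
sample = json.loads('{"temp": 71.5, "tmode": 1, "t_heat": 70.0}')
```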
Well, the part where you take what materials science researchers have discovered in concrete technology and design structural members of a bridge certainly seems to fit that statement of what an engineer does quite nicely. Depending on how "cutting edge" the bridge is, I imagine there is more or less engineering involved vs. looking up the right sizes in a table, although I'm not a civil engineer so......
Astronauts are allowed a small (in both weight and size) amount of personal items, which have to be approved for travel (http://www.nasa.gov/mission_pages/shuttle/shuttlemissions/sts121/launch/qa-hahn.html). They usually leave them there when they come back down (I've heard a few astronauts talk about it). They have also shipped up larger items (presumably including the guitar) using spare space on various spacecraft (like the Shuttle or the Dragon test mission). If you go read the Wikipedia article on Skylab, you'll see that one of the crews basically mutinied over lack of rest/personal time. Since then, NASA has built rest and down time into the schedules for astronauts on space stations. Presumably the Russians do the same. On ISS, they've sent up leisure items so people don't go nuts. I have seen reference to an ever-growing DVD library on the ISS as well.
As for the camera/memory cards... That was probably on the ISS as part of the standard gear. Part of the mission is to take pictures of stuff on Earth. Since they now have an Internet connection, presumably they'll transfer the pictures and leave the memory cards up there until they stop working, when they'll be sent to an inglorious (and fiery) end on a Progress ship.
MIT is almost certainly using Kerberos for their authentication since a) they invented it and b) that's what they were using at least as recently as 2005. In any event, how Kerberos stores passwords depends on the exact implementation, but in at least some implementations (admittedly old) you could decrypt the password database on the Kerberos key server with a key stored in a file in
This has to be 2.3 *peta* FLOPS not giga FLOPS. For instance, in 2010, an Intel desktop processor could do 109 gigaFLOPS (reference: http://en.wikipedia.org/wiki/FLOPS).
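The scale difference is easy to sanity-check with the numbers in the comment:

```python
desktop_flops = 109e9   # ~2010 Intel desktop CPU, per the comment's reference
super_flops = 2.3e15    # 2.3 petaFLOPS, the corrected figure

ratio = super_flops / desktop_flops
# If the figure really were 2.3 *giga*FLOPS, a single 2010 desktop would be
# roughly 47x faster than the "supercomputer" -- clearly a units mistake.
```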