I set up PostgreSQL in a FreeBSD jail last week. Total setup time: just a few minutes to create the jail, pkg install the latest PostgreSQL, and set up the db cluster. Added it to DNS, and it's been working ever since without a hitch, all on top of ZFS. If you're not a slave to proprietary databases, it can be very simple and straightforward to migrate to FreeBSD.
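For anyone curious, the steps look roughly like this (a sketch only: I'm assuming ezjail as the jail manager, and the jail name, address and package version are placeholders; adjust for your setup):

```shell
# Create and start a basic jail (ezjail assumed; bastille or plain jail.conf work too)
ezjail-admin create pgjail 10.0.0.5
ezjail-admin start pgjail

# Install PostgreSQL from packages inside the jail (pick your version)
pkg -j pgjail install -y postgresql95-server

# Enable the service, initialise the database cluster, and start it
jexec pgjail sysrc postgresql_enable=YES
jexec pgjail service postgresql initdb
jexec pgjail service postgresql start
```

Because the jail lives on a ZFS dataset, snapshotting and cloning the whole database environment comes for free.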
I don't have the time to point to the specific issues, but if you look over the last 18 months of the buildd-tools-devel archives you'll find them. Most of these are due to the same root cause that broke tmux and screen before they were specifically patched to work around systemd.
But the specific issues are no longer worth discussing. The breakage has already happened. Debian has been broken, both in the trashing of its historical reliability and robustness and in the fracturing of its community. It's now past time to plan a move to systems which aren't broken by design. If I use the new Jessie release at all, it will likely be as a stopgap while it still supports sysvinit, while migrating elsewhere. While staying on Linux would be nice, it's not clear that will be viable, so pre-emptively migrating to FreeBSD will provide some medium-term security and flexibility. I started planning for the possibility of migrating away over 18 months back, and I've spent the last 12 months porting code, removing Linux/glibc-isms, adding support for BSD features and libraries, migrating systems and services, setting up BSD autobuild/CI infrastructure, etc. This is now largely complete; I just need to finish ZFS snapshotting for schroot.
Never forget that Linux wasn't made by big companies pushing their agendas. They largely picked up what others had created before them. Big UNIX tools and programs are as a rule crude and less featureful and polished compared with their free software counterparts. Linux was made by the thousands of people who created and polished the free tools and programs to make them the most featureful, portable, useful and usable of their class. As a group, we've been collectively given the finger, and so many of us will take the message and move on. We'll see how well the latter day Big Linux companies do without us in the long term. They may well have killed the goose that laid the golden eggs.
What stopped me? Many things. Here's a few.
The systemd debate reduced the Debian lists to an endless flamewar lasting over three years. debian-devel is just toxic; it's not useful for any constructive development discussion. I unsubscribed from almost all the lists a year back. I can't describe how wearing and demotivating this is. Reading the archives since then, it hasn't improved.
Most of the software I write for Debian is core systems programming stuff. Straight out of APUE (Stevens). Over the last year, I've had a stream of bug reports about things not working correctly under systemd. Some fairly fundamental POSIX syscalls and tools no longer have the same behaviour when running under systemd. By "design". That's a fairly huge compatibility break with every other UNIX-like system out there, and one which hasn't seen much attention. But I'm somehow expected to rework my code to work around the breakage systemd brought with it. Breakage which has nothing to do with me. Code which isn't even remotely anything to do with an init system and which is portable code running on many other systems. That's crossed a line. systemd can't and won't be supported.
I can work on sysvinit, and openrc to a lesser extent. For several years it's been all take and no give with the systemd people. We can't do work on integrating openrc, since this would require support for runscripts in systemd. What's the chance of that? Zero. Any changes, even minor ones, require superhuman effort to achieve. Essentially, it's an uphill battle to do anything, and Debian is no longer a pleasant or productive environment to work in, primarily thanks to the horrible "our way or the highway" attitude of the systemd people. Since when was free software about dictating how everyone must do things? Silly me, I used to think it was about having the personal freedom to tinker with things as I liked to meet my needs. I'm a volunteer, and I give up vast amounts of my life to contribute to free software and Debian. This was previously a fun, collaborative, productive endeavour from which my efforts benefited many people. It's now deeply unpleasant, and I don't like being abused, ridiculed and trodden on by the systemd people and their enablers. I'll move on to new and better things. I spent the last decade as the primary maintainer of the core Debian build tools, and later of sysvinit. I've been invested in and contributed heavily to Debian for the last 15 years. Not something easily let go.
We'll see how Devuan pans out. Until it does, I'll be carrying on the migration to FreeBSD.
Altruism only goes so far.
This isn't strictly correct. They have been talking about their own equivalent of launchd.
Why is the distinction important? Unlike systemd it has a clearly-defined and limited scope. If systemd had stuck to doing this, it wouldn't have become the creeping mess it is today. So long as the FreeBSD implementation is limited and sane, I see no reason why it won't be a general improvement. And given the clean and generally machine-editable nature of rc.conf and the BSD init system, it will likely not be too challenging to make it backward-compatible, which is another area systemd failed in by not being fully LSB compatible.
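To illustrate that machine-editability (a sketch; the service names are just examples): rc.conf is flat key=value shell syntax, and the stock sysrc tool reads and writes it safely from scripts, with no parser gymnastics required:

```shell
# rc.conf entries are plain key=value pairs, e.g.:
#   sshd_enable="YES"
#   postgresql_enable="YES"

sysrc sshd_enable              # query the current value
sysrc postgresql_enable=YES    # set a value (appends or updates in place)
sysrc -x ntpd_enable           # remove an entry entirely
```

Any replacement service manager that keeps this as its configuration surface preserves compatibility with a huge body of existing tooling.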
Really? I can.
I'm a Debian developer who has been moving slowly to using FreeBSD on more and more systems over the last year, displacing Debian use and development on those systems. I've started contributing in minor ways on the lists, and the odd patch for the ports tree. I'll likely start packaging my stuff in ports and becoming more involved over time.
I contribute to things I'm actively using. For the past 15 years, that was Debian. Unfortunately due to the best efforts of the systemd people, it looks like that's unlikely to continue, though I very much wish this was not the case. But reality can't be avoided, and this is where things are today.
MacOS X and Windows are both second-class citizens. They work poorly and inconsistently, and are not well-maintained ports. You can't build a Windows or MacOS X version of a GTK+ application and expect it to work properly. With Qt, you can. Nowadays with GTK 3.x, even non-Linux and non-GNOME are no longer catered for properly. This isn't new. Windows support was mediocre back in 2004 and it's still mediocre today. MacOS X is arguably worse. Case in point: copy-paste was broken in Inkscape the last few times I've tried it on MacOS X, making it unusable. This is simply due to the primary upstream developers, those that are left at least, not caring at all about these platforms--they never have cared. Working "quite well" is a low bar; stuff needs to work all the time. It's true that it works for simple stuff--a few buttons and text entries. But as soon as you get to a more complex application that starts to use more functionality, you're in a world of frustration as you discover the extent of all the brokenness.
A decade back, GTK+ used to be my toolkit of choice, and I was using it in my day job for a while in actual products, writing tutorials and articles on it in professional publications, etc. In retrospect, it wasn't that amazing even then; the systemic issues it has were there since it began. But it's not even on the radar today for most developers. I'm now a Qt developer, like many others who had to get stuff done and found GTK+ sorely lacking. Building cross-platform OpenGL rendering stuff, it works on every platform without trouble, looks and behaves completely natively, and is a pleasure to work with rather than an exercise in masochism. In 2004 I was regularly filing bugs with patches against GTK+. A few months back, I saw some get closed/wontfix--10 years of tested, unreviewed patches fixing serious problems being ignored and abandoned. What a waste. If it was actually maintained properly, I might possibly still be using it. But if bugs which block your work don't get fixed, even when you put in the effort to fix, test and submit the patches yourself, you end up having to move to something which is actually maintained properly, such as Qt.
I'd argue that the issues are still with Microsoft. They could (and previously did, up to OpenGL 1.4) support OpenGL natively as they do with DirectX today. The hardware vendor shouldn't have to implement and provide a complete GL implementation, but they do have to, and this is one reason why GL implementation quality varies so widely. It could have been a very different story if they weren't a bunch of manipulative control freaks.
If Microsoft provided a fully-featured GL implementation and let vendors provide a much lower-level driver to do hardware abstraction only, the experience would be much better. This would be akin to Mesa, where the hardware drivers target Gallium and not the high-level GL/state machine side of things. A single GL implementation with multiple backend drivers (OK, I know it's not that simple in Mesa-land, but I hope it makes the point).
I used to be a "C" zealot who considered it unmanly to use other languages. I've got past that with age. C is simple, but it comes at the cost of having to do every last thing by hand, even when it's unnecessary and the chance of making mistakes is high. After spending years on the nitty-gritty details, there comes a time when you have to ask: do I really need to implement the needed bits of a vector/list/set/map/hash, or do I just use one that I know will work without any trouble? Chances are it's optimised better than my hand-rolled one would be. And it takes a few seconds to use, since I don't have to write all that wheel-reinventing code myself. I can focus on the problem I was trying to solve, rather than unnecessary groundwork.
Stuff like std::vector is often criticised by those who don't appreciate what it does. After optimisation, element access is as efficient as a C array, and the automatic memory management is no worse than what you'd need to do in C anyway. And it's type-safe and exception-safe. Same with std::string. This is also often more efficient than using the C string functions (though obviously can be less so if used badly). But it's safe and easy to use, which is the big win. How often have you screwed up C string manipulation? That's right, we all have, and even when it appears safe and working, it's often a disaster waiting to happen.
I'm replying to this because as I was reading through I was working on some code for work. We were using std::vector in a set of nested objects to keep track of their children using std::shared_ptr, and to allow cross-references with std::weak_ptr. Works just fine. But profiling showed that our usage patterns were suboptimal. We needed to keep track of objects by their insertion order (linear) but also needed to do fast lookups (set/rbtree). I could have hand-rolled a custom implementation, or made a custom container with embedded vector and set objects and logic to keep them synchronised. However, a little research and this was the solution:
    template<typename T, template <typename ElementType> class Ptr>
    struct indexed_container
    {
      typedef boost::multi_index_container<
        Ptr<T>,
        boost::multi_index::indexed_by<
          boost::multi_index::sequenced<>,            // index 0: insertion order
          boost::multi_index::ordered_unique<         // index 1: sorted by pointer value
            boost::multi_index::identity<Ptr<T> >,
            std::owner_less<Ptr<T> > > > > type;
    };
Total time from finding the problem to researching and creating this working solution (including converting the existing code to use it): 2 hours.
So now I can do indexed_container<foo, std::shared_ptr>::type c and get a custom container containing std::shared_ptr<foo> indexed both by insertion order and by pointer value. With C++11 I could use a templated typedef and also avoid the struct wrapper. And the multi_index_container template is usable for all sorts of esoteric multiple-indexing, with indexing on individual/multiple fields, or with custom functors, or whatever you like. Why would I want to reinvent that when there's a pre-canned implementation that just works? It certainly has intellectual value, but when you just need to get something done, the above is using the available tools to best effect. Imagine how many hundreds of lines of C the above typedef would require to reimplement from scratch. And how many man-hours would it take to implement? It's just not worth doing that.
This is all highly debatable!
However, if you had to choose between GObject-C and C++, the rational choice is C++. It's the better choice on all counts: safety, speed, correctness, robustness, maintainability. I had to learn this lesson the hard way by spending years banging my head against the wall using GObject when I could have just used C++.
It's easy to implement OO in C *badly*.
GObject shows that it's possible, but being possible does not make it sensible. It comes at the expense of an almost complete absence of type safety and const correctness, since you cast it all away. And then you lose performance, because every single passed object has to be checked at runtime for the correct type (g_type_check_instance...). And if you screw up (which you will do), your program will crash. The C++ compiler would have done all of this for you, and you'd have a safer, more robust program at the end: it handles all those checks at compile time.
Creating classes with GObject is a maintenance nightmare. Consider ongoing refactoring as well as the initial writing; there's a lot of pieces to keep in sync to avoid disaster, and it's easy to slip up since you will often not be warned about inconsistencies. And when I was doing a conversion of GObject C to C++, I found a whole pile of latent bugs which the C++ compiler picked up but which the C compiler couldn't detect due to all the typecasting removing information. The conversion to C++ + STL improved the quality and robustness significantly, as well as making ongoing maintenance easy.
I was once a C zealot who believed GObject was amazing. I even worked commercially doing GTK+ programming. Hindsight shows how wrong I was. Use the right tool for the right job. And for OO, this does not include C!
Like others have said, it all comes down to your own self discipline. Since you've posted the question, you're obviously aware you could be doing this better.
When you're going to a lecture, you're going there with a single purpose: to attend the lecture. Listen, take notes, ask questions. Learn. Think. Understand. All you need here is a pad of paper and a pen. Write notes. Think about what's being delivered. Technology has no place here, and utterly detracts from the experience. Writing is great: it helps you remember the material, and it also stimulates you to think about it as you are doing so. I used to make a neat copy of each lecture's notes in the evening too, maybe also doing a little extra reading around to flesh out the details and understanding, which further reinforces your memory and understanding.
Some of the comments here mention being bored in a lecture. If you're bored, it's most likely the problem is with yourself, not the lecturer (even bad ones) or the environment. Lectures are about delivery of information from the lecturer to their audience. But it's not just about rote learning and passively soaking up information. With the above strategy, I used to be familiar with all the knowledge, and had read around the subject, but most importantly I had digested it, considered it, *understood it*, while many of my classmates, who hadn't done this, were still struggling to recall the basic factual details.

Go to the lecture with a clear purpose: to learn and understand. Not to mess with pointless gadgets. At the end of a lecture, most lecturers will ask if there are any questions. Make it your goal to think of at least three pertinent and insightful questions to ask in every lecture or talk. Write down potential questions as you're writing the notes; it makes you critically consider the material even just by the act of copying it down.

Even if you don't actually have time to ask even one of them, or they get answered later on in the lecture, it doesn't matter. It'll make you focus even more, rather than just passively absorbing the material. And it'll make you curious, and go and find the answers yourself, furthering your knowledge and understanding. And you'll do better for it, because you'll be training yourself to think and engage, and you'll potentially end up right at the top end of the class by doing so.
When I was writing up my PhD thesis, I was suffering from a combination of procrastination and distraction in an open plan office. I was moved out of the main office in the lab into an unoccupied office with a door, a desk, my laptop and a big pile of printed papers. No wireless, no wired ethernet. On my own, no internet. My only purpose in that office was to read papers, look at data, and write, write and write some more. It was both liberating and extremely productive. Until I got wireless working under Linux, at which point productivity slacked off a bit (but I had the discipline to not do that much).
Take home point: technology is useful, but in many (most) cases it only serves to distract us from the real objectives we have. And I say this as a full time PhD software developer in the life sciences who is reading slashdot in the evening. You can make your life much happier and more productive by turning it all off, leaving it alone, and focussing upon what you really need to do. If some part of what you're doing needs it, then use it, but use it just for that task and don't let it distract you. You might be surprised at how much free time you have after you've removed all the pointless distractions and completed the real work.

For things like email, ignore it entirely and read it maybe twice a day; then the incentive to continually check it is removed and you can efficiently process it in one go. If you need to read a book or paper, print it out and go and read it somewhere quiet and distraction-free, and make notes on it, etc.; it'll again be better than reading it on a computer which has other distractions.
Hope that's potentially useful,
Sounds pretty bad, but I don't think this necessarily applies to the experience in general.
I get the bus and cycle to work on alternate days (in Dundee, Scotland). I've never driven to work; I'd either have to pay through the nose for a parking permit and still have to find a free space, or park on one of the nearby streets, which would require getting there around 08:00 to get a spot; neither is worth the time, money or effort. And the time difference is in practice negligible: I have to leave 10 minutes earlier, big deal. In winter I'd be wasting that time defrosting and warming up the car. The buses are generally kept quite clean and tidy, and I've yet to see any bad behaviour; the only time I thought someone had bad personal hygiene, it was a patient going to the hospital who was obviously using some skin medication, for which allowances can obviously be made. Rather than drive, I get 30 minutes to relax, read a book, read a newspaper, whatever, in relative warmth and comfort, which I just wouldn't get in a car. And there are usually no problems with overcrowding or the punctuality of the services; it's run pretty efficiently (I use Stagecoach and National Express routes). Honestly, I find it way more civilised and convenient than driving myself around.
Maybe there's a difference between countries here? There's no sort of social stigma attached to using a bus here; it's just one of the options available, and a good one.
There's no need to play silly word games over semantics. The meaning of what I was saying is quite obvious. You've quoted the licence disclaimer. I'm saying that I did a fairly extensive set of testing on a range of systems and configurations prior to every release and upload to ensure that it worked. This does not in any way invalidate the licence disclaimer; there is obviously a theoretical possibility that there may be extreme edge cases where it may not have worked, but we knew and had validated that it worked for all the common cases (and a number of uncommon ones as well). You could upgrade the package in confidence and expect your system to boot after the upgrade. In practice, problems did not occur because of the extent and quality of the test coverage, so that guarantee was indeed met.
Um, what I think it should be is entirely relevant. I was primarily responsible for maintaining sysvinit and the initscripts from squeeze through to the wheezy release and after, doing the testing and providing the guarantee that it would boot. I was the one who did the testing before uploading. Different VMs, different upgrade scenarios, bare metal on different architectures, Linux, kFreeBSD and HURD kernels. If I'd screwed up, people would have had unbootable systems and come shouting. The quality bar was higher then and we did pretty thorough testing; I'd like to think we did a pretty good job. I certainly was never responsible for systems becoming unbootable on upgrade.
Running systemd changes the semantics and side-effects of some POSIX system calls due to its use of cgroups and such. This is a problem.
If you have a program using just POSIX system calls written using e.g. APUE (Stevens) for reference, it should run pretty much anywhere. Yet, you might well find it no longer works properly under systemd. I'm apparently expected to update my code, which is completely portable, to work around the changed behaviour under systemd. That's completely backward. systemd has pushed a huge amount of unnecessary work onto all sorts of different upstreams. See screen and tmux for obvious examples; it also broke filesystem mounting and session management in my schroot tool. None of these tools should have anything at all to do with init systems.