
Comment Re:C had no real successor (Score 1) 641

I used to be a "C" zealot who considered it unmanly to use other languages. I've got past that with age. C is simple, but that simplicity comes at the cost of having to do every last thing by hand, even when it's unnecessary and the chance of making mistakes is high. After spending years on the nitty-gritty details, there comes a time when you have to ask: do I really need to implement the needed bits of a vector/list/set/map/hash, or do I just use one that I know will work without any trouble? Chances are it's better optimised than my hand-rolled one would be. And it takes a few seconds to use, since I don't have to write all that wheel-reinventing code myself. I can focus on the problem I was trying to solve, rather than on unnecessary groundwork.

Stuff like std::vector is often criticised by those who don't appreciate what it does. After optimisation, element access is as efficient as a C array, and the automatic memory management is no worse than what you'd need to do in C anyway. And it's type-safe and exception-safe. The same goes for std::string. It's also often more efficient than using the C string functions (though it can obviously be less so if used badly). But it's safe and easy to use, which is the big win. How often have you screwed up C string manipulation? That's right, we all have, and even when it appears safe and working, it's often a disaster waiting to happen.
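
To make that concrete, here's a minimal sketch of my own (not from any real codebase) of the same operation both ways; the C version invokes undefined behaviour the moment the input is too long:

    #include <cstring>
    #include <string>

    void c_version(const char *name)
    {
        char buf[16];
        std::strcpy(buf, name); // undefined behaviour if name is 16+ chars
        std::strcat(buf, ".txt"); // and again; buf may already be overrun
    }

    void cxx_version(const std::string &name)
    {
        std::string path = name + ".txt"; // storage grows as needed; no overflow
    }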

I'm replying to this because, as I was reading through, I was working on some code for work. We were using std::vector in a set of nested objects to keep track of their children with std::shared_ptr, and to allow cross-references with std::weak_ptr. That works just fine, but profiling showed that our usage patterns were suboptimal. We needed to keep track of objects by their insertion order (linear) but also needed to do fast lookups (set/rbtree). I could have hand-rolled a custom implementation, or made a custom container with embedded vector and set objects plus logic to keep them synchronised. However, after a little research, this was the solution:


    #include <memory> // std::owner_less
    #include <boost/multi_index_container.hpp>
    #include <boost/multi_index/random_access_index.hpp>
    #include <boost/multi_index/ordered_index.hpp>
    #include <boost/multi_index/identity.hpp>

    // Container of smart pointers indexed two ways at once:
    // by insertion order and by owner-based pointer comparison.
    template<typename T, template <typename ElementType> class Ptr>
    struct indexed_container
    {
        typedef boost::multi_index_container<
            Ptr<T>, // value type
            boost::multi_index::indexed_by<
                boost::multi_index::random_access<>, // index 0: insertion order
                boost::multi_index::ordered_unique< // index 1: sorted order
                    boost::multi_index::identity<Ptr<T> >,
                    std::owner_less<Ptr<T> > >
                >
            > type; /// container type.
    };

Total time from finding the problem to researching and creating this working solution (including converting the existing code to use it): 2 hours.
So now I can write indexed_container<foo, std::shared_ptr>::type c and get a custom container holding std::shared_ptr<foo>, indexed both by insertion order and by pointer value. With C++11 I could use a templated typedef and also avoid the struct wrapper. And the multi_index_container template is usable for all sorts of esoteric multiple-indexing, with indexing on individual or multiple fields, or with custom functors, or whatever you like. Why would I want to reinvent that when there's a pre-canned implementation that just works? Reinventing it certainly has intellectual value, but when you just need to get something done, the above is using the available tools to best effect. Imagine how many hundreds of lines of C the above typedef would require to reimplement from scratch, and how many man-hours it would take. It's just not worth doing that.
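
For reference, the C++11 alias-template form mentioned above would look something like this (a sketch; the indexed_container_t name is my own invention):

    template<typename T, template <typename> class Ptr>
    using indexed_container_t = boost::multi_index_container<
        Ptr<T>,
        boost::multi_index::indexed_by<
            boost::multi_index::random_access<>,
            boost::multi_index::ordered_unique<
                boost::multi_index::identity<Ptr<T>>,
                std::owner_less<Ptr<T>>>>>;

    indexed_container_t<foo, std::shared_ptr> c; // no ::type or struct wrapper needed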

Comment Re:C is primordial (Score 1) 641

This is all highly debatable!

However, if you had to choose between GObject-C and C++, the rational choice is C++. It's the better choice on all counts: safety, speed, correctness, robustness, maintainability. I had to learn this lesson the hard way by spending years banging my head against the wall using GObject when I could have just used C++.

Comment Re:C is primordial (Score 1) 641

It's easy to implement OO in C *badly*.

GObject shows that it's possible, but being possible does not make it sensible. It comes at the expense of an almost complete absence of type safety and const correctness, since you cast it all away. You then lose performance as well, because every passed object has to be checked at runtime for the correct type (g_type_check_instance...). And if you screw up (which you will), your program will crash. The C++ compiler would have done all of this for you at compile time, and you'd have a safer, more robust program at the end.
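
To illustrate the difference with a trivial sketch of my own (not GObject code):

    struct Widget { virtual void show() { } };
    struct Window : Widget { };
    struct Socket { };

    void present(Widget &w) { w.show(); }

    int main()
    {
        Window win;
        Socket sock;
        present(win); // fine: Window is-a Widget, verified at compile time
        // present(sock); // rejected outright by the compiler; the GObject
        //                // equivalent compiles after a cast and fails at runtime
    }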

Creating classes with GObject is a maintenance nightmare. Consider ongoing refactoring as well as the initial writing; there are a lot of pieces to keep in sync to avoid disaster, and it's easy to slip up since you will often not be warned about inconsistencies. When I converted some GObject C to C++, I found a whole pile of latent bugs which the C++ compiler picked up but which the C compiler couldn't detect, because all the typecasting had removed the information. The conversion to C++ and the STL improved the quality and robustness significantly, as well as making ongoing maintenance easy.

I was once a C zealot who believed GObject was amazing. I even worked commercially doing GTK+ programming. Hindsight shows how wrong I was. Use the right tool for the right job. And for OO, this does not include C!

Comment Self discipline (Score 1) 312

Like others have said, it all comes down to your own self-discipline. Since you've posted the question, you're obviously aware you could be doing this better.

When you're going to a lecture, you're going there with a single purpose: to attend the lecture. Listen, take notes, ask questions. Learn. Think. Understand. All you need here is a pad of paper and a pen. Write notes. Think about what's being delivered. Technology has no place here, and utterly detracts from the experience. Writing is great: it helps you remember the material, and it stimulates you to think about it as you go. I used to make a neat copy of each lecture's notes in the evening too, maybe also doing a little extra reading around the subject to flesh out the details, which further reinforces your memory and understanding.

Some of the comments here mention being bored in a lecture. If you're bored, the problem is most likely with you, not the lecturer (even a bad one) or the environment. Lectures are about delivery of information from the lecturer to their audience, but it's not just about rote learning and passively soaking up information. With the above strategy, I was not only familiar with all the material and had read around the subject, but most importantly I had digested it, considered it, *understood it*, while many of my classmates, who hadn't done this, were still struggling to recall the basic factual details.

Go to the lecture with a clear purpose: to learn and understand, not to mess with pointless gadgets. At the end of a lecture, most lecturers will ask if there are any questions. Make it your goal to think of at least three pertinent and insightful questions to ask in every lecture or talk. Write down potential questions as you take your notes; it makes you critically consider the material even just by the act of copying it down. Even if you never get the chance to ask one of them, or they get answered later on in the lecture, it doesn't matter. It'll make you focus even more, rather than just passively absorbing the material. And it'll make you curious, so you'll go and find the answers yourself, furthering your knowledge and understanding. You'll do better for it, because you'll be training yourself to think and engage, and you may well end up right at the top of the class by doing so.

When I was writing up my PhD thesis, I was suffering from a combination of procrastination and distraction in an open-plan office. I was moved out of the main office in the lab into an unoccupied office with a door, a desk, my laptop and a big pile of printed papers. No wireless, no wired ethernet. On my own, with no internet. My only purpose in that office was to read papers, look at data, and write, write and write some more. It was both liberating and extremely productive. Until I got wireless working under Linux, at which point productivity slacked off a bit (but I had the discipline not to let that go too far).

Take-home point: technology is useful, but in many (most) cases it only serves to distract us from our real objectives. And I say this as a full-time PhD software developer in the life sciences who is reading Slashdot in the evening. You can make your life much happier and more productive by turning it all off, leaving it alone, and focussing upon what you really need to do. If some part of what you're doing needs it, then use it, but use it just for that task and don't let it distract you. You might be surprised at how much free time you have after you've removed all the pointless distractions and completed the real work. For things like email, ignore it entirely and read it maybe twice a day; then the incentive to continually check it is removed and you can efficiently process it in one go. If you need to read a book or paper, print it out and go and read it somewhere quiet and distraction-free, and make notes on it etc.; again, that will be better than reading it on a computer with all its other distractions.

Hope that's potentially useful,
Roger

Comment Re:Wonders of Public Transportation (Score 2) 257

Sounds pretty bad, but I don't think this necessarily applies to the experience in general.

I get the bus or cycle to work on alternate days (in Dundee, Scotland). I've never driven to work; I'd either have to pay through the nose for a parking permit and still have to find a free space, or park on one of the nearby streets, which would require getting there around 08:00 to get a spot; neither is worth the time, money or effort. And the time difference is in practice negligible: I have to leave 10 minutes earlier, big deal. In winter I'd be wasting that time defrosting and warming up the car. The buses are generally kept quite clean and tidy, and I've yet to see any bad behaviour; the only time I thought someone had bad personal hygiene, it was a patient going to the hospital who was obviously using some skin medication, for which allowances can obviously be made. Rather than drive, I get 30 minutes to relax, read a book, read a newspaper, whatever, in relative warmth and comfort, which I just wouldn't get in a car. And there are usually no problems with overcrowding or the punctuality of the services; it's all run pretty efficiently (I use Stagecoach and National Express routes). Honestly, I find it far more civilised and convenient than driving myself around.

Maybe there's a difference between countries here? There's no social stigma attached to using the bus here; it's just one of the available options, and a good one.

Regards,
Roger

Comment Re: Unix tool philosopy == Good Thing (Score 1) 647

There's no need to play silly word games over semantics; the meaning of what I was saying is quite obvious. You've quoted the licence disclaimer. I'm saying that I ran a fairly extensive set of tests on a range of systems and configurations prior to every release and upload to ensure that it worked. This does not in any way invalidate the licence disclaimer; there is obviously a theoretical possibility of extreme edge cases where it might not have worked, but we knew, and had validated, that it worked for all the common cases (and a number of uncommon ones as well). You could upgrade the package in confidence and expect your system to boot after the upgrade. In practice, problems did not occur, because of the extent and quality of the test coverage, so that guarantee was indeed met.

Comment Re: Unix tool philosopy == Good Thing (Score 4, Interesting) 647

Um, what I think it should be is entirely relevant. I was primarily responsible for maintaining sysvinit and the initscripts from squeeze through to the wheezy release and after, doing the testing and providing the guarantee that it would boot. I was the one who did the testing before uploading: different VMs, different upgrade scenarios, bare metal on different architectures, and the Linux, kFreeBSD and Hurd kernels. If I'd screwed up, people would have had unbootable systems and come shouting. The quality bar was higher then and the testing was thorough; I'd like to think we did a pretty good job. I certainly was never responsible for systems becoming unbootable on upgrade.

Comment Re:Unix tool philosopy == Good Thing (Score 0) 647

Running systemd changes the semantics and side-effects of some POSIX system calls due to its use of cgroups and such. This is a problem.

If you have a program using just POSIX system calls written using e.g. APUE (Stevens) for reference, it should run pretty much anywhere. Yet, you might well find it no longer works properly under systemd. I'm apparently expected to update my code, which is completely portable, to work around the changed behaviour under systemd. That's completely backward. systemd has pushed a huge amount of unnecessary work onto all sorts of different upstreams. See screen and tmux for obvious examples; it also broke filesystem mounting and session management in my schroot tool. None of these tools should have anything at all to do with init systems.
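
For context, the kind of code I mean follows the classic APUE detach sequence, which by the traditional reading outlives the login session (a condensed sketch of my own, with error handling mostly omitted):

    #include <cstdlib>
    #include <unistd.h>

    void daemonize()
    {
        if (fork() > 0) std::exit(0); // parent exits; child is reparented to init
        setsid(); // new session: drop the controlling terminal
        if (fork() > 0) std::exit(0); // second fork: can never reacquire a tty
        if (chdir("/") != 0) std::exit(1); // don't pin the old working directory
        // Traditionally this process now persists until explicitly killed;
        // with systemd's per-session process tracking it can instead be
        // terminated when the session that spawned it ends.
    }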

Comment Re: Unix tool philosopy == Good Thing (Score 3, Informative) 647

Very droll. But misses the point. Historically, Debian unstable was usually absolutely solid. Better than the stable releases of many distributions. I should know, I've run it on my desktop(s) for the last 14 years. I've had maybe two minor issues in that entire time. Its quality has plummeted in recent months as all this "modern" stuff has been jammed in without regard to proper backward compatibility.

You might think this is amusing. I'm upset that the distribution I've spent the last 16 years working on has been subverted by developers pushing software with major design and implementation issues, and with no formal specifications for its many interfaces. For something which aims to become the base of all Linux systems, its current form is pretty amateurish, and its lack of attention to detail, breaking existing installs on upgrade in various different ways, is breathtaking. This is largely down to the difference in attitude between the older developers such as myself, who spent huge amounts of time testing that things worked on all sorts of different configurations, and the systemd crowd, who simply tell you you're doing things the wrong way and must change, even if you've got a configuration which Debian supported for the last decade. The big change here is that systemd has broken compatibility with Debian's past supported configurations by not caring to support the full range of configurations the old sysv-rc/initscripts setup did, and its maintainers did not spend the necessary effort to ensure these setups were migrated and supported properly.

Comment PC-BSD (Score 2) 267

Due to "a variety of reasons", I've also been looking at BSD on the desktop. Coincidentally, I was trying out the new PC-BSD release in a VirtualBox VM just as this article appeared. It gave me a nice KDE desktop and so far looks pretty slick. The other parts, like the package manager and control panel, are enough to give Ubuntu a run for its money. I'll be interested to see how good it is in practice after a few weeks of real use.

Over the last year I've been slowly moving my software away from Linux. It's now mostly on FreeBSD or in the late stages of porting to BSD (adding BSD-specific features, e.g. ZFS support and jail support). The desktop is really the only thing I still keep a Debian system around for. My last system will be a GNU/kFreeBSD jail instance on a FreeBSD server. I'll do a bare-metal PC-BSD install in a few days and give it a try; if it works nicely, I think my last Debian unstable system will be removed in the near future. I've been trying out (since 10.0) the newcons console and radeonkms stuff; it mostly worked fine, and now with the new Xorg it's no different from Linux (maybe better, even, since it misses the worst parts of the freedesktoppy crap).

Linux in general, and Debian in particular, have been the major focus of my life over the last 14-16 years. It's been quite sad to let it go after everything I've personally invested in it, but with systemd becoming unavoidable in unstable, it's no longer a system I wish to use or develop for, and developing for it went from being a joy to being quite unpleasant. The enthusiasm I had was killed by several years of systemd flamewars, and the last sparks were extinguished by bad interactions with a certain number of GNOME and systemd people. It was clear over 18 months ago that this was an inevitable outcome unless something dramatically changed (which hasn't happened), and that my needs, goals and wishes were almost diametrically opposed to the new world order. systemd is the straw which broke the proverbial camel's back. Over the last few months I've had a few bug reports for my software, all due to systemd changing how the system works in fairly fundamental ways, with every upstream expected to work around it. This is code which is pretty much just using POSIX features straight out of APUE (Stevens). The lack of care for backward compatibility is unbelievable for such a fundamental part of the system, and altering the behaviour of basic POSIX features even more so.

Comment Re:help them (Score 1) 89

They completely passed my mind while writing my reply. While I do have quite a lot of exposure to Java and Swing at work, I'm afraid to say my experience is not hugely positive. For the applications I maintain, the default (Metal) look and feel on Linux is awful, though some distributions do replace it. While Swing certainly does work on all supported platforms, I'd have to say that I much prefer Qt, since its native look and feel is almost perfect on all platforms in my experience, and performance also enters the equation once you start doing OpenGL work, if not before. Stuff like JOGL also uses JNI, and this affects platform support: needing to ship the native parts compromises portability, since you restrict the platforms you can deploy on. I think it's less effort, more robust and gives better platform coverage to build natively. Building the JNI wrappers for every platform is doubtless possible given sufficient effort, but a pain to build, integrate and test. Last time I tried to build this from source it turned out to be impossible; I have concerns that the mess of dependencies Maven drags down is not independently rebuildable, and that's not even getting into the security concerns.

Comment Re:Not resigning from Debian (Score 2) 550

In the sysvinit scripts the philosophy is not so stark: the aim is to bring the system up to the greatest extent possible, even in the presence of failures. If a single mount or service fails, we report the error and try to continue. It's not perfect, but mitigation in the face of unexpected failure is never going to be.

If I make a typo when editing the fstab, I don't think bailing out of the boot is ever appropriate. The "nofail" flag is only useful for expected failures, and I think the sysvinit script is correct in not treating it as a basis for unilaterally failing startup, when in almost all cases we could bring the system up to a state where the admin can log in and rectify things. This will look incorrect from the worldview of a perfectionist, but it's certainly the pragmatic choice.
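
For anyone unfamiliar with the flag, a hypothetical fstab entry using it would look like this; "nofail" marks the mount as one whose absence at boot is expected and tolerable:

    # /etc/fstab (illustrative entry, not from a real system)
    # device    mountpoint   type  options          dump  pass
    /dev/sdb1   /mnt/backup  ext4  defaults,nofail  0     2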

Comment Re:help them (Score 2) 89

Difficult to say, it really depends upon the project, language and portability requirements.

Most of my projects are currently in C++ and/or Python. Qt 4.x and 5.x work well here for both on all current platforms (FreeBSD, Linux, MacOSX, Windows) including OpenGL support for all.

GTK+ 3.x is a non-starter. Its repeated breakage and dropping of functionality make it unsuitable for non-GNOME work (and even for GNOME developers it must be painful, as several of them have commented).

GTK+ 2.x might be a possibility for FreeBSD/Linux-only projects, or for projects where the subset of GTK+ which works on all needed platforms is actually functional. For anything complex, the chance of cross-platform breakage approaches certainty in my experience.

The GTKmm bindings are excellent, but you are still wrapping a C API, which introduces constraints (e.g. for exceptions), and I have run into bugs in the past; this is what the API should have been if C++ had been used from the start. I'd certainly be happy if they dropped the C implementation and made the C++ API the base implementation; it would resolve many of the quality issues with the library, which are rooted in the inadequacy of OO C compared with a proper OO language.

FLTK is superficially nice, but its odd structure makes creating new or derived widgets painful (from memory; I've not used it recently). Certainly a possibility for simpler projects which don't need custom widgets.

wxWidgets is good in concept but fails in execution (it looks out of place on all platforms), and it has to be a lowest common denominator in order to work with the platform-specific toolkits. My trials showed up lots of widget layout issues when trying to look good on all platforms, though with effort you might be able to make it work better.

GNUstep is a bit too niche for me to have tried in anger; when I last looked, a decade back, it was not completely compatible with the modern Mac OS X evolution of OpenStep and required the use of Objective-C. I suspected it was an evolutionary dead end at the time, and I'm still not sure it isn't.

Tk is a nice toolkit which has kept going over time, and it does seem to work on all major platforms without too many headaches. I'd certainly consider it if I were using a compatible language and didn't need the heavyweight widgets and other libraries that Qt provides.

So I think very broadly I'd go with Qt > Tk > FLTK > GTK+2, though which would be most appropriate would depend upon the project.

Regards,
Roger

Comment Re:help them (Score 1) 89

I used to develop GTK+ applications professionally in the mid-2000s. It wasn't brilliantly maintained even then, but nowadays it's woeful. While it's nominally cross-platform, the reality is that it's always been in a state of partial brokenness on MacOS and Windows. You could never truly rely on it for cross-platform work.

I moved completely to Qt over the last couple of years. I still prefer the container model of GTK+, but the quality of the Qt toolkit is so much greater that a minor design difference is neither here nor there. If I were evaluating which toolkit to use for a new project today, Qt would be the default choice; GTK+ wouldn't even be on the shortlist. If I had to propose it to the rest of my team, I'd be laughed out of the room, and not unjustifiably.

Looking at the level of OpenGL support in this new work, it seems useful, but quite limited compared with what's in Qt. For example, with Qt you can select which core profile the GL context should use, and you can mix the GL functions you need into your widget subclasses, saving the need for GLEW or other GL wrappers. It also has helper classes for wrapping shaders, accessing uniforms, and a whole bunch of other useful stuff which makes using GL vastly nicer than the raw C API (though that is of course still possible). I don't see any equivalent to these facilities in the GTK+ work.
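
As a rough sketch of the Qt facilities I mean (Qt 5 class names, from memory, so treat the details as illustrative rather than definitive):

    #include <QOpenGLWidget>
    #include <QOpenGLFunctions_3_3_Core>
    #include <QSurfaceFormat>

    // Mixing in the exact function set you need removes any need for
    // GLEW-style loaders; QOpenGLShaderProgram etc. handle shaders/uniforms.
    class GLView : public QOpenGLWidget, protected QOpenGLFunctions_3_3_Core
    {
    protected:
        void initializeGL() override
        {
            initializeOpenGLFunctions(); // resolve the 3.3 core entry points
            glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
        }
        void paintGL() override
        {
            glClear(GL_COLOR_BUFFER_BIT);
        }
    };

    // Selecting the core profile, typically before creating the application:
    //   QSurfaceFormat fmt;
    //   fmt.setVersion(3, 3);
    //   fmt.setProfile(QSurfaceFormat::CoreProfile);
    //   QSurfaceFormat::setDefaultFormat(fmt);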

Comment Re:i'd be careful with this one (Score 1) 123

10.0 used LLVM/clang 3.3 by default. It worked fine for me, with the exception of missing support for some C++11 features (e.g. no typeinfo for std::nullptr_t). 10.1 uses LLVM/clang 3.4, which should solve those issues. That said, they would only be noticed if you're specifically using C++11, and you'd need a recent GCC to get the same features if you stuck with GCC. 3.4 is a good improvement over 3.3.
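
The nullptr_t issue is the kind of thing that only bites when a C++11 construct needs run-time library support; a minimal trigger would be something like this (my own sketch):

    #include <typeinfo>
    #include <cstdio>

    int main()
    {
        // typeid(nullptr) requires the std::nullptr_t typeinfo symbol from
        // the C++ runtime; with the 3.3-era toolchain this sort of use could
        // fail to link, while the 3.4-era toolchain resolves it.
        std::printf("%s\n", typeid(nullptr).name());
    }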

That's not to say there aren't bugs in various packages which haven't been identified yet, but this is the same toolchain that's been the default on MacOSX since 10.8 (and the default standard library since 10.9), so it's not exactly untested. Personally, I've only seen minor issues with clang++ being stricter about some C++ syntax, which needed a little tweaking; these are generally seen at compile time, not at runtime.
