epoll is nice: one of the few Linux-specific system interfaces which is clearly better than the standard Unix counterpart (select/poll).
Not necessarily. epoll is better than select, definitely. However, see this article on how it compares to poll:
The answer seems to be either poll or epoll can be better depending on the situation.
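For concreteness, here's a minimal Linux-only sketch of the epoll calls under discussion. The pipe is just a convenient way to make an fd readable; the function name is mine, not from the linked article:

```cpp
#include <sys/epoll.h>
#include <unistd.h>

// Register one fd with an epoll instance, make it readable, and return
// how many ready events epoll_wait reports (should be 1).
int demo_epoll() {
    int pipefd[2];
    if (pipe(pipefd) != 0) return -1;

    int epfd = epoll_create1(0);            // one instance can watch many fds
    if (epfd < 0) return -1;

    epoll_event ev{};
    ev.events = EPOLLIN;                    // interested in "readable"
    ev.data.fd = pipefd[0];
    epoll_ctl(epfd, EPOLL_CTL_ADD, pipefd[0], &ev);

    write(pipefd[1], "x", 1);               // make the read end readable

    epoll_event out[8];
    int n = epoll_wait(epfd, out, 8, 1000); // blocks until ready (or 1s timeout)

    close(epfd); close(pipefd[0]); close(pipefd[1]);
    return n;
}
```

The key difference from poll is that the interest set (the `epoll_ctl` calls) is registered once with the kernel, instead of being passed in on every wait.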
I use info all the time. It's very geared towards emacs programmers like me. The best info viewer is built into emacs, and it lets me read documentation without leaving emacs and opening up a web browser. In emacs: C-h i
The thing to remember about info pages if you are *not* an emacs user is that the same Texinfo sources also produce HTML documentation. In fact, all the GNU HTML docs are generated from the same sources as the info pages. For example, the glibc manual mentioned in the review:
So in a sense info is the best of both worlds in that it can be read from a web browser, or from a terminal based application like emacs, and it supports hyperlinks in both.
The problem is that Ubuntu's bug tracker is a black hole. Bugs don't even get triaged on a regular basis, let alone fixed.
If you look on the forums, bugs are fairly quickly identified and fixed. Often problems and solutions make it into the bug tracker; however, that's where the pipeline ends. Fixes almost never get checked into mainline.
Ubuntu is still the best distro in my humble opinion, because of the wide variety of up to date software available for it. However, each release gets worse in terms of quality. Their bug intake is like the US national deficit. They ignore the problem in the hopes that it will go away, but it won't. Eventually Ubuntu will simply not be usable.
Really, if Canonical would admit they have a problem, and publicly start recruiting community members to triage and fix bugs, they might be able to make a dent in the backlog.
>Sorry if that sounds kind if "hippy", but saying that the entire FOSS world is based around nothing but hatred for a particular
>corporation really cheapens the accomplishments of the people involved.
What Linus was saying is that this is true of *some* people, and that they typically think of themselves as being part of some political movement i.e. "Free Software" as opposed to "Open Source."
Obviously, if everyone was more interested in politics than software like the FS guys are, we wouldn't get anywhere. For this reason, Torvalds and others have advocated Open Source as a pragmatic and non-political alternative to Free Software.
Open Source is essentially an open and cooperative development model with an open license. It is a model focussed on the development of quality software for which source is available for tinkering.
"Free Software" on the other hand has little to do with software at all, but is a political dogma centered around Richard Stallman as supreme leader, focussed on fighting copyright and corporate interests.
Indeed, projects organized by the Free Software Foundation aren't that open at all, and follow the cathedral model of development. This has historically led to a number of forks, such as the GCC and emacs/xemacs forks, and also failed projects like HURD. FSF projects tend to be beset with political infighting... because they are about politics as much as they are about software. Some people are more interested in being "top revolutionary" than writing good code.
I think it's clear the open source people tend to have less patience for that kind of nonsense and that's why projects run on the open model are more successful. That's why Linux succeeded where HURD failed. That's why FSF projects are consistently forking into projects run in the bazaar model. See GCC/LLVM for a more recent example of this.
However, the FSF guys, because they are into politics, love to generate lots of noise. That's why sometimes it seems like they run the show, when in terms of projects and useful code, they are a tiny fraction.
What is [Richard Stallman] doing in his grave? Last thing I heard he was still alive.
I think it's remarkable that even when Linus Torvalds refutes the kind of mindless Microsoft bashing that some people like to engage in, most of the top comments on slashdot are more of the same.
Here's a choice Linus quote from that article that was left out of the summary.
"There are extremists in the free software world, but that's one major reason why I don't call what I do free software any more. I don't want to be associated with the people for whom it's about exclusion and hatred."
It's pretty clear who he is referring to. People like this are holding OSS back by making it into an "us vs. them" political fight, instead of an open and cooperative mode of development. Some people are more interested in being political demagogues than in developing good software, and for that reason they can go to hell. As an engineer, I have no time for wannabe revolutionaries and their cronies.
Yes, I am referring to who you think I am.
The gameplay of Versus is great... we don't need another game mode, we need *MAPS*. I can't believe they've been wasting their time on this. If they'd actually talked to anyone who *PLAYS* their game, they would know that people want new maps.
Everyone is tired of playing Mercy Hospital for the trillionth time... A new game mode isn't going to make Mercy more exciting.
If you are planning to do software development, a CS masters or Phd is very valuable.
The CS PhDs I know have the best jobs. Most of them don't work as academics after getting their PhD, but work in industry.
CS research is usually funded by big software companies, so why wouldn't you think they'd be willing to hire you if they were willing to fund your research in the first place? It's not a PhD in art, CS is very practical even on the theoretical side of things.
In school, your teachers focus on higher level languages like Java that are easier to teach with because they ignore lower level programming issues. However, in the real world, people still need to deal with all of these problems.
Higher level languages sweep low level problems under the rug, but they don't make them go away. You may be surprised someday to find that your Java programs can and DO leak memory if you aren't careful: a stale reference sitting in a long-lived collection keeps an object reachable, so the GC can never reclaim it. To be a top notch developer you really do need to understand how higher level tools are built from lower level components.
In industry, not being able to find developers who understand C and low level programming issues is a common complaint.
Also, C is probably one of the easier languages to pick up. Learn C well, and complement it with a higher level language like Python or, if you must, Java.
C++ is also valuable if you need to write lots of high performance code, but being truly competent at C++ is a fairly large endeavor compared to learning C. Being good at real C++ means understanding templates, the STL, and also knowing the nooks and crannies of the language, of which there are a lot. Look up the "most vexing parse" to get an idea of what I'm talking about.
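Here's a minimal illustration of the most vexing parse (Widget and the other names are made up for the example):

```cpp
struct Widget {
    int value() const { return 42; }
};

// The "most vexing parse": this does NOT define a default-constructed
// Widget -- C++ parses it as the declaration of a function named w
// that takes no arguments and returns a Widget.
Widget w();

// Since C++11, brace initialization sidesteps the ambiguity and really
// does create a default-constructed object:
Widget w2{};

int widget_value() { return w2.value(); }
```

Calling `w.value()` on the first form is a compile error (`w` is a function), which is usually how people discover this corner of the grammar.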
Apple has no interest in building something as low margin as a netbook and has said as much...
Really, the netbook is killing the whole PC industry right now. Developing the Atom was the worst mistake Intel ever made. It's killed their entire business model.
True, then again the sensible uses of MI boil down to mixins and interfaces, both of which are supported directly in D.
Actually, if you have interface inheritance and mixins, I'm not sure you even need single inheritance in the traditional sense.
Really though, if you want to get minimalistic, you don't even need interfaces if you use structural typing like templates do at compile time, and like python does at runtime.
So you could boil your type system down to just mixins and implicit/duck typing.
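A compile-time sketch of that structural/duck typing with C++ templates (Duck, Robot, and speak are invented names):

```cpp
#include <string>

// Structural ("duck") typing at compile time: speak() accepts any type
// that happens to have a quack() member. Duck and Robot share no base
// class and implement no common interface.
struct Duck  { std::string quack() const { return "quack"; } };
struct Robot { std::string quack() const { return "beep"; } };

template <typename Animal>
std::string speak(const Animal& a) {
    return a.quack();   // only requirement: the expression must compile
}
```

Python does the same thing at runtime: any object with a `quack` attribute works, and the check happens at the call, not at a declared interface boundary.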
>Why do you think that GC must pause all threads?
Because it must compact the heap periodically, and the heap is a shared mutable resource.
I'm not really up to date on the best GC algorithms out there, but my understanding is that though modern ones have various ways of *minimizing* the number of full compactions necessary (such as generational GC) there's no way to get rid of it entirely.
I believe both recursive globbing "**" and coroutines are zsh features.
I'm glad they included this. I think it's fair to say that the lack of recursive globbing support was pretty annoying in the past. The previous alternative was a pipeline of find and xargs... pretty overcomplicated compared to **.
In the past I'd switched to zsh, but moved back because most shell scripting information online is built around bash. This makes sticking with bash that much easier.
The GC is the way to go for complex applications. The reason is simple: the GC has a global overview over all memory usage of the application (minus special stuff like OpenGL textures). This means that the GC can reuse previously allocated memory blocks, defragment memory transparently, automatically detect and eliminate leaks etc.
Yes... all you say is true, but it's what you are not saying that disqualifies a GC'd language from being a successor to C or C++.
C and C++ are used for:
1. Operating system kernels.
2. Video codecs.
3. Video games.
4. Embedded programming.
1. GC must periodically halt all threads to compact the heap. Imagine if your kernel did this. The *whole computer would halt*. If anything displaying realtime graphics did this, such as a video game, you would get stutter.
2. GC is lazy about deallocation, so it chews up more heap space. Usually at least twice as much as the equivalent non-GC program. This isn't that big of a deal in most cases, but it means you can't use it for embedded devices where you might have only a kilobyte or two of RAM. Yes, there are many of these devices; you probably use several.
Look, GC is great from a usability perspective. I would always prefer to use GC. HOWEVER, there are cases where GC will *just not work*, and I wish GC zealots would get that through their heads. The GC hammer is nice, but *sometimes* a more complicated and powerful tool is necessary for a more difficult job.
D did another thing right: it did not remove destructors, like Java did. Instead, when there are zero references to an object, the GC calls the destructor *immediately*, but deallocates the memory previously occupied by that object whenever it wishes (or it reuses that memory). This way RAII is possible in D, which is very useful for things like scoped thread locks.
Yes, destructors in C++ are very nice.
I'm actually kind of confused about this. Are you saying that D keeps a reference count *in addition* to doing GC, or are you saying that the destructor is called when the object moves out of scope like in C++?
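For comparison, here's the C++ version of the RAII pattern under discussion, sketched with a standard lock_guard (the counter and function names are just for the example):

```cpp
#include <mutex>

std::mutex counter_mutex;
int counter = 0;

// RAII scoped lock: lock_guard's constructor acquires the mutex, and its
// destructor releases it when control leaves the scope -- whether by a
// normal return or an exception. There is no manual unlock to forget.
int increment() {
    std::lock_guard<std::mutex> guard(counter_mutex);
    return ++counter;
}   // guard destroyed here; mutex released
```

The whole question about D is whether the destructor runs at this deterministic point (end of scope) or whenever the GC gets around to the object; only the former gives you C++-style RAII.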
- no multiple inheritance (which does make sense when using generic programming and metaprogramming; just see policy-based design and the CRTP C++ technique for examples)
I could not agree with you more. I do not understand multiple-inheritance phobia. If you have a good grasp of how objects and classes work, then multiple inheritance is not only safe, but incredibly useful as a way of adding mixins to a class. This is doubly true in conjunction with meta-programming and templates.
Fear of multiple inheritance tends to stem from the diamond problem... but really, if your class hierarchy is 3 layers deep, that is your *real* problem, not multiple inheritance. You want to restrict the *depth* of the tree, not the *breadth*.
Obviously Sun used to do something similar with Java.
The Python IDE I use, Wing, also allows you to access their source so you can recompile on various platforms.
Historically, AT&T unixes were distributed with source.
Really, I've always found it weird that proprietary software companies seem to think it's important to keep the source code super secret, as if it were some kind of trade secret. Having the source available for recompiling and modification is handy for the user, whereas the risk that someone will copy-paste your source code is fairly minimal. After all, integrating different source bases is an enormous amount of work, and fairly easy to detect after the fact.
"Ada is PL/I trying to be Smalltalk." -- Codoso diBlini