Comment Re:Compile time is irrelevant. (Score 1) 196

Honestly, if your compile times are that much, and that much of a burden, you need to upgrade, and you also need to modularise your code more. The fact is that most of that compile time isn't actually needed for 90% of compiles unless your code is very crap.

Hint: I said 'two hours to compile from scratch'. You can't avoid compiling all your source if you just did a clean checkout from SVN into an empty source tree, as you would, for example, before building a release or release candidate.

That, to me, sounds like "I don't trust my build system to do an incremental build". A lot of people don't, and end up spending $$$$ on building provably the same thing over and over again, dozens of times a day ... Stop doing that, and you can use a compiler which is ten times slower and still have shorter build times.

BTW, Git does the right thing WRT timestamps and Makefiles: when a checkout changes a file in the working tree, it sets the timestamp to the current time, not the time that version was committed. Now that I've used it, I don't understand why other tools made the wrong choice for thirty years.
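
To make that concrete, here's a rough C++ sketch of the decision make takes for a rule like "foo.o: foo.c" (the file names are just placeholders, and real make has more cases than this): it only compares modification times, so a checkout that back-dates the source to its commit time can make a stale object file look up to date.

    #include <sys/stat.h>
    #include <cstdio>

    // Roughly the test make applies for "foo.o: foo.c": rebuild the target
    // only if the prerequisite looks newer.  A checkout that restores foo.c's
    // old commit-time timestamp defeats this test.
    bool needs_rebuild(const char* target, const char* prerequisite) {
        struct stat t {};
        struct stat p {};
        if (stat(target, &t) != 0)
            return true;                    // no target yet: must build
        if (stat(prerequisite, &p) != 0)
            return true;                    // play it safe if the source is unreadable
        return p.st_mtime > t.st_mtime;     // is the source newer than the object?
    }

    int main() {
        std::printf("%s\n", needs_rebuild("foo.o", "foo.c") ? "rebuild" : "up to date");
    }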

Comment Re:TFA does a poor job of defining what's happenin (Score 1) 470

OK, that explains why I've been getting away with assuming they wrap since the Clinton administration. I don't know if anybody ever explained it to me in C terms.

It's *your* responsibility as a C programmer to find out what the rules of the game are; you should accept that responsibility. However ...

I always assumed that behavior was baked in at the CPU level, and just percolated up to C. I never felt inclined to do any "bit twiddling" with int or even fixed-width signed integers because on an intuitive level it "felt wrong".

... that's exactly the right way of thinking (informally) about it. There are several different representations of signed integers (two's complement, ones' complement, sign-and-magnitude), but essentially only one of unsigned integers.
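
To put the rules in code, here's a minimal C++ sketch (the C rules are the same): unsigned arithmetic is defined to wrap modulo 2^N, while signed overflow is undefined behaviour no matter what the CPU underneath would do.

    #include <cstdio>
    #include <limits>

    int main() {
        unsigned int u = std::numeric_limits<unsigned int>::max();
        u = u + 1u;                 // well defined: wraps around to 0 (mod 2^N)
        std::printf("%u\n", u);     // prints 0

        int s = std::numeric_limits<int>::max();
        // s = s + 1;               // undefined behaviour: the standard promises
        //                          // nothing, even on a two's-complement CPU
        (void)s;
        return 0;
    }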

Comment Re:Distribute (Score 1) 190

It seems you could easily distribute the load on multiple machines, each doing a subset of the regexes.

There's something hilarious about having to distribute email filtering across several machines.

No; it's tragic that people reach for distributed computing, or "multi-core", whenever something runs too slowly. If filtering one mail against a set of REs takes 15 seconds of CPU time, as the OP writes, he's clearly doing something wrong, or hitting some limitation of procmail's design (being unable to amortize work, such as reading and parsing the REs, across mails).

Distributing wasteful work is not the right solution, in particular not if energy efficiency is a major concern, as the OP suggests. (And I find it hard to believe that a 15-second delay in mail delivery is his main concern!)
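
To illustrate the kind of amortization I mean, here's a hypothetical C++ sketch (the function names and the example patterns are made up, and this is not what the OP's setup looks like): compile the rule set once, then run every incoming mail against the already-compiled REs instead of re-parsing them per delivery.

    #include <iostream>
    #include <regex>
    #include <string>
    #include <vector>

    // Hypothetical sketch: the expensive part, parsing and compiling the
    // rule set, happens once, not once per delivered mail.
    std::vector<std::regex> compile_rules(const std::vector<std::string>& patterns) {
        std::vector<std::regex> rules;
        rules.reserve(patterns.size());
        for (const auto& p : patterns)
            rules.emplace_back(p, std::regex::extended | std::regex::optimize);
        return rules;
    }

    bool spammy(const std::string& mail, const std::vector<std::regex>& rules) {
        for (const auto& r : rules)
            if (std::regex_search(mail, r))
                return true;
        return false;
    }

    int main() {
        const auto rules = compile_rules({"viagra", "lottery winner", "[Nn]igerian prince"});
        std::string mail, line;
        while (std::getline(std::cin, line))    // read one mail from stdin
            mail += line + '\n';
        std::cout << (spammy(mail, rules) ? "spam" : "ham") << '\n';
    }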

Comment Re:Or... (Score 1) 190

Fuck you and your RBLs. RBLs are a draconian solution that do immeasurable damage to those of us who (1) aren't spammers, and (2) choose to run our own mailservers on business-class IPs. [...] Oh, because someone in the same /24 block sent spam? Really? That's a good reason to block an entire /24 subnet?

That sucks. And more generally, I believe the anti-spam fundamentalists have caused as much damage to reliable mail as the actual spammers have. They are much too willing to accept collateral damage. Sad, when you consider how much effort went into designing for either (a) guaranteed delivery or (b) guaranteed notification of non-delivery.

For this particular situation: I accepted early on that many misconfigured mail servers "work" the way you describe, so I make sure that my ISPs let me relay via them.

Comment Re:spamassassin (Score 1) 190

I don't think there's any such thing as "pretty much finished",

There is: software designed according to "do one thing and do it well" ... for example, the Unix cat(1) command is probably pretty much finished by now. Same with fgrep(1).

especially with a piece of software involved in the arms race that is spam vs. filtering.

... but yeah, well, I don't know Spamassassin but I suspect it has broader and more loosely-defined goals.

Comment The interview (Score 1) 252

At that point in the interview, I'd respond:

*Shrug* No. I don't like telling people what to do, and I suspect I'm not good at it. NB this doesn't mean I lack social skills. I can work with people; I just don't want to lead them.

Surely it's not hard to explain? It's how most people feel, after all.

Comment Re:We don't shun those who should be shunned. (Score 1) 479

Unless you're a Windows programmer, I'd stick with C, which is infinitely simpler, and provides you freedom to maintain competency in other languages,

Huh? So C++ somehow removes that freedom?

Let me also remind you that while C is simpler than C++, your C source code is more complex than your C++ code -- while doing the same things, using the same APIs and libraries. Personally, I am tired of implementing linked lists and hash tables for the Nth time and hunting for memory leaks and buffer overflows. Learning C++ was time well spent.
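
For what it's worth, here's a small sketch of what I mean (a generic word-count example, not anything from the post above): the hash table, the strings and all the memory management come from the standard library, so there is nothing to hand-roll, nothing to leak, and nothing to overflow.

    #include <iostream>
    #include <string>
    #include <unordered_map>

    // Count word frequencies on stdin and print them.  The container owns
    // its memory; no manual allocation, no fixed-size buffers.
    int main() {
        std::unordered_map<std::string, int> counts;
        std::string word;
        while (std::cin >> word)
            ++counts[word];
        for (const auto& [w, n] : counts)
            std::cout << w << ' ' << n << '\n';
        return 0;
    }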

On the other hand, learning to use C is a good idea. It's the lingua franca in the Unix world at least.

many of which have far cooler features than C++ will ever be able to provide.

It is true that you're crippled if all you know is C++. Or C. Unsurprisingly, neither language is supposed to be the perfect tool for every job.

Comment Re:Back to BASIC (Score 1) 479

[C++] Talk about understanding value construction and destruction. And exception safety! Does anyone actually grok it to the degree that one doesn't need to think about it all the time?

I do. It's not hard. Not letting an object reach an invalid state in a context that expects a *valid* one is important in any code, in any language (except maybe functional languages?). C++ is not the problem here; it's part of the solution.
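
A minimal sketch of the idiom that makes this a non-issue in practice (the parse() function and the file name are just examples): acquire the resource in a constructor, and every exit path, including a throw, releases it, so nothing is ever observable half-open.

    #include <cstdio>
    #include <memory>
    #include <stdexcept>

    // RAII sketch: the FILE* is owned by the unique_ptr, so it is closed on
    // every path out of parse(), including the exceptional ones.
    void parse(const char* path) {
        std::unique_ptr<std::FILE, int (*)(std::FILE*)> f(std::fopen(path, "r"), &std::fclose);
        if (!f)
            throw std::runtime_error("cannot open file");
        // ... read and parse; any throw from here on still closes the file ...
    }

    int main() {
        try {
            parse("example.conf");      // placeholder file name
        } catch (const std::exception& e) {
            std::fprintf(stderr, "parse failed: %s\n", e.what());
        }
    }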

Comment Re:Other options not always an option (Score 1) 238

Lots of people here saying "Don't use Adobe" and suggesting alternatives. Reality is, for many of us, we deal with complex PDF forms and applications that integrate directly with Adobe Acrobat. In my business [---] "Axis of Evil" of insecure software.

That seems like an accurate, believable description of a common situation. I just wish that, some time, I'd see someone *try to get out* of a lock-in situation like that. Or try to avoid creating more such situations. It has been well known for decades that you can end up there, and yet organizations still plunge in, head first, all the time.

(Note that the lock-in isn't just about paying $$$ to the vendor indefinitely. It also means your data is cut off from the rest of the ecosystem; you can't benefit from inventions done elsewhere. No version control for your MS Word documents, and so on.)

Comment Re:Um excuse me ... (Score 1) 543

I've been in mixed work environments before where everyone just used whatever tools they wanted: Linux, Windows, Mac, Vi, Emacs, etc... I personally used IntelliJ IDEA on Windows because it had code analysis and safe refactoring. My productivity was at least 50x higher than other developers. I was told not to submit changes too fast because the code reviewers couldn't keep up. [...]

I cannot take a claim of 50 times higher productivity seriously. But finally, real examples! Here's my take, as a "Unix is my IDE" user:

Inefficiencies were everywhere: they took 30 seconds to check out a file from source control using a command-line tool, whereas I could just start typing with a barely noticeable pause on the first character as the IDE does it for me.

OK, the systems I use mostly don't require a checkout. Still, 30 seconds is hard to explain; I'd do it in 2-3 seconds from Emacs or the shell, if there's no need for a checkout comment. But for systems which do require an explicit checkout, I'd prefer to do it explicitly: plan my work so that I check out roughly the right files, with the right comments, before I start editing.

They used "diff", I used a GUI 3-way merge tool that syntax highlighted and could ignore whitespaces in a semantically aware way.

That's not even the same task, so I fail to see the point. Diffing is indeed something you need to do to be efficient. I like to do simple diffs from Emacs, but often you want something more like "what's my current delta against what's checked out, across the whole project?" and that's better done from the shell.

For (manual) merging, I find GUI merge tools terribly inefficient compared to the "merge and leave both versions of conflicts in the file" strategy of diff3/CVS/Git ... I helped introduce that as an alternative at work (where we used to have only GUI 3-way, sequentially over each conflicting file, in a fixed order).
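
For anyone who hasn't seen it, that strategy simply leaves the conflicting region in the file, roughly like this (the exact labels depend on the tool; this is the diff3-style layout), and you resolve it in your editor and delete the markers:

    <<<<<<< ours
    our version of the disputed lines
    ||||||| base
    what the common ancestor had
    =======
    their version of the disputed lines
    >>>>>>> theirs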

There was one particularly funny moment when some guy walked up to me to ask me to look into a bug in a specific method. He starts off with the usual "now go to the xyz folder, abc subfolder, now somewhere in here there's a..." meanwhile I'm staring at him patiently because I had opened the file before he'd even finished giving me the full method name at the start of his spiel. Global, indexed, semantic-aware, prefix search with jump to file is a feature of IntelliJ IDEA, not Emacs or Vi. He's never even heard of such a thing! Thought it was magic.

If you have coworkers who believe that's magic, you do have a problem.

But you also have a point here: etags/ctags don't play well with "overloaded" names. Fine with C, not so fine with C++ and possibly worse with Java (since you don't have header files which summarize interfaces). As a C++ guy, I admit that is a weakness of my environment, and it does hurt my productivity somewhat.

Comment Re:Obligatory Linux evangelism (Score 1) 294

Any antivirus solution worth its salt will put a hook in the file open system call to scan each file as it is accessed. Regardless of the footprint and efficiency of the program, anything that runs each accessed file through an additional filter will incur a significant performance hit. Therefore, any antivirus solution worth its salt will incur a significant performance hit.

Hm, my experience is the same, but my theory of what's going on is different.

I use Linux at home (no anti-virus) but Windows with an encrypted file system and anti-virus (I forget which) at work. It seems to me the problem is I/O load when the anti-virus software decides to rescan everything. There's little CPU load, but everything else slows to a crawl anyway because the disk becomes almost ... inaccessible.

A fast CPU or gigabytes of RAM won't help that scenario. Perhaps it's possible to write or tune the anti-virus software to use a lower I/O priority -- but if so I don't understand why they don't do that!

Comment Re:what? (Score 1) 376

Right, like people are going to know/care the kernel version their smartphone is using. Come on! I've been working and developing software for/on Linux for 15 years and I don't know the version of the kernel I'm running. Hell, I didn't even know kernel versions had names.

They don't; not in any meaningful way, anyway. There's a $(NAME) variable tucked away in the kernel's top-level Makefile, and that's what changed from "Unicycling Gorilla" (never heard of it) to "Linux for Workgroups" in commit ad81f0545e. As far as I can tell, this string is not even included in a built kernel. You certainly can't make uname(1) or /proc/sys/kernel/ emit this name.
