Comment Re:Um excuse me ... (Score 1) 543

I've been in mixed work environments before where everyone just used whatever tools they wanted: Linux, Windows, Mac, Vi, Emacs, etc... I personally used IntelliJ IDEA on Windows because it had code analysis and safe refactoring. My productivity was at least 50x higher than other developers. I was told not to submit changes too fast because the code reviewers couldn't keep up. [...]

I cannot take a claim of 50 times higher productivity seriously. But finally, real examples! Here's my take as a "Unix is my IDE" user:

Inefficiencies were everywhere: they took 30 seconds to check out a file from source control using a command-line tool, whereas I could just start typing with a barely noticeable pause on the first character as the IDE does it for me.

OK, the systems I use mostly don't require a checkout. Still, 30 seconds is hard to explain; I'd do it in 2--3 seconds from Emacs or the shell, if there's no need for a checkout comment. But for systems which do require an explicit checkout, I'd prefer to do it explicitly: plan my work so that I check out roughly the right files, with the right comments, before I start editing.
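
A rough sketch of what I mean, assuming a ClearCase-like system (the file names and the comment are made up):

    cleartool checkout -c "fix crash in the frobnicate path" src/foo.cpp src/foo.h
    $EDITOR src/foo.cpp

Doing the checkouts up front, in one batch, also means the comment describes the change I'm about to make rather than whatever file happens to be open.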

They used "diff", I used a GUI 3-way merge tool that syntax highlighted and could ignore whitespaces in a semantically aware way.

That's not even the same task, so I fail to see the point. Diffing is indeed something you need to do to be efficient. I like to do simple diffs from Emacs, but often what you want is more like "what's my current delta against what's checked out, across the whole project?", and that's better done from the shell.
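
For the whole-project question, something along these lines (Git and CVS shown; substitute whatever system is actually in use):

    git diff                  # everything I've changed vs. what's checked in
    cvs -q diff -u | less     # roughly the same question on a CVS tree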

For (manual) merging, I find GUI merge tools terribly inefficient compared to the "merge and leave both versions of conflicts in the file" strategy of diff3/CVS/Git ... I helped introduce that as an alternative at work (where we used to have only GUI 3-way, sequentially over each conflicting file, in a fixed order).
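
For anyone who hasn't seen it: the tool leaves both sides of every conflict in the file itself, wrapped in Git-style markers (diff3 -m produces the same idea), so you resolve them in your own editor, in any order, with the full context around them. The function call below is made up:

    <<<<<<< HEAD
        frobnicate(widget, 0);
    =======
        frobnicate(widget, timeout_ms);
    >>>>>>> topic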

There was one particularly funny moment when some guy walked up to me to ask me to look into a bug in a specific method. He starts off with the usual "now go to the xyz folder, abc subfolder, now somewhere in here there's a..." meanwhile I'm staring at him patiently because I had opened the file before he'd even finished giving me the full method name at the start of his spiel. Global, indexed, semantic-aware, prefix search with jump to file is a feature of IntelliJ IDEA, not Emacs or Vi. He's never even heard of such a thing! Thought it was magic.

If you have coworkers who believe that's magic, you do have a problem.

But you also have a point here: etags/ctags don't play well with "overloaded" names. Fine with C, not so fine with C++, and possibly worse with Java (since you don't have header files which summarize the interfaces). As a C++ guy, I admit that is a weakness of my environment, and it does hurt my productivity somewhat.
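
To make the comparison concrete, this is roughly what the tags workflow amounts to on my side (assuming Exuberant/Universal ctags and Emacs):

    ctags -e -R .        # build an Emacs TAGS file for the whole tree
    # M-. on a name in Emacs then jumps to its definition via that file.
    # With one definition per name that is instant; with a dozen C++
    # overloads you end up cycling through candidates, which is exactly
    # where an index that understands the type system wins.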

Comment Re:Obligatory Linux evangelism (Score 1) 294

Any antivirus solution worth its salt will put a hook in the file open system call to scan each file as it is accessed. Regardless of the footprint and efficiency of the program, anything that runs each accessed file through an additional filter will incur a significant performance hit. Therefore, any antivirus solution worth its salt will incur a significant performance hit.

Hm, my experience is the same, but my theory of what's going on is different.

I use Linux at home (no anti-virus) but Windows with an encrypted file system and anti-virus (I forget which) at work. It seems to me the problem is I/O load when the anti-virus software decides to rescan everything. There's little CPU load, but everything else slows to a crawl anyway because disk access becomes almost ... inaccessible.

A fast CPU or gigabytes of RAM won't help in that scenario. Perhaps it's possible to write or tune the anti-virus software to use a lower I/O priority -- but if so, I don't understand why they don't do that!
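
The knob does exist, at least on Linux, where a background scan can be demoted to the idle I/O class so that interactive disk access wins (clamscan here is just a stand-in; I have no idea what the Windows products do internally):

    ionice -c 3 nice -n 19 clamscan -r /home    # idle I/O class, low CPU priority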

Comment Re:what? (Score 1) 376

Right, like people are going to know/care the kernel version their smartphone is using. Come on! I've been working and developing software for/on Linux for 15 years and I don't know the version of the kernel I'm running. Hell, I didn't even know kernel versions had names.

They don't; not in any meaningful way, anyway. There's a NAME variable tucked away in the top-level Makefile, and that's what changed from "Unicycling Gorilla" (never heard of it) to "Linux for Workgroups" in commit ad81f0545e. As far as I can tell, this string is not even included in a built kernel. You certainly can't make uname(1) or /proc/sys/kernel/ emit this name.
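
Easy enough to check from a source tree and a running system (the output below is just illustrative):

    $ grep '^NAME' Makefile        # top-level Makefile of a 3.11 source tree
    NAME = Linux for Workgroups
    $ uname -r                     # the running kernel only reports numbers
    3.11.0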

Comment Re:Any day now (Score 1) 405

I said the same thing. Java's success was due to Sun's marketing department more than anything else. I guess the time had come for YALTEAL (yet another language to end all languages).

Yeah. There hadn't been one of those for a while by the late 1990s. People didn't yet take scripting languages seriously, and trying to use pre-standard C++ (without a standard library!) as if it were Smalltalk had failed some years earlier.

Also, those of us who didn't take Microsoft seriously were impressed that Java came from Sun, the brave defender of the Unix legacy.

Comment Re:The bottlenecks are elsewhere (Score 1) 295

Netflix OpenConnect pushes 20GBit+ on a FreeBSD-9 base with nginx and SSDs. Over TCP. To internet connected destinations.

Please re-evaluate your statement.

OK. My posting was based on experiments with Linux, where (at least in 2.6.x kernels) there's a fixed per-frame cost for accepting an Ethernet frame and feeding it through the IP stack to a socket queue. That cost is accounted as "softirq" time and cannot be spread across cores. I don't recall exactly which frame size I used, but in my experiments the core handling the softirqs became the bottleneck at a few gigabits.
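
On a Linux box the skew shows up directly in the standard counters while you push traffic at it:

    mpstat -P ALL 1        # the %soft column piles up on one core
    cat /proc/softirqs     # the NET_RX row shows the same imbalance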

Somewhere between the system you describe and the one I describe, there's a significant difference. If it's not about heavy customizations and proprietary stacks, then I'm happy to be proven wrong! Even if it turns out that FreeBSD has a better networking subsystem than my beloved Linux.

Comment Re: The bottlenecks are elsewhere (Score 1) 295

You are so wrong it isn't even funny.

We're running app stacks at full line rate on 40GbE using today's hardware. A dual-socket sandy bridge server (I.e. HP DL380) has no problem driving that kind of bandwidth. Look up Intel DPDK or 6Windgate if you want to learn a thing or two.

I haven't checked 6Windgate, but Intel DPDK bypasses the kernel IP stack and (as I understand it) pretty much turns the whole machine into an extension of the Intel NICs. Aren't your machines quite unlike traditional servers, e.g. in O&M and in the networking APIs they offer? But yes, I should not claim that the only thing you can do with 10Gbit is forward it using dedicated hardware.

Comment Re:The bottlenecks are elsewhere (Score 1) 295

Ten gigabits per second is 1,250 megabytes per second. High-end consumer SSDs are advertising ~500 MB/sec. A single PCIe 2.0 lane is 500 MB/sec. Then there's your upstream internet connection, which won't be more than 12.5 MB/sec (100 megabits/sec), much less a hundred times that. I guess you could feed 10GbE from DDR3 RAM through a multi-lane PCIe connection, assuming your DMA and bus bridging are fast enough...

More importantly, you can't make an IP stack consume or generate 10Gbit on any hardware I know of, even if the application is e.g. a TCP echo client or server where the payload gets minimal processing. The only use case is forwarding, in dedicated hardware, over 1Gbit links. 10Gbit is router technology, until CPUs are 5--10 times faster than today, i.e. forever.

Comment Re:Honestly don't know (Score 1) 190

I never checked if I was mentioned in the projects I submitted code to. I never write my name in my own code (subversion tells me who did what, and I only want useful comments, not distracting ones).

The "Copyright (c) 2013 Errol Backfiring" comment is a useful one, though -- if copyright is indeed yours.

Other than that, I tend to agree.

Comment Re:Timex Sinclair 1000 (Score 1) 623

You had so much room! I learned to program on an Ollevetti Programma 101 in 1971. It was essentially a programmable calculator with 120 possible instruction locations. It used RPN sort of.. and ...

There's a documentary about that one; it was on Swedish (.se) television a while ago. The machine is hailed as a precursor to the home computers. But it's spelled "Olivetti".

Comment Re:When the incompatabilities ? (Score 3, Insightful) 112

Today the list of incompatabilities is small and unimportant. I wonder if one will make a really useful difference that would encourage developers choose one or the other; then users would really need to choose. At the moment which you use doesn't really matter.

If that's true, now is the best time to switch. Not when/if the vendor starts squeezing your balls.

Comment Re:It's my party and no one else is invited (Score 1) 212

What's clearly coming across here is that you're an established frat-boy who knows the arcane rules and implied hierarchy already

Politeness and respect when asking favors from people who don't know you -- those are arcane rules these days? Wow ...

I'm with the grandparent. People claiming "I was badly treated by an OSS project" without even naming the project are at least as likely to have been the ones at fault themselves. The same goes for the "I was once badly treated by a Wikipedia editor" people.

Comment Re:All projects need your help. (Score 1) 212

I'm guessing you're just trolling, but here are some obvious examples:
http://office.microsoft.com/en-gb/
http://www.apple.com/ipad/
http://www.adobe.com/uk/products/photoshop.html

That makes sense; when I read your first posting praising the quality of proprietary software, my reaction was "that's funny; most proprietary software I've used sucked much worse than the free alternatives, with lower-quality documentation and everything". Microsoft's flagship products are an exception (although of course they don't do what I need, they are at least coherent and polished).

I haven't used an iPad or Photoshop, but they are also among the very few flagship products which get money and talent thrown at them.
