Comment Re:C/C++ faster but produces more bugs (Score 1) 670

That's the old JIT argument, and while in theory it might have some merit, the last decade has shown it doesn't. Christ, a great deal of stuff still targets i386 just to ensure it runs on everything, and yet those apps still outperform Java/C# apps. Why? Because the core instructions are still the core instructions, and the old RISC rule holds true: most of the work is done using a few key instructions. The JIT argument also breaks down with things like DLLs. The DLL can be built specifically for the computer, and old applications link against that DLL to do the work, so even an old application gets its work done with the latest, machine-specific code.

If you want speed, use pre-compiled native code; if you want productivity, use something like Python; if you want both, it's probably best to use a language for each, for instance core logic in C and GUI stuff in Python. Or use C++ and accept the complexity that adds. If you want both and are willing to compromise on each, then maybe that is where JIT languages come in.
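To make the "language for each" split concrete, here's a minimal sketch using ctypes from Python's standard library. The library name (libcore.so) and the exported function (crunch) are made-up stand-ins for whatever your C core actually provides:

    import ctypes

    # Load the natively compiled core logic (hypothetical library name).
    core = ctypes.CDLL("./libcore.so")

    # Declare the C signature so ctypes marshals arguments correctly.
    core.crunch.argtypes = [ctypes.c_int]
    core.crunch.restype = ctypes.c_int

    # The heavy lifting runs as native code; Python is just the glue.
    print(core.crunch(42))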

Comment Re:Give me Debian package management any day (Score 1) 82

In Debian (and no doubt other package managers) you can have A and B use different versions of a shared lib: one uses "somelib.so" and the other "somelib.so.1.0". Normally one version of a lib is blessed as the version; the others are installed with the version number as part of the filename. If there is a genuine file conflict, then yes, you can have only one or the other, but I don't see how any system gets out of that. For instance, /usr/bin/convert and /usr/local/bin/convert is still a conflict in my book; one (local) overrides the other even if it hasn't overwritten it. You could hack something up with chroot, but it all starts getting ugly. Unless I'm missing something, of course.
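As a toy illustration (hypothetical filenames, following the usual Linux naming convention), both versions can even sit on disk and load at the same time, precisely because the version is part of the filename:

    import ctypes

    # Application A pins an exact version by loading the versioned name.
    pinned = ctypes.CDLL("libsomelib.so.1.0")

    # Application B takes whatever the unversioned name currently points at.
    default = ctypes.CDLL("libsomelib.so")

    # Both files live in /usr/lib at once; there is only a conflict when
    # two packages want to own the very same filename.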

Comment Give me Debian package management any day (Score 1) 82

I grew up as a RiscOS user, which had this kind of application folder system.
Package management is much better. If you are going to have shared blobs of code like shared objects/DLLs, you need package management, end of story.

You want one copy of each, or at least one of each version, and you want to update that one file. Even on modern machines you don't want to statically link everything, and even if you did, think about updates: if one of these files needs a fix, it's much easier to update that one file than to update every program built statically against it. Use the version number as part of the filename and have a symlink without the version number pointing to the latest version. If something needs a specific version, it can be built to link to that instead of the latest, and you can do as many version levels of this as required. Seriously, this is much better centrally managed and updated. You can even list all the applications that use a file, before anything is installed. I wouldn't go back to an application folder system if you paid me to. Windows has some central management with the manifest stuff and Add/Remove Programs, but compared with Debian package management it's an over-complex mess with a fraction of the power. Other package managers might be as good as Debian's, but so far none have impressed me as much.
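Here's a minimal sketch of that symlink scheme in a scratch directory. Real packages set this up at install time; the library name is hypothetical:

    import os
    import tempfile

    d = tempfile.mkdtemp()

    # The real file carries the full version in its name.
    open(os.path.join(d, "libfoo.so.1.2.3"), "w").close()

    # ABI-level symlink: what dynamically linked apps resolve at run time.
    os.symlink("libfoo.so.1.2.3", os.path.join(d, "libfoo.so.1"))

    # Unversioned symlink: what the linker uses at build time.
    os.symlink("libfoo.so.1", os.path.join(d, "libfoo.so"))

    print(sorted(os.listdir(d)))
    # Shipping a fix means dropping in libfoo.so.1.2.4 and repointing one
    # symlink; every dynamically linked app picks it up for free.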

Comment Re:Why do people underthink memory usage? (Score 1) 258

There is a sweet spot though, isn't there? Assembler is slow to develop in but produces very fast code. C can be used almost as a high-level assembler; you can be pretty clear about how you want things to compile, yet C is much faster to develop in than assembler. Some languages (say Python) are very quick to develop in, but if you care about speed at all they are the wrong choice. I would say C and C++ are the sweet spot where you get the most bang for your buck in terms of speed cost and productivity gain. After that it does seem like diminishing returns: you pay more and more in performance for less and less productivity gain. Assembler to C is the biggest jump at the smallest cost.
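As a rough illustration of the speed cost, here's a quick timeit sketch doing the same reduction twice: once in an interpreted Python loop and once via the built-in sum(), which runs in C. Exact numbers vary by machine, but the gap is typically large:

    import timeit

    def py_sum(n):
        # The same work done statement by statement in the interpreter.
        total = 0
        for i in range(n):
            total += i
        return total

    # Interpreted loop vs. the C-implemented built-in.
    print(timeit.timeit(lambda: py_sum(100_000), number=100))
    print(timeit.timeit(lambda: sum(range(100_000)), number=100))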

Comment Re:Why do people underthink memory usage? (Score 1) 258

Memory is used even if no application is using it: it's used to cache the disk. Free memory is not wasted memory. Bloat is a relative term; if one application, with feature parity with another, uses much less memory than the other, it makes the other seem bloated. So if Unity is using much more memory than Gnome3, given there is feature parity, then Unity is bloated. It's an arms race we all win from, because our computers get faster for free. :-)
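You can see the disk-cache claim directly on Linux; a minimal sketch reading /proc/meminfo (Linux-only):

    # Parse /proc/meminfo into a name -> value mapping.
    with open("/proc/meminfo") as f:
        info = dict(line.split(":", 1) for line in f)

    # "Free" RAM vs. RAM the kernel is using to cache the disk.
    print("Free:  ", info["MemFree"].strip())
    print("Cached:", info["Cached"].strip())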

Comment Re:first full bodied nonx86? (Score 1) 381

This is why MS are pushing .NET: it moves them away from a death embrace with x86 (that, and it gives them a complete platform to lock people into).
I see bytecode as a way closed-source code can get the "run on anything" quality of open source, without opening the source.
With open source, you just compile it on the platform in question (OK, it's more involved than that, but it can be that simple).
If you have the source, and it's in a repository specific to a processor (and why not? there aren't that many processor types), why not just compile to native rather than go through a bytecode indirection?
The thing is, Windows was a slow and fat platform when it was native. How is it going to compete on ARM when it's bytecoded too?

Comment Re:The nebulous danger (Score 1) 257

But Mono aims to be compatible. If they gave that up, it would be one less problem I have with them. However, MS haven't really started swerving to control the standard yet; they are still very much in the growing-the-standard part of the cycle.

Comment Re:Lets deal with MS first eh. (Score 1) 205

For the last two years I've been doing shell stuff, and it is not well enough documented. With Win7 there seems to be even more undocumented stuff. Whether it's on purpose or because they don't care, it's not enough. The key for XP was finding the torrent with most of the shell code for Windows 2000 (and it is awful code); for Win7, it's back to guesswork and trial and error. Some classes and defines/flags/enums are not documented at all, but generally the biggest problem is the missing information about how all these classes interact. Maybe at this level of complexity source is just better documentation; I find it so, but I accept I'm odd in preferring to read code rather than docs.

Comment Refresh rate isn't the problem (Score 1) 423

The problem is clearly knowing where to look. If you try to look anywhere but the focus point, it's uncomfortable. The first time I watched something in 3D, it took me a while to learn not to look at anything in the background, to let my eyes be led. Even then, if there is a lot going on, it's not always clear where you should be looking. A mate of mine, a 3D fan, has just gone crazy and bought a massive, high-end 3D TV, and even he admits he can't watch it in 3D all the time as it gives him headaches.

3D will only work perfectly when each eye is presented with an image for wherever it is looking. I'm not being a Luddite; if I were still doing 3D modelling on the computer, I think this could be very useful, but I'm not going to buy into this technology for every film. Films are long, and the current 3D technology is too much work to watch. Doing proper 3D, with a proper image for each eye (i.e. a hologram), isn't happening any time soon.

Comment Re:The nebulous danger (Score 1) 257

It's not just a patent problem. It's the old chasing-tail-lights problem: MS have a long history of encouraging people to commit to following them, and then swerving like mad to lose them. Throwing patent rocks out the back is just one dirty trick they have used in the past. It's all well documented; you won't have to look hard. One of the better sources is, as always, Groklaw.
