Comment Re:Why aren't there more contributors to this proj (Score 3, Informative) 252

If there were a compatibility layer to run OSX applications on Linux, that might actually be a viable option. OSX has most of the big things people want: MS Office, Adobe Photoshop and friends, AutoCAD, etc. Conceivably, such a compatibility layer could be easier to write, debug, and maintain than WINE, since there is a lot less legacy baggage (and the underlying architecture is much closer to what Linux expects). But I am not aware of any such project so far.

Well, there's the Darling project. I get the impression it's very much a work in progress, however.

Comment Re:Ethernet is only 33 years old (Score 1) 159

Did y'all know that the original spec for Ethernet was to be a wireless network???

One of the earliest networks allowing collisions and using collision detection was the ALOHA network, and that was wireless, but that also wasn't Ethernet. Are you thinking of ALOHAnet?

I can't find a copy of Metcalfe's "Alto Ethernet" memo, but this Wired article has a diagram from the memo that does include "radio ether" but also includes "cable ether" and "telephone ether".

Comment Re:I never got "packaging systems" (Score 1) 466

Why is it SO hard for people who use Linux to understand that there are multiple runtime libraries because Windows has been around so long that there are multiple versions of the shell environment? To ensure that the program runs correctly on the target machine, the runtime is included. This in turn relates to the kernel, which Linux does not handle gracefully at all. I don't know how many times I've wanted to install an app on Linux but it is dependent on features from a specific kernel. Windows does this to some degree, but by shipping a runtime it's possible to translate the instructions of the application in question to an older or newer kernel.

"Dependent on features from a specific kernel" as in "doesn't work with 2.6.22, works with 2.6.23, doesn't work with 2.6.24", or "dependent on features from a specific kernel" as in "doesn't work with 2.6.22, works with 2.6.23 and later"?

The former means either that they introduced a feature in 2.6.23 and yanked it in 2.6.24, or that the program depends on implementation details of a specific kernel. The first of those might be done less in Windows, but that's a question of whether the OS's developers treat "preserving compatibility" as being more important than "not leaving cruft around". The second of those can show up in applications for any OS if the developer isn't careful.

The latter means "gee, they introduced a new feature in 2.6.23, which my program uses"; that happens in Windows, too - try unconditionally using an API or an API feature introduced in Windows Vista and then see whether your program runs on XP. One trick to handle that, at least in the case of a routine being introduced in a newer version of Windows, is to do a LoadLibrary() on the library containing the API and GetProcAddress() to try to get the address of that routine; if it fails, disable the feature requiring that routine or work around its absence in the code. That same trick can be done on UN*Xes, including Linux; replace LoadLibrary() with dlopen() and GetProcAddress() with dlsym().
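Sketched out in C, the UN*X side of that trick looks something like the following; "libfoo.so.1" and "foo_new_feature" are made-up names here, standing in for whatever library and routine you're probing for:

    /* Sketch of the dlopen()/dlsym() probing trick described above.
     * "libfoo.so.1" and "foo_new_feature" are made-up names used purely
     * for illustration; build with -ldl on older glibc versions. */
    #include <stdio.h>
    #include <dlfcn.h>

    typedef int (*foo_new_feature_fn)(int);

    int main(void)
    {
        void *handle = dlopen("libfoo.so.1", RTLD_LAZY);
        foo_new_feature_fn new_feature = NULL;

        if (handle != NULL)
            new_feature = (foo_new_feature_fn)dlsym(handle, "foo_new_feature");

        if (new_feature != NULL)
            printf("new feature available: result = %d\n", new_feature(42));
        else
            printf("new feature not available; falling back\n");

        if (handle != NULL)
            dlclose(handle);
        return 0;
    }

The Windows version is the same shape, with LoadLibrary() and GetProcAddress() in place of dlopen() and dlsym().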

"Windows does this to some degree but by shippping a runtime its possible to translate the instructions of the application in question to an older or newer kernel." sounds more like changing the system call interface to the kernel and changing the routines that use it to match. That's not restricted to Windows; one goal of the SVR4 shared library mechanism (which is what Linux's shared library mechanism is based on) was to allow that to be done transparently to compiled applications, by having applications dynamically linked with system libraries, so that an application binary gets the appropriate version of the library for the kernel version. OS X's shared library mechanism works the same, and Apple doesn't even support statically linking with its libraries.

Comment Re:More Flexibility? (Score 1) 466

I'm facepalming now 'cause omg-config is most certainly part of the installation procedure for apps. .pc files?

pkg-config is part of the installation process for libfoobar-devel packages. It's not part of the installation process for libfoobar packages; you may need the .pc files for a library if you're developing code that uses it, but you don't need them if you're running prebuilt binaries that use it.

Comment Re:More Flexibility? (Score 1) 466

100% of Windows applications have to go through the kernel to load DLLs

As do 100% of Linux applications, *BSD applications, Solaris applications, HP-UX applications, AIX applications, OS X applications, etc., because accessing files such as shared library files on those OSes involves the kernel.

However, at least as I read the Windows Internals books, the actual loading of DLLs other than ntdll.dll is done in user mode by LdrInitializeThunk.

On most current UN*Xes, the process of launching an executable image, except for 100% statically-linked images, involves the execution of the run-time linker, with the executable image itself handed to the run-time linker as a parameter in some fashion (e.g., being opened as a file, with a file descriptor for it being available to the run-time linker); the run-time linker, running in user mode, loads the shared libraries. (See the PT_INTERP program header element in ELF or the LC_LOAD_DYLINKER load command in Mach-O; those specify the image file to use as the run-time linker.)
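If you're curious which run-time linker a given binary will get, the PT_INTERP entry is easy enough to dig out yourself; here's a bare-bones sketch for 64-bit ELF on Linux (readelf -l shows the same information, and a 100% statically-linked image simply has no PT_INTERP entry at all):

    /* Bare-bones sketch: print an ELF executable's PT_INTERP string, i.e. the
     * run-time linker (e.g. /lib64/ld-linux-x86-64.so.2) that will be handed
     * the image.  64-bit ELF only, minimal error handling. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <elf.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <elf-executable>\n", argv[0]);
            return 1;
        }

        FILE *f = fopen(argv[1], "rb");
        if (f == NULL) {
            perror(argv[1]);
            return 1;
        }

        Elf64_Ehdr ehdr;
        if (fread(&ehdr, sizeof ehdr, 1, f) != 1 ||
            memcmp(ehdr.e_ident, ELFMAG, SELFMAG) != 0) {
            fprintf(stderr, "%s: not an ELF file\n", argv[1]);
            return 1;
        }

        for (int i = 0; i < ehdr.e_phnum; i++) {
            Elf64_Phdr phdr;

            fseek(f, ehdr.e_phoff + (long)i * ehdr.e_phentsize, SEEK_SET);
            if (fread(&phdr, sizeof phdr, 1, f) != 1)
                break;
            if (phdr.p_type == PT_INTERP) {
                char *interp = malloc(phdr.p_filesz + 1);

                fseek(f, phdr.p_offset, SEEK_SET);
                fread(interp, 1, phdr.p_filesz, f);
                interp[phdr.p_filesz] = '\0';
                printf("PT_INTERP: %s\n", interp);
                free(interp);
            }
        }
        fclose(f);
        return 0;
    }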

and so it presents a standardized interface for doing so. Linux does not have this.

It might be easier to use a different mechanism for loading dynamically-linked libraries on Linux (or other UN*Xes) than on Windows, but it still takes work.

Are the Linux apps that don't use the standard Linux mechanism (ld.so) 100% statically-linked images, or what?

Comment Re:The good old days (Score 1) 466

Call me skeptical, but chdir() is a UNIX system call, not a command line program. The command line program is called cd.

(Actually, it's a shell builtin, not a program; it has to be, as a child process can't change the parent process's current working directory.)
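If you want to convince yourself of that, here's a toy C program (not tied to that V6 script in any way) showing that a chdir() in a child process doesn't change the parent's directory:

    /* Toy demonstration of why "cd" has to be a shell builtin: a chdir() in a
     * child process has no effect on the parent's current working directory. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        char buf[4096];

        printf("parent cwd before: %s\n", getcwd(buf, sizeof buf));

        if (fork() == 0) {
            /* Child: change directory and exit; this affects only the child. */
            chdir("/");
            printf("child cwd:         %s\n", getcwd(buf, sizeof buf));
            _exit(0);
        }
        wait(NULL);

        printf("parent cwd after:  %s\n", getcwd(buf, sizeof buf));
        return 0;
    }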

It's called "chdir" in V6 UNIX, which is what that script was from. See the SH(I) man page in section 1 of the V6 manual.

And why the hell are you byte-comparing a.out with /usr/bin/yacc (which supposedly doesn't even exist yet)?

Beats me. Why don't you ask this guy?

Comment Re:I never got "packaging systems" (Score 1) 466

Second is the pretty-good reason: compatibility and correctness. You can definitely have multiple major versions (e.g. the runtime associated with VS2008 and 2010) installed simultaneously, and I think you might be able to have multiple patch versions of the same library installed simultaneously. I think the former is true in Linux too (libfoo.so.1.0.0 vs libfoo.so.2.0.0,

Well, you're not likely to have multiple versions of the C runtime installed, because, in most if not all UN*Xes, the C runtime is part of the equivalent of kernel32.dll (libc.so, libSystem.dylib, or whatever it's called).

But, yes, you can have multiple "major" versions of libraries present. The SVR4 shared library mechanism, upon which the Linux and *BSD shared library mechanisms are based, and the SunOS 4.x shared library mechanism, upon which the SVR4 mechanism is based, give libraries "major" version numbers, which change when the library ABI changes in a binary-incompatible fashion, and "minor" version numbers, which change when the library ABI changes in a way that preserves binary compatibility with older library versions but might add features (routines, flags to routines, etc.) that, if used, won't work with those older versions.

However, if your application uses libfoo version 2, but it's linked with a library that uses libfoo version 1, that's a problem. (Replace "a library" with "libpcap", and replace "libfoo" with "libnl", and you have one of the problems that makes me want to have libpcap on Linux talk directly to netlink sockets without the "help" of libnl, but I digress....)

but the latter isn't so much. It may well be that Program A installs version 1.0.0 and Program B installs version 1.0.1239, where on Linux the latter would likely be packaged to upgrade the former.

If libfoo is done correctly, any program linked with version 1.0.0 should Just Work with version 1.0.1239. Program B should only upgrade to 1.0.1239 if there's a bug in 1.0.0 through 1.0.1238 that breaks Program B so that it requires 1.0.1239 or later, and Program A should just require 1.x.x and not install 1.0.0 if 1.0.1239 is already installed.

If you take the Linux approach, then programs which rely on the old behavior of the buggy code will break. This is sometimes good (e.g. bad security-related fixes), but is often not. Or it doesn't have to be a bug fix, it could just be some behavior change within the specification. By keeping multiple versions around, the Windows approach keeps the individual programs happier.

How you weight these various advantages and disadvantages is up to you. I'm not really trying to argue that the Windows approach is better, just explain why it is as it is and give a fair description of what goes on.

Yes, that's the question of the extent to which the real "specification" upon which clients depend is the official specification or the full behavior of the implementation, and the extent to which you're willing to tell developers of code that fits the latter but not the former to go pound sand if you "break" their code. Sometimes you end up not telling them to go pound sand, e.g. the "7090 compatibility mode" in the IBM 7094 (in which mode the index number field in instructions is interpreted not as an index register number but as a bitmask covering 3 of the index registers, with all the index registers specified by the bitmask ORed together to generate the index), or the hacks in various OS X libraries in which the library detects that program XXX is using it and falls back on the old buggy behavior (I think Raymond Chen's "The Old New Thing" has examples of similar hacks on Windows).

Comment Re:What am I missing? (Score 2) 255

See, I think they should fall up. Antiparticles are predicted by the negative energy solutions of the Dirac equation.

But they still have positive energy. (Think of them as "holes" in a sea of negative-energy electrons; kick an electron out of that sea and you get a positive-energy negatively-charged electron and a positive-energy positively-charged "hole", i.e. a positron.)
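To put it in symbols: the free-particle solutions of the Dirac equation satisfy the standard relativistic energy-momentum relation, which admits both signs,

    E = \pm\sqrt{p^2 c^2 + m^2 c^4}

and it's the E < 0 solutions that the hole picture reinterprets: pulling an electron of energy -|E| and charge -e out of the filled sea raises the total energy by +|E| and the total charge by +e, which is why the hole (the positron) carries positive energy and positive charge.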

Comment Re:I must be stupid (Score 4, Informative) 255

However, this is not how we have traditionally defined anti-matter; the original definition was actually due to the fact that the universe has significantly less mass than it should, and "anti-matter" was hypothesized as an explanation.

Actually, the original modern definition of anti-matter was "Dirac's relativistic equation for the wave function of the electron had negative energy states as well as positive energy states, which was a bit weird, so it was proposed that all the negative energy states were filled, and if you knocked an electron out of one of the low-energy states, a "hole" would be left behind, and that hole behaved like an electron, except that it has a positive charge". It was later seen in the real world (particles moved in a magnetic field as if they had the mass of an electron and a +1 electrical charge). See, for example, the Wikipedia article about the positron.

Comment Re:Oh, good (Score 1) 219

Correlation does equal causation; both statements are true. Just missing some conditions there. Say you repeatedly hit your head with a hammer; it would be right to correlate that with the pain in your head. But if you were walking, saw a shooting star, and felt a pain in your left knee, no, that does not mean the shooting star caused it.

"Repeatedly" is the key word there. A one-time incident with a shooting star and a pain in your left knee doesn't give much of a "correlation"; you need a few more data points for that.

And a more precise version of what should be meant by "correlation is not causation" is "if A and B are correlated, that, by itself, is insufficient to suggest that A causes B, given that the same correlation would show up if B caused A or if C caused both A and B". The "conditions" in your first example are what let you conclude that "A causes B" is the most likely case.

(If somebody were able to make their headache go away by hitting themselves on the head with a hammer, that might be a case of "B causes A" there, but that would be a case of the pain coming first and the hitting-yourself-on-the-head coming later; if somebody were to have a neurological disorder that 1) caused pain in the head and 2) caused an impulse to hit himself or herself on the head with a hammer, that would be a case of "C causes A and B", but, in that case, the pain would probably happen before he or she hit himself or herself on the head.)
