Comment Re:No It really hasn't (Score 1) 1215

Linux has too much choice in ways it shouldn't: do I really need 6 text editors as part of the base OS?

Also, it makes you move a little slower when distribution X goes 2012, 2012.5, 2013 without a seamless update system.

All come with gedit only

All what come with gedit only? The KDE-based distributions probably don't come with gedit only.

Comment Re:Linux has too many distributions (Score 1) 1215

Linux has too much choice in ways it shouldn't: do I really need 6 text editors as part of the base OS?

You might not, but the user base as a whole might want it. At least some of the major desktop environments have their own text editors (Kate for KDE, gedit for GNOME), and may be set up so that's what you have by default. Having extra ones doesn't cost much in terms of disk space. (If it costs a lot in terms of brain stress at having to deal with a choice, Linux probably really isn't for you - but, then, given both Notepad and WordPad, Windows might not be for you, either.)

Comment Re:mac os to much hardware lockdown with high pric (Score 1) 1215

Mac OS: too much hardware lockdown, with high prices and limited choice.

Why is there no $1000-$1500 desktop that has desktop video cards, RAM, and CPUs? At least 2 HDD bays?

Why do AMD CPUs need a custom kernel? Linux and Windows don't do that.

The answer to the third question is "because there's hardware lockdown", i.e. Apple has chosen not to offer OS X for non-Apple machines and has chosen not to use AMD CPUs in its machines and, therefore, as it didn't need to support AMD CPUs in XNU, has chosen not to bother supporting them.

The answer to the first and second questions is "because, for whatever reason, Apple isn't interested in offering them" (combined with "because there's hardware lockdown", so you can't have machines like that from anybody else running OS X unless you make the machine a Hackintosh).

Comment Re:Why aren't there more contributors to this proj (Score 2) 252

It has no chance of dethroning Windows. Zero. Zip. Nada.

Look, no one will ever be as good at being Microsoft as Microsoft is. ReactOS may eventually be 99 44/100% Windows compatible. It may look like Windows, feel like Windows, and act like Windows almost all the time--but it won't be Windows. And sooner or later, anyone running it will run into some instance where Windows does this but ReactOS does that. Now, when this happens (when, not if), developers will say, "That's interesting, we should fix that." But regular users will think, "Serves me right for trying to use this cheap knockoff. Guess I'll just get the real thing." And if anyone asks them about their experience with ReactOS, that's pretty much what they'll say.

That's exactly why Linux failed to replace UNIX. A knockoff can never succeed.

A knockoff competing with a family of OSes could succeed if those OSes aren't 100% compatible with each other at the source level, run on machines that are typically more expensive than the primary class of machines on which the knockoff runs, don't even share instruction sets with each other (so binary compatibility is out of the question), and run software that's largely either open-source or written in-house, so that it can be compiled and run on the knockoff.

A knockoff that's competing with a single OS that has a ~90% market share and that has a huge collection of binary-only packaged applications that might depend (explicitly or implicitly) not only on documented behavior but also on undocumented behavior is a different story.

Comment Re:Why aren't there more contributors to this proj (Score 3, Informative) 252

If there was a compatibility layer to run OSX applications on Linux, that might actually be a viable option. OSX has most of the big things people want: MS Office, Adobe Photoshop and friends, AutoCAD, etc. Conceivably, such a compatibility layer could be easier to write, debug, and maintain than WINE, since there is a lot less legacy baggage (and the underlying architecture is much closer to what Linux expects). But I am not aware of any such project so far.

Well, there's the Darling project. I get the impression it's very much a work in progress, however.

Comment Re:Ethernet is only 33 years old (Score 1) 159

Did y'all know that the original spec for Ethernet was to be a wireless network???

One of the earliest networks allowing collisions and using collision detection was the ALOHA network, and that was wireless, but that also wasn't Ethernet. Are you thinking of ALOHAnet?

I can't find a copy of Metcalfe's "Alto Ethernet" memo, but this Wired article has a diagram from the memo that does include "radio ether" but also includes "cable ether" and "telephone ether".

Comment Re:I never got "packaging systems" (Score 1) 466

Why is it SO hard for people who use Linux to understand that there are multiple runtime libraries because Windows has been around so long there are multiple versions of the shell environment? To ensure that the program runs correctly on the target machine, the runtime is included. This in turn relates to the kernel, which Linux does not handle gracefully at all. I don't know how many times I've wanted to install an app on Linux but it is dependent on features from a specific kernel. Windows does this to some degree, but by shipping a runtime it's possible to translate the instructions of the application in question to an older or newer kernel.

"Dependent on features from a specific kernel" as in "doesn't work with 2.6.22, works with 2.6.23, doesn't work with 2.6.24", or "dependent on features from a specific kernel" as in "doesn't work with 2.6.22, works with 2.6.23 and later"?

The former either means they introduced a feature in 2.6.23 and yanked it in 2.6.24 or that it's dependent on implementation details from a specific kernel. The first of those might be done less in Windows, but that's a question of whether the OS's developers treat "preserving compatibility" as being more important than "not leaving cruft around". The second of those can show up in applications for any OS if the developer isn't careful.

The latter means "gee, they introduced a new feature in 2.6.23, which my program uses"; that happens in Windows, too - try unconditionally using an API or an API feature introduced in Windows Vista and then see whether your program runs on XP. One trick to handle that, at least in the case of a routine being introduced in a newer version of Windows, is to do a LoadLibrary() on the library containing the API and GetProcAddress() to try to get the address of that routine; if it fails, disable the feature requiring that routine or work around its absence in the code. That same trick can be done on UN*Xes, including Linux; replace LoadLibrary() with dlopen() and GetProcAddress() with dlsym().
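
A minimal sketch of that trick on the UN*X side (the library name libfrob.so.1 and the symbol frob_fancy_feature are made up for illustration; on some platforms you'd link with -ldl):

    #include <dlfcn.h>
    #include <stdio.h>

    typedef int (*fancy_fn)(int);

    int main(void)
    {
        /* LoadLibrary() equivalent: returns NULL if the library isn't there */
        void *lib = dlopen("libfrob.so.1", RTLD_NOW);
        fancy_fn fancy = NULL;

        /* GetProcAddress() equivalent: returns NULL if the symbol isn't there */
        if (lib != NULL)
            fancy = (fancy_fn)dlsym(lib, "frob_fancy_feature");

        if (fancy != NULL)
            printf("fancy feature available: %d\n", fancy(42));
        else
            printf("fancy feature absent; disable it or work around it\n");

        if (lib != NULL)
            dlclose(lib);
        return 0;
    }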

"Windows does this to some degree but by shippping a runtime its possible to translate the instructions of the application in question to an older or newer kernel." sounds more like changing the system call interface to the kernel and changing the routines that use it to match. That's not restricted to Windows; one goal of the SVR4 shared library mechanism (which is what Linux's shared library mechanism is based on) was to allow that to be done transparently to compiled applications, by having applications dynamically linked with system libraries, so that an application binary gets the appropriate version of the library for the kernel version. OS X's shared library mechanism works the same, and Apple doesn't even support statically linking with its libraries.

Comment Re:More Flexibility? (Score 1) 466

I'm facepalming now 'cause pkg-config is most certainly part of the installation procedure for apps. .pc files?

pkg-config is part of the installation process for libfoobar-devel packages. It's not part of the installation process for libfoobar packages; you may need the .pc files for a library if you're developing code that uses it, but you don't need them if you're running prebuilt binaries that use it.
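
For illustration, a .pc file is just a bit of build-time metadata; a hypothetical foobar.pc (all names invented) might look like the sketch below, and nothing consults it at run time - only something like "pkg-config --cflags --libs foobar" during a build does:

    # /usr/lib/pkgconfig/foobar.pc - hypothetical example
    prefix=/usr
    libdir=${prefix}/lib
    includedir=${prefix}/include

    Name: foobar
    Description: Example library
    Version: 1.2.3
    Libs: -L${libdir} -lfoobar
    Cflags: -I${includedir}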

Comment Re:More Flexibility? (Score 1) 466

100% of Windows applications have to go through the kernel to load DLLs

As do 100% of Linux applications, *BSD applications, Solaris applications, HP-UX applications, AIX applications, OS X applications, etc., because accessing files such as shared library files on those OSes involves the kernel.

However, at least as I read the Windows Internals books, the actual loading of DLLs other than ntdll.dll is done in user mode by LdrInitializeThunk.

On most current UN*Xes, the process of launching an executable image, except for 100% statically-linked images, involves the execution of the run-time linker, with the executable image itself handed to the run-time linker as a parameter in some fashion (e.g., being opened as a file, with a file descriptor for it being available to the run-time linker); the run-time linker, running in user mode, loads the shared libraries. (See the PT_INTERP program header element in ELF or the LC_LOAD_DYLINKER load command in Mach-O; those specify the image file to use as the run-time linker.)
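
As a rough sketch of where that's recorded in ELF (assuming a 64-bit ELF file and skipping most error handling), this digs the PT_INTERP string out of an executable; on a typical x86-64 Linux box it prints /lib64/ld-linux-x86-64.so.2:

    #include <elf.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <elf-executable>\n", argv[0]);
            return 1;
        }
        FILE *f = fopen(argv[1], "rb");
        if (f == NULL) {
            perror(argv[1]);
            return 1;
        }
        Elf64_Ehdr ehdr;
        if (fread(&ehdr, sizeof ehdr, 1, f) != 1) {
            fclose(f);
            return 1;
        }
        /* walk the program header table looking for PT_INTERP */
        for (int i = 0; i < ehdr.e_phnum; i++) {
            Elf64_Phdr phdr;
            fseek(f, (long)(ehdr.e_phoff + (Elf64_Off)i * ehdr.e_phentsize),
                  SEEK_SET);
            if (fread(&phdr, sizeof phdr, 1, f) != 1)
                break;
            if (phdr.p_type == PT_INTERP) {
                char path[256];
                size_t n = phdr.p_filesz < sizeof path ?
                               phdr.p_filesz : sizeof path - 1;
                fseek(f, (long)phdr.p_offset, SEEK_SET);
                if (fread(path, 1, n, f) == n) {
                    path[n] = '\0';  /* p_filesz includes the NUL, but be safe */
                    printf("run-time linker: %s\n", path);
                }
                fclose(f);
                return 0;
            }
        }
        fprintf(stderr, "no PT_INTERP segment (statically linked?)\n");
        fclose(f);
        return 1;
    }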

and so it presents a standardized interface for doing so. Linux does not have this.

It might be easier to use a different mechanism for loading dynamically-linked libraries on Linux (or other UN*Xes) than on Windows, but it still takes work.

Are the Linux apps that don't use the standard Linux mechanism (ld.so) 100% statically-linked images, or what?

Comment Re:The good old days (Score 1) 466

Call me skeptical, but chdir() is a UNIX system call, not a command line program. The command line program is called cd.

(Actually, it's a shell builtin, not a program; it has to be, as a child process can't change the parent process's current working directory.)
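
A minimal sketch of why it has to be a builtin; the child's chdir() has no effect on the parent:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        char buf[4096];

        if (fork() == 0) {
            /* child: change directory, as an external "cd" would */
            chdir("/");
            _exit(0);
        }
        wait(NULL);    /* let the child finish first */

        /* parent: its working directory is unchanged */
        if (getcwd(buf, sizeof buf) != NULL)
            printf("parent cwd: %s\n", buf);
        return 0;
    }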

It's called "chdir" in V6 UNIX, which is what that script was from. See the SH(I) man page in section 1 of the V6 manual.

And why the hell are you byte-comparing a.out with /usr/bin/yacc (which supposedly doesn't even exist yet)?

Beats me. Why don't you ask this guy?

Comment Re:I never got "packaging systems" (Score 1) 466

Second is the pretty-good reason: compatibility and correctness. You can definitely have multiple major versions (e.g. the runtime associated with VS2008 and 2010) installed simultaneously, and I think you might be able to have multiple patch versions of the same library installed simultaneously. I think the former is true in Linux too (libfoo.so.1.0.0 vs libfoo.so.2.0.0,

Well, you're not likely to have multiple versions of the C runtime installed, because, in most if not all UN*Xes, the C runtime is part of the equivalent of kernel32.dll (libc.so, libSystem.dylib, or whatever it's called).

But, yes, you can have multiple "major" versions of libraries present. The SVR4 shared library mechanism, on which the Linux and *BSD shared library mechanisms are based, and the SunOS 4.x shared library mechanism, on which the SVR4 mechanism is based, give libraries "major" version numbers and "minor" version numbers. The major number changes when the library ABI changes in a binary-incompatible fashion; the minor number changes when the library ABI changes in a way that preserves binary compatibility with older library versions but might add features (routines, flags to routines, etc.) that, if used, won't work with those older versions.
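
For a concrete picture, here's roughly how that looks on disk under the SVR4-style scheme, with a hypothetical libfoo (the "->" entries are symlinks):

    libfoo.so.1.0.5                    real file: major version 1
    libfoo.so.1 -> libfoo.so.1.0.5     what binaries built against v1 load at run time
    libfoo.so.2.0.0                    real file: binary-incompatible major version 2
    libfoo.so.2 -> libfoo.so.2.0.0     what binaries built against v2 load at run time
    libfoo.so   -> libfoo.so.2.0.0     development symlink; only "ld -lfoo" at build time uses it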

However, if your application uses libfoo version 2, but it's linked with a library that uses libfoo version 1, that's a problem. (Replace "a library" with "libpcap", and replace "libfoo" with "libnl", and you have one of the problems that makes me want to have libpcap on Linux talk directly to netlink sockets without the "help" of libnl, but I digress....)

but the latter isn't so much. It may well be that Program A installs version 1.0.0 and Program B installs version 1.0.1239, where on Linux the latter would likely be packaged to upgrade the former.

If libfoo is done correctly, any program linked with version 1.0.0 should Just Work with version 1.0.1239. Program B should only require 1.0.1239 or later if there's a bug in 1.0.0 through 1.0.1238 that breaks it, and Program A should just require 1.x.x and not install 1.0.0 if 1.0.1239 is already installed.

If you take the Linux approach, then programs which rely on the old behavior of the buggy code will break. This is sometimes good (e.g. bad security-related fixes), but is often not. Or it doesn't have to be a bug fix, it could just be some behavior change within the specification. By keeping multiple versions around, the Windows approach keeps the individual programs happier.

How you weight these various advantages and disadvantages is up to you. I'm not really trying to argue that the Windows approach is better, just explain why it is as it is and give a fair description of what goes on.

Yes, that's the question of the extent to which the real "specification" on which clients depend is the official specification or the full behavior of the implementation, and the extent to which you're willing to tell developers of code that fits the latter but not the former to go pound sand if you "break" their code. Sometimes you end up not telling them to go pound sand. One example is the "7090 compatibility mode" in the IBM 7094, in which mode the index number field in instructions is interpreted not as an index register number but as a bitmask with bits corresponding to 3 of the index registers, with all the index registers specified by the bitmask being ORed together to generate the index. Another is the hacks in various OS X libraries in which the library detects that program XXX is using it and falls back on the old buggy behavior (I think Raymond Chen's "The Old New Thing" has examples of similar hacks on Windows).
