


What's Wrong with Unix?
aaron240 asks: "When Google published the GLAT (Google Labs Aptitude Test) the Unix question was intriguing. They asked an open-ended question about what is wrong with Unix and how you might fix it. Rob Pike touched on the question in his Slashdot interview from October. What insightful answers did the rest of Slashdot give when they applied to work at Google? To repeat the actual question, 'What's broken with Unix? How would you fix it?'"
Several frustrating points (Score:5, Insightful)
In my opinion, here are some headaches that have plagued a wary UNIX engineer or two:
IEEE and POSIX, X/Open, etc. provide a basis for standardizing UNIX interfaces, but adherence tends to be spotty
Difficult to implement a microkernel architecture
XPG3 aside, a de facto "common API" has never really been achieved
In many cases, code scrutiny is difficult or impossible
Progress and innovation tends to occur within the context of acquisitions (i.e. UnixWare)
The COFF symbolic system is terrible (OK, I know it's deprecated, but still...)
PIT initialization (time management)
Kernel tuning (anyone fiddled with the /etc/conf/cf.d subdir on OS5?)
These are just a few things, in my experience. That said, UNIX has had some great days.
Re:Several frustrating points (Score:3, Interesting)
Re:Several frustrating points (Score:4, Informative)
I dunno, maybe you're just trolling (and a number of replies that follow would qualify you as a good troll), but I'd say that installing FreeBSD is not any more difficult than, say, Slackware or Debian. It is more challenging than your Mandrake or RH install, I think (have not had a chance in the last 3-4 years to try either).
That said, with enough preparation and a chapter from the Handbook [freebsd.org] printed out and within reach, installing stock FreeBSD should not be a problem at all.
The question you should, however, ask yourself is: Why do I want to try FreeBSD? If it is just because you've heard it's cool -- you may be much better off trying FreeSBIE (http://www.freesbie.org/ [freesbie.org]) instead. It's a live FreeBSD system, sort of like Knoppix.
If you want to give FreeBSD a spin because you want to understand UNIX-land better or have needs for the stability of the platform, then rough starts should not be anything to discourage you.
In either case -- all the best and have fun!
Re:Several frustrating points (Score:4, Insightful)
Re:Several frustrating points (Score:5, Insightful)
Re:Several frustrating points (Score:5, Insightful)
I strongly agree. Snide comments such as "BSD isn't for you," especially when the person trying to install it seems interested in learning about it, aren't going to help the Unix installed base grow. Such trolls hurt the *nix community in general because they turn away prospective users.
If anything, we Unix users should be trying to convert as many people as we can to our OS, not turning them off and turning them away.
Re:Several frustrating points (Score:5, Insightful)
That's complete nonsense. Installing and running Unix hardly counts as one of the more difficult intellectual tasks. It's hard, sure, if you're used to something different, but the description 'windows people' includes novelists, artists and nuclear scientists who just don't give a damn about the stupid OS their computer runs.
Would you like it if an artist made fun of your pens and called you and your friends BIC people? Well, that's how stupid this sounds.
Re:Several frustrating points (Score:5, Insightful)
It's an Operating System. Some people enjoy using it. I do; I love the things I can do with my unix boxes so easily that are so difficult on other systems (Windows).
You can use it if you want to. There's so many great people working on making it better, easier, etc, that in the end it MAY very well be just as easy to handle as Windows. It's not there yet. What's the rush? So you can install it easier before you know the system?
You're inexperienced in the Internet world if you think that the Linux userbase is a bunch of "inconsiderate pricks." You should see some of the Windows help forums, or the help forums of... anything else, really. There are a lot of pricks out there; you can't avoid that. I have not found this to be any greater with Linuxish forums, mailing lists, etc. In fact, I find that Linux help groups are a lot BETTER than most; there are usually quite a few people that are really knowledgeable and willing to help.
Your experience with being called a n00b could be due to the fact that you've been asking the same tired old questions without reading any of the readily available information online or using the search function on forums. There are a lot of people that WANT to help you - even though you're a complete stranger - but these same people don't want to trudge through the same questions they've already answered a hundred times over.
If you just want to "USE THE COMPUTER" then just USE WHAT YOU KNOW HOW TO USE. Nobody is forcing you to use it. You get to justify the reasons all by yourself, and if you can't justify the learning curve to the benefits, then why do it?
Really, it doesn't matter. I'm not trying to get everyone to use Linux. I'm not telling my sister to install it. Neither is anyone else, really. You might hear from someone how great they think their Linux system is, and even say "you should give it a shot!" but it doesn't matter if you use it or not. Moving forward, when all the pieces fall into place and your Linux distribution of choice is at the right level of comfort for you, we won't even have this discussion.
So relax; let the people developing this great system do their thing. When the state of the system is right for you, you'll know it. It'll happen, and until then do yourself a favor and don't worry about it.
Quote from you: "Not to mention that this is the whole reason why linux will never be a mainstream desktop operating system..."
You really should add "today." at the end of that. Tomorrow, who knows?
Getting help from Linux gurus (Score:5, Insightful)
You don't ask a question directly; rather, you write something like "Linux sucks because it can't do X but Windows can."
To use your USB mouse example, you probably went on a board or IRC somewhere and wrote: Note that you asked a reasonable question and thanked people in advance for their help.
This is a recipe for disaster.
The board gurus will pounce on you like a
Instead, you should have written something like: You will have Linux gurus crawling out of the woodwork to show you that, yes, Linux does support a USB mouse, and the reason you couldn't get it to work was probably one of the following: X, Y, or Z, and here is how to work around or fix the problem, and here is where you can find additional information, and here is where you can get drivers or other needed software, or a more user-friendly front end, etc., etc.
Note that their attitude will be as snotty as (or snottier than) with the nice method of asking, but you will get the information that you require.
Note to mods: The above may appear to be flamebait or an attempt at humor, but this method actually works.
Try it!
Re:That's not just unix. :P (Score:4, Interesting)
Wow, I find that interesting. The only job I've worked at where Windows and Linux techs worked in the same office, I found the Windows techs to run the greatest gamut of character: from the most haughty and exclusive to the most inviting and inclusive. That's not to mention how much the Linux techs taught me on the job as well.
What I found among the Linux techs was a greater investment of time, learning, and adapting to a well-designed system that requires more out of you. Heck, you can't get by as a proficient tech without being able to at least type 35+ words/minute. But that is just an entry-level skill to make the rest of your learning easier.
Because of the learning curve, and the trouble-shooting skills the tech position required, I can see why some people in Linux/Unix would take themselves more seriously than they should; thereby deserving of the titles previously bestowed upon them. Too bad for them, and for you, that they do not instead convey the satisfaction and enjoyment that comes from learning something that has such a steep learning curve and currently has an underdog image (which really has nothing to do with being satisfying as far as I'm concerned; I just really enjoy the OS itself).
Hmmm, oh well. I guess when it's all said and done, the satisfaction I get from knowing Linux, not even at a mastery level, makes me not really care what others think of me. But I don't want to put them off either by being too self-absorbed to give them the same sort of help I have received in the past from others in the Linux community.
Re:That's not just unix. :P (Score:5, Funny)
Something like that... (Score:4, Interesting)
OS X takes the bullshit out of getting Work Done, and that's nice - but in the process, the platform has transitioned from the domain of artists and eccentrics to the khaki-clad GAP-shopping technorati richass motherfuckers, who have no use for any of the reasons the platform has continued to exist over the past 20 years.
My OS-that-runs-Art doesn't exist anymore. Apple's replaced it with an OS that does everything but Halflife... and to get that, they had to round off some of the edges.
Re:IMHO, none of that matters to the typical end u (Score:5, Insightful)
Actually there are a number of examples which put the lie to your charge, apart from the obvious case where a Linux admin doesn't even install a GUI (Linux gives you that flexibility). A number of commercial vendors provide programs which run on any modern Linux distro with X, e.g. Netscape - and in practical terms, any modern Linux distro ships with both Qt and GTK apps. So any app built on native Xlib, Qt or GTK will run on any modern Linux system.
Linux has a pretty poor cache and swap system, combined with zero user-level control over cache and swap. As a result, over time, the OS runs slower, and s l o w e r and s... l.... o..... w...... e....... r........ until you restart, and then it's back to being fairly snappy until it fills up memory again with things it shouldn't be caching.
LOL, mod parent up funny - Linux memory management is actually pretty decent. I don't buy into the hype about running slower and slower and finally needing a reboot; that sounds like too much Microsoft thinking. Our mail servers, which are currently on a 700+ day uptime, are processing messages just as fast as they were when first booted.
Sorry, your story just doesn't hold up.
Re:You guys should maybe step back... (Score:4, Funny)
One of them said he'd had _enormous_ trouble with the MCSE tests, until he figured out that the "correct" answers to the questions weren't the ones that would actually solve the problem; they were the answers you'd been taught on the MCSE course.
KSpaceDuel (Score:5, Funny)
I would suggest to the KSpaceDuel team that they meet with the KAsteroids team to discuss usability issues. There should also be a cap on how fast you can go, since it is possible to speed up so fast that your spacecraft appears to be moving very slowly (sort of like a tire in motion).
Re:Several frustrating points (Score:5, Insightful)
Re:Several frustrating points (Score:5, Insightful)
I've been happily using Linux on my home PC for about 4 years, but the filesystem layout has always been an annoyance.
Without a package manager, it's practically impossible to remove a program; even with a package manager, you can't even determine how big a given package is! (If you know how to do this with Portage, I'd like to know.) A better filesystem layout (perhaps the way MacOSX, GoboLinux or RoX does it) would make package managers obsolete.
A lack of a standard configuration layout is another thing: why should people have to learn hundreds of config file formats? Yes, comments help, but it'd be nice if they weren't needed. Why not come up with one standard text-based config format/filesystem layout and get everyone to use it? This would also save programming time, as you could create a library (with a name like libconfig or something similar) and not have to worry about parsing configuration settings. Windows Registry Hell can be avoided by using a text-based format (e.g. like Java properties files or XML).
A standard configuration layout (with suitable metadata) would also go a long way to allowing a standard graphical system configuration utility (Whatever happened to linuxconf? I loved that app!), making Unix/Linux that much more accessible to ordinary people.
Replies, flames, etc.
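To make the libconfig idea concrete, here is a minimal sketch of the sort of thing a unified text-based layout could enable; the /etc/conf.d path and the key=value format are assumptions for illustration, not an existing standard:

    # hypothetical layout: one plain key=value file per app under /etc/conf.d
    # minimal reader; assumes keys contain no '=' or regex metacharacters
    conf_get() {
        sed -n "s/^$2=//p" "/etc/conf.d/$1" | head -n 1
    }
    conf_get sshd Port      # would print e.g. 22

One shared reader like this (or a proper C library wrapping it) is the whole point: every app parses the same way, and a graphical config tool only has to understand one format.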
Re:Several frustrating points (Score:5, Informative)
even with a package manager, you can't even determine how big a given package is! (if you know how to with Portage, I'd like to know)
equery size package
equery is part of gentoolkit
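For comparison, other package managers can answer the same size question; these are the standard queries (the package name foo is a placeholder):

    equery size foo                                             # Gentoo, via gentoolkit as noted above
    dpkg-query -W -f='${Package} ${Installed-Size} KiB\n' foo   # Debian/Ubuntu
    rpm -q --queryformat '%{NAME} %{SIZE} bytes\n' foo          # RPM-based distros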
Re:Several frustrating points (Score:5, Insightful)
1. ReiserFS etc. are the results of 30 years of research that, well, hadn't happened 30 years ago. The i-node/u-node business was the best there was. Then.
2. Multics had general, configurable, role-based, magic ACLs; UNIX lost them on purpose because they weren't well suited to a big games system and word-processor, which is what UNIX was meant for originally.
3. When I was a kid we hardly HAD processes, much less IPC. Having named pipes was a helluvan innovation.
4. That's not the operating system, that's book-keeping.
5.
If you were to go back to System 3 UNIX, you'd have most everything you're asking for here. It wouldn't be as powerful, but it'd be uniform.
Re:Several frustrating points (Score:5, Insightful)
Re:Several frustrating points (Score:4, Insightful)
No decent scripting language? In Unix?
In UNIX, sure. Show me the default scripting language in UNIX v6. Bourne Shell is the closest thing you get.
Yes. It literally means "et cetera". It is intended to hold all the junk that didn't fit anywhere else. It was a sloppy solution; instead of finding a place for all those scripts, binaries and configuration files, they all get dumped in /etc.
Oh and you're the only one talking about the Window registry.
How is message passing IPC better than sockets or shared memory or named pipes?
1) The sending and receiving processes don't need to know about each other beforehand. 2) You can easily broadcast events to all listeners. 3) Much easier to send arbitrary data. 4) Much easier to manage; no need to mess with socket APIs. 5) Much safer; no need to share memory between processes.
ACLs are coming, but I believe that POSIX permissions make privilege management very simple, very straightforward, and very effective. ACLs may provide finer-grained permissions, but nothing that cannot be done via groups and permissions.
You can believe what you like about POSIX permissions, but those of us who have to deal with big systems know that they suck. They suck big, coarse-grained, poorly-thought-out rocks through straws. They are very simple, very straightforward, but that makes them useless for proper security because they're too simple. If you think that ACLs have no advantage over POSIX permissions, you're wrong a second time on this.
SUID is still a horrible solution, and come to that so is the all-or-nothing attitude of the almighty UID 0. ACLs solve all of that.
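For anyone who hasn't seen them, this is roughly what POSIX ACLs look like on a Linux filesystem mounted with ACL support (the file and principal names are made up):

    setfacl -m u:alice:rw report.txt    # grant one extra user read/write
    setfacl -m g:audit:r report.txt     # and one extra group read-only
    getfacl report.txt                  # inspect the resulting ACL

sudo(8) chips away at the all-or-nothing UID 0 problem from the other side, but it's a workaround rather than a model.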
Re:Several frustrating points (Score:5, Informative)
> of Linux in the business community.
This is not at all insightful. It is uninformed at best. POSIX ACLs exist on ext2/3, XFS, ReiserFS and JFS. These ACLs are also completely supported by Samba (and have been for many years).
-Mark
Here's a start: (Score:5, Interesting)
Yes, the link is hosted on MS servers, but before you ignore it for that, at least notice that the foreword is by Dennis Ritchie and it was contributed to primarily by Unix geeks. It's about 10 years old, but large portions of it are still relevant today.
Re:Here's a start: (Score:5, Insightful)
I think most of us on the Unix Haters list were Lisp machine or VMS hackers who were pretty upset that a piece of utter crap was winning the O/S standards wars at the time.
The foreword by Dennis is actually an anti-foreword, more of a backward. At the time he was working on Plan 9, which takes all the best ideas from UNIX and junks them, leaving only the unrefined crud that is best ignored.
The book is somewhat uneven in its criticisms; I don't think the gripes about X Windows hit the mark as well as the chapters explaining the file system lossage.
Ultimately the problem with Unix is that it is built the way that cars used to be built before Henry Ford: it's a computer O/S for folk who like to spend their time tinkering with their system and like endless opportunities for low-grade intellectual stimulation, because that's an end in itself for them.
Unix still has the same major architectural deficiencies. The interprocess communication is not up to much, the concurrency model is weak, the user interface is erratic and there is no consistency. Documentation is a complete joke.
Re:Here's a start: (Score:5, Insightful)
Re:Here's a start: (Score:5, Insightful)
I prefer MSDN [microsoft.com]. Call me when Unix has something that even approaches the ease of use and the amount of readable samples, explanations etc. of key APIs.
And no, the System V paper manuals don't count.
Amen! (Score:5, Interesting)
That's probably the single biggest problem I see with *nix machines. Lazy filesystems have always reminded me of the experimental planes developed by the cold-war military to push the world speed record. Planes which would basically self-destruct if they, god forbid, hit a pothole while taxiing out of the hangar. RAID is obviously not a solution, and I find that backups - while essential for mission-critical applications - should not be used as an excuse for making a file system that is as brittle as this.
As a broader comment, I just find that UNIX is a brittle OS. Before every zealot jumps on this statement I should clear up what I mean: the OS components are extremely lean, they do exactly what they're meant to do, but there's absolutely no inherent 'immune system' in the OS. su can go ahead and unlink the root node, a power failure and the file system goes to hell, and there isn't any cohesive way to manage machine state. Every daemon runs on its own little planet, unaware of everything else.
The article the other day on /. about Sun's attempts at self-healing software actually addresses parts of this. And other really cool apps like Tripwire address other points too. But in general, the OS itself is completely stripped of an immune system.
When Microsoft first introduced the Windows File Protection service, I was really pissed off that they did something which should have been done via proper security measures (which common users were short-circuiting by running as admin). But the more I think about it, the more I realize that it's not a bad idea after all: proper code signing, system-level integrity checks, basically a path towards actual 'self-healing systems'.
In general though, everyone has a long way to go still...
Re:Several frustrating points (Score:3, Interesting)
From reading stuff and watching discussions, what I got is that the problem with microkernels is that they're hard to implement properly and still have fairly bad performance. In fact, I hit those same problems when trying to code an extremely modular application that I tried to write as an experiment.
Screen is too black... (Score:5, Funny)
Re:Screen is too black... (Score:4, Funny)
OS X (Score:5, Insightful)
Problems that remain are being able to create one seamless environment with shared memory and such, but the rest of the *NIX world is still having those problems as well.
You can argue about the specifics and details of many things, but in terms of a UNIX workstation, OS X pretty much has it all for our needs.
Re:OS X (Score:5, Interesting)
That, coupled with the ability to stay connected to the rest of the business world via MS Office for Mac and the Adobe tools, along with fine open-source apps such as Blender and Apple-only software like Final Cut Pro, has been great.
What has happened to Unix is that Apple has developed the better *nix desktop system; that, coupled with the new G5s, has been the final nail in SGI's coffin and put the hurt on Sun. Back in the days at McDonnell Douglas (now Boeing), much of the engineering development was done on extremely expensive Sun workstations that could easily run $20k apiece. Today, a lot of development and code is being written on $3000 - $4000 PowerMac G5s.
While Apple remains expensive for many consumer users, in engineering and scientific fields, the PowerMacs with OSX are extremely inexpensive. Many of my friends in scientific fields have flocked to Macs with OS X in the past three years.
Re:OS X (Score:5, Funny)
Re:OS X (Score:5, Insightful)
Re:OS X (Score:3, Interesting)
Hopefully they will have a decent port/package system for Tiger, hopefully not every update will require a reboot, hopefully updates will not require agreeing to EULAs, hopefully their GUI helpers will not clobber your carefully crafted conf files.
I keep hoping anyway. Till then I have chosen to go back to FreeBSD for my server needs. The Xserves are now for Java
Re:OS X (Score:3, Interesting)
Exploding? Do you have a citation somewhere?
Remember a huge percentage increase off a small installed base is still a small installed base. I.e., if you start with 1 computer, a 10000% increase is adding 100 machines.
Re:OS X (Score:3, Interesting)
There is nothing that I talked about in my post that did not take serving up web pages into account. In fact, standard OS X is robust enough to withstand a Slashdotting on even low-end hardware. Witness a little old G3 iMac hosting a vision education resource [utah.edu] we have here. This little iMac is running a standard OS X license (not Server) and hosts upwards of 45,000 hits per day from all over the world. Not huge traffic, but pretty impressive for a desktop OS and a 400 MHz G3.
And Apple is Open, what's the problem. (Score:5, Insightful)
The only stuff they don't give you is the source code to Aqua and their Aqua userland apps, which makes sense, because giving that stuff away would be business suicide.
When Apple said they were going 'open source' they didn't say they were going to release the source to their core apps, like the Finder and iPhoto, but they've been very generous about contributing the code they borrowed and modified back to the community.
It should also be noted that Apple gives back to the projects they work on, GCC has come quite a way on the PowerPC since 3.0 thanks to Apple.
In my opinion, Apple's strategy is one I'd like to see some vendor take with Linux, you take the kernel and mod it for high-performance desktop apps, get GTK+ running on an accelerated OpenGL framebuffer, tweak and simplify a slew of apps and SELL it. As long as the mods to existing software make it back to the community, it's a net gain for all of us.
Problem already fixed, for a while now (Score:5, Interesting)
2. Regardless of 1., as of Mac OS X 10.3.x, Apple now has "Mac OS Extended (Case-sensitive)": a fully supported, case-sensitive HFS+ filesystem. It's not exposed in the GUI of Disk Utility on Mac OS X client (as Journaling wasn't on Mac OS X 10.2.x client), but it can be enabled via the command line:
sudo diskutil eraseVolume Case-sensitiveHFS+ DiskName
man diskutil for more info. This is exposed in the GUI of Disk Utility on Mac OS X Server 10.3.x. If you would like your primary volume to be case sensitive, you can use/borrow a Mac OS X Server CD to boot your machine, format your primary volume as Mac OS Extended (Case-sensitive), and then install Mac OS X (or copy back all of your data with a utility such as asr or Carbon Copy Cloner).
Case preservation (as opposed to case sensitivity) was never advertised or presented as a "feature"; it was an artifact of HFS.
needs some VMS stuff (Score:5, Interesting)
Re:needs some VMS stuff (Score:3, Informative)
If I have write permission to the directory, then I can actually call "unlink" (UNIX system call which will delete a file).
Lacking write permission to the directory, I can't delete a file (or create one). If I have permission to write to the file, I can destroy its contents, but I can't stop the file from existing.
Re:needs some VMS stuff (Score:5, Interesting)
If you really want the kind of behavior you are talking about (although I can't imagine why), you can do it by making a hard link to the file in question in a directory which is "safe" from the user you are protecting against. They are still able to move the file around, modify it, etc. But if they delete it, the second hard link still remains, so the file is not actually deleted.
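A quick sketch of that trick (the paths are invented for illustration); note that both links have to live on the same filesystem:

    mkdir -p /root/keep && chmod 700 /root/keep
    ln /home/alice/thesis.tex /root/keep/thesis.tex   # second name for the same inode
    # if alice later runs rm ~/thesis.tex, the data survives under /root/keep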
Re:needs some VMS stuff (Score:5, Funny)
Re:needs some VMS stuff (Score:5, Informative)
Program Installation Locations (Score:5, Insightful)
EVERYTHING right now goes in
Right now, if I want to uninstall a program, I have to remove it from about 10 different places, many of which aren't obvious (/etc,
Find a way (maybe symlinks
Re:Program Installation Locations (Score:3, Informative)
Package "foo", version "N" goes in "/usr/local/packages/foo-N".
The current version of "foo" has a symlink to it from "/usr/local/packages/foo".
"/usr/local/bin" contains symlinks to the appropriate files in "/usr/local/packages/*/bin"
Upgrades (and downgrades) are trivial.
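Spelled out, the scheme looks something like this (the version numbers are examples; GNU Stow automates essentially the same idea):

    ./configure --prefix=/usr/local/packages/foo-1.2 && make && make install
    ln -sfn /usr/local/packages/foo-1.2 /usr/local/packages/foo   # "current" pointer
    ln -sf /usr/local/packages/foo/bin/foo /usr/local/bin/foo     # publish the binary
    # upgrade: install foo-1.3 alongside and repoint the first link; downgrade: point it back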
Re:Program Installation Locations (Score:4, Funny)
Somebody find this man a package manager.
Re:Program Installation Locations (Score:4, Insightful)
or...
On my Fedora box I have rpms made for Red Hat, rpms made not for Red Hat (go figure), source installs with configuration scripts, source installs with instructions, source installs with nothing whatsoever, programs with install scripts that install to the directory tree how they see fit, programs with install scripts that install nowhere (./), and python sources that just sit there coming straight out of a tar. Meanwhile I have nethack sitting around in
Re:Program Installation Locations (Score:3, Interesting)
Then ALL the files in that installer are referenced in the final RPM, and to remove all the stuff you simply remove it using the rpm tool.
On debian it's much the same but with a deb instead of an rpm.
To install a deb file, just do "dpkg -i filename". Seems a lot like rpm, doesn't it.
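For reference, the basic lifecycle with either format (package names are placeholders):

    rpm -ivh foo-1.0-1.i386.rpm    # install
    rpm -ql foo                    # list every file the package owns
    rpm -e foo                     # remove them all again
    dpkg -i foo_1.0-1_i386.deb     # the Debian equivalents
    dpkg -L foo
    dpkg -r foo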
Re:Program Installation Locations (Score:3, Interesting)
PATH=$PATH:/opt/*/bin/
or something along the lines could make life much easier.
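One caveat with that one-liner: Bourne-style shells don't do pathname expansion inside an assignment, so the wildcard would land in PATH literally. A small loop in /etc/profile does what was probably intended:

    for d in /opt/*/bin; do
        [ -d "$d" ] && PATH="$PATH:$d"   # skip the literal pattern if /opt is empty
    done
    export PATH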
The trouble is really that the current file hierarchy was designed to contain only a basic unix system (ls, rm, libc, etc.), not a full-blown multimedia desktop, which is what most Linux systems are today.
Stuff like the FHS doesn't even try to fix the mess, it just standardizes it. Most likely we will be stuck with the
Re:Program Installation Locations (Score:5, Interesting)
In plan9 you don't have a "$PATH variable", instead you have several directories (/whatever/arch-dependent-bin,
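From memory, the idiom in a Plan 9 profile looks roughly like this (rc shell): union-mount your personal bin directories after the system one instead of editing a path variable:

    bind -a $home/bin/rc /bin         # architecture-independent scripts
    bind -a $home/bin/$cputype /bin   # compiled binaries for this CPU type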
Re:Program Installation Locations (Score:4, Interesting)
The problem I have with an "installer" system is that immediately developers will extend it to do things it shouldn't be doing. "Hey, you know, when we install this program we should have it send gmail invites to six people, FTP a pretty picture of a llama while we construct suitable advertising panels, and create three new users with the authority to start, stop and pause the data subsystem."
Other than the llama thing, people have done all that crap and more with Windows installation tools. They blindly overwrite shared system files (leading to DLL hell), they muck up the registry, they install hundreds of class IDs for internal-use-only COM interfaces, plop in unrelated browser helper objects, add random directories to the front of the system path, launch odd services that do god-knows-what, wedge in a startup task or two and then demand you reboot your system.
It's taken Microsoft many years to realize they couldn't control the installers, and so with XP they changed the OS to try to defend itself from renegade installations. It would be extremely sad to see a UNIX equivalent.
Re:Program Installation Locations (Score:5, Informative)
How is this not better than the current Unix way of doing things?
Re:Program Installation Locations (Score:5, Informative)
Re:Program Installation Locations (Score:4, Funny)
umm, you have to use a mouse?
Re:Program Installation Locations (Score:4, Insightful)
However I've installed Firefox on ten different distros (probably more now) and never once seen an icon for it appear automatically in my GNOME menu. Why is this so broken? APT, Synaptic, RPM, yum, etc. are all basically broken from my point of view, but we put up with them because it's worth the fuss. Millions of computer users can't even find a new icon on the DESKTOP, much less dink around with non-standard filesystem hierarchies (which distro do you use?) and symlinks.
Pet peeve of the day (which happens to be relevant to this thread): Windows downloads are only a fraction of the size of equivalent Linux apps. Try OO.o, Firefox, etc. My Xandros 3 install had to download 40MB (using the lovely APT), which doesn't compare well to a 4MB download for Windows.
Seriously, you should look into using something more current than Windows 3.11.
To compare apples to apples:
OO.o: [openoffice.org]
Windows - 45MB
Linux - 77MB
Firefox (with installer):
Windows - 4803KB [mozilla.org]
Linux - 8422 KB [mozilla.org]
Thunderbird:
Windows - 5877 KB [mozilla.org]
Linux - 10113 KB [mozilla.org]
I've heard enough about bloody shared libraries that evidently NEVER get shared, and instead I end up with five different incompatible versions of glibc/GTK/whatever and it's also annoying to wait while APT downloads an EXTRA 300% of the listed download size. If making *NIX installers like Windows means that I'll have all the advantages, and all of the downfalls, then I'll take it thank you very much. It's a great deal better than what we've got now.
Re:Program Installation Locations (Score:5, Informative)
Re:Program Installation Locations (Score:3, Informative)
Re:Program Installation Locations (Score:5, Interesting)
GoboLinux is a Linux distribution that breaks with the historical Unix directory hierarchy. Basically, this means that there are no top-level directories such as /usr, /etc or /lib.
To allow the system to find these files, they are logically grouped in directories such as /Programs, /System and /Files.
To maintain backwards compatibility with traditional Unix/Linux apps, there are symbolic links that mimic the Unix tree, such as "/usr/bin -> /System/Links/Executables".
www.gobolinux.org
Re:Program Installation Locations (Score:3, Interesting)
Re:Program Installation Locations (Score:5, Interesting)
In the true UNIX world, application software has always been such that it can be installed stand-alone underneath ONE directory, quite simply because in the true UNIX world not every (other) user has root powers, and the people who do have them understand that they don't want to mix shared application files with local OS files the way toy OSes such as Windows and (sadly) some Linux distros do.
Where I work, we install everything in networked directories called /our-company-name/software/package-name/version. Then we wrap everything in shell scripts that automatically select the correct platform (HP-UX, Solaris, Linux) on the fly and that automatically set every single environment variable the software needs. Then we add links to make a specific package version current, and publish the key binaries of packages that many people use through one common bin directory. Not a single file needs to be stored and/or managed locally (crucial, considering the number of machines involved).
And now comes the best part: I (yes, I developed the setup and do most of the maintenance) do not even need root powers for anything.
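A stripped-down sketch of such a wrapper; every path and name here is hypothetical, and a real one would be generated per package:

    #!/bin/sh
    # pick the right platform build, set its environment, then run the real binary
    PKGROOT=/our-company-name/software/foo/current
    case "`uname -s`" in
        HP-UX) PLATFORM=hpux ;;      # (HP-UX would really want SHLIB_PATH; kept simple here)
        SunOS) PLATFORM=solaris ;;
        Linux) PLATFORM=linux ;;
        *)     echo "unsupported platform" >&2; exit 1 ;;
    esac
    FOO_HOME="$PKGROOT/$PLATFORM"
    PATH="$FOO_HOME/bin:$PATH"
    LD_LIBRARY_PATH="$FOO_HOME/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    export FOO_HOME PATH LD_LIBRARY_PATH
    exec "$FOO_HOME/bin/foo" "$@"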
Re:Program Installation Locations (Score:4, Insightful)
Deleting the install directory doesn't do jack.
That's why windows has uninstallers.
How long did you say you used windows? Seems like you ought to know this by now.
Re:Program Installation Locations (Score:5, Insightful)
I can't work out if you're trolling or just genuinely ignorant. Under Windows, everything goes in your selected installation directory... except for the bits that don't. Some have to go in the system directories, and there are usually registry entries made. In contrast, if you tell a Unix application to install in a given directory, it generally does, and doesn't pollute the file hierarchy outside of your chosen location. If you're installing it from an RPM or dpkg, then it usually does the same, but it's effectively using a shared install directory between multiple apps. But why do you care where it puts the files? Use the package manager to tell you which files came with which package, and to remove the package when you're done with it.
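Concretely, the "which files came with which package" question is a one-liner on either family (the vim path is just an example):

    rpm -qf /usr/bin/vim    # name the package that owns a file (RPM systems)
    dpkg -S /usr/bin/vim    # the same question on Debian-style systems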
The exact location doesn't matter to me. (Score:5, Interesting)
On another note, there are reasons why apps on UNIX become installed in shared directories--it is because path management can become tedious--the PATH environment var becomes too long, or else you have to sprinkle links about your filesystem. In the GUI world this isn't really an issue, but some of us still like the command line and write scripts and typing
BTW, it seems you have MS Windows confused with the Mac (the only modern PC platform I know of where the "copy a folder" install method is still commonplace). Win apps most certainly do NOT install in a single directory--nearly all use the central, monolithic, non-human-readable REGISTRY to store configurations, and typically throw
Re:Program Installation Locations (Score:5, Interesting)
1) Most of the folders have a PURPOSE. /bin has vital system binaries (sh, login, and so on), /sbin has binaries and daemons vital to starting up the system, /etc has files containing startup and default settings, /var has variable information (like logs), /tmp is for temporary files, and so on.
Why is this powerful? Well ...
- Want your machines to behave similarly on startup? Replicate /etc on these machines or have them mount a shared /etc on top of the original early in the boot process. /tmp can be on a ramdisk; /var, /usr/share and friends can be NFS shares.
- Want faster access to temporary files? Make /tmp a ramdisk.
- Want to limit log sizes so they don't fill up the disk? Make a separate partition for /var (or /var/log).
- Want to share data across a bunch of *nix boxen? Make /usr/share an NFS mount.
In general, you can do interesting things by combining the fact that directories are usually per-purpose rather than per-program (a few sample fstab lines follow this post). Granted, in the desktop world this isn't so much use, but it makes cluster management and system maintenance SO much easier.
2) The issue you complain about can be taken care of by a package management system or some arrangement of symlinks.
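As a rough illustration of those per-purpose tricks, an /etc/fstab might carve things up like this (the device and server names are placeholders):

    tmpfs              /tmp        tmpfs   size=512m,mode=1777   0 0
    /dev/sda7          /var        ext3    defaults              0 2
    fileserver:/share  /usr/share  nfs     ro,hard,intr          0 0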
configuration (Score:5, Interesting)
Re:configuration (Score:3, Interesting)
I disagree. Most programs I encounter have systemwide configuration files and per-user configuration files. The systemwide ones live in /etc (or a subdirectory of it), and the per-user ones live in dot-files in your home directory.
"Not to mention things that require some serious config files, like sendmail, apache or X. Creating a cross-platform powerful configuration l
Re:configuration (Score:5, Interesting)
Ideally there would be a uniform way for programs to retrieve configuration information from a centralized location.
Ideally local users and machines would be able to merge their prefs and config with the master to override certain prefs.
Ideally the hierarchy of administrators would be able to prevent entities under them from overriding certain configuration options.
Ideally all of that could be done with plain text files which are automatically checked into a version control repository so you can roll back any change in a jiffy.
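The last point needs surprisingly little machinery: keeping /etc under RCS is one low-tech way to get roll-back with plain text files (CVS or Subversion work the same way):

    cd /etc && mkdir -p RCS
    ci -l exports                          # snapshot the current version, keep it editable
    vi exports                             # make the change
    rcsdiff exports                        # review exactly what changed
    ci -l -m"open share to the lab" exports
    co -f -r1.1 exports                    # roll straight back to an old revision if needed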
Like elektra? (Score:4, Informative)
Ideally all config files would follow the same format and syntax (god no please don't say XML).
Ideally there would be a uniform way for programs to retrieve configuration information from a centralized location.
Ideally local users and machines would be able to merge their prefs and config with the master to override certain prefs.
Ideally the hierarchy of administrators would be able to prevent entities under them from overriding certain configuration options.
Ideally all of that could be done with plain text files which are automatically checked into a version control repository so you can roll back any change in a jiffy.
There was a project on SourceForge that addresses some of the points you raise. Originally it was called "Linux-registry" I believe; now it's called Elektra [sourceforge.net].
I don't know how far they've come or anything about the project, but it looks like something that You'd want to have a look at.
Re:Like elektra? (Score:4, Informative)
my answer (Score:3, Funny)
A. All those slashes and dots.
Q. How would you fix it?
A. um, slashdot
Of course!
In a word... (Score:3, Insightful)
Printing - more specifically, Postscript Printing.
This silliness of having to generate PostScript so Ghostscript can generate PCL so you can print is just wrong - empty-brained, someone-forgot-to-wake-up wrong.
PCL is available on every major printer on the market today - it IS the standard. PostScript is a has-been. Dump it today.
That is what is wrong with *nix and what I would do to fix it is require all software to support PCL printing directly.
Re:In a word... (Score:4, Informative)
- PS you can very easily convert to PDF - no such luck with PCL!
- there are tons of tools which give you "4 pages in 1", accounting, quotas etc. etc. - none for PCL!
- try to display a PCL file on screen
- WHICH PCL? PCL5? PCL3?...
There is simply NO reason to give it up - tell me one single argument (except a VERY slight speed-up) which will balance the lost flexibility and the need to rewrite all existing tools (CUPS, print drivers etc.)
Re:In a word... (Score:5, Informative)
>PCL is available on every major printer on the market today - it IS the standard. PostScript is a has-been. Dump it today.
Huh? I think you've got that backwards.
PCL requires that most of the "brains" exist on the "computer" side of the "computer/printer" connection: a PCL printer needs less "brains" than a Postscript printer because all the processing is done on the "computer" side.
Not to put too fine a point on it, but a PCL printer is to a Postscript printer what a Winmodem is to a hardware modem.
For printers, the PCL tradeoff made a lot of sense when embedded CPUs were (extremely) limited in computational power compared with desktop CPUs. Rather than have your $1500 486-33 sitting idle as it dumps a pile of Postscript code to another $1000 68020 in the printer, I'll use my $1500 desktop CPU to turn my document into PCL that can be parsed by the $1.99 Z80 or whatever's in my $100 PCL printer.
Now that your $25 disposable cell phone has a 200 MHz core, that tradeoff is no longer a requirement. Embedded systems smart enough to interpret and run Postscript code are no more (and no less) expensive than those capable only of PCL.
Methinks you've got the PCL/Postscript design tradeoff backwards.
Does it reliably enable true modern computing? (Score:5, Insightful)
While I agree that the core OS has not moved much in decades, I also see very little motivation for this as much of the required functionality has moved up the stack to the application layer.
Plan9 is what's right with UNIX (Score:5, Informative)
cynical view (Score:5, Insightful)
Unix is great!, unless:
- You just want a plug and pray answer
- You just want a word processor
- You just want
If someone is only looking for a single application, it is hard to shove such a versatile system down their throat.
Solution:
Create a truly modular UNIX/OS that does not depend on any single environment (init/SysV). Make a pluggable API-level interface that you can plug anything into, from a single application to a complete system environment. Then get someone to develop EXACTLY what you want.
Idiotware without the bloat.
Laughing all the way,
-- Kei
Re:cynical view (Score:3, Interesting)
And yet Linux is becoming an increasingly common choice for all sorts of embedded, special-purpose devices.
A lot of people don't really understand what UNIX is. At its heart, it is just a philosophy, not a system. A way of thinking about and solving problems which has remained relevant and useful for decades. All real-world UNIX systems have lots of crap bolted on, out of necessity, but the philosophy underneath survives.
Re:cynical view (Score:3, Insightful)
I've never met a computer that was really "plug and play". They always seem to have issues, at least for me. About the only thing that worked right away was my microwave. Even new cars don't seem to work perfectly from the start. We all might want something that you plug in and it works, but the popularity of cheap digital cameras that are notoriously unreliable seems to suggest otherwise.
Has to be said (Score:3, Insightful)
Easy! (Score:5, Insightful)
Sure, man pages exist, but even once you learn that man does what help really should, the man pages are generally written by programmers, for programmers.
Newbie guides generally don't get any further than a small command summary, which doesn't really show any strengths of unix over using a GUI [or Windows!].
The best thing, I think, would be to provide more "whole system" examples/help rather than help for each individual command. Take some nice simple topics [how to add many users, how to determine network utilization programmatically, how to determine open ports and what process is using them...] which are painful to do on Windows, and use a variety of unix tools to solve them.
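For instance, the "open ports" topic boils down to a couple of commands that a whole-system guide could walk through (Linux net-tools flags shown; lsof is the more portable route):

    netstat -tlnp                  # -t TCP, -l listening, -n numeric, -p owning PID/program
    lsof -iTCP -sTCP:LISTEN -n     # the same question answered via lsof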
Unix is too powerful (Score:3, Informative)
It's a canonical example of something that tries to be everything to everybody, but ends up being too hard for anyone to use.
There's only one thing wrong with UNIX: (Score:5, Funny)
Laugh.
It's a joke.
The C language (Score:5, Insightful)
What's broken with unix? (Score:3, Informative)
But there are lots of subsystems that aren't exactly perfect.
Examples that come to mind:
*File permissions only go to user/group/others rather than individuals, and poor record locking on network shares. Lack of automounting as an intrinsic feature of the operating system.
*Windowing subsystems that network, but can't handle 3D networked graphics effectively, or support the more advanced hardware features of graphics chips particularly well locally.
*Software packaging systems that develop conflicts. (Probably more of a linux problem, actually)
- I am aware that all of these have workarounds or are being worked on -
The kernels of most unixes (and, for that matter, Linux) are fairly well tuned to a variety of things, although they are subject to a number of internal revisions to try and do better at multitasking & multiple-processor scaling, for example.
Where these systems will probably fail the most is when the underlying hardware changes a lot - for example handling larger memory spaces and file systems, or perhaps even moving to whole new processors (e.g., code-morphing CPUs such as Transmeta's, or asynchronous CPUs). These designs are quite radically different, and we have developed so far down a specific CPU/memory/hard-drive model that it's quite difficult to look at major changes, as they aren't as easily supported by the operating systems.
Just my 2c, and from a fairly casual observer status - it would be interesting to hear what the main developers think on all of this.
Michael
Simple... (Score:4, Funny)
SOLUTION: 2 MT airburst over Lindon, UT
Oh, with UNIX, not for UNIX. Never mind.
As you were.
8-bit UI unusable in a 32-bit world (Score:3, Insightful)
As a result, we've got upper- and lower-case flags doing completely different operations (-r and -R for "remove" and "restore," for example), and we've got case-sensitive filenames which just make it so easy to tell the difference between "Index," "iNdex," "inDex," "indEx" and "indeX."
UNIX was designed when plain text was king and the only nudies you ever saw were ASCII art.
As a result, there's no way from looking at the filename to tell what program the file should be processed with.
UNIX was designed under the guidelines of "do one thing well, do it quickly and get out of memory."
Those design decisions permeate UNIX and the *NIX community even today. When I read the newsgroups, I still see tips on how to do things that involve piping a file through 17 filters to do something that can be done on Windows with four mouse clicks.
So how would I fix these problems?
1) Make filenames and command flags case-insensitive. The few cycles you spend doing case comparisons will quickly pale in comparison to the time savings you experience in tech support situations where a touch typist accidentally hits space too soon and types "emacS."
2) Several files that do not have extensions usually have some information about their default parser in line #1 (a quick demonstration follows below). Either parse it, or start using file extensions in *NIX.
3) Start making UI's that only initially expose the 20% of the UI that 80% of people will use. There's no reason for a CD-burning package to have a checkbox on the main screen about verifying post-gap length for 99% of the people in the world.
Anyway, that's my opinion.
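On point 2, the "information about their default parser in line #1" is the shebang, and file(1) already uses it (plus magic numbers) to classify extensionless files:

    printf '#!/bin/sh\necho hi\n' > hello && chmod +x hello
    head -n 1 hello           # -> #!/bin/sh
    file hello /etc/passwd    # both identified without any extension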
No! (Score:3, Insightful)
"1) Make filenames and command flags case-insensitive. The few cycles you spend doing case comparisons will quickly pale in comparison to the time savings you experience in tech support situations where a touch typist accidentally hits space too soon and types "emacS.""
That problem is so much easier to fix than changing 20+ years of UNIX design.
UNIX is case sensitive for a reason. Do you think you can just go through all the source files, replace every case-sensitive comparison with a case-insensitive one, and expect nothing to break?
In no specific order: (Score:5, Insightful)
-ancient directory organization which doesn't take modern computer usage into account (more powerful single workstations)
-bad historically grown naming ("home", "usr", "var", etc.) and an inconsistently applied Filesystem Hierarchy Standard
-crappy vendor support
-unix printing still sucks big time (see 'vendor support')
-graphics system and font handling
-inconsistent standards of configuration
-historically grown elitist utility naming (large annoyance)
That's all I can come up with right now. Note that some of these are dealt with by certain unix variants. Printing and pretty much everything else is a breeze on OS X, for instance. Configuration and installation with Debian Linux is very smooth and goes to great lengths to keep those countless OSS utilities manageable. And Solaris 10 seems to have one or two cards up its sleeve to deal with the security risks that result from the almighty root.
Come to think of it: Can't we just have an OS with OS X's ease of use, Debian's installation system, Solaris 10's low-level features and Windows' vendor support? We'd all be set and 100% satisfied.
Non Free. (Score:3, Insightful)
Fix it from the bottom up (Score:3, Insightful)
For example:
Two things: (Score:4, Interesting)
Files: this is one thing Windows has right. There should be all sorts of capabilities built in to Unix: append-only files, append-only by user, unchangeable permissions, and so on. FreeBSD's file flags are the way to go, but like I said: they should be built in to Unix, not an extra add-on.
And a subset of that is coarse permissions. Why in God's name do we still enforce root-only opening for ports below 1024? Finer-grained control should be built in to Unix, not an optional add-on. Something like "chgrp www /dev/tcp/80; chmod 600 /dev/tcp/80", rather than having to open the port as root and then drop privileges (hope you did that right!), would be amazing.
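For reference, the FreeBSD flags in question look like this, with a rough Linux counterpart (the file paths are examples):

    chflags sappnd /var/log/auth.log   # system append-only: even root can only append
    chflags schg /bin/sh               # system immutable while the securelevel is raised
    chattr +a /var/log/auth.log        # approximate equivalent on Linux ext2/ext3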
What's Wrong with Unix (Score:3)
Now, if you asked me "What is wrong with Linux?" I would have several answers. Same with "What is wrong with FreeBSD?" so you don't think I'm just a BSD bigot. But "unix"? It's hard to pin anything on the generic term "unix".
COM and the Shell (Score:4, Interesting)
Unix may have some form of COM, but it is far from the kind of support that is available under Windows. It is the reason clipboard and document embedding are such a pain under Unix, and why the shell 'feels' clunky and basic operations such as drag and drop between applications aren't possible.
So bring in a standard COM system, and standardise the shell interfaces, and you will have KDE and GNOME applications that can integrate with the shell without having to have separate programs.
A good set of standards. (Score:4, Insightful)
Linux isn't friendly for:
* Installing apps
* Guiding the Joe user to a friendly painless installation of the OS itself
* customizing
* configuring
in other words... everything.
As many linux fans as there are here, the only *great* thing that Linux has is its security and stability. Everything else is, more or less, a mess. The apps, they're great! But only AFTER you manage to install and configure them.
And on the other side, we have a wonderful MS Windows in which everything (BUT security and stability) is great, but security and stability are a mess. I admit it, the Linux infrastructure is very well thought out... but the rest? The problem is that Linux (or unix for that matter) was made "by nerds, for nerds". Windows was made "by executives, for Joe users". What we need is an OS made "by nerds, for Joe users".
And that means not rejecting as "blasphemy" everything that MS Windows has. There are many good points in Windows, but (I'm generalizing, but this is my impression) linuxers are so busy defending their "way of life" against the competition that they can't improve it. They have formed themselves a mindset saying "Linux is perfect. We don't need no stinking windows thingies. Anyone who says so has been too much in contact with the evil windows, and must be deprogrammed". If someone dares say "but..." he's just rejected as some Microsoft borg slave.
And they've repeated this lie so many times that they've ended up believing it. They make this whole bunch of "user-friendliness" *patches* for Linux, so they can believe that it's good the way it is.
Well, guess what. It isn't. Give me a Linux with the user-friendliness of windows (and I DON'T mean the GUI - i mean the versatility, plug-n-play, ability to easily install new apps without the
What I mean is:
Linux (as a whole) is a good set of implementations. What it needs is a good set of standards, and ONLY THEN, develop good implementations of these.
Want an example? We have KDE, QT (is that spelled right?), and I forgot if there was any other.
So there are apps compatible with QT that can't run on KDE, and vice versa.
Maybe you guys haven't still seen the big picture, but what I see of Linux development is more or less this:
a) Some guy makes a good thingy for Linux.
b) Many guys follow him
c) Another guy makes another good thingy that does the same than the first one, but it's incompatible.
d) Many guys follow him.
e) GOTO a)
From a religious perspective, compare with Roman Catholicism and protestantism. Roman Catholicism would be Windows (one pope called Bill Gates who dictates what is true and what isn't) and Linux would be the protestant denominations incompatible with each other. Some survive, some die... etc.
Sociologically, protestant denominations are very similar to Linux implementations. They share one very limited creed (the Bible / the Linux kernel), but how that applies in their lives (the implementations) varies. SO MUCH that they can't be united. (I remember the ScummVM team - or was it another? - splitting because a guy liked one editor and the other guy liked another editor. And they argued so much about this that the whole dev team dissolved.)
Linux needs a "pope". Or a government council (like the W3C) which says which way apps will interact with each other, with the kernel, and with the hardware.
Let me rephrase it: Linux needs STANDARDS. Linux needs something like "a W3C": a governing body which publishes a standard, uniform API for doing things. Like what the W3C did with the DOM (and so we can prevent things like the "browser wars" happening in Linux).
One of the reasons WinXP flourished is that it had a standard way of doing things. Make them compatible with the API (even if its security is as solid as a gruyere cheese), and they r
My list. (Score:5, Insightful)
Here are the general problems I have with Unix and Unix-like operating systems:
(Note that this isn't to say that every Unix-style system has a bad threading model -- some of them are pretty good, and others are getting better. But it's currently difficult to write decent cross-platform multithreaded Unix code when some Unices you know in advance have really crappy threading subsystems.)
Okay -- now don't get me wrong -- there are a lot of things to like about Unix and Unix-like environments. But those are the items I personally have problems with in the general case (and again, not all Unices exhibit all of these issues. In particular, Mac OS X doesn't suffer from any of them, and is my current OS of choice for doing development and as my personal workstation desktop environment).
Yaz.
Re:you mean those guys that had their things cut o (Score:4, Informative)
Re:User Friendly (Score:4, Informative)
Re:mmap (Score:3, Informative)
I don't think the solution is to start removing functionality. The solution is to use that functionality in the correct way. A program can receive a signal at any time. This is a cold, hard fact. If your program uses operating system features that could lead to exception conditions, it needs to be written to handle them.
Re:link and file managment (Score:3, Interesting)