
Comment Re:The future of operating systems (Score 1) 255

Really? Mine just says "Generated by NetworkManager".

So then you do know what program is editing your resolv.conf.

How is preventing DHCP from writing resolv.conf "breaking DHCP"?

Preventing the DHCP client from doing what it is intended to do is breaking DHCP. You can achieve the result you want by just editing /etc/network/interfaces. It's in the man page.

Another way of doing it is editing the connection in NetworkManager and, instead of selecting DHCP under the IPv4 settings, selecting DHCP (addresses only).
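For illustration, a minimal sketch of the /etc/network/interfaces approach (the interface name and addresses are placeholders; the dns-* options are read by the resolvconf package's ifupdown hooks):

    auto eth0
    iface eth0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        # resolvconf reads the dns-* options below and regenerates /etc/resolv.conf
        dns-nameservers 192.168.1.1
        dns-search example.com

Run ifdown eth0 && ifup eth0 afterwards to apply the change.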

Comment Re:The future of operating systems (Score 1) 255

Because it adds nothing but another layer of cruft to fail. Yes, I could jump through the hoops and overcome 15 years of habit to no good end,

Editing a different config file from the one you are used to is not "jumping through hoops."

or I could just sudo apt-get uninstall resolvconf every time, and after every upgrade, just in case it tries to put it back again (and it has, at times).

That, on the other hand, is. Why not just take the most painless route to get what you want? Seriously, it's not that big of a deal. There have been tons of similar changes to the Debian userland over the years (pam.d, update-rc.d, modprobe.d, ...). All of them entail moving config files and using scripts or includes to keep master copies up to date. Yes, it might be frustrating to find something you already know has been changed, but it usually takes only about 5 minutes to get up to speed with the new setup.

There is a pretty good rationale for resolvconf on the developer website, if you really care about the why.

Comment Re:The future of operating systems (Score 1) 255

You would know this if you had ever used it with a network configuration of any complexity.

As someone who uses NetworkManager all the time on everything from simple to complex configurations, I can tell you that it works just fine. In fact, for the fairly complicated kind (static routes, custom dhclient hooks, VPNs, etc.), NetworkManager makes it a lot easier than it ever used to be. It seems to me that linux users should be more capable of learning and adapting to new things than the typical computer user, but from the comments in this thread that is obviously not the case. Everybody just learned how to manually use the ifup script back in 1995, and anything different is just too complicated. Oh noes, they moved the config file? Too bad there isn't anything like a man page that might tell you where it is....

In short, NetworkManager is a crude hack

Far from it. NetworkManager is an actual robust network management tool, unlike the init.d/ hacks that existed before.

Comment Re:The future of operating systems (Score 1) 255

I never know what utility is overwriting my resolv.conf but

Really? The first line of the file says "# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)". And if you just read the man page, you are told which files to edit to make changes to the resolver config.

The reason turning off write permission doesn't work is that resolvconf runs as root, of course. And it's not a good idea anyway, unless you like breaking DHCP.

Comment Re:The future of operating systems (Score 1) 255

Servers, almost by definition, don't move around much, and those that do will need a slightly more robust configuration by an intelligent operator, rather than having the static (or semi-static) configurations clobbered by a "helpful" utility.

Perhaps you just need to learn how to work with said utility. I can complain that they screwed up apt because I can't edit /etc/apt/sources.list directly anymore. Or I can just realize that they moved custom configurations to /etc/apt/sources.list.d, which actually solves two problems. It allows you to more easily revert changes, and it prevents the package manager from clobbering the config file every time it updates the package. For system connections, NetworkManager does very little differently than the old /etc/init.d/networking script. It's just the config file that has moved to a different place.
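As a sketch of how that drop-in style works (the filename and suite here are just examples), a custom entry goes in its own file instead of the master list:

    # /etc/apt/sources.list.d/backports.list
    deb http://archive.ubuntu.com/ubuntu precise-backports main universe

Then sudo apt-get update picks it up, and reverting is just deleting that one file, with the package-managed sources.list left untouched.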

but they replaced it with resolvconf, which makes a glorious pain in the ass out of itself by deciding it's smarter than you when it comes to your /etc/resolv.conf config file

And what's wrong with just editing the files in /etc/resolvconf/resolv.conf.d/? That one's in the man page. You can also edit /etc/network/interfaces like you always have. You just can't edit /etc/resolv.conf directly.
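For example, a rough sketch of the supported way to pin your own nameserver (the address is a placeholder):

    # entries in "base" are included whenever resolv.conf is regenerated
    echo "nameserver 192.168.1.1" | sudo tee -a /etc/resolvconf/resolv.conf.d/base
    # rebuild /etc/resolv.conf from all registered sources
    sudo resolvconf -u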

Comment Re:The future of operating systems (Score 1) 255

I find NetworkManager annoying because I don't know how it works, and can't change it with a terminal

It depends. If it is a system connection, you can edit it in a terminal as normal (/etc/NetworkManager/system-connections). If it is a user connection, you have to use dconf. A little annoying, but not unbearable. Have a look at the gsettings tool; it makes this fairly easy to script.
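As a rough sketch, a system connection is just a keyfile (the values here are examples, and NetworkManager normally wants a uuid= line as well, e.g. from uuidgen):

    # /etc/NetworkManager/system-connections/Wired
    [connection]
    id=Wired
    type=802-3-ethernet

    [ipv4]
    method=auto
    # take addresses from DHCP but ignore its DNS servers
    ignore-auto-dns=true
    dns=192.168.1.1;

Restart NetworkManager after editing so it rereads the file.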

Comment Re:The future of operating systems (Score 1) 255

Networking starts up only after someone logs in? Really?

Uh, no. A wired connection starts up the same way as it always has. A wireless connection can be configured to start up before login, easily. In addition, you have all of the extra features NM provides, like VPN and DSL dialing support, which you could conceivably need on a server. Sure you can do it manually, but NM makes it so much easier.

It's fixable -- just uninstall network manager (and resolvconf)

What's wrong with resolvconf? It's just a script that updates the resolver. No different really than a lot of the other maintenance scripts (ex: update-rc.d).

Comment Re:Microsoft vs. Microsoft (Score 1) 151

Sure sure, lots of cronyism to go around. Not arguing that. Just trying to say there was a technically superior product available. Even joint-developed by MS, as you point out! It failed, along with a number of other notables (WordPerfect, Novell, Netscape, Eudora), not because they were technically bad, but because of market manipulation by MS.

Comment Re:Educators know that Gates is bad for education (Score 2) 151

Though I am sure you'll find some nonsense revisionist reason to blame MS for CDE sucking

Um, no. CDE sucked, yes, and no, it wasn't MS's fault. But CDE was never competition for Windows. It ran on the old proprietary Unices on custom hardware and was never in the running to be a consumer OS running on off-the-shelf x86 hardware. OS/2 was, though, and it definitely didn't suck.

MS did have market forces working for it, but you totally ignore the missteps, bumbling and stumbling by the competition while MS executed well, across DOS, Windows, Office etc.

There was far more of the former than you are acknowledging. IBM did, in some ways, have its head up its ass by not recognizing the potential for the x86 market much earlier, but they were responsible for the BIOS that made DOS possible. OS/2 failed because it was never bundled by OEMs and had limited native software (Lotus vs. Office). The early history of MS is characterized almost entirely by inferior, buggy, software replacing technically superior software, either because MS was able to get sweet bundling deals with OEMs, or because they were able to undercut their competition in price. And because they also made a strong effort to be incompatible with everything else, there was no turning back after you switched to MS.

Take Netscape for example, it was good the first few versions and then later IE 4-5 was actually objectively better.

IE did some things better than Netscape, and Netscape did some things better than IE; which one you chose mostly came down to preference. And this would have been fine, except that IE started implementing HTML behaviors in ways that weren't documented anywhere (standards-based or otherwise). So web developers had to choose which browser to develop for (completely contrary to the principles of the web), and then market share suddenly became important (cue problems with bundling). If IE vs. Netscape had only been about technical merits, let the best browser win, nobody would have cared. But much like the Office file formats, MS became the de facto standard that nobody could compete against, because it was intentionally incompatible with everything.

For the latest example of such a thing, see Sony stumbling with the PS3, while the XBox overtook it in sales.

And for another example, see MS stumbling with HD DVD, while Blu-ray won the HD battle. I would say the PS3 is not doing badly. The division is profitable, and total units sold is not far behind the Xbox 360. Meanwhile, the PS3 has not suffered from things like the "bricking" issue. While Sony may have stumbled by committing to the Cell processor, I think their biggest problem is a lack of good developer tools. You can't really criticize the hardware. It really is fantastic.

Comment Re:Educators know that Gates is bad for education (Score 1) 151

There was no competition

Actually, there was. OS/2 Warp, by many accounts, was superior to Win95 in every technical way. It even ran Windows software reasonably well. The major problem: it was never sold by OEMs bundled with PCs. You could buy it on the shelf and install it yourself, but most people wouldn't do that. And then later, the compatibility started to suffer as well, as the popular software (Office, IE) started using APIs that weren't implemented by OS/2. What OS/2 really needed was its own software ecosystem (very similar to the challenges linux has today), but that never emerged.

Comment Re:It's not broken. (Score 1) 1154

*sigh*

I don't deny that there are certain usability problems with linux, and I don't deny that you have been frustrated at some point while using linux. But the problem with these types of discussions (this seems to be the third on /. in the last week) is characterized perfectly by your comment. Well, yours and the GP's. One person claims that he/she is perfectly happy and nothing is wrong. The other says everything is broken and complete crap, without any specific or constructive detail. Let's take a few examples from your post.

- In many cases it doesn't work out of the box.

What does this mean? What doesn't work out of the box? Is it a piece of hardware? Is it a piece of software? Is it a part of the UI? What are you talking about?

The reason this is important is because, depending on the specific problem, it may span from trivial to very difficult, and it likely requires different groups of people to be involved in the solution. A piece of hardware may not work out of the box. Well, there are a lot of approaches. Arguably the "best" solution would be to get that piece of hardware working, but for different reasons it may not be easy or possible (ex: the hardware is complex to reverse-engineer and the vendor refuses to support linux). So what are the alternate approaches? Well, picking your hardware more carefully is one. This can be done by the users, but is probably better done by the OEMs. Guess what? Almost every Windows or OS X workstation sold is put together by an OEM. They do the work to assemble the proper hardware and make sure it works with the OS. If you try to install OS X on a random collection of hardware, or even if you just try to use a random piece of hardware with a mac (like a USB wifi adapter, say), there is a pretty good chance it won't work. Does that matter to most mac users? No, because they buy their mac from Apple, and all of the hardware they care about just works.
Summary: hardware support is a problem for every OS. The primary solution has been OEMs selling bundled systems. So with better OEM support, this wouldn't likely be a problem on linux either. Unfortunately, there is little the linux community can do to gain OEM support. The commercial distributions have had limited success in the past (ex: Ubuntu partnering with Dell), but it's still not great.

On the other hand, a different problem warrants a different analysis. Pulseaudio doesn't work? That is purely a distribution problem, and I couldn't agree more. Distributions should not ship broken software or software with broken configurations. I have been frustrated by this in the past, with many distributions, and have become much more conservative about my choice of distribution as a result.

- Their requests for help are met with instructions to apply themselves toward learning more about how the tool is/was made and toward improving the tool itself.

Who? Where? When? Why?

It's important to know where you are asking for help. Complaining about hardware support on the kernel mailing list will likely result in exactly the response you describe. Why? Because kernel developers are busy people, often employed by companies like Red Hat to hack on specific problems. It is not that they don't want to help. It is just that they aren't there to solve your problems for you. They may not be able to duplicate the problem you are experiencing on their machine, or they may not have the hardware you are having trouble with. So they will ask you to help them diagnose the problem, by doing tests and sending them logs. They may send you a patch and tell you to try it out. That is how the kernel list works. It is a list for the kernel developers to communicate with each other.

On the other hand, the Ubuntu forums are usually very friendly. You will find people there who will give you step-by-step instructions for a lot of things. I would find it hard to believe that someone on the Ubuntu forums would tell you to write some code and submit a patch upstream. Unlike the kernel developers, most of the people on the Ubuntu forums are there to help you solve your problems.

- The defaults are almost always wacky. No distro or desktop has really ever shipped with good (non-ideological/non-developer) defaults to this day.

Back to needing more specifics and more clarity. Which defaults are considered wacky or "developer-oriented"? This is one where I just have to disagree, because I have been pretty happy with most defaults for a number of years (Ubuntu fixed most of these problems for me). The only thing that is somewhat lacking is multimedia, and that is really just due to licensing. In other words, there isn't much a distribution can do, because it is illegal for them to distribute certain codecs or *ahem* dvd reading software in the US. So that stuff has to come from servers outside the US, and can't be on the distribution cd. Most distributions have made it fairly trivial to get the proper multimedia support after installation, but you are right in that it is not the default. Problem is, I don't think it can be, and I don't see how you can label it as "ideologically driven". There are ideological distributions, but they aren't all that way.
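On Ubuntu, for instance, that post-install step is usually a single package (the package name is real; whether it covers your particular codec needs is another matter):

    sudo apt-get install ubuntu-restricted-extras
    # dvd playback additionally needs libdvdcss, which ships from a third-party repo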

- Create a desktop kernel fork. Linus & co. are not in the business of writing/maintaining a desktop kernel. Their goals are larger (and smaller) than that. The desktop kernel can track the mainline kernel, but shouldn't adopt every latest ABI or other change—just do a major update every 3-5 years.

Not a good idea, in my opinion. First of all, there isn't much of a difference between a desktop kernel and a server kernel. There are certain problems with the kernel, in general, that disproportionately affect the desktop over the server. Believe it or not, this has been debated and flamed quite a bit on the kernel dev list over the years. As of Linux 3.2, though, desktop performance (ie: responsiveness, latency) is pretty good, and I don't think there is much improvement to be gained by forking the kernel. Meanwhile, doing so would put you out of sync with the mainline kernel. Maintaining a kernel is a HUGE amount of work. There is a reason why most distributions prefer to, at a maximum, just apply custom patchsets to the mainline kernel. And you would be 3-5 yrs out of date with respect to hardware support and performance optimizations, which I don't think is good for the desktop. Part of the reason why desktop linux is getting better is because of changes made in the kernel in recent years (preemption, scheduling improvements, filesystem improvements, integration of things like KMS), so getting rid of all that just so you can have a stable ABI is worthless, in my opinion.

Comment Re:It's not broken. (Score 1) 1154

These are not people who equate ease-of-use with "pretty translucent buttons" either. These are people who just want to upload their photos to the desktop, edit them, organize them, and email them to friends, for example. Or type a letter, take it to the library and print it.

Please describe exactly what about those tasks was difficult. Because one of Ubuntu's primary missions is to make those kinds of workflows easy, and I think they do it pretty well. Shotwell for your photo example (works almost identically to iPhoto) and LibreOffice for your letter example. The only thing I can think of that might have inhibited the latter is needing to specify the Microsoft Word file format while saving, but I don't think that is too difficult to learn.
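For the letter-to-the-library case, that save-as step can even be scripted; a one-line sketch (the filename is a placeholder):

    # convert an ODF letter to Word 97 format for printing elsewhere
    libreoffice --headless --convert-to doc letter.odt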

When people complain about linux desktop usability, they aren't usually complaining about tasks like you describe. Off the top of my head, the complaints I hear the most often (and I think they are reasonably valid) are software availability, hardware support, software installation, and system administration tools, usually in that order. People are usually first looking to run software they need or already know (Microsoft Office/Adobe Photoshop) and they usually have some complaint about the alternatives offered in Linux. After that it is usually some hardware that doesn't work (a multifunction printer or wireless adapter, for example). If those two are good, the next complaint usually has something to do with software installation, and this usually boils down to some software/version not packaged in the repository of their distribution. Finally, certain system administration tasks can be awkward or difficult to some users (like configuring the graphics adapter or managing the permissions of a directory tree).

So, yeah, there are some usability problems with linux. I think most linux users are aware of them. But the solutions aren't trivial.

Comment Re:What exactly does it do? (Score 1) 249

BtrFS has not been completed yet. ReFS is shipping. ReFS will not have all the features of the completed BtrFS, but for now ReFS offers features not available in any shipping Linux.

I don't think ZFS is production quality on Linux yet either. Storage Spaces under Windows is now shipping.

I guess I should have qualified... many features are available and stable with BtrFS today, on Linux 3.2. If you need something more, like ZFS, it is available on BSD or one of the free Solaris distributions (if you're setting up servers, chances are you will be using a mix of the three). However, the architecture and intent of ReFS vs. BtrFS/ZFS is not really the same. And if we're talking about filesystems, one of the strengths of linux is access to unique special-purpose filesystems, like GlusterFS, NILFS, and XFS, if you have needs that are better suited by one of those. On Windows you really only have NTFS, and I guess now ReFS.

Dynamic Access Control actually ups the ante for SELinux, grsecurity, AppArmor, etc. While it still protects access to resources, it does so based on potentially very fine-grained policies which can express rules based on a very wide range of properties. And it brings claims-based security all the way into the primary access control of an OS. Linux does not sport claims-based security.

Ok, but let's see how it actually gets used. I don't know if you've actually ever used SELinux...there's a reason why almost no distribution ships with it enabled. It's a huge pain in the neck. Red Hat ships it with generic policies that kind of work, but don't really make use of its full capabilities. If you are storing military secrets, fine, but for most general purpose computing it just gets in your way. Creating even more fine-grained control just seems to me to be a feature set nobody will ever use.

Sure. I am not aware of any effort to bring something like VSS to Linux, though.

If you mean snapshotting, it is available in a number of different forms: at the block level (ZFS, NILFS), the file level (BtrFS, OCFS2), the volume level (ZFS, BtrFS, LVM2), and the filesystem-hack level (RSnapshot). I don't see what difference it makes whether it is a local or remote filesystem. It will work in both cases.
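To make that concrete, a quick sketch of two of those (paths and names are placeholders):

    # BtrFS: snapshot a subvolume in place
    # (/home must be a btrfs subvolume and .snapshots must already exist on it)
    btrfs subvolume snapshot /home /home/.snapshots/home-2012-09-01
    # LVM2: snapshot a logical volume, reserving 1G for copy-on-write blocks
    lvcreate --size 1G --snapshot --name home-snap /dev/vg0/home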

Yes, an automagic always-on, bi-directional VPN on steroids. No calling, no VPN client installations. Just take the laptop outside the perimeter and it is still connected, still secured, still managed.

Well, to be fair, you do still need to set it up. It doesn't just happen. The capability sounds a lot like IPsec to me, and this has been available on Linux for a long time. Windows too, but it seems they have added better integration with Active Directory.

Uhm, not quite. But unless you experience the new Server Manager, you are not going to understand. It has this "declarative" feeling - comparable to controlling your network with declarative network policies as opposed to relying on scripts running on each node to set things up.

Maybe you're right, I won't understand without actually using it. But based on your description, this sounds exactly like Chef. I would put this firmly in the "Microsoft playing catch-up" category, because this type of management has long been a strength on Linux.
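For a taste of what that declarative style already looks like on linux, a hedged one-liner using Puppet's standalone mode (ntp is just an example package):

    # declare the desired state; puppet figures out whether anything needs doing
    puppet apply -e "package { 'ntp': ensure => installed }"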

Comment Re:What exactly does it do? (Score 0) 249

Ummm...?

* New resilient file system ReFS (think BtrFS when completed)
* Storage Spaces (think ZFS storage pools)
* Dynamic access control (claims and policy based access control). Think SELinux, grsecurity. Access control based on what application the user is running (sandboxing), from what type of device the user is accessing the resource, on other user attributes than security groups (e.g. who is the manager, what department does the user belong to etc), access control based on attributes of the file (e.g. classification, select words of a Word document)

So in other words, by your own description, things that you can already get in linux.

* SMB 3.0 - higher performance network transfer, transparent failover, SMB scaleout (multiple servers serve same shares and aggregates bandwidth), SMB Direct (efficient remote direct memory access), SMB Multichannel, Volume Shadow Service (VSS) for SMB file shares, SMB encryption, SMB Directory Leasing (negotiates and updates local caches of metadata over slow networks)
* Block sized data de-duplication

Ok, things linux doesn't have yet, but they are on the way.

* Hyper-V 3: ethernet cable live migration (neat trick) lets you migrate VMs off one server onto another server over the network without the servers sharing anything. Many Hyper-V manageability improvements. Crazy scalability, e.g. a 63-node Hyper-V cluster runs 4000 concurrent VMs simultaneously. Hyper-V replica.

Ok, Microsoft's own hypervisor technology. To get this level of integration on linux, you would need to purchase a commercial solution, like VMware or a supported Xen offering.

* RemoteFX improvements, e.g. virtualized GPUs (can use local or remote shared GPUs during RDP sessions), remote low-latency multitouch.

An admitted weakness in linux at the moment.

* Direct Access over IPv4. Think hassle-free VPN.

I really don't understand what this is. An automagic VPN? Doesn't sound all that special. NetworkManager has been able to do system-wide VPN connections for a while now.

* Server manager: Yes, a Metro (oops - "Modern") style management app for multiple servers. Integrates with response files and powershell workflow scripts to manage multiple computers (servers/workstations) at once - e.g. install new software, perform configure actions.
* PowerShell 3 with new features such as resilient remote connections (you can detach from a remote session and pick it up later/from another device), workflow scripts which can perform actions with suspend/restart/repeat semantics. No, not just "suspend process" - but actually persisting the state of a script to be continued later, e.g. after a computer restart (or from another machine).
* Thousands of new PowerShell cmdlets (many/most automatically derived from WMI providers) to control virtually anything on local or remote computers.

So, the equivalent of what you can already do on linux with a combination of SSH, Puppet/Chef, and Screen. Admittedly an improvement for Windows, but this has always been a strength of linux.
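A crude sketch of that combination for the simple fan-out case (hostnames are placeholders):

    # run the same maintenance command on several machines
    for host in web1 web2 db1; do
        ssh "$host" 'sudo apt-get -y upgrade'
    done
    # for a long-running job, keep a detachable session you can pick up later
    ssh -t web1 'screen -dR maintenance'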

All in all, a meh, in my opinion. If you really have a need for the high-end features, perhaps Microsoft is offering them at a competitive price. But otherwise it doesn't seem to offer much that you can't already get with a linux, bsd, or solaris distribution.
