
Comment Re:False (Score 1) 132

Because one vendor will champion one Linux distro over another, and the installer for this or that distro will prioritize based on a different mechanism, or none at all. From RHEL3 to RHEL5 (WS, AS, ES) it varied even between update releases: sometimes it was based on PCI order, sometimes on MAC order, other times on whichever DHCP server responded quickest on an interface. All over the map.

The real problem was that there were "no standards" for enumeration, because the hardware manufacturers had no standards or directives to ensure a single detection mechanism would be supported. A lot of it came down to the installer: Anaconda versus the Debian installer versus lots of homebrew or custom installers, and they all seemed to have their own way (or no way) to preconfigure names uniquely.

In the olden days, like four years ago, a multi-homed server was almost unheard of, and Linux was thought of more as a desktop or embedded OS, which again pointed toward a single NIC or a small number of them.

It was actually recommended in those days to disable extra NICs, or to add additional NICs only after the initial install, and not to rely on a network interface for the provisioning media.

Another recommendation was to put all the NICs on the same subnet for the install and sort them out later, once you had a running network, and/or to manually write udev rules to "lock" the names after install.
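The "lock" mentioned above is just a udev rules file keyed on MAC address. A minimal sketch of the kind of file distros of that era generated (the MAC addresses here are made up):

```
# /etc/udev/rules.d/70-persistent-net.rules (sketch; MACs are hypothetical)
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:1a:4b:aa:00:01", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:1a:4b:aa:00:02", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"
```

With a file like this in place, the kernel's probe order no longer matters: whichever NIC owns the first MAC always comes up as eth0.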

Comment Re:False (Score 1) 132

Just one other common everyday thought.

Suppose you get all this perfect, and then someone installs a PCI Ethernet card. That invalidates a lot of server assumptions, and a lot of workstation assumptions about ordering too, if you don't initialize the hardware and perform some probing.

Another thought I wasn't clear on: you can run a post-install script to clean up udev and figure out your network situation, but if you're depending on that Ethernet interface for the install media, it will never finish.

Ultimately you probably "lock" the udev rule to name based on an invariant like the MAC address, but even that isn't truly invariant: it has the same vulnerability as IPv4 to address exhaustion and accidental hijacking. It works OK so far, but in theory, when motherboards are a few atoms across and as ubiquitous as H2O molecules, it will become a problem.

Comment Re:False (Score 1) 132

Persistence doesn't exist on initial install. You need to know information before you configure or "name" the interface, and the problem is how you get that information.

Usually by probing and/or trial-and-error exploration. Or you can order by PCI discovery, by MAC address, or by ARPing an interface to discover another server's or gateway's MAC address so you can preferentially order them; each situation is unique. After you have the information used to order them, you can embed it in the udev rules so that the naming always comes up the same way, but you can't do that initially.

The randomness is generally a matter of revisions to the motherboard hardware, the assignment of the MACs burned into the Ethernet controllers, or rearrangement of the physical wiring of the network plugged into each port. In an age when VLANs are popular even on home networks, it's really a huge problem if you're trying to automate things; look at Microsoft APIPA, or whatever it's called this hour.
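Ordering by MAC address can be sketched as a few lines of shell, the kind of thing an install-time script might do. Real code would read `/sys/class/net/*/address`; here a throwaway stand-in tree with made-up MACs keeps the example self-contained:

```shell
# Sketch: derive a stable NIC ordering by sorting on MAC address.
# We simulate /sys/class/net with a temp directory of hypothetical MACs.
SYSFS=$(mktemp -d)
mkdir "$SYSFS/eth0" "$SYSFS/eth1" "$SYSFS/eth2"
echo 00:1a:4b:aa:00:03 > "$SYSFS/eth0/address"
echo 00:1a:4b:aa:00:01 > "$SYSFS/eth1/address"
echo 00:1a:4b:aa:00:02 > "$SYSFS/eth2/address"

# Emit "MAC name" pairs, sort by MAC, keep the names: lowest MAC first.
order=$(for dev in "$SYSFS"/*; do
            printf '%s %s\n' "$(cat "$dev/address")" "${dev##*/}"
        done | sort | awk '{print $2}')
echo "$order"          # lowest-MAC-first: eth1, eth2, eth0
rm -rf "$SYSFS"
```

However the kernel happened to probe these cards, the sorted list is the same every boot, which is exactly the invariant you then bake into the udev rules.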

One manufacturer actually built in a delay so that the non-preferred ports came up (initialized) after the preferred one, to give it a head start. udev is the right idea, but it's half an idea and not the complete solution; it needs a partner in crime to complete its mission. There needs to be a pre-boot kernel phase that can run specialized routines customized to the environment and user. EFI is a thought, but it's not widely available or deployed yet, EFI-X notwithstanding.

So you need a smart enough bootloader to "help out". Writing custom bootloaders is no small task, so an API for writing short routines is much preferred. Debian, SuSE, and Red Hat all use bootloaders with enough smarts to help sort things out.

This is also important when using virtual machine migration to move from one node in a VM cluster to another, or when performing P2V. The rules change, and udev becomes dislodged from its certitude that it's the correct authority for naming information. Effectively it becomes outdated, and you're back to doing your own plumbing with your own flashlight: you essentially become the sophisticated automated script for reconfiguring your server. Try that for 150 or 300 blades or virtual servers and it gets old fast.
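One common workaround for the P2V/clone case is to delete the cached udev naming rules before converting, so the new machine regenerates them from its own MACs on first boot. A sketch, using the RHEL5/Debian-era rules path and a throwaway staging root rather than the real /etc:

```shell
# Sketch: pre-clone cleanup so udev re-enumerates NICs on the clone.
# ROOT stands in for a mounted guest image; path is the era's convention.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/etc/udev/rules.d"
touch "$ROOT/etc/udev/rules.d/70-persistent-net.rules"

RULES="$ROOT/etc/udev/rules.d/70-persistent-net.rules"
rm -f "$RULES"       # udev rewrites this file on the clone's first boot
[ ! -f "$RULES" ] && echo "cleaned: clone will re-enumerate its NICs"
rm -rf "$ROOT"
```

Baking this into the template image is what keeps 150 or 300 clones from each needing a human with a flashlight.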

Comment There is a way.. it has been solved before.. 2005 (Score 1) 132

Matt has been working on this problem a long time. I believe he used to be with Dell and tried to get a group of hardware vendors to solve the problem by (1) default-ordering the ethX interfaces by MAC address (which required some code support in the OS) and then (2) getting the hardware vendors to agree on a universal rule that MAC addresses were assigned to devices in preferential order, hopefully coordinated with the labeling on the motherboard or the server itself.

The problem has long roots. At one time I believe the "convention" was to order eth0-ethX by the discovery order on the PCI bus, but while that was stable for one machine, as soon as a new rev of that motherboard came out or a new chipset emerged: blooey, right in the middle of a build on thousands of servers.

Eventually Peter Anvin, of Syslinux bootloader and PXE fame, began making available some plugin modules to pre-probe or assess the situation before loading the kernel and passing a custom kernel argument to the booting OS, be that Linux, Windows, whatever, which looked very, very promising. It even worked with WinPE; awesome.

But the problem is still very generic and germane to lots of hardware, even hard drives or storage LUNs. To avoid violating the rule "do no harm", a system booting with unknown hardware is left untouched, just in case it was formatted with a file system pattern that is not recognizable, like GPT or HFS+. It gets even more mind-blowing when you consider Java virtual machines and virtual cloud VM support systems: how do you know what you don't know, if you didn't build the entire system?

Fair warning: I used to work at HP, but we tackled this problem and did win the day. You could only do it in a software-plus-hardware-company-plus-open-source sort of way.

Long term, I think the best way would be a plugin for Syslinux that supported SNMP read-only probing for basic Ethernet, the PCI bus, and possibly storage device signatures. In theory this would solve the entire problem and be supportable far into the future, but I never got a chance to execute on it. GrubPXE also shows a lot of promise in the same area. Bootloaders and partition management are vastly underrated for all systems.
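One concrete example of the bootloader "helping out": pxelinux can pass the booting NIC's MAC address to the kernel via its IPAPPEND directive, and RHEL5-era Anaconda could key its install interface off it. A sketch of a pxelinux config; the kernel, initrd, and kickstart URL are made up:

```
# pxelinux.cfg/default (sketch; paths and ks URL are hypothetical)
DEFAULT install
LABEL install
    KERNEL vmlinuz
    # ksdevice=bootif tells Anaconda to install over the NIC it booted from
    APPEND initrd=initrd.img ks=http://ks.example.com/ks.cfg ksdevice=bootif
    # IPAPPEND 2 appends BOOTIF=<mac-of-boot-nic> to the kernel command line
    IPAPPEND 2
```

This sidesteps the enumeration question entirely for the install case: instead of guessing which interface is eth0, the OS is told which physical port the firmware actually booted through.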
