Intel

Red Hat, HP, Intel Join in Itanium Linux Alliance 131

joel_archer writes "According to this Yahoo! article, Red Hat will begin selling an Itanium version of its Advanced Server Linux in partnership with HP. This is one of several partnerships currently underway between these two companies. HP is a key partner for anything Itanium-related; the company invented the design underlying Itanium before handing it off to Intel to develop and manufacture. Bolstering that effort, Red Hat and HP have signed a deal under which Advanced Server will be certified on and available with all of HP's Intel-based ProLiant servers--not just Itanium systems, but also lower-end Xeon and Pentium versions and superthin 'blade' systems."
This discussion has been archived. No new comments can be posted.
  • <rant>
    I hope that since HP now owns Compaq, the Proliant will become a better machine than when it was strictly Compaq. (In fact I hope all Compaqs are better...)
    </rant>
    • Are you on crack? (Score:2, Interesting)

      by glrotate ( 300695 )
      Compaq Proliants were the best x86 servers out there. Advanced diagnostics, excellent 64-bit PCI support, fantastic RAID, terrific redundancy, pain-free hot-swap capabilities, and a pretty decent SAN solution.


      I have installed countless Proliant servers and they are very high quality boxes.

      • Hear, hear... but you missed PCI hotplug. (Now that's cool!) Many, many IT people deploy Proliants because, like it or not, "Proliant" is a very respected name in the datacenter.

        I think the big win in this article is not Red Hat on Itanium, but rather having Linux available directly from the manufacturer on the Proliant line. There's a lot of people that will feel a lot better now that they can get HP backing both the hardware and the software on their brand new (expensive) server. Plus, it gives us something to bring to management ("HP is offering it, it must be good"), etc.
    • Trust me, you didn't want HP to stick with the NetServer line of x86 servers. How would you like to be uncrating and installing 50 or more new blower fans (like I have to...), because the ones currently cooling your LP2000r dual PIII 1 GHz systems are failing on average within 3 months of deployment?
  • I guess it's fine and good that Red Hat is getting in on this. But there's very little need for 64-bit desktop systems (as demonstrated by the Itanium and Alpha's consumer-market failure). 32 bits is plenty for virtually every application of a desktop computer, and will be for quite some time.
    • What part of "Advanced Server Linux" don't you understand? Oh, I suppose the "Server" part.
    • Because Intel is scrambling to have something in place for when AMD's Hammer comes out. There might not be a big need for the 64-bit versions, but to leave the field to the competition is a no-no.
    • Please don't become like this:
      "640K should be enough for anyone." --Bill Gates

      linkage [interesting-people.org]
    • no, these processors/architectures are just not cost-effective for desktop use. it's not that there is little need, it's that the need doesn't justify the cost.
    • Your 'desktop' system is probably doing quite a bit of >=80-bit processing, especially if you're running games (the FPU).
      64 bits gives you far more than bigger numbers: you should also get an architecture change, otherwise no one would bother to develop 64-bit systems; most markets that require that kind of precision are fairly well saturated.

      In your example the only real problem with 16-bit systems is the memory addressing limit, so why didn't Intel make a 16-bit processor with 32-bit addressing registers? It's far easier than making a 32-bit processor.
    • Well, the prevailing thought is that 64-bit architecture is going to be the standard in all desktop use in the next 5-7 years. But first we need to have a trickling down of the software from high to low-end. Also, 64-bit can handle a much larger load than 32-bit, so it is ideal for high-end servers that handle hundreds of thousands of requests a day, which is what HP is selling here.
      • Well, the prevailing thought is that 64-bit architecture is going to be the standard in all desktop use in the next 5-7 years. But first we need to have a trickling down of the software from high to low-end. Also, 64-bit can handle a much larger load than 32-bit, so it is ideal for high-end servers that handle hundreds of thousands of requests a day, which is what HP is selling here.

        register width and address space have nothing to do with load. also, only poorly written s/w doesn't run on 64bit archs. all but a few debian packages run flawlessly. 64bit cpus have been around for about ten years now.

        the main advantage of 64bits is the address space. no more trying to cram physical ram, io space, and kernel mappings into 4gigs. 64bit int ops are obviously a lot faster. the big win with that is, you can have 64bit file and sector offsets w/o slowing things down.
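        The 64-bit file-offset point above can be sketched in C; a minimal illustration, assuming a Unix-like system where the standard glibc macro `_FILE_OFFSET_BITS=64` selects the 64-bit `off_t`:

        ```c
        /* Defining _FILE_OFFSET_BITS to 64 before any #include makes
           off_t a 64-bit type on glibc, so file offsets past the 32-bit
           signed limit of 2 GiB become representable. */
        #define _FILE_OFFSET_BITS 64
        #include <stdio.h>
        #include <sys/types.h>

        int main(void) {
            /* A 32-bit signed off_t tops out at 2^31 - 1 bytes (~2 GiB);
               a 64-bit off_t can index files up to 2^63 - 1 bytes. */
            printf("sizeof(off_t) = %zu bytes\n", sizeof(off_t));

            /* An offset beyond 4 GiB, impossible in a 32-bit off_t: */
            off_t five_gib = (off_t)5 * 1024 * 1024 * 1024;
            printf("offset = %lld\n", (long long)five_gib);
            return 0;
        }
        ```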

    • I think you're making a mistake in assuming that performance in the past predicts exactly the performance in the future. One of the biggest problems with 32-bit systems is memory addressing. 32-bit systems can only address 4 GB RAM, and modern desktops are easily capable of having a couple GB RAM, especially when swap is thrown into the mix. Hell, my laptop even has 1.5 GB RAM (512 MB h/w, 1 GB swap).
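      As a back-of-the-envelope check on the 4 GB figure above, a minimal C sketch:

      ```c
      #include <stdio.h>
      #include <stdint.h>

      int main(void) {
          /* A 32-bit pointer can distinguish at most 2^32 byte addresses,
             which is the 4 GB ceiling on directly addressable RAM. */
          uint64_t addressable = (uint64_t)1 << 32;
          printf("%llu bytes\n", (unsigned long long)addressable);        /* 4294967296 */
          printf("%llu GiB\n", (unsigned long long)(addressable >> 30));  /* 4 */
          return 0;
      }
      ```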
    • Software that currently uses 32-bit values to compute hard drive space is probably broken right now, as the maximum value an unsigned 32-bit number can represent is approximately 4 gigabytes. Have you ever written code to convert between an int64 and long? It's not complicated, but boy can it ever be annoying.

      time_t, this happy little 32-bit time value, is ubiquitous across all platforms and is also broken after 2038. Switching to a 64-bit version makes the problem go away.

      32 bits starts to get a lot smaller when you're dealing with signed values. ~2billion isn't that big a number for a lot of computations.

      In short, switching to 64bit will solve a lot of little niggly programming problems for free.

      Operating systems with standardized ways of writing software will probably make the transition fairly seamlessly (UNIX: already available). Operating systems with rampant use of hard-coded 32-bit values (DWORD) for handling pointers and system resources will have a more painful transition (Windows: delayed).
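      The 2038 rollover mentioned above can be demonstrated directly; a minimal C sketch (the wraparound is computed through unsigned arithmetic, since overflowing a signed integer is undefined behavior in C):

      ```c
      #include <stdio.h>
      #include <stdint.h>
      #include <inttypes.h>

      int main(void) {
          /* A signed 32-bit time counter runs out 2^31 - 1 seconds after
             the Unix epoch: 03:14:07 UTC on 19 January 2038. */
          int32_t t32 = INT32_MAX;
          printf("last 32-bit second: %" PRId32 "\n", t32);    /* 2147483647 */

          /* One tick later the bit pattern wraps to the most negative
             int32 -- a timestamp back in December 1901. */
          int32_t wrapped = (int32_t)((uint32_t)t32 + 1u);
          printf("one second later:  %" PRId32 "\n", wrapped); /* -2147483648 */

          /* A 64-bit time_t simply keeps counting. */
          int64_t t64 = (int64_t)t32 + 1;
          printf("64-bit continues:  %" PRId64 "\n", t64);     /* 2147483648 */
          return 0;
      }
      ```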
    • No one will ever need more than 640KB.
    • The more we use Java, the more we will think about 64-bit. There will never be enough resources for that freaking memory-leaking crap.

      XML (especially DOM) is another way to get all my resources utilized. Although it takes them for good reasons.

      Photoshop was another big demander of 32-bit. With 3D animation studios 64-bit will be a much better choice.

      Also I expect a new wave (generation) of AI applications: in games, in financial applications and in search engines. And probably in speech interface.

      Finally, it's not far off that I'll have 32GB of RAM on my home server. 32-bit is no good for accessing it :)

    • RedHat Advanced Server is for servers and not consumer-oriented PCs. You're right with the 32-bit argument for desktops, but there is a real need for 64-bit database servers, where large address spaces can make a difference in performance. No matter how big and fast these systems are, they are never fast enough for most database apps that run Fortune 500 companies. Most of these customers just buy expensive clusters of databases or run separate databases for separate departments. Intel only has the low end of this market because their chips are only 32-bit and they lack the I/O of the bigger Unix boxen from Sun and IBM. Intel wants this market badly. Also, there may be some custom server apps that deal with large data sets that could benefit from a 64-bit platform. These systems cost $100,000 to over a million and are huge money makers for PC makers.
    • ppl said the same thing before 32-bit processors came out, and when Intel made 1GHz processors. software will find uses for the extra power, eventually.
    • Of course, it's not a desktop system.

      Windows NT uses 2GB for the kernel address space and 2GB of VM for process space. They made this 3GB for process space and 1GB for kernel in Advanced Server, because some processes were choking.
  • by peterdaly ( 123554 ) <petedaly&ix,netcom,com> on Wednesday June 19, 2002 @11:55AM (#3729562)
    So, HP wants to officially offer Linux on all of its Itanium and lower servers? Itanium is going to replace much of HP's higher-end server line as well, if I remember my facts correctly.

    This sounds very similar to IBM's linux on all IBM "backend server" offerings. You have to remember, these will be all of what used to be the offerings of both HP and Compaq when considering the market scope of this.

    BTW - Oracle just matched BEA System's price/performance record for the java application server benchmark. Oracle ran with an all Linux solution on HP Proliant hardware.

    HP is pulling an IBM...how interesting.

    -Pete
    • I like the idea - with both IBM and HP vying to be the best to offer Linux based solutions (including hardware/support/added development), this is going to make Linux better - and with two companies competing for my $$$ the way my girlfriends compete for my manhood, the consumer wins again.

      Good God, but I *love* competition.
    • BTW - Oracle just matched BEA System's price/performance record for the java application server benchmark. Oracle ran with an all Linux solution on HP Proliant hardware.

      I thought Oracle's licensing doesn't allow you to publish the results of such tests... well, I'm sure they don't mind if Oracle comes out the winner...

  • I wonder if this will affect any enthusiasm or special testing/optimisation that RedHat might have planned for the AMD side of the 64-bit fence. I can't imagine Intel not putting just a little bit of pressure on RedHat to be more forceful in their...patriotism.
  • finally (Score:3, Funny)

    by RealisticWeb.com ( 557454 ) on Wednesday June 19, 2002 @11:55AM (#3729566) Homepage
    I don't mean to troll, but Advanced Server is just now getting to 64-bit architecture? Would someone please tell me how long *NIX has been doing this, and how far behind Win-tel is?
    • UNIX: about a decade across their whole market.

      Wintel: a few years across a very small part of their market.

      Companies like Sun and SGI will be able to use arguments about maturity and experience for a while, because the arguments are simply true. However, after a few more years, the 64-bit market will become very interesting with lots of competition from Intel, as long as Itanium doesn't become the Itanic. Personally, I really want companies like Sun and SGI to be successful (their hardware is awesome), but I fear that commoditization of 64-bit computers will take its toll.

      What first-order things will traditional 64-bit computer makers compete on? 64-bit address space? No. Raw CPU speed? Probably not.

      Instead, they will have to find new ways to differentiate themselves. For example, Sun continues to integrate more reliability and availability features into their servers. Unfortunately, it is harder to communicate to the "masses" that such features even exist, because the "masses" are still stuck on the first-order things: speed and address space. Eventually, I think the 64-bit market will become like today's 32-bit market with 64-bit "Cheap Crap"-brand wintel servers dominating over more worthwhile, but more expensive, 64-bit servers from Sun, et al.
      • it's more a case of fabs

        Intel have the worst design in terms of cost, but this is offset by their bucketloads of money and moving production lines to a .13 process
        (also getting the SA1110 so that they can run that on the old lines and keep them profitable)

        AMD has got out of the game, partnering with UMC

        Sun used TI and Fujitsu, and Fujitsu are out of the game now

        HPaq use Intel and are out of the game

        SGI use TSMC now......

        IBM are so far ahead of the game it's scary; just as well they're only interested in Power (-;

        Moto are long since dead and are using TSMC

        so it comes down to....

        Intel vs IBM
        and maybe SGI & AMD depending on how things work out

        so in terms of what counts, it's the northbridge, and CA seems to have done well customising Intel's reference IA64 northbridge to handle 128 procs. HP has done its own, and that's what's going into that big Linux machine they sold

        oh yeah

        Red Hat laid off a bunch of people today; don't see that in the news

        regards

        John Jones

      • "Wintel: a few years across a very small part of their market."

        Lintel: Not part of the market (yet)

        Hey, did I just coin a new word, or is 'Lintel' an old one?

        • I think a 'lintel' is a piece that spans two posts, like in a doorframe or at Stonehenge. Isn't ancient Roman architecture sometimes referred to as 'post and lintel'?
      • UNIX: about a decade across their whole market.

        Actually, it's a lot less than a decade for most UNIX vendors.

        DEC had 64-bits first; 1992/1993 I believe, with SGI not too long afterwards. So the two guys with the lowest marketshare were pretty fast out of the blocks. But where were things a few years ago? By late 1998, all the RISC vendors had at least one 64-bit piece of hardware, with half of Sun and HP's product lines moved over, IBM just starting, and SGI shipping all 64-bit hardware. But various players hadn't finished all the OS-level stuff to support that. (Source for all that here. [osdata.com]) The transition to 64 bits wasn't done for UNIX players even 3.5 years ago, so "across their whole market" is really way too strong a statement. Windows ran on 64-bit Alpha hardware long ago, but actual 64-bit APIs were still in development back in that timeframe; I haven't seen how far along they are now.

        At one point in my career, I analyzed 64 bit marketing for several projects. Basically, saying "we're 64-bit, they aren't" was never a very compelling argument to begin with. Sure, in a few cases (very large databases, but not very very large databases) it made a difference, but at the end of the day, it didn't win any hardware players a lot of business.

        Saying "64-bit is better" is easy, showing that 64-bit is worth paying more money is typically hard.

        You're right that 64-bit Intel will likely win over 64-bit RISC long-term. But Intel is having huge problems executing on 64-bit Intel stuff. Itanium was a loser. We'll see how competitive McKinley is.

        Right now, and I suspect for some time to come, Sun and SGI will continue to sell better hardware primarily based on "more reliable", "more scalable" kinds of features within the hardware (as usual, features requiring OS support), not leaning too heavily on the 64-bit argument.

        --LP
    • I don't mean to troll, but advanced server is just now getting to 64 bit archatecture? Would someone please tell me how long *NIX has been doing this, and how far behind win-tel is?

      What, pray tell, does "Win-tel" have to do with Red Hat Linux Advanced Server? Wintel [Microsoft Windows on the Intel platform] isn't typically a Linux platform unless a virtual machine technology is utilized, so please enlighten us as to your point.

      For one who doesn't mean to troll, you certainly manage to do somewhat of a satisfactory job of it.

    • Red Hat Linux has been on Itanium for a couple of releases - Red Hat Linux Advanced Server is a newer product, which is now going to be released for that platform. You can read more on the IA32 version at Red Hat's web site [redhat.com].

    • Actually, you're an idiot: there has been a 64-bit version of Windows since 1993. It's called Windows NT, and it was jointly developed by DIGITAL and Microsoft for the Alpha processor. [montagar.com] Although Microsoft no longer supports the Alpha processor, there was an unreleased version of Windows 2000 made for it. DIGITAL did make a 64-bit version of UNIX before NT came out, so yes, you are in fact right in some manner. But there was only a few years' difference.
    • linux= made by hackers for hackers.

      commercial unix=made by corporations for corporations.

      The problem with Linux is that it's made by hackers, and most of them own 32-bit machines, so guess which platform it performs better on? Hmm, I guess 32-bit x86 or the PowerPC platform. Sure, there are a tiny few who own an Alpha, but the move from 32-bit to 64-bit won't bring any performance enhancements for most desktop apps. The Alpha port of Linux, from what I heard, is the least stable and has the worst optimization from the gcc compiler. At least this was the case several years ago, so I don't know if the compiler has been fixed, but you get the picture here about Linux vs. Unix.

      Corporations who develop Unix have the money to pay for nice 64-bit servers for development and testing, and can tell a programmer to do this and do that to make sure everything is 64-bit ready for server use. Hackers on their own budgets do not. However, Sun's cheap ultra-100 box is a start for consumer-affordable 64-bit workstations.

      Unless more interest is given to 64-bit applications from the GNU C compiler team as well as kernel hackers, Unix will stay ahead of Linux in the server arena. Advanced Server for Itanium is a step in the right direction, since Linux has gotten a lot better and since Red Hat can pay for 64-bit testing and development.

  • by Anonymous Coward
    Pretty remarkable machine, although if you consider that software is usually almost as expensive as the hardware it's no surprise that it's a bargain. But I'll match the reliability with any of the other machines we've used in the past. Everybody agrees that it's faster than the last one (well, of course!) So yeah, I'd recommend one of these to anybody who wants to increase the overall speed and reliability of their platform without breaking the bank.
  • Is this old news? (Score:4, Interesting)

    by CanadaDave ( 544515 ) on Wednesday June 19, 2002 @11:56AM (#3729575) Homepage
    Mandrake has had an Itanium product for a while now.

    http://www.mandrakelinux.com/en/ftptmp/1024501320. d32fd091334bd166624816e3d84d319a.php#others [mandrakelinux.com]

    It looks like HP, Intel, and RedHat have been in the mix since 1999.

    http://sverre.home.cern.ch/sverre/Linux_IA64_proje ct.html [home.cern.ch]

  • Redhat is using this partnership to increase their revenues and clean up their profit margin according to this article [theregister.co.uk] on the Register, which coincides with this Yahoo News item.

    Redhat may or may not be your favorite distro, but at least they're doing something to increase Linux marketshare, and apparently are doing it successfully.

  • Microsoft are gonna be REALLY happy that Intel are having a relationship with other OS vendors.. ;)
    • well, i think microsoft already has contracts with them, so it's not like redhat is the only one. and what difference does it make? everyone has x86s, and everyone makes operating systems for them. microsoft makes operating systems for fewer platforms than linux anyway. if they wanted to control all processors, they'd make windows for ppc. bottom line is, it doesn't make a difference who intel makes contracts with. ultimately, i'll be able to walk into the computer store and buy a 64-bit processor, and so will any and all developers. linux would get there, and there is no way microsoft could have stopped that. now they might care about hp teaming up with redhat, but for processors, there is no way they can control what platforms will run on them.
  • or does it seem like Red Hat is kinda moving away from the little-guy, get-Linux-on-the-desktop, screw-big-corporate-culture movement? I always thought Linux companies were supposed to be a little more rebellious and distrusting of big companies.
    But then again, this deal is what a lot of Linux companies want: to make money on superior technical expertise, not on super-expensive software that is not free (as in speech).
    • The corporate world is just like the Dark Side, I'm afraid. Once you start down its path....
    • by Theodrake ( 90052 )
      I recently downloaded the 7.3 ISOs for free. I still get free upgrade support from Redhat. I usually run the upgrade on weekends or late at night, because I have been bumped off during the week if there are high loads on the system. But I am usually able to sustain d/l speeds of 150 KB/s. Not bad for free.
    • or does it seem like Red Hat is kinda moving away from the little-guy, get-Linux-on-the-desktop, screw-big-corporate-culture movement?

      I'm not sure that RedHat was ever there. The good thing about RedHat is that they do seem committed to Open Source i.e. they actively participate in contributing to many GPL projects and don't keep anything closed. As long as they continue to do this I don't really care who they collaborate with.
    • by Jason Earl ( 1894 ) on Wednesday June 19, 2002 @12:26PM (#3729772) Homepage Journal

      RedHat still releases the software they write under the GPL, and their software is still widely available for "free." RedHat has put their money where their mouth is and is making good on their claim to charge for support and not for software. RedHat has seeded the business community with high-quality Free Software, and is now reaping the benefits of their work as businesses start using this software and (more importantly) start paying for support.

      Anyone who links Linux with some sort of lame counter-culture anti-business meme is just being soft headed. RedHat gives away software because it makes business sense to do so, plain and simple.

      • Amen brother. I don't use their distro on my personal boxen, but RedHat has been and continues to be very, very good to the community. They deserve mucho kudos from one and all.

        Sheesh, if and when they finally dump that RPM crap I might actually use it some day.

        .
  • I will get way out on the proverbial limb and say that any Unified Theory of Linux [slashdot.org] will turn to vapor within a year.

    I'd like to hear some discussion about this point; it seems to me that while us nerds like the Many Flavors Of *NIX, the Suits want One Thing to manage, which from a business standpoint seems to make a lot of sense.

    So, feel free to disagree, or mod me down, or whatever. :P
  • by Anonymous Coward
    Itanium's the thing with the ridiculously-constructed VLIW philosophy. Right? The one that runs like shit unless your compiler is close to being sentient.

    I haven't really heard much about Itanium, and had assumed this meant it was dying, because unless people compiled for Itanium it wasn't using its full potential. However, this Itanium Linux thing is a very, very good sign for Intel; even if Windows NT may not be at full speed on Itanium, that's OK, because we can have Linux distributions where all new software is compiled, targeted & optimised for the difficult Itanium instruction set. This was, I thought, always one of the great underused advantages of open source software--it makes the hardware platform irrelevant--and why I'm glad to see things like Gentoo emerging. (Err.. the pun was not intentional. Sorry.)

    However, I must ask: how well is GCC doing insofar as Itanium specialization goes? Last I checked, there was a hyperoptimized Intel compiler, but not a lot of people were using it because it wasn't integrated with anything else. Is this still the case? Is gcc up to speed with the Intel benchmark compilers, as far as optimizations go? And if not, is it possible for a Linux distro like this to use `intelpropeitarybs` in place of `gcc`? Is there work still to be done?

    If i download something off freshmeat and ./configure;make all; it on an Itanium Linux box, will it be in the end as optimized for that architecture as it can be?
    • by Anonymous Coward
      HP has its own custom compiler for Linux Itanium. In fact, HP controls the design team for the official Itanium C compiler, of which the Linux version is a showcase example. That was HP's part of the deal with Intel: to supply the VLIW compiler technology. The point is, they don't need GCC. Although since Red Hat is the de facto owner of GCC, I wouldn't be at all surprised if some HP technology eventually shows up in GCC as a direct result of this partnership.
  • I have a friend at HP, I'm building two data centers in the next year, I usually use RedHat...

    E| b3 D3 L337357 R007 of all baby! 8^)
  • Rather than slashing and burning Alpha, HP would have made all of its customers far happier by agreeing to take EV8 to silicon. I'm sure this was in the realm of the possible.

    Yes, cross-license with Intel up the wazoo and sell your employees to Intel if you like, but deliver to your customers what they need to keep their datacenters for the next decade, and also bring a stunning and seminal SMT product to market.

    While we're on the subject, unifying HP-UX and Tru64 into a "TruHP" might have scored a few notches on the cluestick. Let's face it: a lot of things about HP-UX just plain suck (especially the packaging system, as Tru64 announced it was moving to RPM). HP is just beginning to implement dynamic kernel tunables and even their whole enterprise file system is outsourced. I am totally underwhelmed. When they lose the performance edge, I will have no sentimental attachment to this kludge.

    Just like IBM and Sequent, HP has knifed products that work for products that don't. May Opteron be the undoing of you all.

    • "HP has knifed products that work for products that don't." Do you know why they do this? It's because Carly Fiorina is an IBM mole, and her mission is to destroy HP. http://www.uncoveror.com/fiorina.htm [uncoveror.com]
      • The real reason the Itanium effort between HP and Intel died is because the "best" ways to do things in that chip were already patented for the Alpha chip. After years of trying to work around the patent problems, HP was tossed out by Intel to go it alone, BUT the real kicker is that the Itanium II chip will be based on the Alpha chip to utilize all the "best way to do it" technology the Alpha uses. When it comes to compilers, that's the one thing that Digital (creator of the Alpha) did right: they produced all the compilers before releasing the chip to market. Optimized 64-bit computing has been around for years on the Alpha. Too bad stupid old Compaq threw away the Alpha before they learned what they had, i.e. the egg from the golden goose. If my mole is right, it will live again in the Itanium II.
        • When companies acquire other companies to create the illusion of growth, these things happen. Now Compaq has been gobbled up themselves. Ironic. It's good that Alpha will have a second coming as Itanium II. I wonder how many other great things have been simply discarded as a result of predatory mergers and acquisitions. I wish Palm would put BeOS back on the market; I wish someone who would have done something with it (i.e. not Gateway) had bought Amiga after Commodore went bust. I am starting to rant now. Guess I better stop.
  • Good! (Score:2, Interesting)

    by bteeter ( 25807 )

    Maybe now my shares of RHAT will actually gain some value! :-) Red Hat has been making a lot of critical partnerships like this recently.

    Say what you will about Red Hat vs. other Linux distributions--it is the partnerships and support of other Enterprise-sized vendors that is going to make or break Linux. Seeing that Red Hat is smart enough to make these partnerships, my money is on them to be "the" premier Linux vendor for the corporate market.

    Take care,

    Brian
    --
    100% Linux Web Hosting Solutions [assortedinternet.com]
    --

  • Intel better watch out or MSFT will start writing efficient code, and end the continuous cycle of getting a new CPU to run the new MSFT OS.
  • Funny to see how SuSE [suse.com] is not part of that alliance. They were the first to ship an Itanium distribution, back in June 2001 [www.suse.de]!

  • In case you didn't know, the only HP Itanium workstation available is the i2000, which HP are actually no longer shipping with Red Hat Linux (ironic!). Yes, it comes with both HP-UX 11.20 (shortly to be 11.22) and XP 64-bit edition "for free" (i.e. cost is bundled in), but no Red Hat Linux for Itanium.

    Have a look here [hp.com] if you don't believe me - this means you have to fork out [redhat.com] $495 (yes, you read that right) for Red Hat Linux on an HP Itanium box compared to nothing extra for XP, HP-UX or indeed other Linuxes (Mandrake, Debian and SuSE all seem to have ISOs for Itanium available).

    Surely HP must now resume shipping Red Hat Linux with their Itanium boxes [they did used to ship RH with the boxes until quite recently]? Or is $495 considered peanuts compared to the cost of the boxes?

  • hey, how about United Linux running on multi-proc Alpha servers and all of Compaq's Himalaya servers... oh yeah, i forgot the whole we're-merging thing. well, in any case it's still a very big step forward for Linux. between this and IBM working on Linux, we should start seeing a lot more people getting involved in the movement ... which is a good thing, for the most part.
  • HP bailed on IA-64 because HP couldn't run its development operation efficiently enough to meet the tight controls it now must use to survive.

    HewPaq is no longer a frontline R&D organization, it's a computer kitbuilder.

    --Blair
    "Not that there's anything wrong with that."
  • Sacrilegious, I tell ya!!
  • redhat + hp, this is the end.
    the beginning of the free distributed Linux's end.
    let's take a closer look at redhat.
    they make it more human (this is the way they describe the latest GNOME + latest X in the magazines), more complex, more colorful (the green [OK], I guess), with the idea to get money from it.
    well, now with the partnership with hp this is possible.
    we have 10 more years with the BSD license for fully free code.
    long live bsd.
  • HP/Redhat should first fix their drivers. The DL380 integrated array controller does not support tape drives on RedHat.

  • that RedHat is turning into a monopoly. For the longest time there I thought the 'm' in monopoly stood for 'M'icro$oft. But now I am starting to think that it just stands for 'm'oney.

    I truly do hate saying this, since I am, and have been for the past 3+ years, running solely RedHat (none of that dual-boot crap like most of you out there). But Redhat, which has done great things in the advancement of both a workstation and server Linux, has outstretched its arms on this one. First with Oracle, now this...

    I only hope that it enhances the product, not the price (like other companies, M$, have done).
