Linux Software

Linux Kernel using 64GB physical memory?

Andreas Spengler asks: "Can anyone confirm that the newest development kernel, version 2.3.23, is able to address 64GB of physical memory? Has anyone tried this out?" If so, it would be a good thing.
  • 2.2.28 claims to support that much too

    Note that that's the restriction on x86 processors; I would imagine Alphas have either supported this for a long time, or support more.


    I've not seen an x86 board yet that can use more than 1 gig of RAM, but then I really haven't seen any enterprise boards.
  • by Anonymous Coward
    From the Using Linux with (more than) 1GB of RAM [eu.org] Howto:

    Since kernel 2.3.24, Linux supports up to 64 GB of physical memory and up to several TB of swap on the x86 platform. This means that this howto is now obsolete. The easiest way to use more than 1GB of memory is to get a newer kernel and run that.

    I haven't tried it myself (send me 63.9375 GB of RAM and I will :), but it looks like the support is there. For more information, search for "64GB memory linux 2.3" on Google [google.com].
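
    If you do get such a kernel running, one quick way to confirm how much RAM it actually sees is to read MemTotal from /proc/meminfo. A minimal sketch (it assumes the usual "MemTotal: ... kB" line; treat it as an illustration, not a tested tool):

    ```c
    /* Print how much RAM the running kernel reports in /proc/meminfo. */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/meminfo", "r");
        char line[256];
        unsigned long kb = 0;

        if (!f) {
            perror("/proc/meminfo");
            return 1;
        }
        while (fgets(line, sizeof line, f)) {
            if (sscanf(line, "MemTotal: %lu kB", &kb) == 1)
                break;
        }
        fclose(f);
        printf("MemTotal: %lu kB (%.2f GB)\n", kb, kb / (1024.0 * 1024.0));
        return 0;
    }
    ```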

  • When the 386DX processor came out, it included 32-bit memory addressing. The 386SX, 386SL and 286 did 24-bit, and the 8086 and 8088 did 20-bit. Intel, AMD and everyone else stayed with 32-bit until the Pentium Pro came out with 36-bit addressing: with 32 bits you can only address 4GB, but with 36 bits you can address 64GB (see the arithmetic sketch after this comment). The limiting factor has been the OS, which only handled up to 32 bits (4GB). Linux has been able to address more than 4GB before 2.3.23, but only on platforms other than x86 (like UltraSPARC); this just adds that support to the more limited x86 platform.

    Don't expect to be able to cram anything over 1-2GB of RAM into a machine anytime soon. The other limit is the chipset; they normally only support up to 1GB, maybe up to 2GB on nice motherboards. Linux and other OSes have used the area from 2GB to 4GB to address virtual memory (swap partition/file). SGI has been working recently to get Linux to support more than 2GB of physical RAM (2GB real + 2GB of swap = 4GB of addressable memory) on x86. I think SGI released a patch to support up to 3.8GB of physical memory for the x86 platform.

    All this talk about RAM is making me sick of only having 64MB while everyone else at work has 128MB or 256MB in their home computers.
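
    Since the 4GB and 64GB figures above fall straight out of the address width, here is a quick arithmetic sketch (plain C, nothing beyond the bit widths assumed):

    ```c
    /* 32-bit vs. 36-bit (Pentium Pro PAE) physical address space sizes. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long long gb      = 1ULL << 30;   /* 1 GB in bytes        */
        unsigned long long limit32 = 1ULL << 32;   /* classic 32-bit limit */
        unsigned long long limit36 = 1ULL << 36;   /* 36-bit PAE limit     */

        printf("32-bit addressing: %llu GB\n", limit32 / gb);   /* 4  */
        printf("36-bit addressing: %llu GB\n", limit36 / gb);   /* 64 */
        return 0;
    }
    ```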
  • I have here an ALR six-processor Pentium Pro motherboard that supports up to 4 gig of RAM. I don't know if the full 4 gig works... but I know it works with 2 gig on it!
  • The use of the term virtual memory to mean swap file/partition is a misnomer. Virtual memory is in use all the time, regardless of swap usage. Each process has a page table which it uses to translate VM addresses to physical addresses. If a page has been swapped out to disk, a page fault is generated on the next access, and the data is retrieved back into RAM. If a page is in RAM, a process still has to go through VM to find its location. The only code that uses physical addressing directly is the kernel itself. The VM layer, AFAIK, is strictly limited to 32-bit addressing on 32-bit machines; therefore, no single process could access more than 4GB out of the grand total (the address-split sketch below shows why). I've heard that under NT, Oracle and MS SQL Server are specially coded to work around this limitation, but I don't know the details.

    And as for motherboards/chipsets, the Micron that ftp.cdrom.com uses has a max of 8GB of RAM, but only 4GB is used, being the max amount available to FreeBSD.
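
    To make the 32-bit per-process limit concrete, here is a rough sketch of the split described above: on non-PAE x86, the MMU breaks a 32-bit linear address into a page-directory index, a page-table index, and a byte offset (the example address is made up):

    ```c
    /* Split a 32-bit x86 linear address the way the MMU does (4 KB pages). */
    #include <stdio.h>

    int main(void)
    {
        unsigned long vaddr = 0xBFFFF123UL;            /* arbitrary example */

        unsigned long dir    = (vaddr >> 22) & 0x3FF;  /* bits 31..22 */
        unsigned long table  = (vaddr >> 12) & 0x3FF;  /* bits 21..12 */
        unsigned long offset =  vaddr        & 0xFFF;  /* bits 11..0  */

        printf("0x%08lx -> dir %lu, table %lu, offset 0x%03lx\n",
               vaddr, dir, table, offset);
        return 0;
    }
    ```

    Since every pointer a process hands to the MMU is one of these 32-bit values, the per-process ceiling stays at 4GB even when PAE lets the kernel manage far more physical RAM.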

  • Ironically enough, x86 now supports more memory than Alpha, which is limited to 2GB on most machines because of PCI... :(

    hope someone fixes this soon.
  • That probably won't exist for a year or so, given the current release rate of the 2.2 kernels...

    - A.P.
    --


    "One World, one Web, one Program" - Microsoft promotional ad

  • I mean, really?

    When was the last time anyone ate up four gig of memory? Or even two?

    I've got an IBM RS/6000 S7A with four gig of RAM and 12 processors serving a multi-gig Oracle database accessed by 300-400 users all day long. It doesn't use four gig of memory. (It uses about two-thirds, but most of that is cache.)

    What do you plan on running under Linux that would justify 32 gig of memory, let alone 64 gig?

    And where are you going to find a motherboard and chipset that supports 64 gig?

    Yes, it would be nice if Linux would support gobs of memory. But, from a practical standpoint, what's the point?

    (Yes, I realize that Bill Gates got in trouble for making similar statements (i.e. 640K).)

    InitZero

  • by Anonymous Coward
    Well, I edit full-frame, 30 fps, uncompressed video footage on a realtime non-linear editing system. At 22MB/s, 1 minute of video footage takes up approximately 1 gig of space. In order to play 2 or 3 video streams simultaneously off the disk array in real time, our disk array has to be capable of sustaining 66MB/s. This leads to a very expensive disk array. In the digital audio world, multitrack programs are seeing better performance by being written to take advantage of lots of memory (i.e. 400+ megs) instead of fast (expensive) hard drives. It is reasonable to expect a similar performance gain in the video editing world, but it would require a significantly larger chunk of RAM, especially when you consider that video editing is moving to HDTV. That could easily push the bandwidth requirements up to 180MB/s or more.
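
    For a rough sanity check of those figures, a small sketch assuming standard-definition 720x486 frames at 2 bytes per pixel (4:2:2) and 30 fps; those frame parameters are my assumption, not something stated above:

    ```c
    /* Uncompressed SD video bandwidth, per stream and per minute. */
    #include <stdio.h>

    int main(void)
    {
        double width = 720, height = 486, bytes_per_pixel = 2, fps = 30;
        double mb_per_sec = width * height * bytes_per_pixel * fps / (1024 * 1024);

        printf("one stream   : %.1f MB/s\n", mb_per_sec);             /* ~20 MB/s */
        printf("one minute   : %.2f GB\n", mb_per_sec * 60 / 1024);   /* ~1.2 GB  */
        printf("three streams: %.1f MB/s\n", mb_per_sec * 3);         /* ~60 MB/s */
        return 0;
    }
    ```

    Those numbers land close to the 22MB/s and 66MB/s quoted above, and they scale up quickly once HDTV frame sizes come into play.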
  • We have SGI Origins at work with 24GB of RAM and yes, we're using it all to do very intensive particle renders. Not that we're using Linux, but there _are_ uses for this much RAM. The more Linux advances, the better.

  • You say yourself that Bill Gates once made the same mistake, and still you persist in your opinion.

    Whatever resources we can only dream of having now, in 10 years it won't be enough.

    And even from just a PR standpoint, it is a good thing. Not too long ago Microsoft put up a page with 'Linux myths' on their site, trying to take the wind out of Linux's sails. One of its arguments was that, because Linux could address a smaller amount of memory than Windows NT, it was less suited for an enterprise environment.

    If Linux supports more memory than you'll ever need at that time, it shows that the Linux movement is committed to supporting even the most extreme and demanding applications, which is a good thing.

    If there is room for improvement, use it. Limiting yourself will only hurt you later.

  • VMware with WinNT..

    LOL

  • Actually you can get Alphas with 28GB of RAM, and I think one of them supports 32GB... Linux on Alpha only supports 2GB, but there is a patch that allows 4GB. There is another patch, which I haven't heard of anyone actually trying, that allows 8GB of RAM. Try searching dejanews or the Red Hat axp-list archives.

    LONG LIVE ALPHA [alphalinux.org]!!!
  • Doh... meant 2.3.28.
    Darn keyboard tells the computer what I type, not what I mean :)

    Cool username, btw.

