Performance of 64-bit vs. 32-bit Windows Dual Core 319

mikemuch writes "ExtremeTech's Loyd Case has done extensive testing on the same dual-core Athlon X2 4800+ system to explore performance differences between Windows XP Professional x64 and good ole Win32. The biggest hurdle is getting the right drivers. There are a few performance surprises, particularly in 3D games."
This discussion has been archived. No new comments can be posted.

  • by dsginter ( 104154 ) on Monday September 12, 2005 @05:45PM (#13541426)
    The spyware can all be run on one of the cores while the other can be used to get work done. I'm getting one for my father-in-law.

    • The spyware can all be run on one of the cores while the other can be used to get work done. I'm getting one for my father-in-law.

      The spyware companies all SAY they won't touch the second core now, and they may stick to it for a little while, but that's so that when you buy 2 cores, they have twice the zombie power per PC at your expense!!!
  • by Godeke ( 32895 ) * on Monday September 12, 2005 @05:47PM (#13541443)
    The good news this article gives us is that the 64-bit OS doesn't cause any significant loss of performance for the 32-bit applications that will function under it. On the other hand, the few 64-bit to 32-bit comparisons they have show almost no difference. I think this is the most telling:


    The good news is that 32-bit Far Cry (as of the 1.31 patch) runs fine under Windows 64-bit mode, with very little performance penalty. When we move to the base 64-bit version, we pick up a couple of frames per second at 1280x1024, but we defy anyone to actually notice the difference between 79.5 and 82 fps.

    The good news is that the enhanced version still clocks in at 80 fps. This bodes well for 64-bit gaming, as game developers can add substantial new content and detail without sacrificing performance.


    Desktop applications (even games) don't need the one thing that 64-bit computing really excels at: massive address space. A database server that is compiled to 64-bit code will have access to much more RAM, and thus have much better performance if RAM-bound (which many DBs are). Meanwhile, for POV-Ray the fastest result, 383 seconds, was the 32-bit application on the 64-bit OS!

    I think that it is safe to hold off on 64 bit for your personal desktop until a larger share of applications are compiled with 64 bit optimizations, but unlike the 16 -> 32 bit shift, I suspect the results will be underwhelming except for extremely memory consuming applications.
    • In addition to being able to address much more RAM, x86-64 chips also have more general purpose registers than their 32-bit brethren. This would probably account for performance gains more than anything else in most applications.
      • ...as soon as that can be harnessed by the application programmers and compilers. Right now that is still not the case for most programs out there.
        • Actually, any 64-bit compiler will automatically use the additional registers. Any compiler that doesn't is just plain stupid. If MSVC doesn't generate good 64-bit code, then use gcc or even icc.

          Anyone care to comment on MSVC's capabilities in the 64-bit arena?

          --S
          • It's waiting for Vista sp2_64.
          • by caspper69 ( 548511 ) on Monday September 12, 2005 @06:41PM (#13541928)
            Well, as ironic as it is, gcc is the one that used to suck when it came to generating functions called with the __fastcall calling convention (where function arguments are passed via registers instead of on the stack). But now, the ABI for x86-64 seems to encourage the __fastcall convention by default (assuming this is due to the extra general purpose registers). I found this out when doing some OS dev and some ASM test routines were not working in 64-bit mode. I was looking in RAX like I always had (well, EAX at least), and lo and behold, no value!! Well, a quick read of an AMD64 ABI guide from x86-64.org (which went offline last week, don't know if it's back), and I was ready to rock and roll again.

            Seeing as how MSVC and icc both conform to the x86-64 ABI, I would assume that both are equally capable (they're already damn near the same anyway) of utilizing the extra registers.
            • There are two areas I can think of where this matters greatly -- one is calling conventions as you point out, and the other is register allocation when dealing with code in general (sorry for the imprecise description of the latter, I'm not a compiler guy :-).

              In other words, if I have eight 32-bit counter variables used heavily in a short block of code, the AMD64 version should be able to stick them all in registers and use them. If it's using the IA32 register allocation algorithm though, they'd end up get
              • You are right on both counts, but the latter is not as big of an issue as RISC advocates would like you to believe. x86 processors since the Pentium Pro (hazy, but maybe the Pentium) have used register renaming, whereby the internal micro-op execution core of the processor has many, many more registers than the IA32 ISA (like 32/64/128 vs. 8).

                The greatest speed increase will not come from the number of registers, per se, but rather the compiler's ability to explicitly access those registers. There is
          • Anyone care to comment on MSVC's capabilities in the 64-bit arena?

            Almost non-existent in 6, 7 (.Net 2002) and 7.1 (.Net 2003). We've switched some of our 64-bit test platforms at the office from using an SDK to using the beta of version 8 (VS 2005), which seems to be much better at targeting such a platform, but obviously it's unlikely you'll see the latest and greatest game today built with a compiler that's not due for release until 7 November...

      • An interesting point, but apparently the apps that were recompiled are not doing a great job at doing so. A game would seem to be an excellent candidate for register usage of this nature (they have tight inner loops and more registers means less memory access). Yet, that did not appear to be the result in the case of Far Cry.

        Is it possible that diminishing returns is kicking in on the register set size, or simply bad compilers (or use thereof)?
        • by Bloater ( 12932 ) on Monday September 12, 2005 @06:37PM (#13541888) Homepage Journal
          > Is it possible that diminishing returns is kicking in on the register set size, or simply bad compilers (or use thereof)?

          Bad compilers or more likely they haven't hand optimised their inner loops.

          Most high performance ia32 (Intel Architecture 32 bit) software has hand tuned assembler for the tight inner loops, but it takes time, experience and skill to create such assembler. Some discussions I've seen put recent gcc compiling generic C for amd64 at close to the performance of hand optimised assembler for ia32 on the same Athlon 64 (for tight inner loops).

          There was an article about an assembler version of a cryptographic function that showed amd64 was capable of a *huge* performance increase over ia32, due to its increased register set.

          However, it can also come down to implementation quality. IIRC, benchmarks of early 64-bit Xeon chips (Intel's EM64T implementation of amd64) showed that they performed worse in 64-bit mode than in ia32 mode on the same chip, in tests where the Athlon 64 shows a performance *boost* in its 64-bit mode.
        • Is it possible that diminishing returns is kicking in on the register set size, or simply bad compilers (or use thereof)?

          More likely those tight inner loops are optimized for the current number of registers, so if they need to access the same address several times per frame they do all the accesses in a row, and don't need to move the value into and out of a register repeatedly.

      • by freidog ( 706941 ) on Monday September 12, 2005 @06:47PM (#13541982)
        Well, x86 chips have pretty well-developed methods of dealing with the lack of registers.
        Register renaming eliminates, or at least minimizes, most of the problems with a small register set.
        (Athlon64 has something like 72 integer registers and 122 90-bit FP registers, two of which are combined to make an XMM register for SSE vectors; almost all of them are available in 32-bit mode.)

        The extra architectural registers will help with moderate- to long-term storage (more than a few dozen clock cycles between uses), since the programmer can explicitly specify that the data remains in the register, whereas with the current shuffling it's up to the CPU (and to some extent how the renamed registers are intended to work) to determine whether a write to cache is in order or not.
        And really, with the longer storage times, you often have the flexibility to write out to L1 and schedule the load so that there's no penalty for it (i.e., issue the move back into the register three clock cycles before you need it, the latency of a typical L1 load).

        The new registers probably won't make all that much difference in the end. But then again, nothing from the move to 64-bit will have a major impact for a while (at least on the desktop).
    • I think that it is safe to hold off on 64 bit for your personal desktop until a larger share of applications are compiled with 64 bit optimizations, but unlike the 16 -> 32 bit shift, I suspect the results will be underwhelming except for extremely memory consuming applications.

      Very telling, in fact, I think you'll find that the 16-32bit shift only helped in applications that were extremely memory consuming for their time.

      Odd thing is, it's about 2^16 times easier to exhaust a 16-bit memory address space
        Actually, by the time of the 16 -> 32 bit move, most applications were already using more than 16 bits (that's 64K) of memory, via a range of different nasty shifting and overlay methods, and moving to a clean flat address space was really overdue, and something lots of people were desperate to get. If you managed to avoid having to program in that time, be very thankful :)

        One really nice thing about the 32 -> 64 bit move is with a few exceptions, it looks like we won't have to do any paging and overlay
          Actually, I did deal with programming in the 16-bit segment, 16-bit address space day and age, also with the interaction with EMS, copying data in and out of it like a secondary store.

          Unlike most kids I had fun as a child. If by fun you mean, drilling holes in my head by dealing with braindead architectures.
    • Standard phallacy (Score:5, Interesting)

      by vlad_petric ( 94134 ) on Monday September 12, 2005 @05:59PM (#13541554) Homepage
      The main performance gain from going to x86-64 does not come from larger operands and larger addressing space. It comes from a cleaned-up instruction set architecture and, most importantly, from a larger set of registers. x86-64 has 16 general-purpose registers whereas x86-32 arguably has about 7 GPRs. For x86-32, a compiler generally allocates 2 or at most 3 registers to variables. For x86-64, it can utilize ~12. This greatly reduces the number of loads and stores to the stack. The performance gain comes from the fact that it's much faster to communicate via a register than through memory.

      BTW, I don't know about windoze, but in the Linux world going from 32 bits to 64 bits almost always seems to produce a performance gain of 10-20%. I personally tried a simulator I'm using at 64 bits (recompiled with gcc), and got a speedup of 12%.

      • by Elwood P Dowd ( 16933 ) <judgmentalist@gmail.com> on Monday September 12, 2005 @06:39PM (#13541904) Journal
        "phallacy"? Is that like when you say it's 7 inches but you know damn well it's 5?
      • "This greatly reduces the number of loads and stores to the stack. The performance gain comes from the fact that it's much faster to communicate via a register than through memory."

        A typical, "yes, but..." is in play here. Additional registers also mean that you have more saves/loads to do on function entry/exit, as well as during thread and process context switches.

      • For the most part, the ISA remains unchanged, and to the extent that it is improved, it's important to keep in mind that internally neither Intel nor AMD's chips are actually executing x86 code, as it's translated into an internal instruction set. The same can be said for the registers. While the x86 ISA leaves you with hardly any registers, Intel and AMD's chips do register renaming to hundreds of hardware registers.

        Sure, there is some overhead in all that translation, and having a broken instruction set d
        • some debunking here (Score:3, Informative)

          by vlad_petric ( 94134 )
          For the most part, the ISA remains unchanged. True, except that a couple of things that were an incredible PITA (pain in the ass) were removed. For example, partial register writes, which caused pipeline stalls because it's not really possible to use renaming when you have a partial write followed quickly by a full read. Some instructions that were nasty to implement were also removed. Finally, while some remnants of segmentation remain in x86-64, it's much simpler than x86-32.

          While the x86 ISA leav

    • Arguably, though, you'd see much more of a performance gap if the applications were better designed to take advantage of the 64 bit power of the processor. That means not just a recompile as a 64 bit application, but having the app actually use 64 bit numbers whenever possible.

      It's a bit like the jump to 32bit. When all we had was 16bit software to test, the performance numbers tended to be equal. Once the software started showing up that was written for 64bit processing, we started seeing a major performan
        Linux etc. have been working on being 'ready' for 64-bit systems for a while now, and many programs are written so that the compiler can adjust the code to work better when compiled as optimized for a certain CPU. As another poster mentioned here, Linux apps [slashdot.org] tend to get a good boost when recompiled for 64-bit, just from the compiler options given at compile time. Admittedly 10-15% isn't that 'huge' a gap, but the executable was only tweaked as much as gcc could manage without the code being
    • Games need a couple of things that 64bit can provide:

      64bit operations - there are a lot of places where you can make use of 64 bit vs 32 bit integers to reduce (halve) the number of instructions you execute. This assumes that your performance is instruction bounded rather than memory bounded, which is sometimes the case.

      easier handling of 64bit color formats without conversions

      massive memory - games will happily use as much memory as you have, pandering of course to some least common denominator. But if ev
  • by theGreater ( 596196 ) on Monday September 12, 2005 @05:48PM (#13541461) Homepage
  • by vandy1 ( 568419 ) <vandy.aperfectpc@com> on Monday September 12, 2005 @05:48PM (#13541464)
    I can only conclude that they made no attempt to use the extra registers. Of *course* an f'ing 32-bit system will outpace a 64-bit system; why do you think most Solaris apps are still 32-bit?

    The reason why x86-64 is a win is because there are more registers as well. This allows compilers to do a better job.
    • I can only conclude that they made no attempt to use the extra registers.

      Of course they didn't. You think these "ExtremeTech" guys have the slightest clue what a register even is?

      They tested a whole bunch of 32-bit apps on a 64-bit OS. They found that the 64-bit OS was slightly broken in a couple cases. That's about it.

  • by oringo ( 848629 ) on Monday September 12, 2005 @05:49PM (#13541471)
    I still don't understand why someone would need a 64-bit workstation/desktop. What does x86-64 offer you other than a higher price tag? True, AMD-64 rocks in Intel's face, but the performance is gained through a direct memory interface, not by going 64-bit. The tests from TFA show no difference between running 64-bit and 32-bit applications. If I were to own an x86-64 machine, I bet I'd turn off the 64-bit function to reduce the complexity of running applications.
    • Re:Marketing Hype (Score:5, Insightful)

      by Chirs ( 87576 ) on Monday September 12, 2005 @05:57PM (#13541539)
      When running in 64-bit mode you have a cleaner API with more registers. Compiler writers and low-level developers like this.

      In addition, the kernel can provide the full 4GB of virtual address space to userspace apps without having to resort to performance-robbing kludges.

      Once you switch to 64-bit userspace apps with their huge virtual address space you can also do things like mmap() your entire 500GB disk and manipulate it as though it's all in memory.

      The end user might not notice a lot but it's much nicer for coders.
    • Assuming you weren't trying to be funny, as you were moderated: in 64-bit land you can run your 32-bit applications in a more protected way, which can, if nothing else, help to increase system stability.
  • by 00_NOP ( 559413 ) on Monday September 12, 2005 @05:49PM (#13541478) Homepage
    As I understand it, most users of a 64-bit Linux kernel are using a 32-bit (GNU? I want to avoid a religious war :)) userland, whereas this suggests Windows users can mix and match.
    Is there a Linux equivalent available?
    Having said all that, I well remember getting MS to agree with me that there was a bug in their Win32 bolt-on for Win16 that meant my software wouldn't run, but they then said they wouldn't fix it! No wonder I eventually switched to Linux... but that's a whole other story.
    • Why do you think that people using 64-bit linux are running a 32-bit userland?

      They have the source code, you go back, you recompile, you get a 64-bit binary.

      Linux is 64-bit kernel and userland.

      WINDOWS is a 64-bit kernel and device drivers, with 64/32-bit libraries (oftentimes both at the same time) and 32-bit binaries.
        They have the source code, you go back, you recompile, you get a 64-bit binary.

        Try doing that to openoffice. Every distro I've seen so far has only 32-bit office, and that alone drags in a huge train of 32-bit libraries, so you end up with almost a dual install of Gnome. Then there is 64-bit Firefox or Mozilla but it won't load 32-bit shared libraries, so if you want flash or acroread or Real plugins you need 32-bit Mozilla. By the time you're done half the libraries on your system are dual-arch.
        • Openoffice.org2 now compiles natively on amd64. I have a pure 64-bit version running right now, no 32bit libraries required. And why on earth would you want to pollute a nice browser like firefox with flash and acroread?
        They have the source code, you go back, you recompile, you get a 64-bit binary.

        To a degree. Sure, the compiler will create code that only runs on a 64 bit cpu if that's what it's supposed to do, and it may use the extra registers and the like to improve performance, but that doesn't really mean your code is really using 64 bits now.

        (Granted, you didn't say it was, but I thought I'd be a bit more explicit.)

        For example, if the application did a lot of integer math, and it was programmed to use

        • For example, if the application did a lot of integer math, and it was programmed to use 32 bit ints, merely recompiling won't make it use 64 bit ints. In theory, merely being able to use 64 bit ints could double the performance of your application right there (because rather than doing two 32 bit operations, it could do just one 64 bit operation), but that's only if your application knows how to do it.

          If you're using gcc's "long long" extension to achieve 64-bitness in 32-bit environments, then recompiling

    • by nukem996 ( 624036 ) on Monday September 12, 2005 @06:00PM (#13541573)
      I'm on a 64-bit system now. Every open source app that I use has been ported to 64-bit. The few 32-bit apps that I have run fine in a 64-bit environment; all I have to do is make sure I have 32-bit libs available for them. Recent versions of gcc and glibc offer the ability to do this without any trouble at all. The 32-bit userland was the norm when the AMD64 CPU first came out, but things have changed.
    • Not if you use a source-based distro. A friend of mine recently assembled an Athlon64 system using Gentoo [gentoo.org] and he's in love with the performance.
      • Gentoo would be 100x better if it had more extensive "build profiles." I'd love to say something like "emerge gnome-system" and have it up and build a full system for me with one command -- inclusive of a decent default USE= setting.

        In the mean time, I suffer with 2005.1 and constantly adding packages that should really have been included as dependencies in the gnome meta-package (hal and dbus, anyone?). But it *is* more stable than anything else around, IMHO.

        And it is *WAY* faster on AMD64 (although goin
    • I have been using a 64-bit userland under Gentoo Linux for over 1 1/2 years. I still have to run some 32 bit apps, like openoffice. It all just works. Of course I have a lot more respect for Gentoo's support ;->
    • by WhiteWolf666 ( 145211 ) <sherwinNO@SPAMamiran.us> on Monday September 12, 2005 @07:06PM (#13542134) Homepage Journal
      Windows has the same 32-bit cruft.

      With 32-bit apps, you need a 32-bit userland. That's the WoW64 bit; it's the 32-bit Windows on Windows cruft.

      The main difference is that the linux stuff is organized differently. lib is your 32-bit libraries, while lib64 is your 64-bit stuff.

      On Windows, the 'normal' location is where you would find the 64-bit libraries, and the WoW64 stuff is loaded from a separate directory.

      Implementation details: http://msdn.microsoft.com/library/default.asp?url=/library/en-us/win64/win64/wow64_implementation_details.asp [microsoft.com]

      Select Quote:
      The WOW64 emulator runs in user mode, provides an interface between the 32-bit version of Ntdll.dll and the kernel of the processor, and it intercepts kernel calls. The emulator consists of the following DLLs:
      Wow64.dll provides the core emulation infrastructure and the thunks for the Ntoskrnl.exe entry-point functions.
      Wow64Win.dll provides thunks for the Win32k.sys entry-point functions.
      Wow64Cpu.dll provides x86 instruction emulation on Itanium processors. It executes mode-switch instructions on the processor. This DLL is not necessary for x64 processors because they execute x86-32 instructions at full clock speed.
      Along with the 64-bit version of Ntdll.dll, these are the only 64-bit binaries that can be loaded into a 32-bit process.
      At startup, Wow64.dll loads the x86 version of Ntdll.dll and runs its initialization code, which loads all necessary 32-bit DLLs. Almost all 32-bit DLLs are unmodified copies of 32-bit Windows binaries. However, some of these DLLs are written to behave differently on WOW64 than they do on 32-bit Windows, usually because they share memory with 64-bit system components. All user mode address space above the 32-bit limits (2 GB for most applications, 4 GB for applications marked with the IMAGE_FILE_LARGE_ADDRESS_AWARE flag in the image header) is reserved by the system.


      It's a different methodology, but most likely one that works just as well. I like the Linux one better-- the "normal" 32-bit stuff lives in the "normal" places-- that way, you don't *need* an emulation layer for the 64-bit-unaware apps. Rather, 64-bit-aware apps know to look in the correct location for the libraries (well, they are told by the OS, anyway). The Linux Way (TM) is slightly more backward compatible, methinks. You'll *never* experience a problem with a 32-bit app on a 64-bit Linux system, while there are some bugs in WoW64 which will probably never be fixed; rather, they'll be 'phased out' in the usual MS fashion (ignored until irrelevant).

      Information on the Linux approach is here: http://www.hp.com/workstations/pws/linux/faq.html [hp.com]
      Mainly, when recompiling your apps to be native 64-bit, you need to observe the following:
      Simple. Just rebuild from scratch and the compiler will build 64-bit by default. This is true for most apps. However, some apps must be made 64-bit clean, which means that the developers must review the code to get rid of any assumptions about 32-bitness, such as pointer arithmetic issues. Some makefiles that explicitly declare paths such as /lib, /usr/lib and /usr/X11R6/lib might need to be changed to append "64".
  • 16TB addressable VM Space should be enough for ANYONE.
  • Good deal (Score:3, Funny)

    by gkozlyk ( 247448 ) on Monday September 12, 2005 @05:54PM (#13541516) Homepage
    Windows 64 seems to be a good deal. From the benchmarks I looked at, I get the same, if not worse, performance than the 32-bloat version. Not bad for $140US.
    • Windows 64 seems to be a good deal.

      Until you try to find 64bit drivers for your hardware...
      • Windows 64 seems to be a good deal.

        Until you try to find 64bit drivers for your hardware...

        Sadly, that jet engine you thought you heard was actually a well-executed (and badly Underrated) joke about Windows' memory requirements soaring past your head.

        Mod parent down Redundant, mod grandparent up t3h Insightfulz0r, and have a nice day.

  • whew! (Score:3, Funny)

    by Sebastopol ( 189276 ) on Monday September 12, 2005 @06:01PM (#13541583) Homepage
    Boy am I glad all the marketing hype helped make 64-bits a reality! Whew, I can sleep now.

  • Architecture change (Score:4, Informative)

    by fjf33 ( 890896 ) on Monday September 12, 2005 @06:02PM (#13541591)
    From what I've been able to understand from people who know a lot more about this than me: the main gain in going from the classic 32-bit x86 architecture to AMD64/x86-64 is that it brings into play some of the things learned from RISC architectures. Lots of registers that can be used instead of the much slower main memory. The speed comes not from the 64-bit wide bus but from being able to use these very fast registers to hold and pass information. So until compilers optimize for using registers instead of the stack, little will be gained except for higher memory requirements.
    • by Krach42 ( 227798 )
      The problem is that it's architected registers.

      The "coolest" thing that you can actually do in x86 is called memory value forwarding. (Or something like that).

      Basically, the CPU caches the value of the memory access in an unarchitected internal register. This means that you can write some code like:

      ROR [mem], 1
      ADD [mem], 2
      ROL [mem], 1

      And it will go faster than:

      MOV reg, [mem]
      ROR reg, 1
      ADD reg, 2
      ROL reg, 1
      MOV [mem], reg
    • The number of registers only double to 16 programmer-addressable general registers. Many RISC architectures have 32 or more, some 128.
  • While many users have been complaining that Win64 has little support for their devices, and many of their programs are still 32-bit, this is not the case on Linux. I have been running 64-bit Linux for over a year now and have found that everything works with no problem. Every open source app I've seen works fine on x86_64 Linux. In fact, the only reason you need 32-bit compatibility on Linux is for closed source software (mainly games). Linux is years ahead of Windows and probably any other OS in the 64-bit OS area,
    • by Synn ( 6288 )
      DEC Alphas in the mid-'90s were 64-bit, and Linux went through a fairly large push to clean everything up to work with them.

      I think that's one of the reasons why everything works so well with AMD64 today under Linux.
  • by serano ( 544693 ) * on Monday September 12, 2005 @06:07PM (#13541639)
    A few months ago I bought a new AMD 64-bit processor and motherboard. I installed XP Professional 64-bit edition, but the wireless MS mouse and keyboard I had wouldn't work. I couldn't find 64-bit drivers anywhere on MS's site, so I gave them a call. The person on the phone told me the keyboard and mouse wouldn't work with XP 64 and suggested I try another operating system. I asked if she recommended Red Hat or Gentoo, but she just said, "No comment. Is there anything I can help you with?"
  • Not in these apps (Score:5, Informative)

    by mobby_6kl ( 668092 ) on Monday September 12, 2005 @06:11PM (#13541679)
    None of these programs they tested showed any significant difference, but scientific [xbitlabs.com] benchmarks seem to show significant improvement. There's a much smaller, but still detectable, improvement in xvid/divx [xbitlabs.com] encoding. The 64-bit version of CINEMA 4D also benefits significantly in most cases (page 11).
    • This likely has to do with a number of nifty factors. For instance with GMP, where you're doing math on extremely large numbers. Normally you're stuck with doing something like this:
      MOV32 reg, [src1]
      ADD32 reg, [src2]
      MOV32 [dst], reg
      MOV32 reg, [src1+1]
      ADC32 reg, [src2+1]
      MOV32 [dst+1], reg

      Which works nice and all, but it's a ripple carry adder. Ripple carry is SLOW, because you have to wait on definitive resolution of the c
  • by HishamMuhammad ( 553916 ) on Monday September 12, 2005 @06:14PM (#13541696) Homepage Journal
    A 32-bit application that has any remaining 16-bit code won't run, because WOW64 doesn't support any 16-bit code.

    Hooray, it's about time. Further in the same paragraph:

    "Program Files" is reserved for 64-bit apps, while "Program Files (x86)" is for 32-bit software. This will sometimes result in strange installer behavior, as with Steam, Valve Software's game download application. Steam insisted that the parentheses in "Program Files (x86)" were illegal characters, and refused to install. You can either install Steam into a different folder (e.g., \games\valve) or change the folder name in the installer to "Progra~2\valve".

    Some things never change... ;)

  • Sad to say (Score:3, Interesting)

    by jmoo ( 67040 ) on Monday September 12, 2005 @06:14PM (#13541699)
    I've been messing around with Windows long enough to remember the 16-bit to 32-bit application jump made many years ago (when Windows NT 3.1 came out). A lot of the same stuff was said: lack of 32-bit apps, huge memory requirements (32 MB of memory!), poor driver support (not that 16-bit Windows was a lot better). Windows on Windows is nothing new; you still use WOW32 when accessing a 16-bit app in XP.
  • by markass530 ( 870112 ) <markass530@NOspAm.gmail.com> on Monday September 12, 2005 @06:20PM (#13541754) Homepage
    but I figure this is the only place I can get a good answer. I was just getting into computers during the 16-32 bit shift, Windows 95 etc. (I was 14). How come a new processor wasn't required like now? What's the difference? No need for complete layman's terms, as I consider myself a pretty avid computer geek, but certainly no engineer.
    • Actually there was a new processor required, same as there is now. The Intel 286 was 16-bit, while the 386 was 32-bit. If you wanted a 32-bit system, you needed a 386.
    • Intel x86 processors were already 32-bit as of the 386 processor. Therefore, when Windows went 32-bit, all the processors out there were already 32 bit. By contrast, until now all Intel and AMD processors have been 32 bit, with the only 64-bit processors being made by other smaller vendors. Therefore, the processor/OS upgrades are simply closer together this time, and it is more apparent. However, as another poster noted: the same driver/incompatibility issues were present when Windows went 32-bit, it w
    • That's largely due to the fact that when the 32-bit OS shift happened, most people already had 32-bit processors in their systems.

      I'd expect much the same thing to happen here. Microsoft will wait for proliferation of AMD64/EM64T chips before they make a strong push to 64-bit Windows. I'm actually surprised they've released it at all, personally...

      --S
    • by canadiangoose ( 606308 ) <djgraham@gm a i l .com> on Monday September 12, 2005 @07:31PM (#13542329)
      A new processor was required for the shift from 16 bits to 32 bits, but you may not have noticed because the processors came out well before the Microsoft software that supported them.

      The first x86 processor to feature 32-bit registers and addressing was the i386 [wikipedia.org], released in 1985. Support for the new 32-bit features of the chip was added to Windows slowly, starting with Windows 2.1 in 1987 (also known as Windows/386) [wikipedia.org], which provided support for virtual memory and somewhat improved multitasking. The 32-bit features in Windows were optional right through to Windows 3.1 in 1992; in fact Win3.1 runs fairly well on a 286/AT with 2MB of memory. Although Windows included some 32-bit code as early as 1987, it did not provide a 32-bit API for applications until the introduction of the Win32 API with Windows NT 3.1 [wikipedia.org] (1993) and Windows 95. There was also a free update released for Windows 3.1 called Win32s [uiuc.edu] that provided a subset of the Win32 API for Windows 3.1 and Windows for Workgroups 3.11, though it provided rather poor compatibility; major features like comctl32.dll and a real registry were not provided.

      The first version of Windows to offer a complete 32-bit kernel and drivers was Windows NT 3.1 [wikipedia.org]. It provided proper support for the 32-bit functionality as early as 1993, but it was not used much outside of a corporate environment. Home users had to wait for Windows 95, 10 frickin' years after the release of the 386!!! Even then, Windows 95 still contained a large amount of 16-bit code!

      Anyhow, I find it funny that people with Athlon64s are complaining about having to wait a year or two for a version of Windows that can make proper use of the processors. At least users now have the option of running 64-bit Linux or BSD, but alternative operating systems for the 386 didn't become available until 1993 with the release of BSD/386 [wikipedia.org] and OS/2 2.0 [wikipedia.org], neither of which was free.

      Well, enough of my rambling. Hope that answers your question :)

  • by mi ( 197448 ) <slashdot-2017q4@virtual-estates.net> on Monday September 12, 2005 @06:21PM (#13541764) Homepage Journal
    Complex data structures involve a lot of pointers -- all of which are twice as big on 64-bit machines. Sometimes this makes the pointers bigger than (or comparable to) the structures themselves.

    Most obvious are char * fields. If the string is 8 characters or less, it is cheaper to just store it in the structure (and pass it by value, where possible).

    Considering that most such strings (and substructures) are malloc-ed (with a couple of pointers' worth of malloc overhead), the case for embedding them becomes even stronger...
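
    To make the parent's size argument concrete, here is a minimal sketch in C (names hypothetical; assumes a typical LP64 or LLP64 64-bit target, where pointers are 8 bytes versus 4 on a 32-bit build):

    ```c
    #include <stdio.h>

    struct node {
        char *name;          /* 4 bytes on a 32-bit target, 8 on 64-bit */
        struct node *next;   /* likewise */
    };

    int main(void)
    {
        /* Both fields double in size on a 64-bit build, so the struct
           grows from 8 bytes to 16 even though it holds no more data. */
        printf("sizeof(char *)      = %zu\n", sizeof(char *));
        printf("sizeof(struct node) = %zu\n", sizeof(struct node));
        return 0;
    }
    ```

    A short name like "abc" costs one 8-byte pointer plus a heap allocation; embedding it directly in the struct would use the same 8 bytes with no allocation at all.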

    • How would you propose telling the difference between an embedded string and a pointer? I can think of a couple of ways, but they're not pretty. :-)

      --S
    • This has always been true, but here are the problems:

      To determine whether it's a pointer or an embedded string, you have to have some bit or set of bits dedicated to a switch.

      Well, you could dedicate the last byte to a true/false value. But then you can't use any pointer whose last byte happens to equal the sentinel: pick 0x00 and you can't use low memory addresses; pick 0xFF and you can't use high memory addresses.

      If you use the first byte then you can't address any location
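
    One way the tag can live outside the pointer bytes entirely, avoiding the address-range problem described above, is an explicit discriminated union. A minimal sketch (names hypothetical, not from the article):

    ```c
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* On a 64-bit machine a char * is 8 bytes, so strings of up to
       7 characters (plus NUL) fit in the same space as the pointer.
       A separate tag field records which representation is in use,
       so no pointer bit patterns are sacrificed. */
    struct small_str {
        uint8_t is_embedded;    /* 1 = bytes live in buf, 0 = heap pointer */
        union {
            char  buf[8];       /* embedded string, NUL-terminated */
            char *ptr;          /* heap-allocated string */
        } u;
    };

    static const char *small_str_get(const struct small_str *s)
    {
        return s->is_embedded ? s->u.buf : s->u.ptr;
    }

    int main(void)
    {
        struct small_str s = { .is_embedded = 1 };
        strcpy(s.u.buf, "abc");                 /* fits in 8 bytes */
        printf("%s\n", small_str_get(&s));      /* prints "abc" */
        return 0;
    }
    ```

    The tag costs a byte (typically padded), but malloc-aligned pointers also leave their low 3 bits zero, which is why many real implementations steal a low bit as the tag instead.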
  • The key to running 32-bit applications is something Microsoft dubs WOW64; WOW stands for Windows on Windows. Running 32-bit apps in x64 essentially gives each application its own 4GB of virtual memory space, which isolates it from other applications. So if one 32-bit application locks up, it only affects its memory space, not other running 32-bit apps.
    Isn't this exactly how it works anyway in Win32?
    • That's what they wanted you to believe. However, shared system processes would often lock up, freezing the system.

      Now, 32-bit applications can trip up on bugs in the WoW64 implementation, and since that is intimately tied to the kernel there's *another* thing to break ;-)
  • Wondering what the heck all the extra processing power is good for? You may play games, compile apps, or even brute-force-crack your favorite target server, but there's one place where you're *guaranteed* to want faster hardware: generating 3D ray-traced graphics!

    Yes, I played the Sims, compiled gcc, ran Python chatterbots, had KDE in maximum eye-candy-mode and ran multiple processes in desktops 1-10, but the day I began trying to render a scene with transparent height-fields and looped ISOsu

  • by BarryNorton ( 778694 ) on Tuesday September 13, 2005 @03:06AM (#13544903)
    Terrible summary of a meaningless article
