
Porting to 64-bit Linux

An anonymous reader writes "As 64-bit architectures continue to gain popularity, it is becoming more and more important to make sure that your software is ready for the shift. IBM developerWorks takes a look at a few of the most common pitfalls when making sure your applications are 64-bit ready. From the article: 'Major hardware vendors have recently expanded their 64-bit offerings because of the performance, value, and scalability that 64-bit platforms can provide. The constraints of 32-bit systems, particularly the 4GB virtual memory ceiling, have spurred companies to consider migrating to 64-bit platforms. Knowing how to port applications to comply with a 64-bit architecture can help you write portable and efficient code.'"
This discussion has been archived. No new comments can be posted.

  • Just a recompile? (Score:1, Informative)

    by Bromskloss ( 750445 )
    Provided your code isn't written in assembly, do you really _have_ to do anything else than to recompile it? Of course, you might want to make changes to make better use of the 64 bits, but to just make it run, wouldn't this be enough?
    • Usually yes, but sometimes the code is so sloppily written that it won't simply build for a 64-bit architecture, and even when it builds, users may suffer weird run-time crashes.
    • Not quite (Score:1, Interesting)

      by Sqwubbsy ( 723014 )
      Well, if you RTFA you'll see that there are issues with how registers are handled on certain integer values for example.
      I'm sure it won't affect your VB app, but it could affect something written in C/C++.

      I'm just wondering if this is what is holding up an AMD64 version of Flash [macromedia.com].
      • I'm sure it won't affect your VB app,

        Or your Python, Perl, Pascal, Ruby, Tcl, Java, COBOL, FORTRAN, PL/1, Prolog or Forth programs.

        I'm just wondering if this is what is holding up an AMD64 version of Flash.

        Sloppy code?
    • In theory, yes. However, programmers usually make stupid mistakes, like assuming that sizeof (int) == sizeof (void *) and so thinking they can cast pointers to int and the other way around, while on a typical LP64 platform (like AMD64), ints are 32 bits wide and pointers 64 bits, so the cast will not work as expected.

      All in all, nothing new here: well-written portable code just needs a recompile, and everything else will need to be debugged.

      • Some of their examples do seem pretty horrible, though:
        int *ptr;
        int i;
        ptr = (int *) i;
        I'm struggling to see why anyone would do that. All I can think of is something like an API using int32 'handles', which are actually pointers internally. That's pretty ugly, though - opaque pointers would be better.
      • assuming that sizeof (int) == sizeof (void *)

        Which reminds me of something... Isn't it time for C and its likes to let us specify explicitly how many bits we want for a variable? I would like to tell the compiler that a variable should be _exactly_ 32 bits and another one _at least_ 64 bits. It's appears strange to me that an int seems to be allowed to be, well, anything it wants to be, and I will never know. Throw in arbitrary precision floats too while we're at it. If the hardware cannot handle 523-bit

        • I doubt that will ever make it into C, because the main philosophy of C is to only provide a thin layer over the bare metal, allowing you to write portable programs that run on many architectures but can also be well-optimized for a given architecture. People wouldn't want to have their code simulated in software without their explicit knowledge.

          However, you could implement this in a library in C++ using compile-time templates, implying very little run-time overhead (if any) for stuff that can be computed directly in hardware.
          • People wouldn't want to have their code being simulated in software without their explicit knowledge.

            But it's already done! Even if you lower your expectations, not requiring the processor to do 153-bit floating point maths, but settling for, say, 32 bits, not all processors will be able to give that to you. Thus, if you're feeling modern and brave, using floats anyway, the compiler will have to emulate it for you. Anything less, I think, would require knowledge of the instructions present on the target C

          • However, you could implement this in a library in C++ using compile-time templates, implying very little run-time overhead (if any) for stuff that can be computed directly in hardware.

            Proving that those who don't use Common Lisp are doomed to reimplement it...

        • Isn't that mostly achieved with types.h? int16_t, uint32_t etc.
        • by Anonymous Coward
          What you want is provided by the stdint.h header (and before that, inttypes.h), which among other things provides types which are at least X bits, exactly X bits, or the fastest type with at least X bits. Only the at-least-X-bits types are guaranteed to exist, up to the maximum word size on the machine. (After all, C is still used on machines with 24-bit and 31-bit word sizes, among others, and some that don't even use two's-complement arithmetic.)

          However, using something like "int_least32_t" directly in your
        • My latest C project is an embedded avr system, where this is of course very important. The solution has been to make a header file with a bunch of typedefs in it like:

          typedef unsigned char uint8;
          typedef unsigned short uint16;

          And so forth. Then I exclusively use the new types. If I need to compile to another platform, I just need to change the portable.h file.

          You can even go one further with:

          #if sizeof(unsigned char) == 1
          typedef unsigned char uint8;
          #else
          #error "No uint8 type available"
          #endif
          • hmm, it seems

            #if sizeof(uint8)!=1

            is not valid syntax.  Well, I'm sure there's a way to do it properly.  Anyone?
          • As another post noted use int8_t, uint8_t, int16_t, uint16_t and etc. These are defined by ANSI C99 so that you will already find them when using GCC, Visual Studio and any other modern compiler.
          • #if sizeof(unsigned char) == 1
            typedef unsigned char uint8;
            #else
            #error "No uint8 type available"
            #endif
            That way the compiler will warn you if there's a problem when you switch platforms.


            Um, no, actually it won't, because the sizeof operator returns the size of the type in chars. That is to say, sizeof(char) is 1 by definition, regardless of the number of bits in a char.
          • My latest C project is an embedded avr system, where this is of course very important. The solution has been to make a header file with a bunch of typedefs in it like:
            typedef unsigned char uint8; typedef unsigned short uint16;
            And so forth. Then I exclusively use the new types. If I need to compile to another platform, I just need to change the portable.h file.

            Two comments:

            Get yourself a compiler/stdlib with implements the official C99 known size types: uint8_t, uint16_t and so on. Or have fun when

          • 1) The preprocessor has no concept of "sizeof"
            2) sizeof(unsigned char) is always 1 by definition
            3) The latest ISO C standard already provides these types
        • Isn't it time for C and its likes to let us specify explicitly how many bits we want for a variable? I would like to tell the compiler that a variable should be _exactly_ 32 bits and another one _at least_ 64 bits. It's appears strange to me that an int seems to be allowed to be, well, anything it wants to be, and I will never know. Throw in arbitrary precision floats too while we're at it.

          People who have forgotten COBOL and Binary-Coded Decimal are doomed to repeat it, poorly.
          • People who have forgotten COBOL and Binary-Coded Decimal are doomed to repeat it, poorly.
            Would you like to elaborate on that? I _do_ want arbitrary precision and it would be nice to have it built in, instead of using libraries. Is that necessarily bad?
            • Would you like to elaborate on that? I _do_ want arbitrary precision and it would be nice to have it built in, instead of using libraries. Is that necessarily bad?

              COBOL uses IBM's version of BCD. Decimal numbers are a fundamental data type in COBOL, and IIRC you can make them as big as you want with PICTURE clauses.

        • An int cannot be "anything it wants to be." It is required to be at least 16 bits. All the primitive types have size guarantees. int is special, though: it is usually chosen to be the target machine's most efficient size. Plus, the latest C standard defines typedefs for particularly sized integers.

          Ada allows you do what you want. You want an integer that is always 32 bits:
          type Int32 is new Integer range -2**31 .. 2**31 - 1;
          for Int32'size use 32;
        • Which reminds me of something... Isn't it time for C and its likes to let us specify explicitly how many bits we want for a variable? I would like to tell the compiler that a variable should be _exactly_ 32 bits and another one _at least_ 64 bits.

          Sounds like you want the "kind" functionality of Fortran 90.

          Back in the bad old days, a REAL in Fortran could be anywhere from 32 to 64 bits - a program that ran fine using REAL on a CDC-6600 (60 bits) might die horribly using REAL on an IBM 360 (32 bits but usi

      • Even well written code can have problems.

        Specifically, say I have a 64 bit platform capable of running both LP64 code and ILP32 (legacy) code.

        I use a shared memory segment to communicate between my legacy 32 bit applications, and it has internal use of pointers to perform self-reference on data.

        [Rather than complicating things, let's just assume that the pointers are internally based off the base address of the shared memory segment, rather than being based off of 0, so there is no requirement of mapping th
    • Re:Just a recompile? (Score:5, Informative)

      by cnettel ( 836611 ) on Wednesday April 19, 2006 @04:41AM (#15155664)
      Unless you assume:
      1. sizeof(int) == sizeof(void*), or
      2. sizeof(int) == 4
      If your codebase only makes the first OR the second assumption, you can tweak the compiler to like you by defines. If you also assume that sizeof(void*) == 4, you have bigger problems. Note that you can do this in rather innocent ways, like dumping a complete structure on disk, knowing that pointer values will be invalid, but just assuming that the structure will be the same size if you read it back later.

      In addition, and this is hellish, a 32-bit MOV is (generally) atomic on x86. You can rely on the high-order word and the low-order word staying together, without race conditions. The memory access semantics are different on x64 and many other platforms. This is not related to 64-bitness per se - you could see it if you ported to multi-threaded 32-bit PPC as well - but it will still surface if you do the transition to AMD64/EM64T/x64. Or rather, it will result in an additional one-in-a-million crash in your source, that you'll blame on bad memory chips in the user's machine.

      • On all Linux systems:

        sizeof(int)==4

        the "long" and "void*" data types may be atomically written, 64-bit or not

        Also:

        sizeof(long)==sizeof(void*)

        sizeof(long long)==8

        This is quite standard for 32-bit and 64-bit systems. The only major OS to
        violate this is Win64, which kept a 32-bit long and thus can't safely cast
        a void* to long and back again. Linux, BSD, Solaris, MacOS 9, Win32, OS/2,
        VMS, VxWorks... they all work as Linux does. (screw Win64)
      • In addition, and this is hellish, a 32-bit MOV is (generally) atomic on x86. You can rely on the high-order word and the low-order word staying together, without race conditions. The memory access semantics are different on x64 and many other platforms. This is not related to 64-bitness per se - you could see it if you ported to multi-threaded 32-bit PPC as well - but it will still surface if you do the transition to AMD64/EM64T/x64. Or rather, it will result in an additional one-in-a-million crash in your source
        • The AMD guys certainly don't seem to be sure that AMD64 satisfies this. See this thread [amd.com].

          Further, I suspect that none of the processors support atomic writes of 64-bit values that are not aligned on an 8-byte boundary. If your code has not been written to ensure that values are always on appropriate boundaries (and it's very easy to get this wrong, even if you're aware of the issue), this will probably bite you. At work, we run a lot of software on both Intel and Sparc processors. It is far from unusual

          • I think those are just some people confused about what the LOCK prefix is for (i.e., to force an instruction to execute atomically that otherwise would not be expected to). A lot of stuff would fail very spectacularly if simple 64-bit reads and writes weren't atomic.
      • So what you're saying is that if you have a variable, that is shared between two (or more) threads and you haven't protected it with e.g. a mutex, a condition variable or a semaphore, you could have a problem?

        Well, yeah, isn't that parallel computing 101?

    • by baadger ( 764884 )
      The answer is no; RTFA. This is the exact perception that it lays to waste.
    • no (Score:5, Insightful)

      by sentientbrendan ( 316150 ) on Wednesday April 19, 2006 @04:52AM (#15155690)
      Generally architecture changes, compiler version changes, break code on large projects. Over a million lines of code, any tiny little difference in the platform that the original developers didn't think to account for will come up *somewhere*. A good example of this is if you are dumping data structures to disk or network and write a size_t variable. Suddenly, you can no longer communicate between 32 bit and 64 bit versions of your software.

      As a general rule, "just a recompile" *never happens* for any architecture and compiler change on a project above a certain size. Compiler writers break compatibility with some little ol' thing they don't think anyone is using, but which everyone is actually using in *every* version, fail to implement uncommon or difficult language features, add non standard features that other compilers don't support. Then application developers do things like not swapping to network byte order and using architecture dependent data types (size_t as in the example). Between different unices, header file contents will change.

      The fixes are often not that hard (usually trivial) to do between, say, versions of the same compiler, or endian switches... but they are still there and annoy the hell out of people trying to compile old open source software on a new platform, as Mac OS X was a few years ago and x86-64 is now. There's always growing pains.
      • Generally architecture changes, compiler version changes, break code on large projects. Over a million lines of code, any tiny little difference in the platform that the original developers didn't think to account for will come up *somewhere*. A good example of this is if you are dumping data structures to disk or network and write a size_t variable. Suddenly, you can no longer communicate between 32 bit and 64 bit versions of your software.

        In general, I agree. But the example is not a good one. Dumping da

        • Not really. It's stupid to dump the padding to disk, and it is stupid to not put things in network byte order, but not everything should be in plaintext... that's a tremendous waste of space and makes random file IO impossible in many cases since records aren't of uniform size.
    • by Keeper ( 56691 )
      That really depends on the code. x64 changes the size of a standard pointer, but not the size of a word. In the real world (assuming we're talking about an app which was always 32-bit only), once you get something to build against x64, you're about 90% done (because coders are human, and people do stupid shit sometimes).

      In my experience, most of the problems will center around using non-pointer types with pointer types. Mostly around bounds checking, offsets into arrays, pointer arithmetic, etc.
      • You are wrong, for some definitions of "word".

        To AMD and Intel, a word is 16 bits. This is seen in the Intel-style assembly that masm and nasm use.

        By the ELF binary specification, a word is 32-bit or 64-bit according to the platform. So the word size did change.

        The traditional idea, with a word being the size of a register, is like the ELF spec.

        The C programming language has no such thing. On both i386 and x86-64, sizeof(int)==4 and sizeof(long long)==8. On x86-64, sizeof(void*)==8. On i386, sizeof(void*)==4.
    • Provided your code isn't written in assembly, do you really _have_ to do anything else than to recompile it?

      Do you realise how difficult it is to find a healthy goat and sacrificial knife these days?
    • “Provided your code isn't written in assembly, do you really _have_ to do anything else than to recompile it? Of course, you might want to make changes to make better use of the 64 bits, but to just make it run, wouldn't this be enough?”

      Ideally, that would be the case, but in the real world, not really. A couple weeks ago I got an AMD64 box, and since then I've been working on porting my Linux distribution over. Not exactly the hardest thing I've done, but nowhere near the easiest, either.

    • I used to think so too, but then qmail's CRAM-MD5 patch didn't work on my machine. Among other things, while I think Linux-AMD64 treats "int" as 32 bit, it treats "long int" as 64-bit. This causes problems if people assume "long int" means 32-bit, which a lot do. This is why for size-critical stuff, I always make it explicit, with types like u_int32_t and such.
    • When Linux was ported to the Alpha and Itanium, libraries went in /lib. Emulation libraries go somewhere under /usr, like /usr/lib/i386-linux-elf/lib or maybe /emu/i386-linux-elf/lib. Any app being natively compiled didn't need to care about this cruft.

      Then AMD told SuSE to make x86-64 run all i386 binaries perfectly, including installers that would expect to use the /lib directory. Not that we want old cruddy i386 binaries!

      So now we're supposed to use /lib64. It is so lame. It causes all sorts of trouble p
      • We'll have a /lib directory without libraries, and the "/lib64" wart lasting until the end of time

        Nah. That's a bit of a pessimistic outlook. Already today, /lib64 is a mere symlink to /lib on current distributions. The symlink may have to be kept around for a while, though, until the early nomenclature oopses have been effectively phased out.

        $ uname -m
        x86_64
        $ ls -ld /lib*
        drwxr-xr-x 17 root root 4544 2006-03-26 14:52 /lib
        drwxr-xr-x 2 root root 2120 2006-04-02 13:59 /lib32
        lrwxrwxrwx 1 root root 3 2006-02-28

        • Been using gentoo on amd64 for almost a year now, and I was wondering if there is a technical reason why it's:
          /lib
          /lib32
          /lib64 -> lib
          instead of:
          /lib32
          /lib64
          /lib -> /lib64
          One martini on an empty stomach into the night and my desire to apply any analytical thought is out the window... at least until tomorrow...
  • by way2trivial ( 601132 ) on Wednesday April 19, 2006 @05:10AM (#15155723) Homepage Journal
    I have windows XP 64bit edition, and let me tell you- that 128GB ram limit really pulls me down..
  • by PhrostyMcByte ( 589271 ) <phrosty@gmail.com> on Wednesday April 19, 2006 @06:22AM (#15155881) Homepage
    If you don't make assumptions about pointer sizes in your code, always use size_t in the appropriate places, etc, it is generally just a quick recompile for x64. I find a lot of open source code (I'm sure this isn't exclusive to open source, but, well, I can't see closed source!) spits out hundreds to thousands of warnings about assigning the return of strlen() to an int and other similar and usually harmless things, but most of the time it Just Works (tm).

    The only area where I've run into things being significantly harder is writing clean lock-free algorithms, due to the lack of a CMPXCHG16B instruction in the original spec - only EM64T and very recent AMD64 models have it. There are a couple of ways to hack around this limitation but they aren't very pretty.
    • The hard part -- at least for people without a decade of intensive C background -- is knowing the "appropriate places" for size_t, ssize_t, socklen_t, and all those other fun types that are used only sometimes or in some places or on some platforms. System libraries are getting better about consistency and compatibility, and compilers are getting better about type conversion warnings, but source code that wants to run on different OSes released before 2001 ends up growing a maze of twisty little #ifdefs, a
  • I tried compiling an application for 64-bit, and the problem I found is that many libraries weren't available in 64-bit versions. I don't mind compiling something for 64-bit, but I do mind compiling the application and a few libraries, and the libraries they depend on, recursively ad nauseam.

    Finally, when I did get it working, the maintainer didn't have a 64-bit OS so they weren't interested in hosting the RPM I built. It seems like until enough people have 64-bit systems, nobody really cares about it.
    • Yep. I will second that.

      I had a very similar experience when working on 64-bit Linux 6 years ago, in the days of Debian on alpha. I ended up lifting many libraries out of the NetBSD tree and rebuilding them for Debian, because it was the only project at the time which was meticulously cleaned up to be both endian-clean and int-size-clean.

      After getting some things working I could not get the packages and patches back because people could not verify them.

  • by ChaoticCoyote ( 195677 ) on Wednesday April 19, 2006 @08:07AM (#15156225) Homepage

    I've been running a 100% 64-bit dual Opteron rig for almost two years, under Gentoo. No emulation libraries, no multilib, just 64-bit code. Other than Open Office, I've had almost no trouble at all.

    BTW, "64 bits" don't make programs run faster (in general) — code compiled for AMD64/EM64T runs faster than its 32-bit counterpart (for the most part) because of the extra general-purpose registers in the AMD 64-bit design.

  • Use stdint.h! (Score:5, Informative)

    by Chemisor ( 97276 ) on Wednesday April 19, 2006 @08:42AM (#15156441)
    The article doesn't appear to mention this, but there is a C99 standard header stdint.h, which defines fixed width types. I haven't seen any OSS project use it, for some reason, but it has all the types you need for portable development; int32_t, uint64_t, constant wrappers like UINT64_C, and, of course, limit constants for all of the fixed-size types. Using these is much better than all those size-based #ifdef'ed typedefs I see people use all over their code.
    • The article doesn't appear to mention this, but there is a C99 standard header stdint.h, which defines fixed width types. I haven't seen any OSS project use it, for some reason, but it has all the types you need for portable development; int32_t, uint64_t, constant wrappers like UINT64_C, and, of course, limit constants for all of the fixed-size types. Using these is much better than all those size-based #ifdef'ed typedefs I see people use all over their code.

      Umm, try NetBSD maybe? It's the most portable sy
      • Linux runs on every combination of big/little endian and 32/64-bit systems, so I fail to see your point about NetBSD being unique in that regard.

        32-bit big-endian: SPARC
        64-bit big-endian: UltraSPARC
        32-bit little-endian: IA-32
        64-bit little-endian: Alpha, EM64T

  • by AK76 ( 966804 ) on Wednesday April 19, 2006 @08:53AM (#15156539)
    I did a lot of 64-bit cleaning up for the PHP project, and I can tell you that there are more subtle issues that may arise when porting from 32-bit to 64-bit.

    One example:
    on a 32-bit Intel machine, a double is precise enough to distinguish LONG_MAX (the highest representable long) from LONG_MAX+1 (a number that doesn't fit in a long anymore). So for instance, to determine whether a long multiplication has overflowed, you could repeat the same multiplication using doubles and compare the result to (double)LONG_MAX.
    In contrast, on a 64-bit platform LONG_MAX and LONG_MAX+1 are mapped to the same double representation, so there's no way to do the comparison anymore.
    As this example involves static casts, it is something the compiler will usually not warn you about.

    Another thing to be careful about is passing pointers to variadic functions (e.g. sscanf), because usually the compiler doesn't know the expected types, as they are buried in the format string, not in the function prototype.
    • by Anonymous Coward
      One example:
      on a 32-bit Intel machine, a double is precise enough to distinguish LONG_MAX (the highest representable long) from LONG_MAX+1 (a number that doesn't fit in a long anymore). So for instance, to determine whether a long multiplication has overflowed, you could repeat the same multiplication using doubles and compare the result to (double)LONG_MAX.


      That seems like a terrible way to do it... couldn't you just find the highest set bit position in the multiplicands and add?

      Or better yet, IIRC when yo
  • The silly thing is that his big-endian/little-endian program would break on a 64-bit big-endian system, which would return '0', and not 0x12 or 0x78. You can use -2 and check for *(unsigned char *)&i == 0xff or 0xfe... but, in that case.

    Going off on a tangent:
    I have no idea what a 36-bit signed-magnitude integer mainframe (( Yeah, they really existed -- CDC made them )) would return for *(unsigned char *) (int)-2. It would probably be 0x80 or 0x40 -- but it might be 0x800 (CDC used 6-bit characters, an

    • 9-bit (Score:3, Informative)

      by r00t ( 33219 )
      char was 9-bit

      C requires at least 8 bits for char, so 6 isn't good enough.
      All types must be a multiple of the size of char, because
      sizeof(char) is 1 by definition and fractions are not OK.

      Valid sizes are thus: 9, 12, 18, 36

      The char-short-int-long progression may be one of:

      9,18,18,36 a likely choice
      9,18,27,36 this is the cool way: sizeof(int)==3
      9,18,36,36 a likely choice
      9,27,27,36
      9,27,36,36
      9,36,36,36 a likely choice
      12,24,24,36
      12,24,36,36
      12,36,36,36
      18,18,18,36
      18,18,36,36
      18,36,36,36
      36,36,36,36
      • Probably 12/36/36, since characters are a multiple of 6 bits (either 6 or 12) and a 24 bit short just doesn't seem to make much sense.
        • No, it was 9-bit. 12 was not used for some reason.

          24-bit shorts make perfect sense. I suspect they got smart about powers of two after making the mistake of using a 36-bit word, and decided not to have sizeof(long)==3.

          So char was 9 and long must have been 36. (long could have been bigger, but I doubt it was) The remaining two were most likely 18 or 36. These are most likely:

          9/18/36/36

          9/18/18/36

          Also, 9/36/36/36 was somewhat likely.
    • I have no idea what a 36 bit signed-magnitude integer mainfraim (( Yeah, they really existed -- CDC made them ))

      CDC made 48-bit machines (1604 and 3000 series) and 60-bit machines (6000 series, 7600 and some Cybers) but not a 36-bit machine AFAIK. The 6600 had 60-bit reals and long ints, 18-bit short ints, 12-bit words for the peripheral processors - a real PITA for C.

  • I am wondering if this 64-bit porting article is written specifically for Macromedia [macromedia.com].
  • 64 bit porting is more of a compiler problem.

    In particular, the GNU toolchain has a very poor ability to complain about long/int coercion. It also doesn't have a 64 bit pointer type for use in 32 bit code - so any 32 bit code you need to talk to from 64 bit ends up handing around a long long, and since this is just an integer type, there's no problem with assigning it to another integer type, and potentially losing resolution (and bits off the pointer, should it be converted/passed back).

    Minimally, the too
    • by Anonymous Coward
      I dunno what 'GNU toolchain' you're using, but I always get warnings about truncation in assignments and comparisons, and with GCC 4.x the signedness warnings are so strict as to almost be absurd (they may have crossed the line with char vs signed char vs unsigned char warnings--yes those are all three distinct types, regardless of whether plain char is effectively signed or unsigned).

      The `quad' type was a BSD anachronism, and in any event "Unix"-specific (quad what? C doesn't guarantee that char is an octet.)
      • GNU toolchain and not giving warnings

        You are incorrect.

        The following is some code that does not warn that the resolution of "long l" is potentially insufficient to store the value contained in "long long ll":

        #include <stdio.h>
        #include <stdlib.h>

        int
        main(int ac, char *av[])
        {
            long long ll = 5;
            char buf[128];
            int constant;
            long l;

            printf("Enter constant: ");
            fgets(buf, sizeof buf, stdin); /* gets() is unsafe; use fgets() */
            constant = atoi(buf);

            l = ll * constant; /* long long silently narrowed to long */
            printf("%ld\n", l);
            return 0;
        }

  • Crank the warnings up as high as the OS' include files can bear and try recompiling.

    Then -- patiently fix them all. You know, you planned to do that for years. Do it before trying to build a 64-bit version.

    Then -- try the 64-bit version and fix all the warnings you missed before. void * to int conversions are my personal favorites...

    Resist the temptation to invent your own types, though (Mozilla's source tree is awful in this regard). Use the standard int32_t or uint64_t, where the number of bits matters.

  • I had to change a bunch of 'int' to 'long' to get something to compile.
    • That's great... for HelloWorld.c.

      Now... get it to actually work, be backwards compatible with your 32 bit compiler, and do something even slightly nontrivial. Say, pack a struct into a datagram on your 64 bit host, send it to a 32 bit host, and unpack it into a struct with the same byte alignment (using the same code on both hosts).
