Comment three words: flat file system (Score 4, Informative) 263

As pointed out in this review:

You can move whole directories, but the Kindle flattens them out, listing every file (by file name) separately on the main home page.

You can't organize PDFs into directories on the Kindle, which makes accessing a large number of PDFs a serious problem. It's like 1984.

(The lack of PDF annotation capability is also a headache.)

Comment Re:the usual BS about 64-bit (Score 1) 770

I haven't seen those benchmarks, and I'm quite skeptical of your claim that ZFS is CPU-bound by 64-bit integer operations. The only cases I've heard of where filesystems are CPU-bound involve compression and/or encryption, and both of those can operate on narrower integers (possibly packed into 128-bit vectors à la SSE).

As for simulations, the overwhelming majority of scientific simulations rely on floating-point arithmetic.

Comment Re:the usual BS about 64-bit (Score 1) 770

In 32-bit mode, PowerPC and SPARC registers are 32-bit (on 64-bit systems they're just sign-extended when running 32-bit code).

I can't believe how much total nonsense is being propagated on this thread. To quote the IBM PowerPC Instruction Set Architecture manual:

Implementations of this architecture provide 32 floating-point registers (FPRs). [...] Each FPR contains 64 bits that support the floating-point double format.

So, you have it completely backwards: there are no 32-bit single-precision registers on PowerPC, whether in 32-bit or 64-bit mode. It only has 64-bit double-precision registers. Single-precision floating-point operations (not including AltiVec) are done with the double-precision hardware, and are rounded to single-precision when they are spilled. So, storing a 64-bit floating-point value does not "halve" the number of available registers: it uses exactly the same (double precision) registers as for single-precision math.

The situation is very similar on x86: single and double precision use exactly the same hardware registers (which are actually 80-bit extended-precision registers on 32-bit x86 machines); going from 32-bit to 64-bit fp types does not halve the number of physical registers available. (And thanks to hardware register renaming, the instruction set's nominal limitation on the number of registers is not really a practical limitation; the hardware lets you exploit the much larger set of real physical registers.)

Judging by this thread, the "64-bit CPUs can process data twice as quickly as 32-bit CPUs" hooey has been circulating for so long that people have begun to invent works of fiction to justify it in their minds.

Comment Re:the usual BS about 64-bit (Score 1) 770

Sorry, you're still wrong.

The document you are referring to describes support for Intel SIMD extensions on AMD processors. Originally, AMD supported its own 64-bit SIMD extension called 3DNow!, which could do two single-precision fp operations at once. Intel, on the other hand, had 128-bit SIMD instructions called SSE, which could do four single-precision fp operations at once. When AMD added SSE support to its processors, it essentially emulated SSE with 3DNow!, so that one SSE instruction really used two 3DNow! operations and took two cycles. Intel also had SSE2 instructions that could do two double-precision fp operations in a cycle, which AMD emulated in two cycles with its ordinary fp unit. The document you are linking describes the fact that AMD eventually dropped 3DNow! and added true hardware support for SSE/SSE2, so that it could do all four/two fp operations in a cycle.

This happened to coincide with their switch to amd64, since they took advantage of the change in instruction set to drop their old 3DNow! instructions. But it has nothing to do with "64-bit" per se. Moreover, on Intel 32-bit CPUs, the 128-bit SSE instructions did execute in a cycle because they were supported in hardware. And even the old 32-bit AMD cpus did 64-bit floating-point and 3DNow! operations in a cycle.

Comment Re:the usual BS about 64-bit (Score 1) 770

You really don't get it, do you? Even on a 32-bit processor like x86, there are programmer-visible (IN THE INSTRUCTION SET) 64-bit and 128-bit registers. The 64-bit registers are the double-precision floating-point registers (in fact, these are 80-bit extended-precision registers on x86), and the 128-bit registers come from SSE and the other Intel SIMD extensions. The terms 32-bit and 64-bit refer only to the size of addresses, not to all registers or anything else.

The rest of your post follows from your total ignorance, and I will ignore it.

Comment Re:the usual BS about 64-bit (Score 1) 770

64bit chunks of computations instead of 32bit chunks. So the data being 'computed' is in native 64bit chunks - and in theory could be twice as fast in an optimal pass.

Yes, you've managed to paraphrase the usual bullshit that you've been brainwashed with quite well. Here's a hint: the 32-bit x86 processors had 64-bit (and 128-bit) registers already.

2) 64bit CPU features - more registers, other AMD64/EMT64 features

Nothing to do with 64 bits, and certainly not a simple "factor of 2" in "data processing rates" or "instructions per clock cycle". As I said, newer architectures can certainly be faster for reasons that have nothing to do with 64 bit addressing, and which have nothing to do with hype about "factors of 2" gained by "going from 32 bits to 64 bits" (which is obviously twice as good, right?)

3) Combined memory read writes, for example in Vista x64 when a 32bit application is reading or writing to RAM the OS can often combine two 32bit read/writes into ONE 64bit read/write, thus speeding up RAM access.

See point (1): the 32-bit processors could do this already.

Comment Re:the usual BS about 64-bit (Score 1) 770

Bullshit.

You have a 64-bit data path even on old 32-bit processors: it's called double precision. And there are 128-bit data paths, too, on any 32-bit Intel x86 CPU for the past 10 years: MMX and SSE*. The width of the SSE registers (the single-precision SIMD extensions you are referring to) did not increase with x86_64/amd64/em64t.

The width of the widest data register has absolutely nothing to do with the size of the address space.

Comment the usual BS about 64-bit (Score 5, Informative) 770

I was dismayed to see this old canard in Apple's MacOS Snow Leopard technology summary:

64-bit computing [...] enables computers to process twice the number of instructions per clock cycle, which can dramatically speed up numeric calculations and other tasks.

Haven't people learned by now that this is total BS? 64-bit addressing is independent of instructions per cycle, bus width, or anything like that. (Of course, newer 64-bit systems may happen to be faster for other, unrelated reasons.) The old "64-bit is twice as fast as 32-bit" line is hooey that has been sold to the public for years now (I recall it gaining prominence when Intel started promoting its Itanium plans), but I thought it was finally dying out.

Comment Re:Wrong: Linux DLL names encode ABI compatibility (Score 1) 993

It's quite hard to keep C++ libraries ABI compatible across releases, because of fragile base class problems as well as ABI changes with new releases of C++ compilers (grrr).

If your ABI changes, then your .so version number has to reflect this and software needs to be recompiled. My point was that, contrary to Tweenk's claims, if the ABI of a new library version is backwards compatible, the Unix/Linux .so versioning scheme can reflect this and software doesn't need to be recompiled.

With C libraries, it's usually possible to maintain ABI compatibility even when adding new features. Unfortunately, proper .so versioning cannot fix the many ways in which C++ sucks.

Comment Wrong: Linux DLL names encode ABI compatibility (Score 1) 993

The .so name of a shared library under Linux encodes not only whether the library has been revised, but also indicates ABI compatibility.

If the library authors know what they are doing, when they release a new version of a library they will set the .so version number (different from the human-targeted source-code version number) to reflect which previous versions of the library the new release is compatible with. As a result, ABI-compatible software does not need to be recompiled.

See, for example, the shared-lib versioning documentation for GNU libtool.

Comment Re:we've tried a few of these... (Score 1) 328

LyX has built-in templates for RevTeX, so it works well with most physics journals. I occasionally run into a journal that uses a custom non-RevTeX stylesheet not supported by LyX, but it's always possible to make it work. The easiest approach is to format for the journal at the very end, right before submission, by exporting from LyX to LaTeX and then switching over to the new style file (and making any minor changes that requires). (It's also not too hard to write a new LyX template for a new stylesheet if you want to do everything in LyX.)
