
Comment Re: Send in the drones! (Score 1) 848

Neville Chamberlain was in a tough position, as the United Kingdom had largely disposed of its military in the aftermath of World War I. Its navy was certainly world-class, but the army, and anything else that could have been used to stop Germany, was basically non-existent. Ditto for the U.S. Army (there was even serious legislation before Congress to disband it altogether and rely strictly on the state militias for national defense). The rest of the world was disarming at the very time Germany was moving into the Rhineland and elsewhere.

Military intelligence was also miserable at the time. Germany purposely inflated the apparent size of its army by marching the same units across prominent bridges (easily seen by observers)... only to ship them by train back into Germany so they could march over the same bridge several more times. The UK and France thus thought Germany had committed many more soldiers to those early occupations than was really the case, and the occupations might have been stopped simply by calling Germany's bluff.

I don't know whether it is too late to do that with Putin's Russia... which I suppose is the question some are asking right now.

Comment Is the anonymous reader aware of Europe? (Score 4, Informative) 221

They say

I also wonder if the vaunted Canadian healthcare system plays a role. When advances in medical science are something you automatically expect to benefit from personally if you need them, they look a lot better than when you have to scramble just to cover your bills for what we have now.

but it sounds as if they're comparing the Canadian system for paying for health care with the US system, as opposed to the systems used in, for example, Western Europe.

Comment Re:There's a lot more going on... (Score 1) 161

That's absolutely correct, unless of course you count the fact that, for a given die size, you can't build a CISC CPU with as many registers for storing and manipulating data without touching the cache as a RISC CPU.

You can't? You can't trade off, say, transistors used for registers (especially given that the bigger processors do register renaming, so you have more hardware registers than the actual RISC/CISC instruction set provides) for transistors used for some other purpose?
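To make the register-renaming point concrete, here's a toy Python sketch of a register alias table mapping a small architectural register file onto a larger physical one. All names and sizes here are invented for illustration; no real processor's renamer is this simple.

```python
# Toy register-renaming sketch: 8 architectural registers (x86-style)
# mapped onto a larger physical register file, so back-to-back writes
# to the same architectural register don't create false dependencies.

class Renamer:
    def __init__(self, n_arch=8, n_phys=32):
        self.rat = {f"r{i}": i for i in range(n_arch)}   # register alias table
        self.free = list(range(n_arch, n_phys))          # free physical regs

    def rename(self, dest, srcs):
        """Map source regs through the RAT, then give dest a fresh physical reg."""
        phys_srcs = [self.rat[s] for s in srcs]
        phys_dest = self.free.pop(0)
        self.rat[dest] = phys_dest
        return phys_dest, phys_srcs

r = Renamer()
# Two back-to-back writes to r0 get different physical registers,
# so the second needn't wait for the first to retire.
d1, s1 = r.rename("r0", ["r1", "r2"])   # r0 = r1 + r2
d2, s2 = r.rename("r0", ["r3", "r4"])   # r0 = r3 + r4
print(d1, d2)   # two distinct physical register numbers
```

The point being that the programmer-visible register count and the transistor budget spent on registers are two different knobs.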

Comment Re:isn't x86 RISC by now? (Score 1) 161

RISC processors had hundreds of registers to store stack frames, with some smart overlapping of the frames so that functions could pass arguments straight through registers. When you look at the depth of the function call stacks in some GUI systems, those are needed.

RISC processors with the letters "S", "P", "A", "R", and "C" in the instruction set name, in that order, did. The ones with the digits "8", "0", "9", "6", and "0" in the processor name also did, I think. The ones with "M", "I", "P", and "S" in the instruction set name, in that order, did not, nor did the ones with "A", "l", "p", "h", and "a" in the instruction set name, in that order, nor the ones with "A", "R", and "M" in the instruction set name, in that order, nor the ones with instruction sets having names matching the regular expression "P(ower|OWER)(PC| ISA)".

And, given that most processors running GUI systems these days, and even most processors running GUI systems before x86/ARM ended up running most of the UI code people see, didn't have register windows, no, they're not needed. Yeah, SPARC workstations may have been popular, but I don't think register windows magically made GUIs work better on them. (And remember that register windows eventually spill, so once the stack depth gets beyond a certain point, I'm not sure they help; it's shallow call stacks, in which you can go up and down the call stack without spilling register windows, where they might help.)

Then there would be separate addition/multiplication and vector-operation units, so that separate instructions could be processed independently. Modern CPUs are also deeply pipelined, to around 14+ stages: each of the classic four stages (fetch, read, execute, write) has been parallelized with pre-lookup, abort, and bypass stages, which means around 100+ instructions can be in flight at any time. The results written back into registers then have to be synchronized, so there are register scoreboards to keep track of dependencies. To keep track of all that and guarantee safe program execution, extra instructions have been added to implement mutexes and thread barriers in hardware.

None of which has anything to do with RISC vs. CISC, and much of which wasn't the case when RISC processors first came out.

Another difference was that RISC CPUs would implement complex instructions like floating-point division in microcode rather than in hardware logic.

Actually, it's more likely to have been the other way around, unless by "microcode" you meant "software". These days, most processors whether RISC or CISC, probably do floating-point division in hardware.

Comment Re:isn't x86 RISC by now? (Score 3, Informative) 161

They're not the only ones. The IBM mainframes have long been VMs implemented on top of various microcode platforms.

But the microcode implemented part or all of an interpreter for the machine code; the instructions weren't translated into directly-executed microcode. (And the System/360 Model 75 did it all in hardware, with no microcode).

And the "instruction set" for the microcode was often rather close to the hardware, with extremely little in the way of "instruction decoding" of microinstructions, although I think some lower-end machines might have had microinstructions that didn't look too different from a regular instruction set. (Some might have been IBM 801s.)

So that's not exactly the same thing as what the Pentium Pro and successors, the Nx586, and the AMD K5 and successors, do.

Current mainframe processors, however, as far as I know, 1) execute most instructions directly in hardware, 2) do so by translating them into micro-ops the same way current x86 processors do, and 3) trap some instructions to "millicode", which is z/Architecture machine code with some processor-dependent special instructions and access to processor-dependent special registers (and, yes, I can hear the word PALcode being shouted in the background...). See, for example, "A high-frequency custom CMOS S/390 microprocessor" (paywalled, but the abstract is free at that link, and mentions millicode) and "IBM zEnterprise 196 microprocessor and cache subsystem" (non-paywalled copy; mentions micro-operations). I'm not sure those processors have any of what would normally be thought of as "microcode".

The midrange System/38 and older ("CISC") AS/400 machines also had an S/360-ish instruction set implemented in microcode. The compilers, however, generated code for an extremely CISCy processor - but that code wasn't interpreted, it was translated into the native instruction set by low-level OS code and executed.

For legal reasons, the people who wrote the low-level OS code (compiled into the native instruction set) worked for a hardware manager and wrote what was called "vertical microcode" (the microcode that implemented the native instruction set was called "horizontal microcode"). That way, IBM wouldn't have to provide that code to competitors, the way they had to make the IBM mainframe OSes available to plug-compatible manufacturers, as it's not software, it's internal microcode. See "Inside the AS/400" by one of the architects of S/38 and AS/400.

Current ("RISC") AS/400s^WeServer iSeries^W^WSystem i^WIBM Power Systems running IBM i are similar, but the internal machine language is PowerPC^WPower ISA (with some extensions such as tag bits and decimal-arithmetic assists, present, I think, in recent POWER microprocessors but not documented) rather than the old "IMPI" 360-ish instruction set.

The main differences between RISC and CISC, as I recall, were lots of registers and the simplicity of the instruction set. Both the Intel and zSeries CISC instruction sets have lots of registers, though.

Depends on which version of the instruction set and your definition of "lots".

32-bit x86 had 8 registers (many x86 processors used register renaming, but they still had only 8 programmer-visible registers, and not all were as general as one might like), and they only went to 16 registers in x86-64. System/360 had 16 general-purpose registers (much more regular than x86, but that's not setting the bar all that high :-)), and that continues to z/Architecture, although the latest z/Architecture lets you do arithmetic separately on the upper 32 bits and lower 32 bits of a GPR, so for 32-bit and shorter arithmetic, you sort-of have 32 GPRs. z/Architecture also boosts the number of floating-point registers from 4 to 16.

Most RISC instruction sets had 31 or 32 GPRs (in the 31-GPR machines, one of them was always zero when fetched, and operations writing into it discarded the result); ARM had only 16 (one of which was the PC, so not really usable), but 64-bit ARM has 32.

So the main difference between RISC and CISC would be that you could - in theory - optimize "between" the CISC instructions if you coded RISC instead.

As I see it, differences between current RISC and CISC instruction sets are:

  1. RISC ISAs are load/store, meaning that arithmetic instructions are all register-to-register, whereas CISC ISAs have memory-to-register arithmetic. Splitting memory-to-register arithmetic does let you optimize "between" the memory reference and the arithmetic. However, breaking a memory-to-register arithmetic instruction into separate memory-reference and arithmetic microops lets the hardware do similar things.
  2. RISC ISAs generally have more registers; that's not an inherent RISC vs. CISC characteristic, however. A CISC ISA could have, for example, 32 GPRs; at the time S/360 was designed, you couldn't just throw transistors at the problem, and, on the lower-end S/360s, the GPRs were stored in a small higher-speed core memory array, not in transistorized registers. The designers may also have thought that assembler-language programmers and compiler writers wouldn't have been able to make much use of 32 GPRs in any case.
  3. CISC ISAs may have individual "complex" instructions, such as procedure call instructions, string manipulation instructions, decimal arithmetic instructions, and various instructions and instruction set features to "close the semantic gap" between high-level languages and machine code, add extra forms of data protection, etc. - although the original procedure-call instructions in S/360 were pretty simple, BAL/BALR just putting the PC of the next instruction into a register and jumping to the target instruction, just as most RISC procedure-call instructions do. A lot of the really CISCy instruction sets may have been reactions to systems like S/360, viewing its instruction set as being far from CISCy enough, but that trend has largely died out.
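Point 1 above can be sketched with a toy decoder that cracks a memory-to-register add into separate load and add micro-ops. The instruction tuples and the `tmp` register name are invented for illustration; real x86 decoders obviously work on binary encodings, not strings.

```python
# Toy decoder: a CISC "add reg, [mem]" instruction is cracked into
# a load micro-op plus a register-register add micro-op -- the same
# split a RISC instruction set makes the compiler write out explicitly.
def crack(insn):
    op, dest, src = insn
    if op == "add" and src.startswith("["):      # memory operand?
        return [("load", "tmp", src),            # tmp = memory[src]
                ("add", dest, "tmp")]            # dest = dest + tmp
    return [insn]                                # reg-reg: already a micro-op

uops = crack(("add", "eax", "[rbx+8]"))
print(uops)
# -> [('load', 'tmp', '[rbx+8]'), ('add', 'eax', 'tmp')]
```

Once the split exists, whether the scheduler that fills the gap between the load and the add is the compiler (RISC) or the out-of-order hardware (CISC) is an implementation choice, not an ISA one.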

Presumably somebody tried this, but didn't get benefits worth shouting about.

Actually, I think many compilers for RISC processors will schedule instructions in that fashion. Of course, modern RISC processors may reschedule instructions in hardware, just as modern CISC processors may reschedule microoperations in hardware ("out-of-order" processors).

Incidentally, the CISC instruction set of the more recent IBM z machines includes entire C stdlib functions such as strcpy in a single machine-language instruction.

...which is probably implemented as a fast trap to a millicode subroutine in the aforementioned z/Architecture-with-its-own-private-GPR-set-plus-maybe-some-processor-dependent-instructions machine language. The millicode routine doesn't, as far as I know, have to save or restore any GPRs, as it gets to use its own set, and probably runs from memory that's as fast as the level 1 instruction cache rather than from the I-cache, so it might reduce I-cache misses, and can transparently differ from processor to processor in case the best string-copy or string-translate or... instruction sequence differs from processor to processor. Of course, a RISC processor could add "fast call" and "fast return" instructions that do similar GPR-set switching, and have a bigger I-cache, and the OS could make processor-specific string copy routines available, so I don't know how much those instructions would buy you.

Comment Re:Send in the drones! (Score 1) 848

Afghanistan might as well be called the place where empires die. The last military force to successfully occupy and control Afghanistan was the Mongols under Genghis Khan (and even that can be debated). The USSR failed in nearly the same places where the British Empire had failed earlier, and before them Alexander of Macedon (aka "the Great"). Rome never even bothered to try (although the Romans certainly knew about the place). The jury is still out on America, but it doesn't look pretty.

Comment Re:My opinion on the matter. (Score 2) 826

My story: I've been using Linux and BSD heavily since the 90s. I don't really care whether you spell "restart foo" as "/etc/init.d/foo restart", "/usr/local/etc/rc.d/foo.sh restart", "service foo restart", "systemctl restart foo", or just "pkill foo && foo".

I spell "restart autofsd" as

$ cat /System/Library/LaunchDaemons/com.apple.autofsd.plist
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>Label</key>
	<string>com.apple.autofsd</string>
	<key>KeepAlive</key>
	<true/>
	<key>Program</key>
	<string>/usr/libexec/autofsd</string>
	<key>ProgramArguments</key>
	<array>
		<string>autofsd</string>
	</array>
	<key>EnableTransactions</key>
	<true/>
</dict>
</plist>

which is the same way I spell "start autofsd in the first place".
