...can anyone come up with a use for this that existing WiFi doesn't already cover? It isn't more range, and I'm not sure it is usefully less range. If you are worried about eavesdroppers on the network you need light-tight rooms, but if you want to set up a whole-house network you need a repeater in each room.
This seems more like an answer in search of a problem. Sometimes that means we will find a problem we didn't understand we had, and sometimes this turns out to be the technology equivalent of the big kitchen junk drawer full of bits that almost never get used.
I've used PowerPC based CPUs in radiation environments without issues
Cool! Was it a PPC designed to be radhard, or a normal PPC that turned out to be radhard enough?
Not just the production process, though. They do well in many other areas of research. The ALUs are mighty well designed. They have plenty of great work in many, many areas. I really do hate the instruction set, and I'm not fond of the company, but they do really good work in so many areas.
I think if you gave them and IBM equal research budgets and aimed them at the same part of the market, it would be hard to predict who would win. Any other two companies, though, and the bet is clear.
That situation existed in the late 1980s/early 1990s when the terms CISC and RISC were invented. The x86 existed and was CISCy on the outside and microcoded inside. The VAX was the same. The arguments were never "you can't implement CISC internally the same as a RISC", because they were all already done that way. It was "if you avoid X, Y and Z in your programmer-visible instruction set you don't need all that cruft in the chip". What makes something RISC or CISC was originally all about the instruction set, and I see nothing that has changed in the last 20 years that makes it useful to change the definitions.
Collapsing two useful words into one useless meaning doesn't add value to the language, it destroys it (well, not the whole language, just those two words). So why do it? If the new meanings actually had some value, sure I can see adopting new usage, but why switch to something worse?
Say again? Are you telling me they had a 32-bit architecture in the 1970s...? I call BS.
No, but the way IA32 is binary compatible with the 16-bit x86 code from the 1970s makes it relevant. You still have to handle AL and AH as aliases into AX. Ask Transmeta how much of a pain that was (hint: that is a big part of why their x86 CPU ran Windows like a dog... the other part being they benchmarked Windows apps too late in the game to hit the market with something that efficiently handled the register aliases). If x86 mode were a fully distinct mode that ditched anything from the past that Intel decided made stuff slow, then yes, we would be talking about IA32 as a 1980s architecture.
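To see why those aliases hurt a translator, here is a toy sketch (illustrative only, not Transmeta's actual scheme): AL and AH aren't independent registers, they are views into the low 16 bits of EAX, so every partial write has to merge into the full register instead of just producing a fresh value a renamer could track independently.

```python
# Toy model of x86 register aliasing. Writing AL or AH must preserve
# the rest of EAX, which is the extra merge work a binary translator
# (or an out-of-order renamer) has to pay for on every partial write.
class X86Regs:
    def __init__(self):
        self.eax = 0

    def set_al(self, v):
        # AL = bits 0-7; everything above must survive the write.
        self.eax = (self.eax & 0xFFFFFF00) | (v & 0xFF)

    def set_ah(self, v):
        # AH = bits 8-15; merge again.
        self.eax = (self.eax & 0xFFFF00FF) | ((v & 0xFF) << 8)

    @property
    def ax(self):
        # AX is just the low 16 bits of EAX, not its own register.
        return self.eax & 0xFFFF

r = X86Regs()
r.eax = 0x12345678
r.set_al(0xAA)  # AX becomes 0x56AA, EAX becomes 0x123456AA
r.set_ah(0xBB)  # AX becomes 0xBBAA, EAX becomes 0x1234BBAA
```

On a clean-slate ISA each of those writes could be an independent full-width result; here each one is a read-modify-write of EAX.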
First, RISC instructions complete in one cycle. If you have multi-cycle instructions, you're not RISC
LOAD and STORE aren't single-cycle instructions on any RISC I know of. Lots of RISC designs also have multi-cycle floating point instructions. A lot of second- or third-generation RISCs added a MULTIPLY instruction, and it was multi-cycle.
There are not a lot of hard and fast rules about what makes things RISCy, mostly just "they tend to do this" and "they tend not to do that". Like "tend to have very simple addressing modes" (most have register + constant displacement -- but the AMD29k had an adder before you could even get the register data out, so R[n+C1]+C2, which is more complex than the norm). Also "no more than two source registers and one destination register per instruction" (I think the PPC breaks this) -- oh, and "no condition register", but the PPC breaks that too.
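The two addressing styles above can be sketched in a few lines (register contents and constants here are made up for illustration): the classic RISC mode does one register read and one add, while the AMD29k-style mode needs an add just to compute *which* register to read before the usual displacement add.

```python
# Made-up register file for illustration.
regs = [0, 100, 200, 300, 400]

def ea_classic_risc(n, disp):
    """Typical RISC effective address: R[n] + constant displacement."""
    return regs[n] + disp

def ea_amd29k_style(n, c1, c2):
    """AMD29k-style: the register index itself is computed first
    (n + c1), so there is an adder *before* the register file read,
    then the normal displacement add: R[n+c1] + c2."""
    return regs[n + c1] + c2

# Both reach the same address here, but by different paths:
assert ea_classic_risc(2, 8) == 208      # regs[2] + 8
assert ea_amd29k_style(1, 1, 8) == 208   # regs[1+1] + 8
```

The extra pre-read add is exactly the kind of thing that makes an addressing mode "more complex than the norm" for a RISC.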
Second, x86 processors are internally RISCy and x86 is decomposed into multiple micro-ops.
Yeah, Intel invented microcode again, or a new marketing term for it. It doesn't make the x86 any more a RISC than the VAX was, though. (For anyone too young to remember, the VAX was the poster child for big fast CISC before the x86 became the big deal it is today.)
So far RISC is only found in low-power applications (when it comes to consumer devices at least).
Plus printers (or at least last I checked), game consoles (the original Xbox was the only console in the last 2-3 generations not to be a RISC). Many of IBM's mainframes are RISCs these days. In fact I think the desktop market is the only place you can randomly pick a product and have a near certainty that it is a CISC CPU. Servers are a mixed bag. Network infrastructure is a mixed bag. Embedded devices used to be CISC, but now that tends to vary a lot: lower-cost embedded devices (under $10) tend to be CISC, but over $10 tends to be RISC.
Ah! You might find CISC dominant in radiation-hard environments. There is a MIPS R2000-based silicon-on-sapphire design in that space, but pretty much everything else is CISC (I haven't looked in a while, but that is a very slow-moving market).
I'd say the x86 being the dominant CPU on the desktop has given Intel the R&D budget to overcome the disadvantages of being a 1970s instruction set. Anything they lose by not being able to wipe the slate clean (complex addressing modes in the critical data path, and complex instruction decoders, for example), they get to offset by pouring tons of R&D into either finding a way to "do the inefficient, efficiently", or finding another area they can make fast enough to offset the slowness they can't fix.
The x86 is inelegant, and nothing will ever fix that, but if you want to bang some numbers around, well, the inelegance isn't slowing it down this decade.
IA32 today is little more than an encoding for a sequence of RISC instructions
That was true of many CPUs over the years, even when RISC was new -- in fact, even before RISC existed as a concept. One of the "RISC sucks, it'll never take off" complaints was "if I wanted to write microcode I would have gotten onto the VAX design team". While the instruction set matters, it isn't the only thing. RISCs have very, very simple addressing modes (sometimes no addressing modes), which means they can get some of the advantages of out-of-order execution without any hardware OOO support. When they do get hardware OOO support, nothing has to fuse results back together, and so on. There are tons of things like that, but pretty much all of them can be combated with enough cleverness and die area. (But since die area tends to contribute to power usage, it'll be interesting to see if power efficiency is forever out of x86's reach, or if that too will eventually fall -- Intel seems to be doing a nice job chipping away at it.)
BTW, Intel processors haven't been CISC for years. They're all RISC with a component that translates from the CISC instructions to RISC
Nice marketing talk. So was the VAX (most of them anyway -- I think the VAX 9000 was a notable exception). I mean, it had a hardware instruction decoder, it did simple instructions in hardware, and then it slopped all the complex stuff over onto microcode. In fact most CISC CPUs work that way -- in the past all of the "cheap" ones did, and now pretty much all of them do. So if you call any CPU that executes only the simple instructions directly and translates the rest a RISC, it is hard to find any non-RISC CPU. Of course, internally they aren't so much "RISCy" as "VLIWy"...
The x86 is still the poster boy for CISC. (And hey, CISC isn't all bad -- pick up a copy of Hennessy and Patterson and read up on the relevant topics.)
Apple ditched the RISC-type PowerPC for CISC-type Intel chips a while back, and they don't seem to be in any hurry to move back
FYI, all of Apple's iOS devices have ARM CPUs, which are RISC CPUs. So I'm not so sure your "don't seem to be in any hurry to move back" bit is all that accurate. In fact, looking at Apple's major successful product lines, we have the Mac (PowerPC RISC, then Intel CISC) alongside the iPod, iPhone and iPad (all ARM RISC).
So a pretty mixed bag. Neither a condemnation of CISC nor a ringing endorsement of it.
Fragmentation is when you need to produce several subtly different versions of the same app, all doing the same thing, because there are several different devices that all run what is allegedly the same operating system, but each manufacturer has made little modifications that make them incompatible with everything else.
That is a bit of a narrow definition. I'll totally grant that that is fragmentation, but many other things are as well. Some are simpler to deal with than others (GPS vs. no-GPS-but-WiFi-pseudo-GPS is only an issue if your app needs high-accuracy position data). Needless software fragmentation is the most annoying because it doesn't really make life better for anyone, while a lot of hardware fragmentation exists either to satisfy a price point (and therefore bring a device to a set of people that wouldn't have been able to afford it, or a feature to people willing to fund it without forcing others to do so), or because things do tend to get better year to year.

The "our brand of Android has this and that extra, and that and the other changed a little" feels too much like the '80s/'90s Unix fragmentation that didn't make Unix users happy, and I think ultimately cost it the chance to win big on the desktop (or, for a more charitable view, delayed victory until OSX came in... but I think that is wishful thinking; OSX has a non-trivial percentage of the desktop market, but Windows is dominant there). Now just because it worked out badly before doesn't mean it will do so again (we are not doomed to repeat the past, not always at any rate), but it still smells bad.
I would say iOS has a few differences from device to device, many of which have graceful fallbacks; a very few do not. Even so, it is just a tiny handful of things, and for the vast majority of apps it comes down to supporting two very different screen layouts (plus a third similar layout) and two pixel densities for artwork (or using lots of vector art). That is pretty much it for "normal" apps. Some apps need to worry because they push the CPU and/or the GPU very hard, so they need to scale back on slower hardware, but that is mainly games, and games do that everywhere except consoles.
So you say that having two groups of users, one of which has constant IP connectivity, and one that does not, that's not fragmentation
No, having two groups of people that both sometimes have IP connectivity and sometimes don't, but at differing ratios, is not fragmentation. My iPhone has no IP connectivity on large swaths of Highway One, especially north of San Francisco. Software attempting to deal with that is no different from software on my iPad, which has cellular data hardware that I've turned off until I decide I want to reactivate my data contract. Software attempting to deal with both of those is no different than running on my wife's old (WiFi-only) iPad.
The last few years we have seen Microsoft, Nintendo, Sony and Apple all bring out means to thwart homebrew development. The app stores on both Android and iOS have drawn many homebrew devs over to try and break into the market.
Well, I guess it depends on your definition of homebrew, but I think it is hard to make a game for iOS or Android that wouldn't be let into the store (unless you, say, crash on launch, or are noticed grabbing all the user's contacts without permission). It is in fact far simpler than it was to get your own games onto the Dreamcast! You get the real dev kit for very cheap (cheaper than the hardware you are developing for), and while the hardware to host the development on isn't free, it isn't exactly expensive (hardware dev systems in the 16-bit era ran to $30k; now it is just a Mac mini, or pretty much any old PC for Android).
On the other hand, if homebrew has to mean "we figured out how to get onto the hardware ourselves and made our own pseudo dev kit", then yes, Android and iOS are hurting that effort, because who really wants to go to all that bother when they could just get down to making a game?
The ST was litigated out of existence
Wikipedia and Google don't show anything on that story. I had a 520ST and later a 1040ST, but later "found Unix" and lost touch with the ST. Do you have any pointers to this story?
Tomorrow's computers some time next month. -- DEC