
Comment Re:oversimplified (Score 1) 403

Not just the production process, though. They do well in many other areas of research, and the ALUs are very well designed. I really do hate the instruction set, and I'm not fond of the company, but they do really good work in so many areas.

I think if you gave them and IBM equal research budgets and aimed them at the same part of the market, it would be hard to predict who would win. Pit them against any other company, though, and the bet is clear.

Comment Re:Thanks (Score 1) 403

That situation existed in the late 1980s/early 1990s when the terms CISC and RISC were invented. The x86 existed and was CISCy on the outside and microcoded inside. The VAX was the same. The arguments were never "you can't implement CISC internally the same as a RISC", because they were all already done that way. It was "if you avoid X, Y and Z in your programmer-visible instruction set you don't need all that cruft in the chip". What makes something RISC or CISC was originally all about the instruction set, and I see nothing that has changed in the last 20 years that makes it useful to change the definitions.

Collapsing two useful words into one useless meaning doesn't add value to the language; it destroys it (well, not the whole language, just those two words). So why do it? If the new meanings actually had some value, sure, I could see adopting the new usage, but why switch to something worse?

Comment Re: ia32 dates back to the 1970's -- B.S. (Score 2) 403

Say again? Are you telling me they had a 32-bit architecture in the 1970s...? I call BS.

No, but the way ia32 is binary compatible with the 16-bit x86 code from the 1970s makes it relevant. You still have to handle AL and AH as aliases of AX. Ask Transmeta how much of a pain that was (hint: that is a big part of why their x86 CPU ran Windows like a dog... the other part being that they benchmarked Windows apps too late in the game to hit the market with something that efficiently handled the register aliases). If x86 mode were a fully distinct mode that ditched anything from the past that Intel decided made stuff slow, then yes, we would be talking about ia32 as a 1980s architecture.
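For anyone who hasn't fought with this: here's a rough C sketch of what that aliasing means, modeled with a union (purely illustrative -- it shows the architectural behavior, not how any real CPU or binary translator implements it, and it assumes a little-endian layout):

```c
#include <stdint.h>
#include <stdio.h>

/* Toy model of the low 16 bits of EAX: AX overlays AL (low byte) and
 * AH (high byte).  Writing either byte register changes AX, so anything
 * tracking registers internally has to merge partial writes. */
union ax_alias {
    uint16_t ax;
    struct { uint8_t al, ah; } bytes;   /* assumes little-endian layout */
};

int main(void) {
    union ax_alias r = { .ax = 0x1234 };
    r.bytes.al = 0xFF;                  /* like "mov al, 0xFF" */
    printf("ax = 0x%04X\n", r.ax);      /* 0x12FF: AH kept, AL replaced */
    return 0;
}
```

The painful part for something like Transmeta's translator is that every write to AL or AH is really a partial write into a wider register that has to be merged back in.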

Comment Re:RISC is not the silver bullet (Score 5, Interesting) 403

First, RISC instructions complete in one cycle. If you have multi-cycle instructions, you're not RISC

LOAD and STORE aren't single-cycle instructions on any RISC I know of. Lots of RISC designs also have multi-cycle floating point instructions. A lot of second- or third-generation RISCs added a MULTIPLY instruction, and those were multi-cycle too.

There are not a lot of hard-and-fast rules about what makes things RISCy, mostly just "they tend to do this" and "tend not to do that". Like "tend to have very simple addressing modes" (most have register + constant displacement -- but the AMD29k had an adder before you could get the register data out, so R[n+C1]+C2, which is more complex than the norm). Also "no more than two source registers and one destination register per instruction" (I think the PPC breaks this) -- oh, and "no condition register", but the PPC breaks that too.
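To put the addressing-mode point in concrete terms, here's a little C model of the two address calculations (the register-file layout and the constants are made up for illustration and aren't meant to match the real Am29000 encoding):

```c
#include <stdint.h>
#include <stdio.h>

/* Typical RISC load address: base register plus a constant displacement.
 * The register value can be read as soon as the register number is decoded. */
static uint32_t risc_load_addr(const uint32_t regs[], unsigned n, int32_t disp) {
    return regs[n] + (uint32_t)disp;
}

/* The AMD29k-style case described above: the register *number* itself is
 * computed first (n + c1), so there's an extra add before the register file
 * can even be read, and only then is the usual displacement applied. */
static uint32_t am29k_style_addr(const uint32_t regs[], unsigned n,
                                 int32_t c1, int32_t c2) {
    return regs[n + (uint32_t)c1] + (uint32_t)c2;
}

int main(void) {
    uint32_t regs[16] = { [2] = 0x1000, [5] = 0x2000 };
    printf("0x%X 0x%X\n",
           risc_load_addr(regs, 2, 8),          /* regs[2] + 8     = 0x1008 */
           am29k_style_addr(regs, 2, 3, 8));    /* regs[2 + 3] + 8 = 0x2008 */
    return 0;
}
```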

Second, x86 processors are internally RISCy and x86 is decomposed into multiple micro-ops.

Yeah, Intel invented microcode again, or a new marketing term for it. It doesn't make the x86 any more a RISC than the VAX was, though. (For anyone too young to remember, the VAX was the poster child for big fast CISC before the x86 became the big deal it is today.)

Comment Re:RISC is not the silver bullet (Score 2) 403

So far RISC is only found in low-power applications (when it comes to consumer devices at least).

Plus printers (or at least last I checked) and game consoles (the original Xbox was the only console in the last 2-3 generations not to be a RISC). Many of IBM's mainframes are RISCs these days. In fact I think the desktop market is the only place you can randomly pick a product and have near certainty that it is a CISC CPU. Servers are a mixed bag. Network infrastructure is a mixed bag. Embedded devices used to be CISC, but now that varies a lot: lower-cost embedded devices (under $10) tend to be CISC, while over $10 tends to be RISC.

Ah! You might find CISC dominant in radiation-hardened environments. There is a MIPS R2000-based silicon-on-sapphire design in that space, but pretty much everything else is CISC (I haven't looked in a while, but that is a very slow-moving market).

Comment Re:oversimplified (Score 3, Insightful) 403

I'd say the x86 being the dominant CPU on the desktop has given Intel the R&D budget to overcome the disadvantages of a 1970s instruction set. Anything they lose by not being able to wipe the slate clean (complex addressing modes in the critical data path, and complex instruction decoders, for example), they get to offset by pouring tons of R&D into either finding a way to "do the inefficient, efficiently", or finding another area they can make fast enough to offset the slowness they can't fix.

The x86 is inelegant, and nothing will ever fix that, but if you want to bang some numbers around, well, the inelegance isn't slowing it down this decade.

P.S.:

IA32 today is little more than an encoding for a sequence of RISC instructions

That was true of many CPUs over the years, even when RISC was new -- in fact, even before RISC existed as a concept. One of the "RISC sucks, it'll never take off" complaints was "if I wanted to write microcode I would have gotten onto the VAX design team". While the instruction set matters, it isn't the only thing. RISCs have very, very simple addressing modes (sometimes no addressing modes), which means they can get some of the advantages of out-of-order (OOO) execution without any hardware OOO support. When they do get hardware OOO support, nothing has to fuse results back together, and so on. There are tons of things like that, but pretty much all of them can be combated with enough cleverness and die area. (But since die area tends to contribute to power usage, it'll be interesting to see if power efficiency is forever out of x86's reach, or if that too will eventually fall -- Intel seems to be doing a nice job chipping away at it.)
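As a toy illustration of the "encoding for a sequence of simpler operations" idea, here's roughly how one x86-style add-from-memory splits into an address/load step and an ALU step -- the instruction, the split, and the structs below are invented for illustration and are not Intel's actual micro-op format:

```c
#include <stdint.h>
#include <stdio.h>

typedef struct { uint32_t eax, ebx, ecx; } Regs;

/* One architectural instruction: add eax, [ebx + ecx*4 + 8].
 * Internally that's roughly two simpler operations, which is what the
 * "it's really RISC/microcode inside" argument is pointing at. */
static void add_eax_mem(Regs *r, const uint8_t *mem) {
    uint32_t addr = r->ebx + r->ecx * 4 + 8;          /* micro-op 1: address-gen + load */
    uint32_t tmp  = (uint32_t)mem[addr]
                  | (uint32_t)mem[addr + 1] << 8
                  | (uint32_t)mem[addr + 2] << 16
                  | (uint32_t)mem[addr + 3] << 24;
    r->eax += tmp;                                    /* micro-op 2: ALU add */
}

int main(void) {
    uint8_t mem[64] = { [12] = 0x2A };                /* value 42 at address 12 */
    Regs r = { .eax = 100, .ebx = 0, .ecx = 1 };      /* address = 0 + 1*4 + 8 = 12 */
    add_eax_mem(&r, mem);
    printf("eax = %u\n", r.eax);                      /* prints 142 */
    return 0;
}
```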

Comment Re:Thanks (Score 2) 403

BTW, Intel processors haven't been CISC for years. They're all RISC with a component that translates from the CISC instructions to RISC

Nice marketing talk. So was the VAX (most of them anyway -- I think the VAX 9000 was a notable exception). I mean, it had this hardware instruction decoder, it did simple instructions in hardware, and then it slopped all the complex stuff over onto microcode. In fact most CISC CPUs work that way -- in the past all of the "cheap" ones did, and now pretty much all of them do. So if you call any CPU that executes only the simple instructions directly and translates the rest a "RISC", it is hard to find any non-RISC CPU. Of course, internally they aren't so much "RISCy" as "VLIWy"...

The x86 is still the poster boy for CISC. (And hey, CISC isn't all bad; pick up a copy of Hennessy and Patterson and read up on the relevant topics.)

Comment Re:RISC is not the silver bullet (Score 2, Informative) 403

Apple ditched the RISC-type PowerPC for CISC-type Intel chips a while back, and they don't seem to be in any hurry to move back

FYI, all of Apple's iOS devices have ARM CPUs, which are RISC CPUs. So I'm not so sure your "don't seem to be in any hurry to move back" bit is all that accurate. In fact, looking at Apple's major successful product lines, we have:

  1. Apple I/Apple ][ on a 6502 (largely classed as CISC)
  2. Mac on 680x0 (CISC) then PPC (RISC), then x86 (CISC) and x86_64 (also CISC)
  3. iPod on ARM (RISC) -- I'm sure the first iPod was an ARM; I'm not positive about the rest of them, but I think they were as well
  4. iPhone/iPod Touch/iPad all on ARM (RISC)

So a pretty mixed bag. Neither a condemnation of CISC nor a ringing endorsement of it.

Comment Re:totally incoherent! (Score 1) 244

Fragmentation is when you need to produce several subtly different versions of the same app that does the same thing because there's several different devices that all run what is allegedly the same operating system but each manufacturer has made little modifications that make them incompatible with everything else.

That is a bit of a narrow definition. I'll totally grant that that is fragmentation, but many other things are as well. Some are simpler to deal with than others (GPS vs. no-GPS-but-WiFi-pseudo-GPS is only an issue if your app needs high-accuracy position data). Needless software fragmentation is the most annoying because it doesn't really make life better for anyone, while a lot of hardware fragmentation exists either to satisfy a price point (and therefore bring a device to a set of people that wouldn't have been able to afford it, or a feature to people willing to fund it without forcing others to do so), or because things do tend to get better year to year. The "our brand of Android has this and that extra, and that and the other changed a little" approach feels too much like the '80s/'90s Unix fragmentation that didn't make Unix users happy, and I think ultimately cost it the chance to win big on the desktop (or, for a more charitable view, delayed victory until OSX came in... but I think that is wishful thinking; OSX has a non-trivial percentage of the desktop market, but Windows is dominant there). Now just because it worked out badly before doesn't mean it will do so again (we are not doomed to repeat the past, not always at any rate), but it still smells bad.

I would say iOS has a few differences from device to device, many of which have graceful fallbacks; a very few do not. Even so, it is just a tiny handful of things, and for the vast majority of apps it comes down to supporting two very different screen layouts (plus a third similar one) and two pixel densities for artwork (or using lots of vector art). That is pretty much it for "normal" apps. Some apps need to worry because they push the CPU and/or the GPU very hard and so need to scale back on slower hardware, but that is mainly games, and games do that everywhere except consoles.

Comment Re:totally incoherent! (Score 1) 244

So you say that having two groups of users, one of which has constant IP connectivity, and one that does not, that's not fragmentation

No, having two groups of people that both sometimes have IP connectivity and sometimes don't, but at differing ratios, is not fragmentation. My iPhone has no IP connectivity on large swaths of Highway One, especially north of San Francisco. Software attempting to deal with that is no different from software on my iPad, which has cellular data hardware that I've turned off until I decide I want to reactivate my data contract. Software attempting to deal with both of those is no different than running on my wife's old (WiFi-only) iPad.

Comment Depends on what homebrew means... (Score 1) 181

The last few years we have seen Microsoft, Nintendo, Sony and Apple all bring out means to thwart homebrew development. The app stores on both Android and iOS have taken many homebrew devs over to try and break into the market.

Well, I guess it depends on your definition of homebrew, but I think it is hard to make a game for iOS or Android that wouldn't be let into the store (unless you, say, crash on launch, or are noticed grabbing all the user's contacts without permission). It is in fact far simpler than it was to get your own games onto the Dreamcast! You get the real dev kit for very cheap (cheaper than the hardware you are developing for), and while the hardware to host the development on isn't free, it isn't exactly expensive (hardware dev systems for the 16-bit era ran to $30k; now it is just a Mac mini, or pretty much any old PC for Android).

On the other hand, if homebrew has to mean "we figured out how to get onto the hardware ourselves and made our own pseudo dev kit", then yes, Android and iOS are hurting that effort, because who really wants to go to all that bother when they could just get down to making a game?

Comment Re:FUD (Score 1) 375

to get into the big market (Android), it's well worth it

The market may be big, but does it pay? A lot of small developers have reported that Android apps make a whole lot less than iOS apps ("an order of magnitude" sticks in my mind, but a quick Google search shows a lot of 4x articles and a smattering of 11% claims; I didn't see an order of magnitude in the first page of results).

Assuming the 4x number is true, is it worth getting $1.25/app and writing C/C++ for 80% of the app, and then writing the last 20% in ObjC and again in Java -- or are you better off writing it all in ObjC and then starting work on the next app? (I imagine the right answer depends on how well served your core logic is by ObjC and the available frameworks, on the total sales involved, and on whether you have another app that would make similar money or whether you are "played out" of good ideas.)
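To put rough numbers on that question -- everything below is a made-up assumption except the 4x ratio from the articles above; in particular the $5 iOS figure is just a baseline that makes the $1.25 come out:

```c
#include <stdio.h>

int main(void) {
    /* Assumptions only: the post gives the 4x ratio, nothing else. */
    double ios_net_per_app     = 5.00;                  /* invented baseline */
    double android_net_per_app = ios_net_per_app / 4.0; /* the "4x" articles */

    /* Effort split described above: 80% shared C/C++ core, then the last
     * 20% written once in ObjC and again in Java. */
    double effort_ios_only = 1.0;                /* whole app in ObjC           */
    double effort_both     = 0.8 + 0.2 + 0.2;    /* shared core + two UI layers */

    printf("Android nets about $%.2f per app vs. $%.2f on iOS\n",
           android_net_per_app, ios_net_per_app);
    printf("Shipping both takes about %.0f%% of the work of shipping one\n",
           effort_both / effort_ios_only * 100.0);
    return 0;
}
```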

Comment Re:FUD (Score 1) 375

The business logic for your app should be written in a platform agnostic way, and will be trivial to port.

Sure except...

...different platforms have different optimal workflows and capabilities. This frequently drives changes into what you would think of as platform agnostic code. This is especially true of games, but it is true of most software. The effects can vary from just having a bad port (maybe a non-native feel, or just plain a kooky UI) to needing to rewrite large parts of the "agnostic" code. This can be costly and time-consuming. Also, if you have future versions of the product, you need to decide whether to port these changes back to the original platform (or platforms) or hold them apart. Both have their own sets of issues.

...even code that can be made platform agnostic isn't always as simple to write, or as fast, in platform agnostic form. For example, use of Core Data on OSX/iOS is very platform specific, but it is tied to how your objects persist across executions, and even to how you represent the objects. It can save an enormous amount of effort (save/load is trivial, undo/redo can be close to trivial, and so on). When it is the perfect fit, as much as a third of the code you would normally need to write goes away.

Or if you look at Android, writing the "platform agnostic" part in Java gives you garbage collection, so you spend very little time hunting down memory leaks (you might end up with a few places that forget to null out a reference and end up pinning down extra memory for too long, but this isn't as common or as painful as memory leaks in C/C++...). No debugging pointers that now dangle into the wrong types of objects or into system heap structures. That can save a whole lot of time.

However, a platform agnostic core (business logic, or gameplay engine, or whatever) won't be able to use any of that. You have to restrict yourself to the intersection of what every platform you want to port to will have. I would be surprised if it cost you as much as having to write it twice, but not if it cost you a good 33% more than writing it platform specific.
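For what it's worth, that "intersection only" core usually ends up looking something like this in C -- a made-up sketch, not any particular app's design: everything the core needs from the platform goes through a handful of callbacks, and anything like Core Data or GC-backed object graphs is off the table because the other platform can't supply it:

```c
/* core.h -- hypothetical interface for a shared C/C++ core.  Everything the
 * core needs from the outside world goes through these callbacks, so each
 * platform (ObjC/UIKit, Java/Android, ...) supplies its own implementations. */
#ifndef CORE_H
#define CORE_H

#include <stddef.h>

typedef struct CorePlatform {
    /* Persistence is lowest-common-denominator byte blobs here; the core
     * can't assume Core Data on one side or Java serialization on the other. */
    int  (*save_blob)(const char *key, const void *data, size_t len);
    int  (*load_blob)(const char *key, void *buf, size_t buf_len);
    void (*log)(const char *message);
} CorePlatform;

typedef struct CoreState CoreState;                 /* opaque to the UI layer */

CoreState *core_create(const CorePlatform *platform);
void       core_handle_event(CoreState *s, int event_id);
void       core_destroy(CoreState *s);

#endif /* CORE_H */
```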

Then you have the platform specific (UI?) part of your application. It could be pretty small for something like a bug tracker, or very large for a game or maybe a bike-ride activity tracker. If making the core agnostic costs you 33% more, and the platform specific part is significant, the new platform has to be a very large percentage of the original platform's revenue before it is worth doing versus writing the faster, cheaper, but less flexible core logic and then moving on to a new project (or the next version of the current one).
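Here's the back-of-the-envelope version of that break-even argument; every number below is an assumption I just made up, there only to show the shape of the trade-off:

```c
#include <stdio.h>

int main(void) {
    /* All figures are assumptions, in arbitrary "work units". */
    double core_cost          = 100.0;  /* platform-specific core, written once  */
    double agnostic_overhead  = 0.33;   /* the "33% more" for an agnostic core   */
    double ui_cost_per_target = 40.0;   /* platform-specific (UI) work, per port */
    double revenue_original   = 1000.0; /* what the first platform brings in     */

    double cost_single = core_cost + ui_cost_per_target;
    double cost_dual   = core_cost * (1.0 + agnostic_overhead) + 2.0 * ui_cost_per_target;
    double extra_cost  = cost_dual - cost_single;

    /* The real bar is opportunity cost: that extra effort could have gone into
     * the next project, earning roughly the same revenue per unit of work. */
    double revenue_per_unit_work = revenue_original / cost_single;
    double breakeven_revenue     = extra_cost * revenue_per_unit_work;

    printf("extra work for the second platform: %.0f units\n", extra_cost);
    printf("it needs to earn at least %.0f, i.e. about %.0f%% of the original's revenue\n",
           breakeven_revenue, breakeven_revenue / revenue_original * 100.0);
    return 0;
}
```

Plug in your own numbers; the point is just that the 33% core overhead plus a second UI raises the bar faster than it looks.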

I know this is sad when the platform you love is the underdog, but economics isn't called the dismal science for nothing.
