
Comment Re:A hard time keeping on the forefront? (Score 1) 605

I suspect that part of the reason Intel invented Itanium is that a totally new and very weird architecture means a new set of patents for the core stuff that you need to make a processor which can run user mode code written to the standard ABI.

Patents with a fresh 17-year expiration date and not covered by licensing agreements with AMD et al. Yes, I'm quite certain you are correct.

Comment Re:A hard time keeping on the forefront? (Score 2) 605

Heh. Given the intimate relationship between the proprietary Unix vendors and their proprietary RISC chips it's not completely bonkers to call em that. I mean did a PA-RISC chip have any purpose besides running HPUX? And did HPUX have any purpose besides being the Unix you got when you bought your PA-RISC systems?

That's an honest question; I've never seen a machine that had one without the other. I'm sure someone runs NetBSD or Linux on it but as far as market presence...

Comment Re:A hard time keeping on the forefront? (Score 1) 605

The writing was already on the wall for the proprietary RISCs as x86 ate their lunch from below. The main thing that was different here was the amount of silicon they threw at Itanium. Give Xeons giant caches (and a non-shit bus) and they would have had the same effect, just completing a process that had already started.

The big wild card is HP. Itanium thoroughly killed Alpha because the half of the Alpha team Intel got obviously wasn't going to work on Alpha, while the half HP got was still doing good work but HP's commitment to Itanium meant they were actually down-playing the performance of their own Alpha servers.

I am pretty sure that Intel did not intend for Itanium to fail in the market, but in retrospect the outcome for Intel has been close to perfect.

Kinda true, but not really. As a consequence of the time wasted pursuing IA-64, AMD was able to beat them to market with 64-bit extensions to x86, making it impossible to exclude AMD from the future of the x86 market, while simultaneously jumping ahead in desktop and x86 servers and slashing the margins for Xeon.

Certainly it's not all downside for Intel. But it was a misstep and nothing like what they planned. If it weren't for their illegal business dealings that limited AMD's ability to take advantage, it would have been a disaster for them.

Comment Re:A hard time keeping on the forefront? (Score 1) 605

And now x86 machines are RISC too. IIRC all the x86 chips translate the x86 instructions into internal RISC-like instructions, with a little bit of optimization for their own RISC instruction set. The x86 instruction set, in some ways, simply allows for convenient optimization into the RISC instruction sets, and the option to change them in the background as usage priorities change.

Yes to the first part (the AC is correct in that it's technically microcode and not x86-anything, but you have the gist), not so much the second. See, most compilers/programmers use a very RISC-like subset of x86. Most of the micro-arch optimization is thus of the kind you'd do for a RISC ISA too -- like if you have 256-bit vector FP in your ISA, but only 128-bit functional units, you'd split the one instruction into two micro-ops in either case.

The one big exception is REP MOV. This basically gives the architects the ability to write the best copy algorithm they can for that particular microarchitecture. Which is nice to have.

It's been a long time since I dug into Linux kernel internals, but I remember seeing a routine at start-up that would try various memcpy algorithms, including, on x86, REP MOVS. At the time (late 90s) I don't think much effort was put into optimizing the microcode for REP MOV, and Linux would (on my machine) always choose something else. I wonder whether this has changed and REP MOV now consistently wins, or if other instruction-level algorithms can still do better in some cases.
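For the curious, a minimal sketch of the kind of copy routine being discussed (the function name is mine; GCC/Clang extended asm, x86-64 only) -- the whole point is that `rep movsb` hands the copy strategy to the CPU's microcode, which is exactly what lets the architects tune it per microarchitecture:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Copy n bytes using `rep movsb`. RDI = destination, RSI = source,
 * RCX = count; the "+D", "+S", "+c" constraints pin the operands to
 * those registers, and the instruction updates all three as it runs. */
static void rep_movsb_copy(void *dst, const void *src, size_t n)
{
    __asm__ volatile("rep movsb"
                     : "+D"(dst), "+S"(src), "+c"(n)
                     : /* no other inputs */
                     : "memory");
}
```

Whether this beats a hand-tuned SSE/AVX loop still depends on the chip and the buffer size, which is presumably what that start-up benchmark was probing.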

Comment Re:A hard time keeping on the forefront? (Score 4, Insightful) 605

ARM is a really nice design, very extensible and very RISC

It has fixed-length instructions and a load/store architecture, the two crucial components of RISC imo, but it doesn't earn the "very" imo. The more I learn about ARM, the more delirious my laughter gets as I think that this, of all RISC ISAs, is the one poised to overturn x86.

For example, it has a flags register. A flags register! Oh man, I cackled when I heard that. I must have sounded very disturbed. Which I was, since only moments before I was envisioning life without that particular albatross hanging around my neck. But I guess x86 wasn't the only architecture built around tradeoffs for scalar minimally-pipelined in-order machines.

Well, whatever. The long and short of it is that the ISA doesn't matter all that much. It wasn't the ISA that made those Acorn boxes faster than x86 chips. The ISA does limit x86 in that the amount of energy spent decoding is non-negligible at the lowest power envelopes, but in even only somewhat constrained systems it does just fine.

Oh, and on the topic of Intel killing x86 -- they don't really want to kill x86. x86 has done great things for them: both the patents and its generally insane difficulty to implement create huge barriers to entry for others, helping them maintain their monopoly. Their only serious move to ditch x86 in the markets where x86 was making them tons of money (as opposed to dabbling in embedded markets) was IA-64, and the whole reason for that was that then AMD and Via wouldn't have licenses to make compatible chips.

Comment Re:This is not a way *around* Heisenberg (Score 1) 153

Causality means that any transmission of information from event A to event B means that event A must precede B in time. An example would be an electron emitter emitting an electron, and a detector detecting an electron. If the detector went off before the electron was emitted, that would be a violation of causality.

Relativity of simultaneity does nothing to prevent such a global evaluation, it only restricts the sets of events that could possibly be causes of other events.

As long as events A and B are separated by a time-like distance, then while individual observers may disagree on the exact timing of A and B, all will agree on their ordering. It's only when A and B are separated by space-like distances that different observers will disagree on their ordering. And therefore, if it was possible to send information that could get outside of your future light-cone, then that information could be relayed around between several reference frames and back to you, arriving before you sent it in the first place according to all observers, creating a paradox.
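The time-like/space-like distinction above is just the sign of the invariant interval between A and B, which all observers compute identically:

```latex
s^2 = c^2\,\Delta t^2 - \Delta x^2
\qquad
\begin{cases}
s^2 > 0 & \text{time-like: all observers agree on the ordering of A and B} \\
s^2 < 0 & \text{space-like: the ordering of A and B is frame-dependent}
\end{cases}
```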

This is the foundation of the argument against FTL information transfer, the Paradox in the EPR Paradox. It's why it's important for maintaining QM's consistency with Special Relativity that quantum entanglement is not capable of sending information.

The circumstances that allow time paradoxes in Special Relativity with FTL communication are somewhat exotic. But if we allow retrograde causality in any given experiment, it becomes trivial to create a paradox. Alice conducts an experiment that transmits information to Bob. After receiving the result, Bob conducts his experiment, which sends information in the opposite time direction, back to Alice prior to her conducting her experiment. Per their previously agreed-upon protocol, if Alice receives information from Bob, she does not conduct her experiment. Paradox.

There's no requirement that one direction in time be singled out as special, but whichever way you go, everything else should be going in the same direction. If you time-reverse the evolution of the solar system, everything works -- but not if you only reverse time for the Moon while the rest of the solar system evolves in the usual direction. Can anything but our experience/the 2nd law say that one is the "future" and one the "past"? No, but if you picked one by arbitrary convention, then everyone else would have to agree.

If there's a form of retrocausality that allows it to occur with respect to other forward-causality events without allowing for paradoxes, that'd be quite interesting.

Comment Re:Schrodinger would be happy (Score 1) 153

Schrodinger wasn't making a point about quantum theory, just the Copenhagen interpretation.

The ridiculous result he posited only applied to the Copenhagen interpretation, but parts of how he arrived there, which he was also pointing out as problems, applied to QM in general. Like the concept and exact point of "measurement" being poorly defined.

If you looked at his thought experiment today, you'd say that the point where the detector either registered a hit from the radioactive decay, or didn't, was a measurement that collapsed the wave function. Interacting in a way that can influence an experiment is a measurement -- avoiding doing so until the desired time is a big part of the challenge of quantum computers.

Part of the unintended but sad legacy of Schrodinger's Cat is that by depicting a scenario where measurement by a device, release of a cloud of chemicals, interaction with the metabolism of a cat, and the death-throes of the cat do not count as measurement, but a researcher opening up the box does, it's led to the idea that "measurement" means only "observation", which can only be done by an "observer", which is a sentient human being and not a cat. Thus resulting in so much of the woo-woo that bastardizes QM.

Anyway, I don't see what's so absurd about the Copenhagen Interpretation. It's basically taking the sum-of-histories method of calculating the predicted result of a quantum measurement and saying that, yes, the particle really followed all those histories. Apparent self-interference is the consequence of actual self-interference.

Comment Re:This is not a way *around* Heisenberg (Score 1) 153

Ah, interesting. But, um, this...

information can flow both directions in time. The future influences the past in exactly the same way that the past influences the future.

... is more than a little spooky. Time-reversible relativistic laws of physics are nothing new; in fact, that's pretty much every law we have, with the 2nd Law of Thermodynamics being the only apparent indication/cause of the arrow of time. But Relativity also assumes causality: cause preceding effect.

How is this compatible with relativity if we have information going backward in time? That's not less spooky than interactions appearing to break out of the light cone, a concept and problem that only exists as a consequence of the assumption of causality!

Can you re-derive Special Relativity without the assumption of causality? Or is there some aspect of this theory that prevents the causal loops that backward-flowing information would seem to allow, much like how you can't break causality with quantum entanglement?

I guess the first question would be: Is this actually a theory, as in makes testable predictions beyond the standard formulation of quantum mechanics, or is it an "interpretation", a philosophical explanation and mathematical treatment that arrives at the same results but with a different underlying assumption of what QM means?

Comment Re:This is not a way *around* Heisenberg (Score 1) 153

Not true: tests of Bell's inequality have only ruled out some very limited classes of hidden variable theories. There are still lots of them that are very much on the table.

Specifically local hidden variables, as in ones that would obey the speed of light and get rid of the "spooky action at a distance" that inspired Einstein et al. to write the EPR Paradox paper claiming quantum mechanics had to be incomplete.

The experimental violation of Bell's Inequality means that while there may be some kind of hidden variable, it can't be a kind that gets rid of the quantum weirdness.

It's often been pointed out that, although standard QM does predict weak measurements should work, it's unlikely anyone would ever have discovered that if time reversible QM hadn't made the prediction first.

Freaky the way science works sometimes, isn't it?

Comment Re:Political stunt (Score 5, Funny) 256

Yes, I know, it's because of the travesty that is DMCA, but that doesn't make it any less silly.

No, no, of course not, but it does mask it. Like, a man wearing a tutu, bunny slippers, and a singing Billy Bass as a hat is pretty fucking silly. But put that man on a giant merry-go-round with a troupe of nuns yodeling the dictionary backwards, an upside-down pie-eating contest, a poo-flinging monkey in a pope outfit on stilts, and so on, and suddenly the guy in the tutu doesn't stand out so much.

Of course the rational response to all this silliness is to bulldoze the entire merry-go-round into a big hole and cover it with hot tar.

Comment Re:In space cosmic ray excuse never gets old (Score 1) 98

That's a great strategy; the only problem with it is that, from TFA, the indication they received was noticing it was not behaving the way it was supposed to be behaving. They had to look around to figure out why.

Yes, because normal operations were suspended.

You don't have to assume anything. You KNOW the block is invalid. A bad block should not cripple the computer so that it can't do anything else. There is no indication from TFA there were any other faults.

No, you don't. You don't know if the block is bad, if the data bus is suffering an intermittent fault that happened to occur while that block was being read, if it's the BIST or ECC mechanisms that are faulty, or if it's a software error corrupting the data. Going from "we got a fault on reading this block" to "that block and only that block is affected, let's get on with it" with no consideration is a great way to lose a rover.

All I do is write software and I refuse to follow this shitty advice. Every error should be checked and handled. Besides the fricking hardware does all the heavy lifting for us

Ah, so you only allow your software to be run on hardware with ECC corrected RAM and ECC caches and ECC data busses... seems weird to call this a "PC app" when it's excluding most of the PC market. Unless you're doing it yourself then you're only checking for a subset of errors.

Now, assuming it's one that you can see, how do you "handle" that error? Do you just not read from that file again but continue on under the assumption that it was a singular event of no further consequence? Or do you have the software notify you so you can identify what the actual source of the read error was?

The former, I presume, which is fine for the situation of a PC app. Just let the hardware do the heavy lifting, and don't worry about what it can't find, and don't worry about what errors it signals actually mean. If it's a more serious error that ends up causing rampant corruption, it's not your problem! Contact your OEM, your help desk can tell them.

We're talking about I/O failure to flash not crashing an OS or broken hardware.

So, you don't see how an I/O failure could cause an OS to crash, like say if it's reading a code page, and you're still assuming that an ECC error on reading a block of flash can only mean that it's the ram cell itself and only that ram cell that could be affected? You're willing to bet 2.5 billion dollars and the rest of the mission on these assumptions?

Okay.

From TFA this is exactly what they did do...they waited to notice the rover not doing what it was supposed to be doing.

Which is a consequence of the rover doing what it should be doing -- ceasing normal activities on detecting a fault. We're talking about what the rover should be doing -- assume it's a one-off fault and continue normal operation minus that one block as you would have it, or try to prevent anything else bad from happening by going into safe mode (!= sleep mode, btw) and waiting for ground control to figure out the problem.

"We have probably several days, maybe a week of activities to get everything back and reconfigured."

Yes, and? Are you quibbling over "couple" vs "several" -- is this attempted pedantry, or are you actually implying that a couple days is fine, but waiting a week to finish the historic first-time analysis of an interior rock sample on Mars to make sure it has the maximum chance of success instead of bricking the rover crosses the line?

Comment Re:In space cosmic ray excuse never gets old (Score 4, Insightful) 98

Ok, let's assume a cosmic ray corrupted some random block of flash memory... so what? Why should that lead to failure to upload anything, or to entering sleep mode?

Pretty much any fault, error, or out-of-bounds reading with any part of the rover causes it to stop whatever it is doing and wait for ground control to check it out and decide what to do. If the fault is with the computer itself, it makes sense to gracefully enter safe mode. It probably was a cosmic ray flipping a random bit, but you can't assume that when designing your fault handler.
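A hypothetical sketch of that policy (all names are mine, not actual MSL flight software): any detected fault latches safe mode, and nothing but ground control clears it.

```c
/* Conservative fault handling as described above: any fault suspends
 * normal operations; only ground control, after diagnosing the root
 * cause on Earth, resumes them. */
enum rover_mode { MODE_NORMAL, MODE_SAFE };

struct rover {
    enum rover_mode mode;
    const char *last_fault; /* most recent fault description */
};

/* Don't try to diagnose on-board: a flash ECC error might really be the
 * block, the bus, the ECC logic itself, or software. Record and stop. */
static void on_fault(struct rover *r, const char *fault)
{
    r->last_fault = fault;
    r->mode = MODE_SAFE;
}

/* Ground control, having analyzed the fault log, resumes operations. */
static void ground_control_clear(struct rover *r)
{
    r->mode = MODE_NORMAL;
}
```

The point of the design is in what `on_fault` does *not* do: it never guesses that the fault was a one-off and carries on.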

If it were any old PC app this would be perfectly acceptable behavior. However, for ultra-expensive spacefaring things I would expect it to be designed to still try and be useful even if the southbridge caught fire.

See, I think you have that backwards. If it were a PC app it would be appropriate to just assume the error was insignificant or more likely not bother checking in the first place. If it's a more serious problem then eventually the app or OS might crash, the user will reboot, and if that doesn't work reinstall, and if not that then they'll just go get some new hardware.

For a multi-billion rover on another planet, you don't want to just wait and see what happens. Any anomaly at all should be cause for cautious, deliberate action. Heck, the whole project is run that way.

The rover was designed with a lot of redundancy and flexibility so that it can be useful even in the face of more serious problems, and if that turns out to be the case they'll find a way to make the rover as useful as possible. Missing a couple night's worth of downloads and delaying some activities in order to take the time to make sure they're maximizing the rover's future potential is an easy tradeoff.

Comment Re:I Don't Get It (Score 1) 326

In typical slashdotter fashion, you didn't even read it before replying, did you?

Of course I read it. And all I saw was you saying that you "got dealt some shitty hands"; nothing about mental illness.

And I assumed that this "shitty hand" was something else, because if it was mental illness you were talking about and you actually understood mental illness, you wouldn't have said something so fucking stupid.

Anytime someone says anything other than "people with mental illness are completely helpless"

But you didn't say just anything other than that. You said they had complete control and can just pick themselves up and dust themselves off and move on. Which is a fucking stupid way to say "they aren't completely helpless" because that's not the same thing.

So don't tell me I don't understand and have never thought about it.

What you said demonstrates that you don't, so tough shit, I'm telling you that you don't. You understand your own journey, which I don't know or care about. As a generalization, as a statement intended to demonstrate understanding of mental illness and others who suffer from it, "you have complete control of how you play that hand" is stupid and wrong.

Comment Re:Spin equal to mass? (Score 1) 227

What the AC said as to why the SMBH can't explain the galaxy rotation curve -- the problem is that the curve is flat, meaning the orbital velocity doesn't decrease with distance from the center as one would expect, regardless of the amount of mass at the center. See the graph on the right, here. All that increasing the mass at the center would do is change the values on the Y-axis; the curve shape would be the same.

As far as measuring the relativistic mass goes -- turns out that's easy! Put an object on a scale, and you are measuring its relativistic mass. Measure the gravitational force exerted on some object by another, and you are measuring its relativistic mass. All of your everyday notions of what "mass" means and the ways in which you measure mass are measuring relativistic mass.
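For reference, the relationship between the two notions of mass (using the comment's "relativistic mass" language, with m_0 the intrinsic/rest mass) is just the Lorentz factor:

```latex
m_{\mathrm{rel}} = \gamma\, m_0 = \frac{m_0}{\sqrt{1 - v^2/c^2}}
```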

It's actually figuring out the intrinsic mass that's hard. And for a black hole, it's both impossible and irrelevant. The properties of a black hole do not depend at all on the intrinsic mass of whatever went into it. In fact, it was proven in General Relativity that you can't tell what made a black hole, or what has gone into it since, by observing the black hole. The resulting object is the same regardless. This is called the "no hair" theorem, for what I'm sure are hilarious historical reasons.
