Comment Can the executive branch be held in contempt? (Score 1) 248

What would happen if the executive branch (which is supposed to enforce the law) simply refused to comply with a judicial order? Can someone be held in contempt? Who would take on the role of enforcing the judicial order (in terms of compelling the action or executing punishment)?

Comment Re:why the focus on gender balance? (Score 1) 579

No, it's a real problem here. Wikipedia is all about (1) information about the world, and (2) a neutral perspective on that information. Women do tend to have a somewhat different perspective, focusing on different information and different aspects of it. Including those additional perspectives will make Wikipedia's content more complete and also more accessible to female readers.

Comment CISC instruction sets are now abstractions (Score 1) 161

And actually so is RISC to a degree on POWER processors.

Back in the '80s, going RISC was a big deal. It simplified decode logic (which was then an appreciable portion of the circuit area), reduced the number of cycles and the logic area needed to execute an instruction, and was more amenable (by design) to pipelining. But this was back in the days when CISC processors actually executed their ISAs directly.

Today, CISC processors come with translation front-ends that convert the external ISA into a RISC-like internal representation; it's online dynamic binary translation. Instructions are broken down into simpler steps that are more amenable to pipelining and out-of-order scheduling. CISC processors no longer execute CISC ISAs directly, and therefore don't suffer from their drawbacks.
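
As a toy illustration of that cracking step (the micro-op names and format here are made up for illustration, not any vendor's actual internal encoding), a read-modify-write CISC instruction might split like this:

    # Toy sketch of how a CISC front-end "cracks" an instruction into
    # RISC-like micro-ops. Format and names are hypothetical.

    def crack(insn):
        """Break one CISC-style instruction into simpler micro-ops."""
        op, dst, src = insn
        if op == "add_mem":  # e.g. add [addr], reg (read-modify-write)
            return [
                ("load",  "tmp0", dst),          # tmp0 <- mem[dst]
                ("add",   "tmp0", "tmp0", src),  # tmp0 <- tmp0 + src
                ("store", dst,    "tmp0"),       # mem[dst] <- tmp0
            ]
        return [insn]  # simple instructions pass through unchanged

    for uop in crack(("add_mem", "0x1000", "rax")):
        print(uop)

Each micro-op is then a simple, pipelinable step that the out-of-order core can schedule independently.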

It has occurred to me that this could be taken to its logical extreme. ISAs could be made entirely abstract and designed to be used that way, optimized for reasonably efficient translation. You get the benefits of micro-ops and the benefits of a CISC ISA (more compact code). Abstract ISAs also make it easier to extend functionality in a backward-compatible way. And unlike x86, we could shed some of the dead weight and go to all three-operand instructions, which have some benefits. By decoupling the ISA from the execution engine, we could get even more performance and energy efficiency than Intel does.
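
To make the two- versus three-operand point concrete (hypothetical instruction tuples, purely to show the instruction count):

    # Computing c = a + b while keeping a live afterward.
    # Both sequences are illustrative pseudo-instructions, not real encodings.

    two_operand = [              # x86-style destructive destination
        ("mov", "c", "a"),       # must copy first, since add overwrites its dst
        ("add", "c", "b"),       # c += b
    ]

    three_operand = [            # RISC/abstract-ISA style
        ("add", "c", "a", "b"),  # c = a + b; a is untouched
    ]

    print(len(two_operand), "vs", len(three_operand), "instructions")

The destructive two-operand form forces an extra move whenever a source has to stay live.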

With a processor like Haswell, the logic area dedicated to translation is very small, which is why it doesn't matter much. On the other hand, with something like Atom, it occupies a more substantial portion of the total, making the translation (basically, elaborate decode logic) a burden on die area and therefore on power consumption.

So it's not really appropriate to say it doesn't matter. It MOSTLY doesn't matter, because most of the drawbacks of CISC have been overcome. Using an outdated CISC ISA for x86, however, still means supporting rare and excessively complex instructions, a plethora of addressing modes, and only two operands per instruction.

Comment Re:Particle state stored in fixed total # of bits? (Score 1) 247

IIRC, this isn't actually a paradox. One twin underwent acceleration, which breaks the symmetry between the two frames. The other twin stayed in place. If you have two clocks and you accelerate one away from the other, you should be able to tell which one accelerated and which didn't.

Comment Science as religion (Score 1) 528

Not teaching the scientific process may just make things worse. Doubt is a fundamental tenet of science, but many religious people (e.g. Kirk Cameron and his ilk) feel that they were "just told to believe this stuff" when they were in school. Without knowledge of the process that led to this knowledge, students will just start to treat science as an alternate bad religion or something.

Now, many kids handle uncertainty poorly, so this has to be handled carefully, but I think it's critical that science be taught as "this is the best explanation we have." Basically everything taught in high school is so well established (misrepresentations notwithstanding) that we can explain that what they're being taught is consistent with mountains of evidence. But with the caveat that this stuff, at one time in the past, was cutting-edge knowledge and did deserve to be taken with a big grain of salt. This can be expressed in terms of the history and evolution of particular sciences: we understood A at one time, then someone discovered something, and views shifted accordingly to B. See how new evidence led to a BETTER understanding through the scientific process? What we're learning now has pretty well been beaten into submission, but questioning assumptions is an important thing for people to learn.

Comment Particle state stored in fixed total # of bits? (Score 1) 247

In special relativity, we find that our velocity through spacetime is actually constant. If you move through space faster, you necessarily move through time more slowly.
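
A quick sketch of what "constant velocity through spacetime" means, using the standard four-velocity and the usual Minkowski sign convention: its magnitude works out to c at any speed.

    import math

    C = 299_792_458.0  # speed of light, m/s

    def four_velocity_norm(v):
        """Magnitude of the four-velocity; always c for any speed v < c."""
        gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
        dt_dtau = gamma        # time component, per unit proper time
        dx_dtau = gamma * v    # space component, per unit proper time
        return math.sqrt((C * dt_dtau) ** 2 - dx_dtau ** 2)

    for v in (0.0, 0.5 * C, 0.99 * C):
        print(f"v = {v:.3e} m/s  ->  |u| = {four_velocity_norm(v):.6e} m/s")

Gaining speed through space just rotates the fixed-length four-velocity away from the time axis.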

So I'm wondering if the information describing a particle is somehow capped at a fixed total. If you have more bits of precision about one property, then your certainty about some other property is necessarily weaker, because it doesn't get as many of the total bits the particle can have. Can we work out the number of bits? We need bits for position, bits for momentum, bits for other quantum mechanical properties, etc.
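
One way to make that counting concrete: Heisenberg gives Δx·Δp ≥ ħ/2, so if a "bit" is a halving of uncertainty relative to some reference state, then at the bound every bit of position precision costs one bit of momentum precision. A rough sketch (the reference scale is my arbitrary choice, nothing physical):

    import math

    HBAR = 1.054_571_817e-34  # reduced Planck constant, J*s

    # Reference state: saturates Heisenberg, defines "zero bits" (arbitrary).
    DX0 = 1e-9               # reference position uncertainty, m
    DP0 = (HBAR / 2) / DX0   # matching momentum uncertainty, kg*m/s

    for k in range(4):
        dx = DX0 / 2 ** k            # squeeze position by k bits
        dp = (HBAR / 2) / dx         # momentum spread forced by the bound
        x_bits = math.log2(DX0 / dx)
        p_bits = math.log2(DP0 / dp)
        print(f"position bits: {x_bits:+.0f}  momentum bits: {p_bits:+.0f}")

The bits sum to zero every time, which is at least suggestive of a fixed total.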

I'm wondering if perhaps superposition is a result of the number of bits for a given property (like spin) going to zero because they were required to increase the precision of something else. For that matter, I wonder if particles can share or trade bits, so that sometimes particles have no bits (like when they get absorbed). And maybe a body made up of particles shares bits kinda like how a metal's conduction band is shared among all the atoms. Maybe that's how force carriers act: trading bits. MAYBE the whole universe simply has a fixed total number of bits, which are divided up as necessary among the particles, and particle interactions are just bits (and their values) being traded around within a vast amorphous ocean of bits. In that case, particles are an illusion; they're an emergent property (from our perspective) of the varying association among bits.

Comment Re:Static scheduling always performs poorly (Score 1) 125

Peer-reviewed venues don't reject things for being too novel on principle. They reject them on the basis of poor experimental evidence. I think someone's BS'ing you with that "too novel" claim, but the lack of hard numbers makes sense.

Perhaps the best thing to do would be to synthesize Mill and some other processor (e.g. OpenRISC) for FPGA and then run a bunch of benchmarks. Along with logic area and energy usage, that would be more than good enough to get into ISCA, MICRO, or HPCA.

I see nothing about Mill that should make it unpublishable, except for the developers refusing to fit into the scientific culture: presenting things in the expected manner, writing in conventional language, and doing very well-controlled experiments.

One of my most-cited works was first rejected because it wasn't clear what the overhead was going to be. I had developed a novel forward error correction method, but I wasn't rigorous about the latencies or logic area. Once I actually coded it up in Verilog and got area and power measurements, along with tightly bounded latency statistics, then getting the paper accepted was a breeze.

Maybe I should look into contacting them about this.

Comment Re:Static scheduling always performs poorly (Score 1) 125

I looked at the Mill memory system. The main clever bit is being able to issue loads in advance while having the returned data correspond to the time the instruction retires, not when it's issued. This avoids aliasing problems. Still, you can't always know your address that far in advance, and Mill still has challenges with hoisting loads over control flow.
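
A toy model of the deferred-load idea as I understand it (my own sketch, not Mill's actual semantics or terminology): the address is pinned at issue, but the value is sampled at retire, so an intervening store is observed rather than aliased around.

    # Toy deferred load: address captured at issue, value read at retire.

    memory = {0x100: 1}

    class DeferredLoad:
        def __init__(self, addr):
            self.addr = addr           # address fixed at issue time

        def retire(self):
            return memory[self.addr]   # value sampled only at retire time

    ld = DeferredLoad(0x100)   # issue the load early...
    memory[0x100] = 2          # ...a store to the same address intervenes
    print(ld.retire())         # -> 2: retire-time value, no stale data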

Comment Re:Static scheduling always performs poorly (Score 1) 125

I've heard of Mill. I also tried reading about it and got bored partway through. I wonder why Mill hasn't gotten much traction. It also bugs me that it comes up on regular Google but not Google Scholar. If they want to get traction with this architecture, they're going to have to start publishing in peer-reviewed venues.

Comment Re:Static scheduling always performs poorly (Score 1) 125

Prefetches issued hundreds of cycles ahead of time have to be highly speculative, and are therefore likely to pull in data you don't need while missing some data you do need. If you can improve the cache hit rate this way, you can improve performance, and if you avoid a lot of LLC misses, then you can massively improve performance. But cache pollution is as big a problem as the misses themselves, because it causes conflict and capacity misses that you'd otherwise avoid.

Anyhow, I see your point. If you can avoid 90% of your LLC misses by prefetching just into a massive last-level cache, then you can seriously boost your performance.
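
Back-of-the-envelope (all latencies and miss rates here are made-up illustrative numbers, not measurements): with a ~200-cycle miss penalty, cutting the LLC miss rate from 10% to 1% already takes a big bite out of average memory access time.

    # Average memory access time (AMAT) at the LLC, illustrative numbers only.

    HIT_LATENCY  = 30    # cycles for an LLC hit (assumed)
    MISS_PENALTY = 200   # extra cycles to go to DRAM (assumed)

    def amat(miss_rate):
        return HIT_LATENCY + miss_rate * MISS_PENALTY

    base       = amat(0.10)  # 10% of LLC accesses miss
    prefetched = amat(0.01)  # prefetching removes 90% of those misses
    print(f"AMAT: {base:.0f} -> {prefetched:.0f} cycles "
          f"({base / prefetched:.2f}x faster on average)")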
