
Comment Re:Static scheduling always performs poorly (Score 1) 125

Peer-reviewed venues don't reject things that are too novel on principle. They reject them on the basis of poor experimental evidence. I think someone's BS'ing you with the lack-of-novelty claim, but the lack of hard numbers makes sense.

Perhaps the best thing to do would be to synthesize Mill and some other processor (e.g. OpenRISC) for FPGA and then run a bunch of benchmarks. Along with logic area and energy usage, that would be more than good enough to get into ISCA, MICRO, or HPCA.

I see nothing about Mill that should make it unpublishable except for the developers refusing to fit into the scientific culture, present things in the expected manner, write using conventional language, and do very well-controlled experiments.

One of my most-cited works was first rejected because it wasn't clear what the overhead was going to be. I had developed a novel forward error correction method, but I wasn't rigorous about the latencies or logic area. Once I actually coded it up in Verilog and got area and power measurements, along with tightly bounded latency statistics, then getting the paper accepted was a breeze.

Maybe I should look into contacting them about this.

Comment Re:Static scheduling always performs poorly (Score 1) 125

I looked at the Mill memory system. The main clever bit is to be able to issue loads in advance, but have the returned data correspond to the time the instruction retires, not when it's issued. This avoids aliasing problems. Still, you can't always know your address far enough in advance, and Mill still has challenges with hoisting loads over flow control.

Comment Re:Static scheduling always performs poorly (Score 1) 125

I've heard of Mill. I also tried reading about it and got bored part way through. I wonder why Mill hasn't gotten much traction. It also bugs me that it comes up on regular Google but not Google Scholar. If they want to get traction with this architecture, they're going to have to start publishing in peer-reviewed venues.

Comment Re:Static scheduling always performs poorly (Score 1) 125

Prefetches issued hundreds of cycles ahead of time have to be highly speculative, and therefore are likely to pull in data you don't need while missing some data you do need. If you can improve the cache statistics this way, you can improve performance, and if you avoid a lot of LLC misses, then you can massively improve performance. But cache pollution is as big a problem as misses, because it causes conflict and capacity misses that you'd otherwise like to avoid.

Anyhow, I see your point. If you can avoid 90% of your LLC misses by prefetching into just a massive last-level cache, then you can seriously boost your performance.
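
For anyone who hasn't played with it, here's a minimal sketch of the kind of software prefetching I'm talking about (my own toy example, not from the article), using GCC's __builtin_prefetch. The distance constant is a made-up tuning knob: too small and the line isn't there in time, too large and you pollute the cache with lines that get evicted before you ever touch them.

#include <stddef.h>

/* Hypothetical tuning knob: how many elements ahead to prefetch. */
#define PREFETCH_DISTANCE 16

long sum_array(const long *a, size_t n)
{
    long sum = 0;
    for (size_t i = 0; i < n; i++) {
        /* Hint the hardware to start pulling in a line we'll want soon.
         * Args: address, rw (0 = read), temporal locality hint (0..3). */
        if (i + PREFETCH_DISTANCE < n)
            __builtin_prefetch(&a[i + PREFETCH_DISTANCE], 0, 1);
        sum += a[i];
    }
    return sum;
}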

Comment Static scheduling always performs poorly (Score 5, Informative) 125

I'm an expert on CPU architecture. (I have a PhD in this area.)

The idea of offloading instruction scheduling to the compiler is not new. It was very much on Intel's mind when it designed Itanium, although it was an important concept for in-order processors long before that. For most instruction sequences, latencies are predictable, so you can order instructions to improve throughput (reduce stalls). So it seems like a good idea to let the compiler do the work once and save on hardware. Except for one major monkey wrench:

Memory load instructions

Cache misses and therefore access latencies are effectively unpredictable. Sure, if you have a workload with a high cache hit rate, you can make assumptions about the L1D load latency and schedule instructions accordingly. That works okay. Until you have a workload with a lot of cache misses. Then in-order designs fall on their faces. Why? Because a load miss is often followed by many instructions that are not dependent on the load, but only an out-of-order processor can continue on ahead and actually execute some instructions while the load is being serviced. Moreover, OOO designs can queue up multiple load misses, overlapping their stall time, and they can get many more instructions already decoded and waiting in instruction queues, shortening their effective latency when they finally do start executing. Also, OOO processors can schedule dynamically around dynamic instruction sequences (i.e. flow control making the exact sequence of instructions unknown at compile time).
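
To make that concrete, here's a contrived C fragment (my illustration, nothing to do with Mill or the article; pretend the instructions reach the core in exactly this order and the compiler couldn't hoist the load any earlier because of flow control or aliasing):

long example(const long *p, long x, long y)
{
    long loaded = *p;        /* LLC miss: hundreds of cycles outstanding */
    long r = loaded + 1;     /* first use: an in-order core stalls right here */

    /* Independent work.  An OOO core renames past the stalled use and
     * executes these under the miss; an in-order core doesn't reach them
     * until the load data comes back. */
    long a = x * 3 + y;
    long b = (x ^ y) >> 2;

    return r + a + b;
}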

One Sun engineer talking about Rock described modern software workloads as races between long memory stalls. Depending on the memory footprint, a workload could spend more than half its time waiting on what is otherwise a low-probability event. The processors blast through hundreds of instructions where the code has a high cache hit rate, and then they encounter a last-level cache miss and stall out completely for hundreds of cycles (generally not on the load itself but on the first instruction dependent on the load, which always comes up pretty soon after). This pattern repeats over and over again, and the only way to deal with that is to hide as much of that stall as possible.

With an OOO design, an L1 miss/L2 hit can be effectively and dynamically hidden by the instruction window. L2 (or in any case the last level) misses are hundreds of cycles, but an OOO design can continue to fetch and execute instructions during that memory stall, hiding a lot of (although not all of) that stall. Although it's good for optimizing poorly-ordered sequences of predictable instructions, OOO is more than anything else a solution to the variable memory latency problem. In modern systems, memory latencies are variable and very high, making OOO a massive win on throughput.

Now, think about idle power and its impact on energy usage. When an in-order CPU stalls on memory, it's still burning power while waiting, while an OOO processor is still getting work done. As the idle proportion of total power increases, the usefulness of the extra die area for OOO increases, because, especially for interactive workloads, there is more frequent opportunity for the CPU to get its job done a lot sooner and then go into a low-power low-leakage state.

So, back to the topic at hand: What they propose is basically static scheduling (by the compiler), except JIT. Very little information useful to instruction scheduling is going to be available JUST BEFORE execution time that is not available much earlier. What you'll basically get is some weak statistical information about which loads are more likely to stall than others, so that you can resequence instructions dependent on loads that are expected to stall. As a result, you may get a small improvement in throughput. What you don't get is the ability to handle unexpected stalls, overlapped stalls, or the ability to run ahead and execute only SOME of the instructions that follow the load. Those things are really what gives OOO its advantages.

I'm not sure where to mention this, but in OOO processors, the hardware to roll back mispredicted branches (the reorder buffer) does double-duty. It's used for dependency tracking, precise exceptions, and speculative execution. In a complex in-order processor (say, one with a vector ALU), rolling back speculative execution (which you have to do on mispredicted branches) needs hardware that is only for that purpose, so it's not as well utilized.

Comment Needed innovation: SLIM JAVA DOWN (Score 1) 371

Right now, if I want to ship an app that uses Java 8 features, I have to bundle an extra 40 megs of runtime. This is because Java 8 isn't yet the default. An extra 40 megs is stupid for simple apps. The runtime is an order of magnitude larger than the application. That's stupid.

If Java wants to innovate, they can find a way to maintain all the existing features and backward compatibility while using less space. That would be a worthy and worthwhile project for Java 9. They can make things smaller and perhaps even faster by rewriting things that are overly bloated.

Comment The root of the problem is culture & social class (Score 3, Interesting) 514

For some reason, Americans have developed a stereotype of "white" and "black" that is related far more to social class than anything else. When you say "white," we imagine someone from the middle class. When you say "black," we imagine someone from lower socioeconomic status. How many blacks are in the middle class, I'm not sure, but as for whites in lower classes, we have them coming out our ears. While we may have millions of blacks who live in ghettos, we have 10 times as many whites living in trailer parks.

Because of our confusion between ethnicity and social class, we end up with things like Dave Chappelle's "Racial Draft": http://www.thecomedynetwork.ca/blogs/2013/06/chappelles-show-june5-racial-draft
While amusing, it highlights the real problem, and this false stereotype is widespread throughout American culture.

I recall an interview with Bill Cosby, talking about educational advancement among black children. Peers discourage each other from studying because it's "acting white." When in fact it is "acting middle class," because this same kind of discouragement occurs among lower class whites as well. As long as education is not valued within any group, that group will have difficulty being equally represented in white collar industries.

What we have to work out, to explain the disparity between population demographics and white collar job demographics, is the proportion of each underrepresented group that discourages education. People like Jesse Jackson want to make this all out to be the result of prejudice on the basis of genetics or skin color. Honestly, I think we're long past that. There are still plenty of racist bastards out there, but in general, we do not have pink people acting intentionally or unconsciously to undermine the advancement of brown people when it comes to getting college degrees.

It's not PC to talk about genetic differences, but genetics is interesting. Geneticists have identified differences between different ethnic groups, and they have correlated them with some minor differences in physical and cognitive adaptations. Things like muscle tone, susceptibility to certain diseases, social ability, and other things have been correlated to a limited degree with variation in human DNA. But the average differences between genetic groups are minuscule compared to their overlap (statistically, the difference in group means is tiny relative to sigma for basically any meaningful measurable characteristic).

Thus I can only conclude that correcting any disparities must come from within. Regulating businesses won't do any good, because unqualified minorities will end up getting unfairly hired and promoted. We have to start with the children and get them to develop an interest in science and math. If Jesse Jackson wants to fix this problem, he needs to learn science and math and start teaching it. I assure you, even at his age, he has that capability, if he just cared enough to do it. Unfortunately for him, if he were to corrupt himself with this knowledge, he would find himself taking a wholly different approach than the "we're victims" schtick he's played most of his life. Personally, I prefer the "the universe is awesome" philosophy held by Neil deGrasse Tyson. He's one of my biggest heroes, having nothing to do with his skin tone.

One last thought: I'm sure someone will find something racist in what I have said. Either that or I'm being too anti-racist and appear like I'm overcompensating. There are also aspects of these social issues I know nothing about. I'm just writing a comment on Slashdot that is about as well-informed as any other comment. One thing people should think about in general is whether or not they have hidden prejudices. It's not their fault, having been brought up in a culture that takes certain things for granted. Instead of burying our heads in the sand, we should be willing to admit that we probably do have subconscious prejudices. That's okay, as long as we consciously behave in a way that is fair to other human beings, regardless of race, gender, sexual orientation, autism, or any other thing they didn't choose to be born with (and plenty of things they have chosen, because it's people's right to choose).

Comment Listening to keystrokes + HMM = Profit! (Score 3, Interesting) 244

Passwords have been stolen just by listening to keyboard click noises. Why would a typewriter be any different? A relatively straightforward codebook analysis of keypress noises, plus a hidden Markov model, plus the Viterbi algorithm, will let you calculate the highest-probability sequence of letters for a given sequence of sounds and inter-keystroke timings, even in German!
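
For the curious, the decoding step really is textbook stuff. Here's a toy Viterbi decoder in C (my own sketch; the sizes and the model parameters are placeholders): given per-keystroke emission log-probabilities from an acoustic codebook and letter-to-letter transition log-probabilities from a language model, it recovers the most likely letter sequence.

#define N_STATES 26   /* letters of the alphabet */
#define T_MAX    64   /* maximum number of observed keystrokes */

/* best_path[t] receives the most likely state (letter) at time t.
 * Assumes 1 <= T <= T_MAX.  Probabilities are logs, so we add. */
void viterbi(int T, const double emit[][N_STATES],
             const double trans[N_STATES][N_STATES],
             const double start[N_STATES], int best_path[])
{
    double score[T_MAX][N_STATES];
    int back[T_MAX][N_STATES];

    for (int s = 0; s < N_STATES; s++)
        score[0][s] = start[s] + emit[0][s];

    for (int t = 1; t < T; t++) {
        for (int s = 0; s < N_STATES; s++) {
            double best = -1e300; int arg = 0;
            for (int prev = 0; prev < N_STATES; prev++) {
                double v = score[t-1][prev] + trans[prev][s];
                if (v > best) { best = v; arg = prev; }
            }
            score[t][s] = best + emit[t][s];
            back[t][s]  = arg;
        }
    }

    /* Pick the best final state, then walk the backpointers. */
    double best = -1e300; int last = 0;
    for (int s = 0; s < N_STATES; s++)
        if (score[T-1][s] > best) { best = score[T-1][s]; last = s; }
    for (int t = T - 1; t > 0; t--) {
        best_path[t] = last;
        last = back[t][last];
    }
    best_path[0] = last;
}

The inter-keystroke timing information would just fold into the emission scores. None of this is exotic.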

Mind you, they have to be able to get a sound bug in there, but that might be accomplished with malware-infected computers near the typewriters.

Anyhow, basically, the technology used to do automatic speech recognition would make short work of tapping typewriters, so they’re fooling themselves if they think this’ll make much difference.

BTW, I have a strong suspicion that the Germans’ outrage is all a big charade. Every major country has big spy operations. The NSA is neither unique nor the first of its kind. The Germans could not have been ignorant of at least the general nature of the NSA’s dealings before Snowden, so while they openly object, secretly, this is business as usual. By doing this, they fool their people into thinking they’re not being spied on by their own government and, using the US as a scapegoat, they also generate a degree of solidarity. Russian spy operations, of course, are way worse, so their objections are the same bullshit. And the Chinese government is all about lying to, well, basically everyone while they use both capitalism and cyberwarfare to take over the world and control everyone, so their recent statement about the iPhone is also a crock of shit.

This reminds me of Andrew Cuomo’s push to restore trust in government. The whole idea is disingenuous. Governments, like any large organization, are only going to do what the people need when there are checks & balances and transparency.

And as a final note, I believe that the stated purpose of the NSA is a good one: Mine publicly available data to identify terrorist activity. That sounds like a good thing to do. It’s the illegal violations of privacy that are wrong. They violate our rights because it’s inconvenient to get the info they need some other way. It’s also inconvenient for me to work a regular job instead of selling drugs. There are much more convenient ways to achieve my goals that I avoid because they are wrong. To do their job, the NSA needs to find clever ways to acquire the information they need WITHIN THE LAW.

Comment Anti-piracy campaigns are highly effective (Score 1) 214

But not for the reason you think.

A question we should be asking ourselves is what impact piracy would have had on movie revenue if we’d had higher-speed Internet in the days of Napster and Kazaa. We currently live in a culture where even non-technical people know that piracy is a copyright violation. There’s also the looming threat of being sucked up in a dredging operation, or of having your ISP (or the NSA) volunteer your metadata to the MPAA. People don’t avoid filesharing because it’s unethical or illegal. They avoid it because it’s relatively inconvenient (requiring technical knowledge), and they fear excessive penalties if they’re caught.

If pirating movies were as simple as downloading an app and searching people’s libraries, the amount of piracy would be far greater, and the impact on revenue would be more significant.

What’s really curious to me, however, is the amount of time and effort some people spend on this. Personally, I’d rather optimize to reduce how much time I spend on it than try to see how cleverly rebellious I can be. If I want to watch a movie that’s currently out on DVD, I have four classes of options:
(1) I could spend about half an hour figuring out which of the numerous available torrents is in a playable format and not a fake and then maybe a couple hours downloading it. If I’m really lucky, I can burn it to a DVD that my player will understand so I don’t have to take the time to connect my laptop to the TV.
(2) I could run down to the nearest RedBox, about 15 minutes round trip, and spend the rest of the time doing some consulting work. Not only would I have a legal copy, but I’d come out ahead financially.
(3) If I have some patience to wait a day, I can order my own copy to keep from Amazon Prime, and I’ll STILL come out ahead financially.
(4) If I’m dead-set on a lengthy download, services like iTunes offer up a wide variety of downloadable media.

I suspect most of us clever enough to avoid getting caught pirating think this way. The legal options are just easier, less costly (time==money), and less risky. Those with the skills are already in a minority, so the only people doing any significant amount of piracy are those with both the skills and nothing better to do than to see how clever they can be at unnecessarily breaking the law.

I encounter that attitude a surprising amount, though, among students. There are people who will spend more time and effort trying to BS their way through an assignment and/or find a way to avoid the need to do it than would be necessary to actually just do the assignment. Doing the assignment requires learning something new, while all this “clever” avoidance relies on established skills. But I don’t know why these people bothered to go to college if they have no interest in learning the material. I guess they feel pressure culturally or parentally, but I don’t like it when they make it my problem.

Comment Elites in any field must have some OCD (Score 1) 608

Those people really far out on the cutting edge of new sciences are successful only because they have some major obsessive qualities. They are driven to learn, understand, and create. They understand things so abstract and esoteric that it would be all but impossible to explain some of these ideas to the lay person. And each of us has some secret weapon too. Mine, for instance, is that I can code rings around most other CS professors. I’m not actually smarter than them. Indeed, most of them seem to be better at coming up with good ideas on the first shot. My advantage is that I can filter out more bad ideas faster.

A key aspect of the areas that we are experts in is that there are underlying and unifying principles. Subatomic particles fit into categories and have expected behaviors that fit a well-tested model. CPU architectures, even exotic ones, share fundamentals of data flow and computation. CS is one of those fields that is invented more than it is discovered, and as we go along, scientists develop progressively more coherent approaches to solving problems. Out-of-order instruction scheduling algorithms of the past (Tomasulo’s algorithm and the CDC 6600 scoreboard, for instance) have given way to more elegant approaches that solve multiple problems using common mechanisms (e.g. register renaming and reorder buffers). You may think the x86 ISA is a mess, but that’s just a façade over a RISC-like back-end that is finely tuned based on the latest breakthroughs in computer architecture.
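
If you want a feel for how simple the core rename idea is, here's a toy sketch (mine, not modeled on any real core, and it cheats by never freeing physical registers): every architectural destination gets a fresh physical register, which is what removes WAR/WAW hazards and lets independent uses of the same architectural register proceed out of order.

#define ARCH_REGS 16
#define PHYS_REGS 64

static int rename_table[ARCH_REGS];    /* architectural -> physical */
static int next_phys;                  /* next free physical register */

void rename_init(void)
{
    for (int i = 0; i < ARCH_REGS; i++)
        rename_table[i] = i;           /* identity mapping at reset */
    next_phys = ARCH_REGS;
}

/* Rename one instruction "rd = rs1 op rs2".  Sources read the current
 * mappings; the destination gets a brand-new physical register. */
int rename(int rd, int rs1, int rs2, int *ps1, int *ps2)
{
    *ps1 = rename_table[rs1];
    *ps2 = rename_table[rs2];
    int pd = next_phys++;              /* toy: assume we never run out */
    rename_table[rd] = pd;
    return pd;
}

Two back-to-back writes to the same architectural register now land in different physical registers, so the second one doesn't have to wait for everything that reads the first.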

Then there’s web programming. Web programming is nothing short of a disaster. There are a million and one incompatible JavaScript toolkits. HTML, CSS, and JavaScript are designed by committee, so they have gaps and inconsistencies. To write the simplest web site (with any kind of interactivity) requires working with five different languages at once (HTML, CSS, JavaScript, SQL, and at least one back-end language like PHP), and they’re not even separable; one type of code gets embedded in the other. People develop toolkits to try to hide some of these complexities, but few approach feature-completeness, and it’s basically impossible to combine some of them. Then there’s security. In web programming, the straightforward, intuitive approach is always the wrong approach because it’s full of holes. This is because these tools were not originally developed with security in mind, so you have to jump through hoops and manually plug all the holes with a ton of extraneous code. In terms of lines of code, your actual business logic will be small in comparison to all the other cruft you need to make things work properly.

When I work on a hard problem, I strip away the side issues and focus on the core fundamental problem that needs to be solved. With most software, it is possible to break systems down into core components that solve coherent problems well and then combine them into a bigger system. This is not the case with web programming. And this is what makes it inaccessible to “normal people.” The elites in software engineering are the sorts of people who extract the grand unifying theories behind problems, solve the esoteric problems, and provide the solutions as tools to other people. Then “normal people” use those tools to get work done. With the current state of the web, this is basically impossible.

Comment Who are these idiot futurists? (Score 1) 564

MAYBE machine intelligence will surpass humans in some ways, but where the hell do we get this idea that they’ll decide we’re unstable and wipe us out? Sci Fi? Do we get it from anything RATIONAL?

We humans have our emotions from millions of years of evolution in hostile environments on earth. And really, emotions are just low-level intelligence adaptations for detecting and avoiding threats. They’re somewhat vestigial in humans, due to layers of more advanced intelligence on top of earlier developments. With intelligent computers, we’re bypassing all of that and just giving them basic reasoning capabilities over huge data sets. For AI, the evolutionary mutations (i.e. advances in AI algorithms [*]) and selective pressures (which AI algorithms we choose to deploy) are COMPLETELY DIFFERENT from what our ancestors faced.

Computers will not spontaneously develop either intelligence or the kind of moronic reasoning necessary to decide to wipe humans out. To get the latter, we’d need a massive conspiracy of megalomaniac genius experts in artificial intelligence who intentionally develop malware to infect military robots that go around shooting people. Oh, and we’d need the robots too.

Some people forget that politics (or aspects of it) and paranoia move as fast as technology. Every time some scientific advance occurs, a bunch of ethics people (some sensible, some not so much) pounce on it and pick it to pieces. IBM Watson isn’t capable of the kind of decision making that would obsolete humans, but there are plenty of people who are worried about it and ready to develop all manner of reasonable and asinine regulations.

Bottom line: Intentionally developing or accidentally evolving destructive AIs like this is highly implausible, due to lack of motivation and lack of evolutionary pressure, and those evolutionary pressures that do exist run counter to this kind of development as well.

[*] Implementations of AI programs may be done intentionally by humans, but advances in algorithms evolve as memes. Evolutionary steps may often seem intentional, but quite often, they’re the result of arbitrary combinations of pre-existing ideas in people’s minds, where the cleverness exists mostly in figuring out that these ideas can go together and finding a way to combine them. Technology evolves in the same way that languages do.

Comment Re:The good Samaritan always gets his ass kicked (Score 1) 160

Yes and no. They were testing something that they HYPOTHESIZED could reduce the quality of the user experience. And IIRC, that hypothesis turned out to be wrong (to the extent that one can get that from the statistics).

If all user interface modifications that lead to an improved user experience were intuitive, then Facebook would have implemented them already. They are now at a point where they have to consider things that are NOT intuitive. The idea that filtering other people’s posts in a way that increases their negativity should actually lead to an improvement in user experience is not intuitive. Moreover, their explanation for the result (that people are turned off by their friends doing too well) is conjecture, albeit a reasonable one. Human psychology is complex, and for Facebook to continue to advance their mind job on people, they have to get really clever and do things that aren’t obvious.

Like I say, I feel it was a mistake to not get human subject approval before conducting this research. If this was an analysis of pre-existing data, they wouldn’t need approval. If Facebook had done this unilaterally, they wouldn’t need approval. But in the specific circumstances, the law is somewhat ambiguous on this point.

You’ll notice that I referred to Facebook as a mind job. I really don’t like it. I have an account, but I seldom use it. However, that doesn’t mean I can’t try to be objective about this research experiment.

Comment Re:The good Samaritan always gets his ass kicked (Score 1) 160

The requirement for informed consent was ambiguous in this case. If I had been in their position, I would have erred on the side of caution, and the research faculty who consulted on this project should have been more resolute about it. If anything, it is those people who should have done the paperwork. I think their failure to get informed consent was a mistake, but I don’t believe it was any kind of major ethical violation. It does no harm to get informed consent, even if you don’t legally need it, and there are moral arguments for getting it in any case.

My main point is that this kind of “manipulation” has been going on for a long time and will continue to occur. Facebook intentionally manipulates users in all sorts of ways to determine what gets users to use their service and click ads. The only practical difference between this current intentional manipulation and past intentional manipulation is that this time, they reported on it. Going forward, they will continue to not get informed consent (because they don’t need it), but they will also continue to manipulate. Thus the travesty is that they will simply stop reporting their findings in the future; that is the ONLY thing that will change, and the rest of the world will be less informed because of it.

Comment The good Samaritan always gets his ass kicked (Score 4, Insightful) 160

As has been pointed out many times, Facebook was doing their usual sort of product testing. They actively optimize the user experience to keep people using their product (and, more importantly, clicking ads). The only difference between this time and all the other times was that they published their results. This was a good thing, because it introduced new and interesting scientific knowledge.

Because of this debacle, Facebook (and just about every other company) will never again make the mistake of sharing new knowledge with the scientific community. This is truly a dark day for science.

Ferengi Rule of Acquisition #285: No good deed ever goes unpunished.
