
Comment Random data point (Score 1) 174

Bees are all over the place at my home (basically at the center of a small town in rural Montana). We have quite a few planters full of flowers on our largish deck (about 1,000 sq ft), and it is not uncommon to go out there and see a very large number of bees going about their business. They are nearly zero threat. Well, unless you sit on one. :) We try not to do that.

There are no obvious hives anywhere nearby, and they seem to come and go from all points of the compass.

Sortof-kinda related, there are local honey merchants, and the honey is just lovely.

Games

Pro Gamers To Be Tested For Doping 155

An anonymous reader writes: The Electronic Sports League is the biggest organization for running video game competitions. The league has now announced that they will begin testing professional video gamers for performance-enhancing drugs. The league is getting help in making policies from anti-doping agencies that help regulate athletes in traditional sports. They say, "[W]e will be administering the first PED skin tests at ESL One Cologne this August, with a view to performing these tests at every Intel Extreme Masters, ESL One and ESL ESEA Pro League event thereafter as soon as the official PED policy is established and tournament rules updated accordingly." This announcement comes after a high-profile Counter-Strike: Global Offensive player admitted last week that he and many other players used Adderall to gain an advantage in tournaments.

Comment Re:Never understood (Score 1) 430

I think "on a shared workstation" means it was an electronic document and not a physical sealed envelope.

Fair point, and that sounds dicier. 'Round these parts (California), that employee might have a case for wrongful termination. But maybe not; snooping around corporate computer systems, even if the door is unlocked, just doesn't look good.

In the other case, though, now that I think about it, even if I had signed a contract that said my salary was confidential, surely that's only an agreement between me and the company? Would I really be violating such a clause if I disclosed my salary to another agent of the same company? It just doesn't seem like there's anything management can really do to prevent this sort of thing.

Seems like the only thing that keeps people from discussing this sort of thing more is the fear that someone's feelings are going to be hurt -- either theirs or yours -- if it turns out there's a big salary discrepancy.

Comment Re:Never understood (Score 1) 430

We recently had someone canned because they opened someone else's offer letter (which was sitting on a shared workstation).

Well, if a sealed letter had someone else's name on it, I'd agree that's a firing offense.

Me voluntarily telling you how much I make, on the other hand, is our business. Management can cough and sputter all it wants, but unless I signed a contract that stipulates my salary is confidential information, there's nothing they can do about it.

Comment Re:Misleading and Hyperbolic Title/Comparison (Score 3, Insightful) 130

Furthermore, local access pretty much is the end of the road anyway. Boot from the right CD with a custom filesystem driver that ignores the HD's filesystem permissions (yet lets you set them any way you want), and the system is now wide open. Replace a few choice commands that you know are going to run, and bang, fully compromised. And that's just one of the many easy ways in as the system stands. You can also copy off the entire HD, or for that matter, erase it. Or both. You can compromise a command for a way in, copy an otherwise encrypted volume and walk off with it, break the encryption at your leisure, then use the previously installed compromise to get in and cause mayhem.

If you don't have physical security and there is any kind of local threat of compromise, you could become toast at any time. These kinds of "threats" are insignificant in the larger scheme of things. If you need local security, the only sufficient mechanism is to physically deny access to the computer.

Comment Re:Commission (Score 1) 634

Google routinely contacts everyone who has been through their hiring process before. I applied when I was a PhD student and was rejected, but started getting calls from them after six months, and got them every six months after that. When I was a bit bored, I let them interview me again (a free trip to Paris to visit friends, not California in my case, and since I stayed with friends instead of in a hotel, they paid for a nice meal out to thank my friends rather than for a nice hotel room). I turned them down that time, but they still call me every few months. Saying yes on those calls is basically the same as reapplying -- it just sticks you into step 1 of the hiring process, and they still want you to send them an up-to-date CV and other things.

Comment Re:The 19 year old is a lunatic (Score 1) 150

At a single core, we have a 128KB multibanked scratchpad memory, which you can think of as just like an L1 cache but smaller and lower latency. We have one cycle latency for a load/store from your registers to or from the scratchpad.

Note that a single-cycle latency for L1 is not that uncommon in in-order pipelines - the Cortex A7, for example, has single-cycle access to L1.

That scratchpad is physically addressed, and does not have a bunch of extra (and in our opinion, wasted) logic to handle address translations,

The usual trick for this is to arrange your cache lines such that your L1 is virtually indexed and physically tagged, which means that you only need the TLB lookup (which can come from a micro-TLB) on the response. If you look at the cache design on the Cortex A72, it does a few more tricks that let you get roughly the same power as a direct-mapped L1 (which has very similar power to a scratchpad) from an associative L1.
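To make the VIPT trick concrete, here's a rough sketch of the address split. The geometry (32KB, 8-way, 64-byte lines, 4KB pages) is illustrative, not from either post; any geometry where way size <= page size works the same way:

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative geometry: 32KB, 8-way, 64-byte lines, 4KB pages.
 * Bytes per way = 32KB / 8 = 4KB = page size, so the set index
 * lies entirely within the page-offset bits, which the virtual
 * and physical address share. */
#define OFFSET_BITS 6                 /* 64-byte lines */
#define INDEX_BITS  6                 /* 4KB per way / 64B = 64 sets */
#define NUM_SETS    (1u << INDEX_BITS)

/* Index comes from the virtual address, so the set can be read
 * in parallel with the TLB lookup. */
static uint32_t set_index(uint64_t vaddr) {
    return (uint32_t)((vaddr >> OFFSET_BITS) & (NUM_SETS - 1));
}

/* Tag comes from the physical address, so the (micro-)TLB result
 * is only needed to compare tags on the response path. */
static uint64_t tag(uint64_t paddr) {
    return paddr >> (OFFSET_BITS + INDEX_BITS);
}

int main(void) {
    uint64_t vaddr = 0x7ffe1234, paddr = 0x0123a234; /* same low 12 bits */
    printf("set %u, tag 0x%llx\n", set_index(vaddr),
           (unsigned long long)tag(paddr));
    return 0;
}
```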

If the address requested by a core is not in its own scratchpad's range, it goes to the router and hops on the NoC until it gets there... with a one cycle latency per hop

To get that latency, it sounds like you're using the NoC topology that some MIT folks presented at ISCA last year. I seem to remember that it was pretty easy to come up with cases that would overload their network (propagating wavefronts of messages) and end up breaking the latency guarantees. It also sounds like you're requiring physical layout awareness from your jobs, bringing NUMA scheduling problems from the OS (where they're hard) into the compiler (where they're harder).
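For anyone who hasn't played with mesh NoCs: with dimension-ordered (XY) routing and one cycle per hop, the zero-contention latency is just the Manhattan distance between tiles. A toy sketch (the 8x8 grid is my assumption, not anything from the post):

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical 8x8 tile grid; the post only promises one cycle
 * per hop, not a topology size. */
#define MESH_W 8

/* XY routing: hop count is the Manhattan distance, so best-case
 * remote-scratchpad latency is hops * 1 cycle. Contention (e.g. a
 * wavefront of messages converging on one tile) queues at the
 * routers and breaks this bound. */
static int hops(int src, int dst) {
    int sx = src % MESH_W, sy = src / MESH_W;
    int dx = dst % MESH_W, dy = dst / MESH_W;
    return abs(dx - sx) + abs(dy - sy);
}

int main(void) {
    printf("tile 0 -> tile 63: %d hops each way\n", hops(0, 63));
    return 0;
}
```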

Building a compiler for this sounds like a fun set of research problems (if you're looking for consultants, my rates are very reasonable! Though I have a different research architecture that presents interesting compiler problems to occupy most of my time).

Oh, one more quick question: Have you looked at Loki? The lowRISC project is likely to include an implementation of those ideas and it sounds as if they have a lot in common with your design (though also a number of significant differences).

Comment Re:The 19 year old is a lunatic (Score 2) 150

Prefetching in the general case is non-computable, but a lot of accesses are predictable. If the stack is in the scratchpad, then you're really only looking at heap accesses and globals for prefetching. Globals are easy to statically hint and heap variables are accessed by pointers that are reachable. It's fairly easy for each function that you might call to emit a prefetch version that doesn't do any calculation and just loads the data, then insert a call to that earlier. You don't have to get it right all of the time, you just have to get it right often enough that it's a benefit.
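Something like this, for a linked-list sum (a sketch only; __builtin_prefetch is the GCC/Clang spelling I know, and whatever hint instruction their ISA has would go in its place):

```c
#include <stddef.h>

struct node { struct node *next; double value; };

/* The real work: walk a list and sum it. */
static double list_sum(const struct node *n) {
    double s = 0.0;
    for (; n != NULL; n = n->next)
        s += n->value;
    return s;
}

/* The compiler-emitted "prefetch version": the same pointer chase
 * with the calculation stripped out, just touching the lines the
 * real function will need. */
static void list_sum_prefetch(const struct node *n) {
    for (; n != NULL; n = n->next)
        __builtin_prefetch(n->next, 0 /* read */, 1);
}

double sum_with_prefetch(const struct node *head) {
    list_sum_prefetch(head); /* the compiler would insert this call
                                earlier, overlapped with other work */
    return list_sum(head);
}
```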

For prefetching vs. eviction, it's a question of window size. Even with no prefetching, most programs exhibit a lot of locality of reference, so caches work pretty well - it doesn't matter that you take a miss on the first access, because you hit on the next few dozen (and in a multithreaded chip, you just let another thread run while you wait) - but if you're evicting data too early then it's a problem. A combination of LRU / LFU works well, though all of the good algorithms in this space are patented. Although issuing prefetch hints is fairly easy, the reason most compilers don't is that there's a good chance of accidentally pushing something else out of the cache. That said, if they're targeting HPC workloads, then just running them in a trace and using that for hinting would probably be enough for a lot of things; see the LRU sketch below.
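For reference, the LRU half of that is only a few lines (illustrative 4-way set with full timestamps; real hardware would use per-set age bits or a pseudo-LRU tree):

```c
#include <stdint.h>

#define WAYS 4

struct line { uint64_t tag; int valid; uint64_t last_used; };

/* Pick the victim in one set: any invalid line first, otherwise
 * the least-recently-used one. An LFU policy would compare access
 * counts instead; the good LRU/LFU blends are, as noted, patented. */
static int lru_victim(const struct line set[WAYS]) {
    int victim = 0;
    for (int w = 0; w < WAYS; w++) {
        if (!set[w].valid)
            return w;                       /* free slot, no eviction */
        if (set[w].last_used < set[victim].last_used)
            victim = w;
    }
    return victim;
}
```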

I heard a nice anecdote from some friends at Apple a while ago. They found that one of their core frameworks was getting a significant slowdown on their newer chip. The eventual cause was quite surprising. In the old version, they had a branch being mispredicted, and a load speculatively executed. The correct branch target was identified quite early, so they only had a few cancelled instructions in the pipeline. About a hundred cycles later, they hit the same instruction and this time ran it correctly. With the new CPU, the initial branch was correctly predicted. This time, when they hit the load for real, it hadn't been speculatively executed and so they had to wait for a cache miss.

Also, if you're trying to create a parallel system with manual caches... good luck. Cache coherency is a pain to get right, but it's then fundamental to most modern parallel software. Implementing the shootdowns in software is going to give you a programming model that's horrible.
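To see why, here's what a trivial producer/consumer looks like when the programmer owns coherency. cache_flush and cache_invalidate are made-up names for whatever writeback/invalidate primitives such a machine would expose:

```c
#include <stddef.h>

/* Hypothetical primitives on a software-coherent machine; the
 * names are invented for illustration. */
extern void cache_flush(const void *addr, size_t len);      /* write back */
extern void cache_invalidate(const void *addr, size_t len); /* discard copy */

static int shared_buf[256];
static volatile int ready;

/* Producer: every store to shared data needs an explicit write-back
 * before the flag is raised; forget one flush anywhere in the
 * program and the consumer silently reads stale data. */
void produce(void) {
    for (int i = 0; i < 256; i++)
        shared_buf[i] = i;
    cache_flush(shared_buf, sizeof shared_buf);
    ready = 1;
    cache_flush((const void *)&ready, sizeof ready);
}

/* Consumer: has to invalidate its own stale copies before reading. */
int consume(void) {
    do {
        cache_invalidate((const void *)&ready, sizeof ready);
    } while (!ready);
    cache_invalidate(shared_buf, sizeof shared_buf);
    return shared_buf[0] + shared_buf[255];
}
```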

And finally there's the problem that doing it in software makes it serial. The main reason that we use hardware page-table walkers in modern CPUs is not that they're much better than a software TLB fill, it's that it's much easier to make them run completely asynchronously with the main pipeline. The same applies to caches.
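For comparison, a software TLB fill is just a radix-tree walk like this (hypothetical two-level 32-bit layout with a 10/10/12 split; the point is that each load depends on the previous one, so the main pipeline sits idle, whereas a hardware walker issues the same loads off to the side):

```c
#include <stdint.h>

#define PTE_PRESENT 0x1u

/* Hypothetical 32-bit layout: 10-bit L1 index, 10-bit L2 index,
 * 12-bit page offset. Returns 0 on a missing mapping. */
uint32_t sw_tlb_fill(const uint32_t *l1_table, uint32_t vaddr) {
    uint32_t l1e = l1_table[(vaddr >> 22) & 0x3ff];   /* serial load 1 */
    if (!(l1e & PTE_PRESENT))
        return 0;
    const uint32_t *l2 = (const uint32_t *)(uintptr_t)(l1e & ~0xfffu);
    uint32_t l2e = l2[(vaddr >> 12) & 0x3ff];         /* serial load 2 */
    if (!(l2e & PTE_PRESENT))
        return 0;
    return (l2e & ~0xfffu) | (vaddr & 0xfffu);        /* physical address */
}
```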
