Comment Re:Intel (Score 1) 142

I bought an AMD E-350 motherboard and a Chenbro case with 4 removable drive bays. It takes a slimline optical drive (which actually holds a BD-RE drive, because they weren't much more expensive than DVD drives, although in the three years since I bought it I've yet to burn a single BD).

It's running FreeBSD, booting from a RAID-Z array (so cheap snapshots, block-level checksums, single drive failure recovery, and all of that good stuff). The GPU is now supported and so it runs XBMC connected to my projector for video playback too.

There's also an e-SATA port that I should be using for external backups, but am not currently...

Comment Re:Licensing issues with opening the code (Score 1) 142

That doesn't really make sense. The architectures of the three main GPU vendors are sufficiently different that it's highly unlikely that they could benefit much from tricks in each other's drivers (which are basically compiler optimisations these days, since the performance-critical part of the driver is the shader compiler). nVidia uses something that looks more or less like a scalar CPU (with some SIMD), but with SIMT, so that if you run threads in lockstep they share a single decode but execute several threads' worth of work at once. AMD is a VLIW-inspired design. Intel is a more classical SIMD architecture, but with the tweak that the vector register set is 2D and each operand encodes a start and a stride, so you can take vertical or diagonal stripes through the register set and don't have to do vector permutes. Intel and AMD put more of the parallelisation into the compiler (i.e. the drivers) than nVidia, but the transforms that they do are quite different.
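
To make the Intel regioning point concrete, here's a minimal sketch (Python, with made-up parameter names; the real encoding lives in Intel's GEN ISA documentation) of how a (base, vertical stride, width, horizontal stride) descriptor walks a 2D register file, so vertical or diagonal stripes fall out of the addressing mode rather than needing permutes:

    def region_elements(base, vstride, width, hstride, count):
        """Flat element offsets selected by a region descriptor.

        base    -- offset of the first element
        vstride -- elements to skip between rows
        width   -- elements read per row
        hstride -- elements to skip within a row
        count   -- total elements to read
        """
        offsets = []
        for i in range(count):
            row, col = divmod(i, width)
            offsets.append(base + row * vstride + col * hstride)
        return offsets

    print(region_elements(0, 8, 8, 1, 8))  # contiguous row: [0..7]
    print(region_elements(0, 8, 1, 0, 8))  # vertical stripe: [0, 8, 16, ...]
    print(region_elements(0, 9, 1, 0, 8))  # diagonal stripe: [0, 9, 18, ...]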

There's more of a chance that AMD's drivers would help Broadcom than anyone else's, but AMD and Broadcom aren't really in the same markets, especially since AMD sold its mobile GPU line to Qualcomm (Radeon became Adreno).

Comment Re:Intel (Score 1) 142

I chose AMD because Intel is very aggressive about market segmentation. I wanted a mini-ITX board with a low-power CPU and at least 4 SATA ports (so that I could have RAID-Z and an optical drive). Intel Atom boards all restricted the number of SATA ports (a quick look now suggests that they all come with 2), so if I wanted Intel and 4 SATA ports I was pushed to the more expensive i3 range.

Comment Re:Stupid (Score 1) 217

Flat panel screens have the same yield issues as ICs (the process for creating them is vaguely similar). If you go from a 4" screen to an 8" screen, you quadruple the area, which quadruples the expected number of manufacturing defects per panel, and each defect means a dead or stuck pixel. The result is that your yield drops off sharply as panels get bigger.
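
The standard back-of-the-envelope version of this is the Poisson yield model, Y = exp(-D x A) for defect density D and panel area A (a sketch with made-up numbers; real fabs use fancier models):

    from math import exp

    def panel_yield(defect_density, area):
        """Fraction of defect-free panels under a Poisson defect model."""
        return exp(-defect_density * area)

    D = 0.02              # defects per square inch (illustrative)
    small = 7.7           # rough area of a 4" (diagonal) 4:3 panel, sq in
    big = 4 * small       # doubling the diagonal quadruples the area

    print(panel_yield(D, small))  # ~0.86
    print(panel_yield(D, big))    # ~0.54: many more scrapped panels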

Eventually, someone will figure out a way of creating really big panels and then cutting them to size, and then we'll have a large variety of screen sizes depending on where the defects happen to lie in a particular run.

Comment Re:Industry standard dock. (Score 1) 217

I'd love to see Thunderbolt connections on phones. The connector is just about small enough for a phone and provides DisplayPort and enough PCIe bandwidth for a disk controller and a USB hub. Perhaps someone could come up with a connector that was slightly thinner (and wider) and contained Thunderbolt and power.

Comment Re: approximately the resolution of an adult eye @ (Score 1) 217

300dpi for print is actually a lot lower than 300ppi for displays. Each dot in print is, depending on your technology, either black, cyan, magenta, or yellow, or one of a very small number (typically 4-16) of shades of these colours. For a display, you have at least 2^16 shades of colour for each pixel. This is why the output from a 300dpi inkjet looks a lot worse than a 70dpi monitor. For print you typically need something like 2400dpi, because the printer has to build each perceived pixel out of a grid of dots (halftoning); that comes close to approximating a 300ppi display.
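
The arithmetic behind that: if each perceived pixel is an n x n cell of dots, the cell can hold anywhere from 0 to n^2 inked dots, giving n^2 + 1 shades per ink (a sketch that ignores dithering tricks):

    def halftone_levels(printer_dpi, target_ppi):
        """Shades per ink achievable with n x n halftone cells."""
        n = printer_dpi // target_ppi    # dots per pixel, per axis
        return n * n + 1

    print(halftone_levels(300, 300))   # 2: each dot is a whole pixel
    print(halftone_levels(2400, 300))  # 65 shades per ink

Even 65 shades per ink is far coarser than a modern display's 256 levels per channel, which is why print needs such absurd-sounding dot densities.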

Personally, I find you hit diminishing returns after about 200ppi. It's easy to tell 70-100ppi apart from 200ppi, but 400ppi is only better if you look really carefully. 600ppi is a marketing gimmick (and will need more frame buffer memory and more CPU power to use, so is likely to drain the battery faster). On the plus side, hopefully this will mean that the 225ppi panels will become cheaper and I'll be able to get a cheap phone with a nice screen...
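
On the frame buffer point: memory (and fill rate) scale with the square of the pixel density. A quick sketch for a hypothetical 5" 16:9 panel at 32 bits per pixel:

    def framebuffer_bytes(diag_inches, ppi, aspect=(16, 9), bpp=32):
        """Bytes for one frame at a given diagonal and pixel density."""
        w, h = aspect
        diag = (w * w + h * h) ** 0.5
        width_px = round(diag_inches * ppi * w / diag)
        height_px = round(diag_inches * ppi * h / diag)
        return width_px * height_px * bpp // 8

    for ppi in (200, 300, 600):
        mib = framebuffer_bytes(5, ppi) / 2**20
        print(f"{ppi}ppi: {mib:.1f} MiB per frame")
    # 600ppi needs 9x the memory and bandwidth of 200ppi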

Comment Re:First hand knowledge (Score 1) 173

That doesn't quite make sense as you've described it, because you can still only push an operation into the ALU, or get a result out, on cycle boundaries relative to the rest of the pipeline. It may run in a different clock domain (modern Intel CPUs have several clock domains for various things) for internally pipelined operations, but that still gives you a best case of starting an operation in one cycle and getting the result back the next (and, more plausibly, two cycles later). For multi-cycle operations, like divide or multiply, it would give a noticeable speedup, because the throughput would double (and the latency, measured in external cycles, would shrink), but for single-cycle operations there'd be no detectable difference.
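
A toy cycle-accounting model of that argument (purely illustrative numbers, not any real Intel pipeline): if the ALU's internal clock is doubled but results can only cross back on whole external cycles, only multi-cycle operations get visibly faster:

    from math import ceil

    def external_latency(internal_cycles, clock_multiplier=2):
        """External cycles for one op in a faster-clocked ALU.

        Results are handed back on external cycle boundaries, so
        everything rounds up to a whole external cycle.
        """
        return max(1, ceil(internal_cycles / clock_multiplier))

    for name, cycles in [("add", 1), ("mul", 4), ("div", 20)]:
        print(name, external_latency(cycles, 1),
              "->", external_latency(cycles, 2))
    # add 1 -> 1 (no visible change); mul 4 -> 2; div 20 -> 10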

Comment Re:More lip service (Score 1) 141

This is why Ben Laurie and others at Google (and elsewhere) are pushing certificate transparency. This is basically a mechanism for checking that the certificate you are shown for server X is the same one everyone else is shown. It means that the NSA has to either steal the certificate that Google uses, or MITM all connections to Google, to be able to compromise the connection without detection. Or just ask Google to do it on their server...
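
For the curious: the core of certificate transparency is an append-only Merkle tree of certificates, and a log can hand you a short inclusion proof that your certificate is in the tree whose root hash everyone else is gossiping about. Here's a minimal sketch of verifying such a proof, modelled on RFC 6962's hashing but simplified to a perfect (power-of-two) tree; not a real CT client:

    import hashlib

    def _h(data):
        return hashlib.sha256(data).digest()

    def leaf_hash(entry):
        # RFC 6962 prefixes leaves and interior nodes differently,
        # so a leaf can never masquerade as an internal node.
        return _h(b"\x00" + entry)

    def node_hash(left, right):
        return _h(b"\x01" + left + right)

    def verify_inclusion(entry, index, proof, root):
        """Walk a bottom-up audit path to the published root hash."""
        h = leaf_hash(entry)
        for sibling in proof:
            if index % 2:                  # we are a right child
                h = node_hash(sibling, h)
            else:                          # we are a left child
                h = node_hash(h, sibling)
            index //= 2
        return h == root

    # Tiny four-leaf demo tree
    lh = [leaf_hash(c) for c in (b"certA", b"certB", b"certC", b"certD")]
    n01, n23 = node_hash(lh[0], lh[1]), node_hash(lh[2], lh[3])
    root = node_hash(n01, n23)
    print(verify_inclusion(b"certC", 2, [lh[3], n01], root))  # True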

Comment Re:First hand knowledge (Score 1) 173

The Core 2 and later were entirely new 64-bit microarchitectures; the Core 1 was the last that began life as a 32-bit design. That said, the integer data paths had been 64-bit since the Pentium. I don't really understand what the grandparent is claiming (you don't do things in half a clock cycle - the clock cycle is the discrete unit of time within a CPU). I suspect that he's confusing it with the Atom's implementation of SSE, which split each 128-bit operation into two 64-bit halves and dispatched one half per cycle, so if you had two SSE operations back to back in the instruction stream there'd be an extra cycle of latency, but you still got the performance improvement from denser instructions at the cost of only one extra cycle before the result was available. I had a student implement something similar last year, and they found that this approach gave a very good ratio of performance to die area (although on a more modern x86 chip you have a lot of spare transistors to play with, so the saving no longer makes much sense, and it doesn't help much with power consumption, as the SSE execution units need to be powered for longer and you need a more complex micro-op path).
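
To illustrate the trade-off being described (a toy model, not Atom's actual scheduler): splitting each 128-bit operation into two 64-bit halves adds one cycle of latency per op, but a stream of SSE instructions still beats the equivalent scalar code on decode bandwidth:

    def stream_cycles(n_ops, halves_per_op):
        """Cycles to retire n independent ops when one half-op issues
        per cycle; each result is ready after its last half."""
        return n_ops * halves_per_op

    print(stream_cycles(1, 1))  # full 128-bit datapath: 1 cycle
    print(stream_cycles(1, 2))  # split datapath: 2 cycles (one extra)
    print(stream_cycles(8, 2))  # 16 cycles for 8 SSE ops -- the same
                                # ALU throughput as 16 scalar 64-bit
                                # ops, but half the instructions to
                                # fetch and decode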

Comment Re:Fraud? Try Idiot. (Score 2) 99

Exactly. Unless it's an experiment to see how well peer review works, putting it in Nature is pretty stupid: you're pretty much guaranteeing that people will try to reproduce your work and that you'll be exposed. You could probably get away with it in a less prestigious journal, but if people start citing it then there's a good chance that someone will try to reproduce it, especially when the novelty of the article is that it's an easy way of doing something that loads of people want to do. And if it isn't read and cited enough for people to want to reproduce it, it's pretty worthless (from a research-career perspective) as a publication...

Comment Re:Does that mean Microsoft Network is better ? (Score 4, Interesting) 40

"Finally, people who go to Microsoft Research tend to disappear and never be heard of again. No one knows why."

That's only true if you never go to any computer science conferences: if you do, you'll find a lot of good papers written by MSR people. They do, however, have an appalling track record of turning those papers into products. This has improved a bit over the past few years, but before that MS and MSR were effectively run as two different companies, and ideas from MSR were unlikely to end up in MS products.

The cynical explanation is that MSR exists to provide talented people with a well-funded sandbox where they will play and not create companies that compete with MS. The more likely explanation is that MSR has a budget of around $5bn annually, has separate premises, and does not provide any incentive to its employees to get their work into products.
