Comment Re:Can't you just f*cking enjoy it? (Score 1) 51

Jeez, I hate this kind of butterfly-squashing blowharding. I hate it for the same reason I hate those books and TV shows that blather on about how all the technology in Star Trek and Star Wars isn't possible. Who gives a sh*t? It's FICTION, people! Just STFU and enjoy the ride.

It's the difference between science fiction and plain fiction.

Comment Re:Mechanism of Bluetooth attack explained (Score 1) 93

I once tested this idea with a large coil I placed against the ceiling, hoping that my upstairs neighbour's equipment would be caught within the near field. I then played some horrible schlager through the coil. Unfortunately, the only effect I got was on my side, with a large metal rack vibrating to the tune.

Comment Re:"CUDA implementation built for Radeon GPUs" (Score 1) 29

Ah, I must have been lucky with my applications and the GPU. I actually started with Mesa's OpenCL, which seemed fine at first, but my longer-running kernels kept hitting timeouts; ROCm has none of that.

I do fairly simple but heavy numerical stuff, and it turns out AMD cards are much better suited to it. For example, double-precision float speed is only half that of single precision, whereas DP on Nvidia consumer cards is much slower. This is easy to check, since Nvidia runs OpenCL too and the same kernels can be timed on both; in my experience ROCm isn't particularly slow, quite the contrary.
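For anyone who wants to reproduce that comparison, here's a minimal sketch, assuming pyopencl is installed and the device exposes cl_khr_fp64 (kernel names and iteration counts are arbitrary). It times the same arithmetic kernel in float and double on whatever OpenCL stack happens to be present:

    # Time the same kernel in single and double precision on the default OpenCL device.
    import time
    import numpy as np
    import pyopencl as cl

    SRC = """
    #pragma OPENCL EXTENSION cl_khr_fp64 : enable

    __kernel void fma_f32(__global float *x) {
        int i = get_global_id(0);
        float a = x[i];
        for (int k = 0; k < 10000; k++) a = a * 1.000001f + 0.000001f;
        x[i] = a;
    }

    __kernel void fma_f64(__global double *x) {
        int i = get_global_id(0);
        double a = x[i];
        for (int k = 0; k < 10000; k++) a = a * 1.000001 + 0.000001;
        x[i] = a;
    }
    """

    ctx = cl.create_some_context()      # picks whatever platform is installed (ROCm, Mesa, Nvidia)
    queue = cl.CommandQueue(ctx)
    prog = cl.Program(ctx, SRC).build()

    n = 1 << 20
    for name, dtype in (("fma_f32", np.float32), ("fma_f64", np.float64)):
        host = np.ones(n, dtype=dtype)
        buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE | cl.mem_flags.COPY_HOST_PTR, hostbuf=host)
        kernel = getattr(prog, name)
        kernel(queue, (n,), None, buf)  # warm-up run to exclude compile/transfer costs
        queue.finish()
        t0 = time.time()
        kernel(queue, (n,), None, buf)
        queue.finish()
        print(name, "took", round(time.time() - t0, 4), "s")

The ratio of the two timings gives a rough DP-to-SP throughput figure, and the identical script runs unchanged on an Nvidia box for comparison.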

Comment Re:This is dead in the water pretty much (Score 1) 29

I've refused to learn CUDA as I don't want my code to be at the mercy of a single GPU maker. The project looks interesting at first glance, but it seems like they'd just be playing catch-up with new CUDA developments. Open standards are much nicer, and besides OpenCL, I've got the impression that ROCm itself (which is open source) provides a lot of CUDA-like higher-level functionality.

Comment Re:"CUDA implementation built for Radeon GPUs" (Score 1) 29

The ROCm platform is targeted at DATACENTER GPUs. As soon as any consumer GPU becomes affordable it's quietly dropped from the next ROCm release.

This doesn't seem right at all. I'm using the ROCm drivers for OpenCL applications right now on an RX 6600, an affordable consumer GPU. ROCm is open source (see the Wikipedia link in the summary), so there's no immediate danger of support for a given card disappearing; you can always fork it and backport things, etc.

AMD also provides closed-source drivers, so perhaps you're referring to those?
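If it helps settle which stack is actually driving the card, listing the OpenCL platforms shows it at a glance. This is a sketch assuming pyopencl is installed; the platform names mentioned in the comments are what I'd expect the ROCm and Mesa runtimes to report, not a guarantee:

    # Print every OpenCL platform/device visible on the system.
    # ROCm's runtime typically reports "AMD Accelerated Parallel Processing",
    # while Mesa shows up as "Clover" or "rusticl".
    import pyopencl as cl

    for platform in cl.get_platforms():
        print(platform.name, "|", platform.version)
        for device in platform.get_devices():
            print("   ", device.name, "| driver:", device.driver_version)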

Comment Re:Remember PhysX cards? (Score 1) 70

NPUs will meet the same fate, as the novelty wears off. AMD eventually dropped their 3DNow instructions too.

The idea behind the 3DNow instructions hasn't gone anywhere; it's just that Intel pushed its own version, SSE, which was eventually adopted by AMD as well. From what I understand, NPUs, tensor cores etc. are just continuing the trend of wider SIMD units and shouldn't be too application-specific. It's just marketing that likes to name them after the most popular application, just as AMD's floating-point SIMD unit was named for 3D graphics.
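A toy illustration of that point, in Python: the packed multiply that 3DNow!, SSE and AVX all perform is the same operation, only the number of lanes handled per step changes. The widths below are illustrative, and NumPy is just standing in for the hardware:

    # Same packed multiply, processed 2 (3DNow!-era), 4 (SSE), 8 (AVX) or 16 floats at a time.
    import numpy as np

    def packed_mul(a, b, width):
        out = np.empty_like(a)
        for i in range(0, len(a), width):
            out[i:i + width] = a[i:i + width] * b[i:i + width]  # one "instruction" per chunk
        return out

    a = np.arange(16, dtype=np.float32)
    b = np.full(16, 2.0, dtype=np.float32)
    for width in (2, 4, 8, 16):
        assert np.array_equal(packed_mul(a, b, width), a * b)
    print("same result at every SIMD width")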

Comment Re:Seems far-fetched (Score 1) 70

Nothing in CPU design has changed as a result of any of the marketing of the past 15 years or so. The multimedia stuff did result in some instruction-set changes, but that more or less ended a while ago. Virtualization support was added back around the same time. If they add an AI instruction set that somehow makes sense in some way, then I'll notice, but I see no sign of that.

Intel's AMX instructions in their 4th-gen Xeons were designed for AI: they load large tiles of data into a register array for matrix multiplication. This shit easily saturates all available memory bandwidth. Before that there were the VNNI instructions, which AMD also supports and which were explicitly designed for AI.

The way I see it, it's just SIMD units getting wider. Some of us have been working with large vectors/matrices for decades and wonder what's so "neural" about the latest matrix-multiplication unit. It's nice to see the wide-SIMD trend continue, as long as the hardware doesn't get too application-specific, so it remains useful after the current AI craze wears off.
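For what it's worth, here's roughly what an AMX/VNNI-style tile operation computes, written out in NumPy: an int8-by-int8 matrix multiply accumulated in int32. The tile shape below is illustrative rather than the exact AMX geometry, and there's nothing "neural" in it, just a wide multiply-accumulate:

    # int8 x int8 matrix multiply with int32 accumulation -- the core of a "tile" unit.
    import numpy as np

    M, K, N = 16, 64, 16
    a = np.random.randint(-128, 128, size=(M, K), dtype=np.int8)
    b = np.random.randint(-128, 128, size=(K, N), dtype=np.int8)

    acc = np.zeros((M, N), dtype=np.int32)
    for k in range(K):  # dot-product accumulation, one K-slice per step
        acc += a[:, k, None].astype(np.int32) * b[None, k, :].astype(np.int32)

    # Identical to an ordinary widened matrix multiply.
    assert np.array_equal(acc, a.astype(np.int32) @ b.astype(np.int32))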
