Comment Re:Legitimate use of legacy admissions (Score 1) 62
They are currently at universities slightly below my alma mater (about 5 spots in the US News rankings).
Not honoring legacy comes with a price: when my alma mater rejected my kids, I moved a multi-million-dollar donation to a different organization.
Jeez, I hate this kind of butterfly-squashing blowharding. I hate it for the same reason I hate those books and TV shows that blather on about how all the technology in Star Trek and Star Wars isn't possible. Who gives a sh*t? It's FICTION, people! Just STFU and enjoy the ride.
It's the difference between science fiction and plain fiction.
Why don't we pick another molecule which is similar to DNA and not biologically active?
Ah, I must have been lucky with my applications and the GPU. I actually started with Mesa OpenCL which seemed fine at first, but there were timeouts on my longer-running kernels, and ROCm has none of that.
I do fairly simple but heavy numerical stuff, and it turns out AMD cards are much better for these uses. For example, on AMD cards double-precision throughput is only half of single precision, whereas DP on Nvidia consumer cards is much slower. It's easy to check this since Nvidia also runs OpenCL, and in my experience ROCm isn't particularly slow, quite the contrary.
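The SP-vs-DP comparison above is easy to reproduce. A minimal CPU-side sketch with NumPy (the GPU version would run the same timing loop through an OpenCL kernel; this only illustrates the measurement, not any particular card's ratio):

```python
# Compare single- vs double-precision matmul throughput.
import time
import numpy as np

def gflops(dtype, n=512, reps=10):
    """Rough GFLOP/s estimate for an n x n matmul at the given dtype."""
    a = np.random.rand(n, n).astype(dtype)
    b = np.random.rand(n, n).astype(dtype)
    a @ b  # warm-up
    t0 = time.perf_counter()
    for _ in range(reps):
        a @ b
    dt = time.perf_counter() - t0
    return 2 * n**3 * reps / dt / 1e9  # 2*n^3 flops per matmul

sp = gflops(np.float32)
dp = gflops(np.float64)
print(f"SP: {sp:.1f} GFLOP/s, DP: {dp:.1f} GFLOP/s, SP/DP ratio: {sp/dp:.2f}")
```

On hardware with full-rate FP64 the ratio lands near 2; on consumer Nvidia cards it is typically much larger.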
The ROCm platform is targeted at DATACENTER GPUs. As soon as any consumer GPU becomes affordable it's quietly dropped from the next ROCm release.
This doesn't seem right at all. I'm using ROCm drivers for OpenCL applications right now on an RX 6600, an affordable consumer GPU. ROCm is open source (see the Wikipedia link in the summary), so there isn't any immediate danger of losing support for given hardware; you can always fork it and backport fixes, etc.
AMD also provides closed source drivers so perhaps you're referring to them?
NPUs will meet the same fate, as the novelty wears off. AMD eventually dropped their 3DNow instructions too.
The idea of 3DNow instructions hasn't gone anywhere, it's just that Intel pushed their own version called SSE, which was eventually adopted by AMD as well. From what I understand, NPUs, tensor cores etc. are just continuing the trend of wider SIMD units and shouldn't be too application-specific. It's just marketing that likes to name them after the most popular application, just like AMD's floating point SIMD unit was named for 3D graphics.
Nothing in CPU design has changed as a result of any of the marketing of the past 15 years or so. The multimedia stuff did result in some instruction set changes, but that more or less ended a while ago. Virtualization support was added back around the same time. If they add an AI instruction set that somehow makes sense in some way, then i'll notice, but I see no sign of that.
Intel's AMX instructions in their 4th-gen Xeons were designed for AI, enabling huge amounts of data to be loaded into a register array for matrix multiplication. This shit easily saturates all available memory bandwidth. Before that there were the VNNI instructions, which AMD also supports and which were explicitly designed for AI.
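For the curious, the core VNNI operation (VPDPBUSD) is simple: multiply four unsigned 8-bit values by four signed 8-bit values and add the 32-bit sum into an accumulator, per SIMD lane. A pure-Python emulation of one lane, just to show the semantics:

```python
# Emulate one lane of VNNI's VPDPBUSD: u8 x s8 dot product of four
# pairs, accumulated into a 32-bit integer. This is the inner step of
# int8 neural-network inference.
def vpdpbusd_lane(acc, u8x4, s8x4):
    # u8x4: unsigned bytes in 0..255; s8x4: signed bytes in -128..127
    return acc + sum(u * s for u, s in zip(u8x4, s8x4))

acc = vpdpbusd_lane(0, [10, 20, 30, 40], [1, -2, 3, -4])
print(acc)  # 10 - 40 + 90 - 160 = -100
```

A 512-bit AVX-512 register holds 16 such lanes, so one instruction performs 64 multiply-accumulates.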
The way I see this, it's just SIMD units getting wider. Some of us have been working with large vectors and matrices for decades, and wonder what's so "neural" about the latest matrix multiplication unit. It's nice to see the wide-SIMD trend continue, as long as the hardware doesn't get too application-specific, so it remains useful after the current AI craze wears off.
People who say this aren't routinely moving around six or seven people and a dog. I guess we could take a car and a crossover SUV instead, but of course that doubles the energy used and takes two parking spaces.
They got into your hovercraft again, I presume?
I've had it with these motherfucking eels on this motherfucking hovercraft!
Always draw your curves, then plot your reading.