Comment Re:Infectocausation (Score 1) 49
autoimmune reactions can then follow, setting off a process; very much so
I made up the word, in lieu of "infectious etiology", to describe the idea that probably many, many more diseases are infection-triggered than currently known; prime candidate: HSV.
the article misses the observation that the power of HR in big business has probably receded widely compared to, e.g., the 1990s. While HR used to be a pillar of its own, many HR responsibilities, including the core say in hiring/firing/promotion decisions, have probably passed to line management. It would be interesting to know where this trend came from. The article's point that HR is bigger than ever may only be true in terms of headcount.
AMD's been doing this for forty years (ATI was founded in 1985) and their software support is still dogshit compared with nVidia. Unless Qualcomm is also rolling out first-class support for the top 20 open source projects using its GPUs nobody is going to give a shit. And seeing as Qualcomm's ARM laptops flopped precisely because of poor software support I expect no better from this.
good luck: this is not a 3dfx gaming GPU with a joystick port but an accelerator
there is an NVDA bubble, as the GPUs are being sold at outrageous premiums. It's bubbling left and right with de facto vendor financing (2000 déjà vu), though. Which is not sustainable since a) competition will inevitably kick in, especially given the enormous profit potential as of today, b) only a fraction of GPU functionality is needed for the linear algebra operations in LLMs, c) hardware optimized for transformers, including the KV cache, is coming, and not only from Nvidia, d) hyperscaler customers are showing reluctance to pay those premiums despite the CUDA bastion, at last turning to other sources (AMD as of this week) besides in-house silicon.
I actually bet against NVDA based on the expectation that DGEMM-type matrix multiplication would soon(er) be done in hardware optimized for LLMs, plus putting part of the transformer itself into hardware. An expensive undertaking, yes, but given the market size and the overpayment for Nvidia products, it's incomprehensible that things have been so slow. I don't get the technical rationale for why some of the heavy GPU buyers don't put effort into making PyTorch, Transformers and more of the stack more amenable to non-CUDA accelerators. In HPC, matrix-matrix and matrix-vector operations have long been abstracted away behind BLAS libraries. Here, even with KV caching and whatnot, the GPUs are completely overengineered relative to what LLMs need feature-wise.
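To illustrate the BLAS point: the call site expresses the operation once, and whatever BLAS the runtime is linked against (OpenBLAS, MKL, a vendor GPU library) supplies the kernel, so nothing at the call site is CUDA-specific. A minimal sketch using NumPy (the shapes and function name are made up for illustration):

```python
import numpy as np

def attention_scores(q, k):
    # QK^T is just a GEMM; NumPy dispatches it to whichever BLAS it was
    # built against, so the accelerator backend is swappable underneath.
    return q @ k.T

q = np.random.rand(4, 64).astype(np.float32)
k = np.random.rand(4, 64).astype(np.float32)
s = attention_scores(q, k)
print(s.shape)
```

The same source runs unchanged against any conforming backend; that is the kind of abstraction the comment argues the LLM stack lacks relative to CUDA.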
To emphasize: it is not EU regulatory burden being called out here. Actually, more EU action would be needed, for a liquidity-attracting common capital market among other things.
The point is rather that many countries went overboard with an administrative mindset applied to everything. It's toxic for innovation, business and everyday life. The balance of government vs. private is just out of whack: the share of GDP that is government-related and the sheer number of people in public-administration jobs are off. Everything is done with more rules, red tape and administration.
most EU countries have price controls on books; that is, every retailer, including Amazon, has to charge the same price for a given new book. This is braindead, as book prices follow a simple supply-demand relationship, so fixing them leads to fewer books being bought. The rationale given when it was last re-legislated was that it should protect authors and cultural heritage, which is economic nonsense but works in politics.
"there is a world market for maybe five computers" [Thomas Watson Sr.]
"Web browsers are for consumers, Lotus Notes for the enterprise" (in meaning) [Irving Wladawsky-Berger, early IBM Internet Division VP, on web browsers]. IBM WebExplorer was canceled.
politics. We have launchers nowhere near current technology, plus a failure to reach agreement to at least try to catch up, so what do we do: PR. The effort sounds good; that's about it.
BS, and a correction to my earlier post: it's probably on the order of O(1 kWh) for a 100k-token context, for the SGEMV operations alone.
it's now almost 2 years after the release of ChatGPT. There are many vendors pitching 10x and more, from ASICs to FPGAs to wafer-scale. It's hard to understand that in 2 years, matrix-vector hardware optimized for transformer training and inference is not all over the place in lieu of the absurdly priced CUDA devices.
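Why matrix-vector in particular: at batch size 1, each autoregressive decode step reduces every linear layer to a matrix-vector product (GEMV), which is memory-bandwidth bound rather than compute bound. A toy sketch with made-up layer sizes:

```python
import numpy as np

# Hypothetical layer dimensions for illustration only.
d_model, d_ff = 1024, 4096
W_up = np.random.rand(d_ff, d_model).astype(np.float32)  # one weight matrix
x = np.random.rand(d_model).astype(np.float32)           # one token's activation

# One decode step at batch size 1: a GEMV, not a GEMM.
h = W_up @ x

# FLOPs = 2 * d_ff * d_model, while the bytes moved are dominated by W_up
# itself, so the op is limited by memory bandwidth, not arithmetic throughput.
flops = 2 * d_ff * d_model
```

This is the workload shape the comment argues transformer-specific hardware should target, without carrying the rest of a general-purpose GPU's feature set.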
Submitting a more-than-100k-token context right now: just the first step of initially processing it is probably more than 0.5 kW.
On the positive side, it provides funding and a community for fusion.
If it wasn't for Newton, we wouldn't have to eat bruised apples.