I am not associated with Anthropic or Claude beyond being a very recent paying user. It might sound like I'm shilling, I'm really, really not. I'm just surprised to be this excited by an LLM.
Claude is the most capable coding LLM I've ever used. I paid for it after trialling it for just a couple of hours. I am an AI - and particularly LLM - sceptic. I had used ChatGPT and Copilot, and dabbled very briefly with Gemini, and thought they were fine as Google replacements/augments, but they suck at coding, especially building from the ground up. They seem to expect you to get started and they'll just tag along.
Claude changed my mind about coding LLMs. In my opinion, Claude with Sonnet 4.5 is pretty exceptional. It's certainly not perfect - it makes mistakes, and sometimes gets trapped and needs you to dig it out - but its reasoning, its capabilities, and the sheer speed at which it seems to ingest information are mind-boggling.
I tasked Claude with writing a BSP map viewer for Quake (yes, the 1996 FPS) that runs in a browser. I am an enthusiastic novice programmer at best, and I have zero experience writing 3D code. None. I have a basic familiarity with the Quake engine - I've made some maps for it, and I know the basic file formats - but if I had to learn how to write a BSP parser, how to handle UV mapping, textures and lightmaps (and, frankly, relearn high school maths that I haven't used in 25 years), I'm guessing it would take me somewhere between 8 and 12 months.
I knew that Claude uses the three.js library for 3D rendering and is already very familiar with it. I gave Claude a very basic prompt outlining what I wanted - essentially just render whatever BSP you throw at it, no game logic or anything - and Claude just went off and did it. Seeing its first iteration was genuinely amazing: it wasn't at all feature-complete, but it was functional; you could fly around the textured map with keyboard and mouse controls, and it had thrown in a basic UI. My jaw kinda dropped a bit.
Obviously there's lots of technical information about Quake online, not to mention the entire source code of the original game on GitHub. Over the course of 5 days - without me writing a single line of code, just several hours of prompting - it has managed to make a very capable Quake map viewer with some very nice bells and whistles. I'm still working on it as it keeps hitting the token limits with my basic £18/month subscription, but what it has already generated is extremely impressive.
Currently the project features full texturing with a toggle between various texture filtering levels, lightmaps, entity visualisation and filtering with tagging and bounding boxes, image controls for gamma, brightness and contrast, Quake-style sky and liquid rendering (including mostly Quake-accurate underwater shaders), an embedded texture browser, lines and arrows showing connected entities, wireframe and wire-overlay modes, rendering of the Quake character set from an embedded base64 GIF, basic collision detection, alias models, animations, dynamic lighting, and a lot more I don't recall right now. Again, I have not touched the code even once. I just give it prompts, and if something doesn't work I describe how it doesn't work and expect it to figure it out. All of this in a (currently) ~240KB HTML file (which obviously pulls in three.js from a CDN).
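For anyone curious what "render whatever BSP you throw at it" actually starts from: the BSP29 format used by the original Quake is well documented and pleasantly simple at the top level - a 32-bit little-endian version number (29) followed by a directory of 15 lumps, each an offset/length pair pointing at things like vertices, faces and lightmaps. The viewer's real code is Claude's and I haven't inspected it closely, but a minimal header parse along the lines it must be doing looks something like this:

```javascript
// Minimal sketch of parsing the Quake BSP29 header.
// Layout: int32 version (29), then 15 lump directory entries,
// each { int32 offset, int32 length }, all little-endian.
const LUMP_NAMES = [
  "entities", "planes", "textures", "vertices", "visibility",
  "nodes", "texinfo", "faces", "lighting", "clipnodes",
  "leaves", "marksurfaces", "edges", "surfedges", "models",
];

function parseBspHeader(arrayBuffer) {
  const view = new DataView(arrayBuffer);
  const version = view.getInt32(0, true); // true = little-endian
  if (version !== 29) {
    throw new Error(`Unsupported BSP version: ${version}`);
  }
  const lumps = {};
  for (let i = 0; i < LUMP_NAMES.length; i++) {
    const base = 4 + i * 8; // directory starts right after the version field
    lumps[LUMP_NAMES[i]] = {
      offset: view.getInt32(base, true),
      length: view.getInt32(base + 4, true),
    };
  }
  return { version, lumps };
}
```

From there each lump is sliced out of the same buffer - the vertex lump, for instance, is just a run of 32-bit floats in (x, y, z) triples. The hard parts (BSP tree traversal, face winding, lightmap packing) all come after this, of course.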
I should note that someone has already ported Quake in its entirety to three.js using Claude, so I'm somewhat reinventing the wheel, but the only time I ever referenced that project was pointing Claude to one single source file that handled lightmaps, as Claude was having a hard time getting the implementation right from just random Internet-sourced documentation and "intuition". The only other thing it struggled with repeatedly was the conversion of Quake angles to three.js angles. Even though the formula was extremely simple, it took a long time to get Claude to implement it consistently, and it had to be prompted to standardise it in a utility function instead of scattering near-identical formulas across several places. It did get there eventually, but it required a lot of handholding. When I say a lot, I mean multiple rounds of prompts. It's more about the tokens being wasted than the time, honestly.
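To give a flavour of why this trips an LLM up: Quake is Z-up with angles in degrees, three.js is Y-up with angles in radians, so every position and rotation has to pass through one consistent remap. I can't show the exact formula Claude settled on, but here's a sketch under one common convention (mapping Quake (x, y, z) to three.js (x, z, -y), with Quake yaw measured from +X counterclockwise around Z) - the viewer's actual code may differ:

```javascript
// Hedged sketch of the kind of utility Claude had to be pushed into:
// one single place converting Quake's conventions to three.js's.
// Assumed axis mapping: Quake (qx, qy, qz) -> three.js (qx, qz, -qy).
const DEG2RAD = Math.PI / 180;

// Map a Quake position into three.js space (Z-up -> Y-up).
function quakeToThreePosition(qx, qy, qz) {
  return { x: qx, y: qz, z: -qy };
}

// Convert a Quake yaw (degrees, 0 = +X, counterclockwise around Z)
// into a three.js rotation around the Y axis (radians). Under the
// axis mapping above this is a straight degrees-to-radians pass:
// Quake forward (cos t, sin t, 0) maps to (cos t, 0, -sin t), which
// is exactly +X rotated by t about three.js's +Y axis.
function quakeYawToThree(yawDegrees) {
  return yawDegrees * DEG2RAD;
}
```

Note that an object whose "forward" in three.js is -Z (cameras, for instance) would need an extra fixed offset on top of this - exactly the sort of per-call-site fudge that multiplies when the conversion isn't centralised in one utility function.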
Virtually every other feature I asked for, it managed to implement completely by itself - coming up with fixes when I report issues, throwing in smart logging and diagnostics when it needs more information, and generally just being surprisingly - even astonishingly - competent.
Maybe I need to give Gemini another shot, but frankly I'm blown away by what Claude has managed to do. It's already earned its £18 for this month, as just watching it work has been a fascinating journey, and I've learned a lot even if it is just for a niche and fairly useless hobby project.
Believe me, I think AI - and especially LLMs - are an extremely unbalanced double-edged sword. I worry for white-collar employment in the future. I hate the RAM and HDD shortages that the bubble has produced. I hate the environmental impact of datacentres and the massive energy costs. Despite all of this, I am coming around to the fact that LLMs really do have a place, and really do provide value in some scenarios.
I look forward to trying Sonnet 4.6 and seeing how it compares to what I've already experienced.
The other AI companies should be worried, because right now, in my humble opinion, Claude outperforms all of them at coding tasks. Anthropic knows what it has.
is this just all for show?
Of course it is. It's his MO. Say a lot, do very little, claim victory.
Energy prices are absolutely going to rise for consumers because of AI. Now Trump can say he wrote a big, beautiful (worthless, toothless) pact, but those big bad companies did it anyway. If he really wanted to stop them he could get a law passed. He doesn't want to stop them. He wants to be seen to be stopping them, while doing nothing.
Yeah. So I'm guessing Claude is trained on mountains of existing open-source code, including GCC. It also had GCC to compare its results against. This is not the same as coding up something novel where it has not been trained on the source of the thing it is recreating, and where the whole thing is already documented to infinity.
While it's an interesting prototype of how to effectively parallelize Claude instances on a codebase, the example used is not a hard one for an LLM to solve.
Show me Claude doing the same thing against a set of more loosely-defined specifications without access to a reference sample and I bet your human engineers spend more time writing unit tests than the LLM does coding.
The question is what happens if an LLM creates a knock-off and a human goes on to redistribute and maintain the work as a viable alternative... Does the original software vendor go nuclear because the LLM can't really be considered to have done a 'clean room' reverse engineering?
These things are trained on open-source where that matters less. As long as they're not stepping on a trademark or a software patent they'll have carte blanche. I'll bet any amount of money that Claude has never been trained on proprietary source. No publicly available LLM will have ingested the Windows source, or Adobe Creative Cloud, or Oracle DB, or any commercial software for the exact reason you propose above, because users could tell the AI to spit out a replica. It'll do it badly, but it'll do it.
And so, we have a two-tier system. Open-source stuff can be easily cloned. Vibe coders can shrug and say "the AI wrote it so it's all kosher" and ignore existing licenses. Closed-source remains closed-source.
I could do the same thing at my job (not that I need to since we're the ones deactivating accounts) and nobody would be any the wiser. It's not hard to correlate account deactivations with people being terminated. Their mistake was that they "shared it more broadly". If they'd just kept quiet nobody would have known, but they decided to go public with the list, and the nail that sticks up...
Not making a judgment btw, it was their choice to make, but anyone could have predicted this outcome.
Firstly, you cannot refund DLC if you've played the base game for more than 2 hours, this is stated up front in Steam's T&Cs. You only mention reading FAQs; did you actually try contacting Steam support to fix the DLC entitlement? They have 14 days after being notified of the problem to respond. It'll be complicated by the fact that BF6 is actually delivered via EA Play so they may indeed redirect you to EA's own support.
Secondly, you're incorrect about refunds within 30 days. Under UK law that applies to physical goods only, not digital content. If digital content doesn't work you can ask for a repair or replacement, but they are not obligated to provide a refund. If the repair or replacement doesn't work, or isn't possible, you can then ask for a reduction in price instead. The law says that a full refund may be given "where appropriate", but - of course - doesn't specify what those circumstances are. In general the most you're likely to get is a partial refund.
Finally, if you Google "BF6 dlc not working" you'll see quite a number of articles addressing missing content in BF6 and possible ways to address it. Good luck.
The thing is (monthly patch fuckups notwithstanding) the Windows kernel, the driver model, and maybe 50% of Windows userland are all perfectly fine. Under all the bloat and services nobody asked for is a genuinely good OS.
It's the other 50% of absolute garbage they're dumping in userland that's killing Windows; every supposedly local search redirecting to Bing, Copilot stuffed into every nook and cranny, forced UWP app installs, ads everywhere, etc. Debloating Windows 11 is a never-ending game of Whac-A-Mole.
If you run Windows 11 IoT LTSC it's probably, technically, the best version of Windows yet because it contains none of that stuff. Unfortunately, acquiring it (legally, legitimately) requires signing up to a Customer License Agreement with Microsoft - it's not available to consumers or even normal business customers, despite the fact that a great number of customers would opt for it if given the choice.
Microsoft do not consider Windows 11 broken, because the garbage is strictly by design. They make money from ads, they make money when they redirect to Bing and they hope to make money from Copilot one day, so it's being jammed in everywhere to get as many people dependent on it as fast as possible. Windows simply isn't the product anymore; it's just the product delivery system and you are the product being delivered, via Windows, to Microsoft.
Yeah, the troll mod is unfair but that's Slashdot for ya.
The problem is "workstation" is now very loosely defined. Is a standard office PC a workstation or does it have to be more powerful than that? Does it have to have a specialised purpose like CAD or graphic design or can it be general purpose? Actual workstations as many of us would understand them no longer really exist - they're just PCs, maybe with a specific card to support whatever their primary purpose is, but still just PCs. In that case, there's no reason a powerful phone can't be considered a workstation.
Anyway - the way things are going, soon you won't be allowed to own anything more powerful than a thin client and all "your" computing power will be in the cloud. That makes the smartphone a perfect on-ramp to pay-as-you-go cloud computing - the tech bros' wet dream.
In my experience almost all lab or engineering equipment is like this. Software support is quietly dropped after a few years (usually something like "BioWonder Blue devices are no longer supported in BioAnalyze version 7" written in a small box on the download page) and eventually the software ages out of working with modern Windows entirely. Then you're left with, as you say, the support nightmare of running older versions of Windows while managing security and data access just to keep using a perfectly functional device that cost hundreds of thousands. The answer from the vendor is always "buy a new one from us" which is, of course, just a rephrasing of "what have you done for me lately?"
No vendor will pledge to keep extending support even though it's obviously technically feasible. Selling software support contracts must not be as lucrative as selling new devices. But hey, the new devices have "AI" plastered all over the product description, so they must be better, right?
Hard work never killed anybody, but why take a chance? -- Charlie McCarthy