
Comment Stop hiring for the sake of hiring (Score 1) 59

The number of job listings is increasing because the number of jobs actually being filled is rapidly decreasing. So new job listings are often old job listings reworded to exclude more candidates.

If you have one or two good engineers (not programmers, not developers... honest-to-goodness engineers) who, with AI, can keep 15-20 tasks running simultaneously (the cost is GPUs and screens), you don't need to hire someone right now.

Companies want the best they can get. They don't want to fill seats anymore. If they already have a few developers, and LLMs are increasing their productivity 10-20x (yes, real engineers who learn LLMs and use them as though they're programming languages get that kind of gain), then they can hold out.

Here's a great one for you... I would never hire anyone without a great GitHub now. I want to see projects, and I want to see both hand-written code and LLM-generated code. I want to see external projects they've patched, to see if they can work on other people's code. I want to see student/professional projects (things with deadlines) AND fun projects.

I'm in no rush. I will post a job, and if I don't find someone great, I'll change the posting and wait again. LLMs will give me the buffer I need.

Comment Re:Anthropic _is_ the odd one out. (Score 0) 21

Google, Microsoft and Meta profit by open-sourcing models (or, more accurately, it hedges their losses).

OpenAI released shit models as open source, and both OpenAI and Anthropic are junk without the gigantic caches that come from operating at massive scale. The model is far less important than the cache. So if Anthropic actually opened anything, it would be junk too.

And I think the Pentagon should be legally required to make their own models. I think that depending on external companies for their most important technology is idiotic. The government should own and operate their own drone companies and their own AI companies. They should not legally be allowed to use outsiders that are easily influenced by external entities.

Comment I'll testify in favor of cursor (Score 1) 110

AI belongs in a sandbox on test servers. We don't use AI in production.

When a programmer is programming, they use an offline database and an offline server. When their code passes tests and code review, we push to production.
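A minimal sketch of what that separation can look like in config terms, assuming an environment variable that only the deploy pipeline sets (the variable and connection-string names here are hypothetical, not from any particular stack):

import os

def database_url() -> str:
    # Hypothetical gate: only the deploy pipeline, after tests and code
    # review pass, runs with DEPLOY_STAGE=production set.
    if os.environ.get("DEPLOY_STAGE") == "production":
        return os.environ["PROD_DATABASE_URL"]
    # Everyone else -- developers and any AI tooling they run -- gets the
    # offline sandbox database.
    return "postgresql://localhost:5432/sandbox"

if __name__ == "__main__":
    print(database_url())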

If you don't work like this, you have no right to sue.

Comment Cross-discipline issues (Score 3, Interesting) 81

I have spent time with 9 neurosurgeons, 2 neurologists, 1 orthopedic surgeon, 8 general practitioners and over 50 nurses in 2026. There have also been 3 or more radiologists/neuroradiologists involved.

They all disagree on what is wrong with me, or throw their hands in the air and say "I don't know"; another said "it'll heal in a few days, ignore it".

Here's the catch: individually, the doctors all lack the ability to understand what is wrong with me. But as I collect information from each of them and upload the CTs and MRIs into the AIs, I finally have something like a doctor who isn't very good, but also isn't confined to a single specialty. As a result, I have a list of questions to ask the doctors to see if we can work on the problems.

As someone who has been taking 16 painkillers a day for months now, I want to get past this. I don't believe the doctors will figure it out. I believe it will be the AIs.

Comment Re:No (Score 2) 62

You make a pretty good point and I'll ride this one out. Piracy was everything.

The idea was that IBM PC compatibles won for a lot of reasons, but the clone market and piracy were the real reason. I remember the first time I saw the original manuals for an IBM PC 5150, and I was shocked that one manual was gobs of printouts of the source code to the PC BIOS. Compaq made a clone very EARLY. CP/M was way too damn expensive... I could go on and make it a history lesson, but the people old enough to remember have their own versions of it and it's not worth it.

Overall, the PC won because we pirated the shit out of everything. The PC clone itself was basically a pirated computer. There were arguments that the NEC V20 processor exceeded the "second sourcing" agreement with Intel and was technically also pirated.

Anyway, the path was simple....

The company dad worked for, which was an IBM shop, bought IBM PCs and used DOS, WordPerfect, Lotus... maybe a little later on Act!, and all the software cost 3.5 metric butt-tons. Then dad pirated the software, and rather than buying WordPerfect, he bought an aftermarket keyboard template or similar for how to use it. Then his nerdy kid, who was pissed that dad bought a monochrome monitor (because CGA hurt your eyes), played with GW-BASIC and Flight Simulator (the only program dad actually paid for, to make up for not getting a Commodore, Atari, Apple....), and a LOT of houses ended up with PCs.

But the point was, an IBM PC clone with two floppies, Hercules, MS-DOS, and a screen and keyboard was A LOT cheaper than ANY Xenix system, if only because every single aspect of the computer was pirated except for the one program you actually paid for, DOS, which was almost free. Especially when that was pirated too.

But the PC happened because:

1) Developers got PCs because they wanted to write software for the only people who actually paid for it, which was companies and the occasional oddball who actually bought a genuine IBM PC.

2) PCs had a crapload of RAM (except the actual IBM PC, which never did).

3) When graphics happened, AutoCAD and every engineering software company coded for the computer with a lot of RAM.

4) Slowly you could play less sucky games on the PC, which made it so Junior at home was willing to not burn the house down.

5) Most importantly, copy protection never worked, Copy II PC Deluxe (also easily pirated) happened, and no one paid for software.

Xenix and Unix couldn't happen because, unlike modern Linux, every single thing about them sucked. They were so incredibly shitty. A/UX was probably the only Unix ever shipped which didn't absolutely suck. And it sucked.

I don't think modern kids could ever understand how impressively shitty UNIX was. I can't believe it took us until the 1990s to get a shell that had history.

Comment Re:Sure (Score 2) 56

I love it... this is like back in the old days when the whole world moved on to trains, planes and automobiles, and the cartwrights were arguing over which modern breed of horse was best at transporting large amounts of grain.

Do me a favor: get or rent a GPU somewhere and run your own instance of Qwen 3.6 35B A3B. But make sure you give it the Playwright MCP, and for a bonus, toss in a web-search MCP. I think you'll find that the other two still have a speed advantage as far as tokens spent, but that Qwen finishes almost all the same tasks with a higher level of accuracy. I've been experimenting with Gemma lately as well and seeing similar results.
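For anyone who hasn't done this before, a minimal sketch of what "run your own instance" looks like once the model is served locally, assuming something like LM Studio or vLLM exposing an OpenAI-compatible endpoint (the port and model id below are placeholders; use whatever your server reports, and note the Playwright/web-search MCPs are wired up in your agent front end, not in this raw call):

from openai import OpenAI

# Point the standard client at the local server instead of OpenAI's cloud.
client = OpenAI(
    base_url="http://localhost:1234/v1",  # placeholder local endpoint
    api_key="not-needed",                 # most local servers ignore the key
)

response = client.chat.completions.create(
    model="local-qwen",  # placeholder: use the model id your server lists
    messages=[
        {"role": "user", "content": "List the failing tests in this repo and propose fixes."},
    ],
)
print(response.choices[0].message.content)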

Here's the thing. OpenAI and Anthropic only exist if you spend stupid amounts of money on flat rates or tokens. And yesterday, Claude Opus 4.7 with 1M tokens used an entire day to achieve mediocre results bootstrapping a new robot I designed.

When people buy Android, who pays for the tokens to run the AI services? That's right... 3 billion Google-AI-enabled phones are being paid for by Google. They are hell-bent on making it so most people will pay for their own AI, by making Gemma and giving it away for free. It will genuinely save them a trillion dollars that would otherwise be mooched.

Alibaba is in a similar circumstance. They need to get AI the hell out of their data centers because the cost is too high, and they know that any money invested in inference is a total waste. Alibaba is a company smart enough to realize that chatbots and code bots are money wasted. No one will pay for those. So they're investing in offloading the inference cost of them. Their data centers will offer AI as a service for embedding in products like self-driving cars or factory automation.

OpenAI doesn't have any products. They have a chatbot which... well, it's pretty good, but it's like buying a Lamborghini and paying full price because you need to get to work faster in the morning... in rush-hour traffic... during construction. Even the shittiest car on the road is still going to get where it's going, and even though the most expensive car on the road might (or might not) get there a few seconds sooner after an hour in traffic, was it really worth it?

Anthropic is even worse... I seriously have no idea what their advantage is. Their tech is aging, and they are pulling publicity stunts like warming up to the American Christian Right because they know they need to find a niche pretty quickly to get government deals, because... well... no one actually pays for Claude... at least not the big deals.

Comment Re:Who is sailing on a sinking ship? (Score 1) 162

hmmm... strange.

I'm presenting this to the head of supercomputing for a NATO country on Friday because... oh wait... I am one of those educated people with access to 10 computers on the first page of the TOP500. I am very sorry to disappoint.

Second, I'm presenting my research and findings in a big room at Huawei Connect in China later this year... because crackpots need love too :)

Thanks for all the fish

Comment I wonder how it compares to ours (Score 1) 14

I wouldn't pay for an OpenAI product; that's an ethical issue. And forget data sovereignty. I don't feel comfortable supporting people like Sam Altman, or Anthropic's leadership for that matter.

But that said, every university in Europe is producing the same thing, and let's be honest, training new models has become a lot easier these days.

Comment Re:Acquire then discontinue (Score 1) 31

They buy them for the customers.

What you do is buy a company that has a lot of locked-in customers for the next 5-7 years. For the first two years, you keep the rest-and-vest employees who perform to the bare minimum until they can cash out. Then the customers start leaving one by one. When the cost of maintaining the product gets high, you hire 5 people from India to take over. You then wait for all the service agreements to expire and you end it.

PCoIP never stood a chance. HP likes to sell boxes, and lots of them. The only "long-term service" they understand is high-end printers. And the bitch of that is, they're not doing long-term anymore. A few years, tops. I buy a new used plotter for $500 every few years because HP will raise the service cost so high that customers can't keep them. Oslo, Norway doesn't even have HP printer service anymore as far as I can tell. Yes, I know it's two companies... but HP and HPE are still not really doing it.

When HP spun off their engineering tools division, it was the end of the company.

Comment Two steps behind (Score 2) 72

The NSA almost certainly needs to investigate the possibility of a threat. As such, they'll use it to test for these vulnerabilities within their test network, and then possibly on a larger network. They will either deem it a threat or not.

That said, there are a hundred open models which are always less than two steps behind Anthropic. So assume that within a month, there will be a new open model matching or outperforming this one. And it will be an MoE, and it will be untraceable.

So... if Anthropic can't release theirs... don't worry. Anyone who has tried the public release of Qwen 3.6 35B on an RTX 3090 and a pile of MCPs knows we don't have long to wait.

Comment Who is sailing on a sinking ship? (Score 1) 162

First... We can't release this model because it doesn't work

Second... We need to convince the Christian right that they should use their influence to force this tech down everyone's throats.

Anthropic is going to go public, but this should be considered gross negligence, because they are knowingly asking for money for something they know can only decline.

Try the open models and tell me that they aren't good enough to replace Anthropic in 95% or more of cases already. And how will Anthropic compete with free?

Why do open models matter? Well, it's only a matter of a few years before even minuscule devices will be able to host AI locally.

Here's the next thing. You need to see AI as an onion. Neural networks are a series of layers. Last week, I was playing with running layers on different cost levels of hardware. I used a cluster of H200s for the outer layers, <$100 AI accelerators for the inner layers, and an RTX 3090 for the middle layers. I then tested coding and general nonsense like "what eyeshadow matches these earrings" questions. 85% of all questions were answered quickly on the $100 accelerator. 99% were answered with the two cheapest options. And remember, I wasn't running a small model; I was running a gigantic model sharded across a $100 device, a $1,000 device and a $500,000 device. I reduced usage of the $500,000 device to almost nothing. I managed to achieve the same results, at about a 20% performance drop, on a 1-trillion-parameter model, while increasing the effective compute density of a cluster of H200s by 100-fold.
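To make the onion picture concrete, here's a toy sketch (not the actual benchmark code) of assigning a model's layers to three hardware tiers and seeing what fraction of the per-token layer compute lands on each tier. The layer count and band widths are made-up assumptions purely for illustration:

from collections import Counter

NUM_LAYERS = 96  # assumed depth of a large MoE model (illustrative)

def tier_for_layer(i: int) -> str:
    # Onion-style split: outermost layers (both ends of the stack) on the
    # expensive cluster, a thin middle band on a consumer GPU, and the bulk
    # of the inner layers on cheap accelerators.
    depth = min(i, NUM_LAYERS - 1 - i)  # distance from either end
    if depth < 4:
        return "H200 cluster (~$500,000)"
    if depth < 12:
        return "RTX 3090 (~$1,000)"
    return "cheap accelerator (~$100)"

shares = Counter(tier_for_layer(i) for i in range(NUM_LAYERS))
for tier, count in shares.items():
    print(f"{tier}: hosts {count}/{NUM_LAYERS} layers "
          f"(~{count / NUM_LAYERS:.0%} of per-token layer compute)")

The point it illustrates is just the economics: once most of the depth sits on the cheapest tier, the expensive cluster only touches a small slice of each token.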

So what this means is that with extreme MoE models sharded properly, plus what is currently a $100 accelerator (and soon will be a $5 accelerator) and a thin layer in between (assume a single RTX 3090-class card for 1,000 users, or 500 for better performance)... the case for massive inference data centers is screwed. Give me a grant and a few months and I am 100% sure I can get the efficiency closer to 10,000x better rather than 100x. And no, this is not an exaggeration. I would retrain the models to be spread across more... thinner layers with a LOT more experts. Of course, retraining something on the scale of a 1-trillion-parameter model is expensive. What's great is, there is true value in China footing the bill for this, because cutting their dependence on gigawatt data centers filled with NVidia and tons of HBM memory (possibly literally tons) is a survival requirement.

If there's anyone in China reading this: take Qwen or DeepSeek, spread them REALLY REALLY thin... then distribute the layers and open the weights. You'll make it so that companies like Huawei and the others can run layers locally on devices as small as an ESP32 and then distribute the remaining layers outward. It was LM Studio's magical cross-platform sharding which got me going on this. It's so simple. It just works.
