Comment Re:These are not ~for~ us (Score 1) 56

It really depends on who is selling them.

Meta are primarily in the business of selling advertising, and selling information about their users to advertisers, so in that case you're right, and you might therefore expect Meta to be gathering information about you via any such product, maybe even pushing audio ads to you ("sure, I can recommend a restaurant...").

Apple are in the business of selling appliances, and try to emphasize privacy to distinguish themselves from others like Meta and Google. In this case you are not the product, but arguably the people you are taking creeper pictures of in the club are. I doubt you'd see Apple sell a product like this where the camera was just an AI input, incapable of taking stealthy photos, since I'm sure they realize that stealthy photography is in large part exactly the attraction of the appliance they are trying to sell.

Comment Re:Consider it career acceleration (Score 1) 150

It used to be extremely common for companies to actively seek out juniors through on-campus interviewing and the like, and that hiring only seems to have dried up in the last few years, judging by the online accounts of how difficult it is for newly graduated comp sci students to find jobs (notwithstanding that the job market for developers seems to have largely frozen for everyone).

Comment Re: Maybe I'm missing something (Score 1) 150

You seem to be suggesting that someone's expertise can largely be triangulated and replicated from the questions they ask - that the architect brainstorming with the LLM is enough to reveal their decision-making process - but I doubt that is normally the case, even if the AI providers were trying to identify and leverage such company-confidential data!

It's not as if the architect is trying to transfer their skill to the LLM and therefore bothering to explain all the relevant background: why they are asking certain questions, what information they are bringing to bear to interpret the answers, or even what decisions they may ultimately make based on the LLM's input. There would certainly be no reason for them to tell the LLM "thanks for the chat, based on everything we've discussed I'm considering designs A or B, and not C, and here's why ...".

I don't think it's possible to transfer a non-trivial skill to someone/something just by describing it, even if that is your intended goal. The minds of the expert and the student are different, and received knowledge anyway needs to be practiced to become a learnt skill grounded in your own mind and perception, with the inevitable gaps and misunderstandings in what was transmitted identified, filled, and resolved through practice.

Consider the same "expert asks LLM for advice, or asks another expert" scenario in other domains - say stock trading, playing chess, or writing a novel. Is the discussion really going to help the advisor (or rubber duck) gain the skill of the expert?!

Comment Re:What happens when AI can do the senior work too (Score 1) 150

Sure, AI will only get better, and the human tasks it can't do today it will be able to do at some point in the future.

BUT ...

There is a widespread tendency for people making these arguments to couch everything in terms of "AI", as if this were some well-defined technology whose advance is as inevitable as Moore's law for chips.

The reality, of course, is that advances in chip density have faced discontinuities such as the need to move to EUV lithography, to develop new techniques for power delivery, and so on. Without those new techniques having been discovered and refined, Moore's law would have stalled or stopped.

It's the same with "AI". Let's be specific - what we have today is pre-trained Transformers coupled with RL post-training. Will this be enough to get us to AGI and the wished-for future AI capabilities? Presumably not. The man best positioned to know, Demis Hassabis, guesses we'll need another handful of "Transformer-level" technology inventions, which may be compared to the need to develop EUV for Moore's law to be able to continue.

Human ingenuity is amazing, and the human brain is an existence proof that AGI is possible, but there is no reason to expect the progress towards it to be linear and without problems. It's one thing to say that hand-wavy "AI" will be better in the future and able to do things that it can't today, but only a fool would bet their business on the exact timeline for this happening or for specific capabilities to be achieved.

Comment Re: Maybe I'm missing something (Score 1) 150

LLMs will get better, and LLMs will eventually be replaced/augmented with better architectures and algorithms. Eventually we'll get to human level AGI, capable of continual on-the-job learning, able to pick up senior developer skills if you let it progressively learn on simpler projects, the same way humans progress from junior to senior.

The timeline for this is whatever you guess the timeline to be for development and deployment (no longer just "pretrain and ship") of human-level AGI.

In the meantime, it is a logical fallacy to think that pre-AGI scaling of LLMs and better RL post-training techniques and datasets will help them advance from coding to high-quality software design and architecture - senior developer skills. The issue is: where is the training data going to come from?!

Training data for coding is abundant - just train on all the code on the internet, as has already been done. Training data for design and architecture decisions is extremely scarce - most of it is locked up in the minds of developers and never published. How many open source projects are accompanied by detailed documents describing the decision-making process that went into their design?

Of course you would not just need this for open source projects but also for the much greater diversity of proprietary closed source corporate ones, which is obviously mostly not going to be forthcoming.

This closed vs open source asymmetry of diversity (especially when it comes to complex systems) is not such an issue for code as it is for design, since to a large degree "code is code", and an LLM can learn to be a coder just by being trained on a large volume of code. Coding is a much lower-level skill than design, and a complex design will be decomposed into simpler code whose variety doesn't much depend on the high-level design it is contributing towards.

Comment Re:AI is a huge problem for programmers (Score 2) 150

No doubt every ill-conceived idea that can be tried is being tried, but the math doesn't really work on that one. How can the same person be 10x more productive generating code that they are then personally expected to review?

The "solution" to this is either you just don't review the code, since you didn't 10x your manpower to review the 10x more code, or you issue some impossible mandate like Amazon just did (after some junior dev's AI slop took down one of their production systems) and insist that the senior devs review the junior AI slop, which only exacerbates the manpower math problem - but Amazon isn't alone in trying.

The poster child for the YOLO, 10x-more-productive, never-review-the-code approach is Anthropic's Claude Code, whose creator Boris Cherny brags (the corporate line, I suppose) that it's all vibe coded. It currently has 5000+ issues/bugs filed against it on GitHub, which should give some idea of the code quality. Maybe only 5-10% of those will turn out to be real bugs, but that's still a massive number. In most product areas shipping something full of bugs is not a great idea, but I guess if your product is revolutionary, and the customers are developers who want LLM toys, which they accept are inherently flaky, then you can get away with it.

Comment Re:How do you develop that skill (Score 4, Insightful) 150

There seem to be at least four "AI strategies" (if throwing spaghetti at the wall can be called a strategy) that different companies are currently trying.

1) Get rid of, and stop hiring, juniors and interns, and give AI tools to your senior developers. At least you've now got capable people doing your design and guiding the AI, but indeed where does the next generation of seniors come from, especially if you want seniors that actually know your business and IT systems? Taken to its logical conclusion, no more juniors enter the field (because no one is hiring them) and we end up with retirement-age developers babysitting AI, then retiring, then ???

2) At least plan 1) works in the short term, but some companies have chosen to do the exact opposite: get rid of the seniors (hey, they're more expensive) and give AI tools to the juniors and contractors instead. Of course now you've got people generating AI slop without the skill to review or guide what it's generating, but at least it's cheap (until you belatedly realize you've destroyed your IT organization).

3) Do nothing meaningful with AI. Ignore your developers who say it would be helpful. Not really a strategy, but at least you're not destroying your IT organization.

4) Use AI in an appropriate way, mindful of its current strengths and weaknesses. I have friends in IT working at companies using strategies 1-3, but category 4 seems much rarer. I guess it's perhaps not as sexy as "feel the AGI, fire some segment of your developers (toss a coin: fire the juniors or the seniors)", but you keep your IT structure, give SOTA AI to everyone (expensive, but cheap AI is mostly useless for coding), and treat it as a tool that your organization needs to develop best practices for, not a magic genie that you hope can currently do something that it cannot. Hint to CEOs: don't do what the AI execs are telling YOU to do - follow what they are doing at their own companies!

I'm guessing that companies following 2) will be the first to fail, then those following 1). It's largely a slow-motion train wreck.

Comment Re:not saying I want this but... (Score 1) 150

That's better than nothing if you're using "LLM as judge" to try to catch the errors in your RAG outputs, but if you're talking about "code review" (or whatever we should call critiquing the voluminous AI slop generated by junior developers), then the problem is that AI isn't yet at the level to do that (and likely won't be until we develop human-level AGI).

If AI was good enough to do meaningful code reviews, then it wouldn't be writing crap code in the first place.

Comment Fertility != Birth Rate (Score 1) 277

Just because people are having fewer children doesn't mean that fertility (ability to conceive) is down.

I don't see it as much of a mystery. People see an increasingly messed-up US, expect it to get worse, and watch the prices of housing, cars, education, healthcare, etc. spiral out of control, and at the margin they decide either that they can't afford to have and raise kids, or that they don't want to bring kids into a messed-up world/country like this.

Comment Re:Nah, we will hardly notice any fall (Score 3, Insightful) 112

> Manufacturing will catch up to demand, but we probably will not see the abnormally low prices again that we were enjoying for RAM and hard drives again.

These aren't "abnormally low" - they are market prices that are profitable for the manufacturers. Unfortunately there is currently more demand for memory, driven by AI, than the industry has capacity for, so those that have longer-term purchase agreements, or are willing to pay more, will win, and until supply and demand get back in balance we're going to see higher prices for things like laptops and smartphones.

Comment Re:Yes shit (Score 1) 112

> The only reason prices would go up is if the supply cannot keep up with the demand.

This may well happen for a while: it is hard to ramp up SOTA chip manufacturing quickly, the industry is notoriously boom and bust, and the chip companies (incl. memory) have learnt their lesson - they are not going to ramp up as hard as they can (which is anyway limited) to meet a short-term demand bubble which they expect will level off.
