Comment Re:"AI works for us but not for our customers!" (Score 2) 18

Yep, have seen this repeatedly: "we are a software company that writes software for other people, we can just codegen it now!"

In the most extreme case, when Gemini 3 came out, a sales manager sent an email to his sales team saying that if a customer asks for software we don't already have, the salesguy can just put the customer's prompt into Gemini to get the requested software, then sell it to the customer, without having to know how to write or review code. Of all things, he thought Gemini 3 was up to producing sellable software from a single prompt, and yet somehow the customers wouldn't be able to try that same thing themselves.

A lot of these AI plans seem headed toward failure either way: either the AI fails and your product fails, or the AI succeeds and the would-be customer just does it for themselves.

Comment Re:How about macOS? (Score 1) 20

I've not used it in a while so I don't know if it's still the case, but *generally* the more flexible desktop platforms are considered "less secure" because interaction between files and applications is relatively more open-ended.

Mobile platforms started in a very isolated mode and implemented a very precise, limited application permission model.

Desktop platforms, well, the cat is way out of the bag. Attempts to add mobile-style permissions have been happening, but broadly speaking they haven't gone nearly as far, for fear of breaking existing application expectations.

Comment Re:Premise of the story is flawed (Score 1) 20

Yes, everyone fixated on this blog post, except that Friday had seen an increase after the Supreme Court ruling against Trump tariffs, and then over the weekend Trump doubled down on imposing tariffs, erasing the gains.

There's a *lot* going on that could by itself explain issues, plenty of room for false correlations.

Comment Re:"Deterministic" (Score 1) 23

Though I think AI proponents would claim that *largely* that's not the idea; they would hang their hat on the codegen, where a probabilistic engine is used to direct code changes that, once made, are deterministic.

The companies that outsource to offshore coding outfits, no *matter* the rationalization (despite what they say), will see the new hype as insourcing to random local dev guys who can LLM themselves into '10x engineers'. A few low-end local devs will be cheaper than a lot of remote devs plus the overhead, and they'll be under your business control and at your physical location too.

The issue is that this has already been the case, so the LLM is a rationalization for a lot of these companies to try a strategy that would have worked out anyway.

The portion of India's IT sector that stems from offshoring is at big risk, though domestic, independent Indian IT is probably in fair shape, except for the flood of low-quality offshore workers now competing harder for those domestic jobs.

Comment Re:15 years out of date? (Score 1) 23

Suppose the counterargument would be that Indian developers are not *solely* providing outsourced work to American companies.

Absolutely concur that nearly all offshore outsourcing outfits are basically there to grift American companies with dubious credentials and work.

Based on experience, occasionally in that stupid setup you'll have a really useful and effective person for a while, but ultimately they go 'poof' to get higher pay at a more proper software development shop. So presumably those jobs that aren't outsourcing are the ones alluded to.

Comment Re:94% (Score 1) 28

Usually they at least have a concept of a different interface: either more flexibility, because the thing they are 'ripping off' hard-codes some expectations, or more usability, by simplifying an overly complex interface provided by a framework or language.

Here we are talking about recreating verbatim exactly the same calls/arguments/syntax as an existing implementation, which is a pretty rare occurrence, for specific reasons. Offhand, things like FreeDOS, Dosbox, and Wine come to mind: the popular scenario of making open source clones of proprietary implementations, where the desire was different underpinnings for something otherwise found useful. But most "why the hell did they bother" projects that re-solve a solved problem can at *least* offer some illustration of what they thought needed changing.
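
To make the "verbatim the same calls/arguments/syntax" point concrete, here's a minimal sketch with made-up names (`slugify` is a hypothetical upstream helper, not any real library's API): a drop-in clone must keep the exact function name, parameters, and defaults so existing call sites work unchanged, and only the internals may differ.

```python
# Hypothetical "upstream" function being cloned.
def slugify(text, sep="-", lowercase=True):
    cleaned = "".join(c if c.isalnum() else " " for c in text)
    if lowercase:
        cleaned = cleaned.lower()
    return sep.join(cleaned.split())

# The knockoff: identical name/signature/defaults, different internals.
import re

def slugify_clone(text, sep="-", lowercase=True):
    cleaned = re.sub(r"[^0-9A-Za-z]+", " ", text)
    if lowercase:
        cleaned = cleaned.lower()
    return sep.join(cleaned.split())

# Every existing call site must behave byte-for-byte the same:
assert slugify("Hello, World!") == slugify_clone("Hello, World!") == "hello-world"
```

That's the whole constraint: the clone's surface is dictated entirely by someone else's code, which is exactly why it's a rare kind of project.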

Comment Re:\o/ (Score 1) 28

Yes, but an LLM is only kind of useful for that sort of switch. It can do a bit, but it still screws up a fair amount (I'm going through exactly that scenario now; it's much faster with an LLM since it's kind of in its wheelhouse, but it's still a slog). Codegen maybe has a success rate of 75% for that sort of chore.

Making a knockoff implementation of an existing open source project with a very good existing suite of test cases... well, codegen goes over 90%, between likely having been trained on the prior codebase verbatim, having the entire codebase to work with in context, and having test cases to run automatically and auto-retry against when it fails.

Comment Re:During this effort (Score 1) 28

They presumably didn't *want* next.js itself anymore, but went with a clone of it because, between porting their codebases to an alternative and making a knockoff of next.js, the latter is actually easier for an LLM to pull off.

Porting a codebase to the same functionality with a different dependency is also something LLMs are... kind of useful for, but they're far less good at that than at making knockoffs of an existing open source project with some twists applied.

Comment Re:94% (Score 1) 28

Guess we need to know *which* 94%. If it were ordered by instances of invocation in a codebase to port, then sure, probably 6% of the surface of a library might have as few as zero actual users.

But if it were 'the 94% the codegen happened to get to pass tests', well then it's more of a crapshoot as to whether that 6% was actually important and needed bespoke attention.

That said, again we have another example of codegen generating an alternative implementation of an existing open source implementation. Based on how LLMs work, this should be a relatively easy hit: the original codebase is baked into the model's training, the codebase is available to put into the context window for more direct use, and copious existing test cases exist to automate the loop of "try something that seems right, it doesn't work, random stir, try again with something that seems like it should be right, repeat until the tests pass". Making knockoffs of popular existing open source codebases is just supremely in the wheelhouse of an LLM. However, there is rarely a particularly good reason to do it.

Comment Re:Imbeciles (Score 2) 99

Even before the codegen craze, business folks had largely already decided the code doesn't matter, it's control of the data that really matters.

Have the client's data stored and controlled by you. It doesn't matter if they manage to faithfully clone your software; you have their data, and exporting that data is a big PITA, so the clone is useless since they can't get the data.

Also, to the extent that the codegen output isn't copyrightable, the human material mixed in is, and it is all a jumbled mix no one can cleanly extricate to take advantage of that gap. Even very aggressive codegen used in a context where it kind of works will get human fixups, because it will flub *something*, and that fix is human and therefore copyrightable, and no one else can unwind that change to recover the pure machine output.

Comment Re:Eat your own dogfood? (Score 3, Interesting) 99

The issue is that they are meddling with how the job is done, rather than providing access to the tools.

Usually you don't track and penalize people for failing to use a tool that is available; you make it available and evaluate performance. If not using the tool has a bad effect, it will show in performance.

Tracking how much material you can get an LLM to emit is crazy, since you can just prompt it to generate thousands of lines of irrelevant code that is never even used. It offers a cheap way to game performance metrics with zero relevance to the work at hand. Ask it to generate a big website about poodles: your metrics go up, but there's no business value in a site about poodles. The metrics on the dashboard at my work are unable to tell where the generated code really went; the dashboard just knows the code was generated and "accepted", but that has nothing to do with even committing the code, and even if it did, it might not track the remote the commit is pushed to.
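
A toy sketch of the gap (all numbers and names invented for illustration): the dashboard counts every "accepted" line, while the repository only sees what's committed, and nothing forces the two to agree.

```python
# Minimal sketch of why "lines the LLM generated/accepted" is gameable:
# the dashboard counter and the repository never have to agree.

def dashboard_accepted_lines(accepted_snippets):
    """What a naive codegen dashboard counts: every accepted line,
    whether or not it ever reaches a commit."""
    return sum(len(s.splitlines()) for s in accepted_snippets)

def lines_actually_committed(commit_diffs):
    """What actually ships: lines present in pushed commits."""
    return sum(len(d.splitlines()) for d in commit_diffs)

# Accept a 1,000-line poodle website, commit none of it:
poodle_site = "\n".join("<p>poodle fact</p>" for _ in range(1000))
real_fix = "handle.close()  # the one line of real work"

accepted = dashboard_accepted_lines([poodle_site, real_fix])
shipped = lines_actually_committed([real_fix])
# accepted == 1001, shipped == 1: the metric rewards the poodle site.
```

Any metric computed upstream of the commit (let alone the push) measures generation, not contribution.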

It's bad enough how people gamed lines of code, and the industry broadly recognizes that as a stupid metric now. LLM codegen stats are even worse and even more trivial to game.

Comment "Security researcher" (Score 5, Informative) 75

Now there's a security researcher I can't imagine having confidence in...

If it were a toy inbox, OK, good thing to play with. But on an actual inbox, with the universally recognized badness of OpenClaw, and by a *security* engineer... Not even a misguided software person who just doesn't take security seriously enough (which would be bad enough), but someone who by any vague measure *should* know better...

Comment Indulging the hypothetical... (Score 4, Interesting) 62

So the hypothetical is that entry-level coding jobs are toast but you still need the advanced folks to actually direct things, and they will need to be able to review and amend code.

In such a scenario, education takes over the role of teaching 'stuff humans don't have to do anymore, but still need to know how to do'. Like how math education starts by banning calculators, then, as the education advances, allows increasingly capable calculators and computer software to handle the tedium left behind.

Comment Re:Random blog post, or tariffs and politics? (Score 1) 52

Note that white collar jobs aren't all 'intelligent', there's a fair amount of tedious manipulation of purely abstracted data in computers. This is the part that, in theory, maybe, could be changed by LLM approaches. Some debate can be had about which white collar jobs are which, and how far the LLM can go or not, but those at least are in the ballpark.

If a part of a given blue collar job can't be done with gloves more substantial than medical gloves, it's pretty far from being within the reach of any AI technology today. The approaches all require training data, and instrumenting the dexterous, touch-sensitive work of hands is just not a thing. It's why the humanoid robotics demonstrations remain embarrassingly bad for companies that *usually* can at least pull off superficially impressive demos. Even without AI, the tele-operated demonstrations fail, because humans suck at doing this stuff with even gloves on, let alone trying to do it remotely with controllers.

By comparison, driving is just impossibly easier for machine learning to take a swing at. Humans operate almost entirely based on vision, with input going entirely into two pedals and a wheel: supremely instrumentable, with very coarse-grained controls to manipulate. Even with many lifetimes of "experience" to work with for relatively simple actuation, this has been a huge challenge and still isn't closed. HVAC and plumbing are so much more complex, with inputs that aren't possible to capture as training fodder, and a lower possible volume of training data even if you could instrument everything.
