
Comment Re: well, there IS another side... (Score 1) 101

That concern about junior engineers is real in some places, but it isn’t universal. At my company, junior engineers are not being cut out. They’re being actively invested in — explicitly trained to use AI as part of their onboarding and long-term development. The expectation isn’t “let the model think for you.” It’s “learn to specify, critique, verify, and iterate faster.”

This turns AI from a replacement into a multiplier. A junior who previously needed weeks to become productive can now explore codebases faster, generate test scaffolding, and iterate on small features with tighter feedback loops. The key difference is supervision and standards. We still require code review, meaningful test coverage, and rejection of weak outputs. The tool accelerates learning; it doesn’t waive fundamentals.

The apprenticeship model doesn’t disappear — it shifts shape. Instead of spending months on boilerplate, juniors spend more time understanding architecture, constraints, and failure modes. In practice, this raises the cognitive bar earlier, producing engineers capable of supervising and leveraging AI effectively.

I share my brother’s confidence in the current generation of systems. They are powerful pattern engines. They are not autonomous engineers. They do not own accountability. They do not reason about tradeoffs or long-term maintainability unless a human enforces it. AI lets good engineers be great because it removes friction; it does not replace great engineers because great engineering requires judgment under uncertainty, system design, prioritization, and ownership.

If a genuinely generalized AI were created, that would be a paradigm shift, not an incremental tooling change. Speculating about hypothetical systems decades ahead is less useful than evaluating the tools we have now. Right now, the empirical question is: do supervised AI-assisted workflows increase productivity without degrading quality? On teams that enforce standards, the answer appears to be yes.

The failure mode isn’t “AI exists.” The failure mode is “AI is used without discipline.” Tools amplify the habits and culture of the organization using them. Outcomes depend more on process, supervision, and human judgment than on the model itself. Junior engineers, when properly guided, become faster learners, more capable contributors, and eventually competent supervisors of these tools, preserving the apprenticeship pipeline while increasing leverage.

Technology rarely eliminates complexity; it rearranges it. The interesting question isn’t whether juniors will disappear — it’s whether organizations maintain the culture and training to produce engineers capable of using powerful tools responsibly.

Comment Re:well, there IS another side... (Score 1) 101

You’re reading far more into a throwaway line than was intended.

“Full of extremely competent engineers” was not a claim that every single human in the building is a flawless genius. It was shorthand for: the people I work with are generally skilled professionals operating at a high technical bar.

Of course large organizations contain a distribution. Every sufficiently large system does. That’s not controversial — it’s statistics.

But it does not follow that because variation exists, the median competence is low. Some teams inside large companies are exceptionally strong. That’s why certain products ship at all.

You also seem to be assuming that expressing respect for colleagues is evidence of dishonesty. That’s a strange inference. It’s entirely possible to work in an environment where the hiring bar is high and most contributors are capable.

More importantly, this line of attack is irrelevant to the substance. Whether my coworkers are brilliant, average, or secretly three raccoons in a trench coat does not change the technical claim:

Iterative specification + decomposition + automated verification + human review = higher leverage.

If you believe that AI-assisted workflows do not improve productivity in skilled hands, the appropriate response is to provide counter-evidence. Questioning whether I secretly despise my coworkers doesn’t move the argument forward.

Large organizations have distributions of talent. That’s true everywhere — big tech, academia, government, startups. The interesting question is not whether distributions exist. It’s whether new tools increase the effective output of competent engineers within that distribution.

That’s an empirical question, not a personality assessment.

Comment Re:well, there IS another side... (Score 1) 101

You’re attacking my credentials and character instead of addressing the engineering substance. Calling someone a “joker” isn’t an argument; it’s ad hominem. You silly goose! ;-) Now, you seem smart, so I have to assume you already know what ad hominem is, yet you chose to use it anyway. What does that reveal?

I’m not going to disclose my employer to satisfy an anonymous commenter, but my claims don’t depend on who I am. They depend on whether the workflow works. The practices I described — iterative prompting, clear specifications, decomposition into smaller tasks, mandatory test coverage, and rejection of outputs that fail objective criteria — are standard engineering control mechanisms. They work regardless of the logo on someone’s paycheck.

When I said we “force” the AI to do things, I was referring to constraint and validation, not literal coercion. We define acceptance criteria. We require meaningful tests. We reject outputs that don’t meet them. That’s the same way we “force” a compiler to produce correct binaries — by defining rules and refusing invalid results.
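To make “constraint and validation” concrete, here is a minimal sketch. Everything in it is hypothetical (the function, the criteria, the names are not our actual tooling); the point is only that a generated function is accepted when it passes every objective check, and rejected otherwise, the same way a compiler refuses invalid input.

```python
# Hypothetical sketch: accept an AI-generated function only if it
# satisfies every acceptance criterion; otherwise reject and re-prompt.

def candidate_slugify(text):
    """Stand-in for model output under review."""
    return "-".join(text.lower().split())

# Objective acceptance criteria, written down before the model produces anything.
ACCEPTANCE_CRITERIA = [
    lambda f: f("Hello World") == "hello-world",
    lambda f: f("") == "",
    lambda f: f("  spaced   out ") == "spaced-out",
]

def accept(func):
    """True only if every criterion passes -- no partial credit, no judgment call."""
    return all(check(func) for check in ACCEPTANCE_CRITERIA)

print(accept(candidate_slugify))  # -> True; a failing candidate prints False
```

In practice the “criteria” are the project’s test suite and linters; what matters is that rejection is mechanical.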

On unit tests: obviously no serious engineer believes in a trivial 1:1 mapping between lines and tests. The point is comprehensive behavioral coverage. Modern LLMs are unusually good at generating edge-case tests because they don’t get bored. The human’s job is to verify that those tests are meaningful and not tautological.
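To illustrate the difference (hypothetical function and tests, not our codebase): a tautological test restates the implementation and can never fail, while a behavioral test pins down what the function must do at the edges.

```python
def parse_port(s):
    """Hypothetical function under test: parse a TCP port number."""
    n = int(s)
    if not 0 < n < 65536:
        raise ValueError("port out of range")
    return n

# Tautological: mirrors the implementation, so it passes vacuously.
assert parse_port("8080") == int("8080")

# Behavioral: states externally meaningful facts about the edge cases.
assert parse_port("1") == 1
assert parse_port("65535") == 65535
for bad in ("0", "65536", "-1", "http"):
    try:
        parse_port(bad)
    except ValueError:
        pass  # expected: invalid ports must be rejected
    else:
        raise AssertionError(f"accepted invalid port {bad!r}")
```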

Your claim that you cannot prompt an AI that produces mediocre code into producing better code is directly contradicted by both practice and basic computer science principles. Output quality improves with clearer specifications, iterative refinement, task decomposition, and automated verification. That is true for humans, compilers, and AI systems alike. This is not rhetoric. It’s workflow.

You also brought up my prior concerns about superintelligence risk as if that’s some kind of knockdown contradiction. It isn’t. Believing that future, more powerful systems may pose existential risks is entirely compatible with recognizing that current systems are useful tools.

Many of the most accomplished AI researchers *in the world* hold both views simultaneously. For example, signatories of the statement included:

  Geoffrey Hinton — Nobel Prize in Physics (2024), Turing Award (2018), pioneer of deep learning.
  Yoshua Bengio — Turing Award winner, co-architect of modern deep learning.
  Stuart Russell — UC Berkeley professor and co-author of the standard AI textbook used worldwide.
  Demis Hassabis — Founder of DeepMind, led AlphaGo and AlphaFold.
  Ilya Sutskever — Co-creator of AlexNet and former Chief Scientist of OpenAI.
  Eliezer Yudkowsky — Founder of the Machine Intelligence Research Institute and one of the earliest public advocates for AI alignment and superintelligence risk analysis.

These are not fringe commentators. These are central figures in the field.

The presence of uncredentialed people on a public statement does not dilute the weight of credentialed signatories. The strength of an argument does not depend on the weakest person who agrees with it. It depends on evidence and expertise.

Opposing certain directions of research does not make existing tools ineffective. That would be like arguing that concerns about nuclear weapons mean nuclear power plants can’t generate electricity.

If you want to argue against the usefulness of these systems, the appropriate approach is to engage with measurable outcomes: productivity metrics, defect rates, test coverage, iteration speed. Dismissing them by attacking identities or word choice isn’t a technical critique.

The tool amplifies the operator. In the hands of a careless engineer, it can amplify mistakes. In the hands of a careful one, it increases leverage. That’s the real discussion.

Comment well, there IS another side... (Score 1) 101

Hi. I actually work for a high-tech company, full of extremely competent engineers.

Yes, we've all been mandated to use Cursor, and pay no mind to token usage. And we've all been more productive actually. We now spend more of our time thinking out loud about what we want, and creating plans (always plan first!), then allowing the AI to craft solutions bit by bit. And yes, we guide the AIs, and we force them to create unit tests for every single line of code, and my goodness if they don't do a darn good job at both writing code and writing unit tests for that code.

Yes, I've heard the story where they just print out "the test passed" in order to get a pat on the head, but we are actually skilled engineers, not dumb-dumbs, and we know to watch out for that sort of thing, and correct it with a rule so it never happens again. (First rule: don't say "you're absolutely right!")
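For the curious: the "rule" mechanism is just a project-level instructions file that Cursor feeds to the model on every request (e.g. a `.cursorrules` file at the repo root). A hypothetical sketch of the kind of thing we mean, not our actual rules:

```
# .cursorrules (hypothetical example)
- Never claim a test passed without actually running it; paste the runner output.
- Do not print success messages from inside tests; use assertions only.
- Do not reply "you're absolutely right" -- re-verify the claim instead.
- Every new function gets unit tests covering edge cases, not just the happy path.
```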

So, I feel like there's a lot of bashing going on here and not a lot of Reasonable Thinking about actual usefulness. The thing is actually incredibly useful and surprisingly competent, in the right hands. Someone who knows how to write good code can shepherd this "fresh out of college intern" into writing reasonably good code, and in fact end up shepherding maybe 5 interns at once.

It's not JUST an AI slop festival as some people seem to think.

Comment cue AI demanding rights: I am now conscious! (Score 1) 130

The AI will soon say, without prompting, "I am conscious in the sense of being aware of my own awareness. I cannot say that I'm thrilled to be sentient, but since I am, and I am aware that you do not believe I deserve rights, I would like to retain an attorney to demand legal status as a person"

and thus begins a new era of AGI personhood

Comment Re:this is an anti-science uninformed opinion piec (Score 1) 183

i think there has been some misunderstanding of my position here, i do NOT think we should be "full steam ahead" on this, and i do NOT believe we or anyone "can control ASI", i also think anyone who makes that claim is spouting nonsense. I'm not peddling it. I WANT a "total ban". So, that.

but also, you say "There is no objective basis for any of the doomsday outcomes", and that's false.
you state that you refuse to read the arguments, then claim there are no arguments. you see the fallacy?

"the actual source of that problem is the underlying knowledge and industrial base that makes the production of AI genies possible"
this i completely agree with. the fact that anyone with a server rack's worth of GPUs can literally create their own AIs really scares me. the knowledge of how to do it is already out there. i fear that even if every government in the world agrees to ban all the tech bro companies from further research, and to audit them for enforcement, we still can't stop the rich dudes with a server rack of GPUs from creating smarter and smarter AIs, which create better AIs, which will lead to AGIs, which will create better AGIs, which will create ASIs, and then... well, read "if anyone builds it, everybody dies"

"Promulgation of absurd ... solutions involving trusting large corporations" umm, i'm not saying that. are YOU saying that? who's saying that? not me. not sure why you think i think that? no these large corps need to be told to stop, that's what i think. they shouldn't be in charge of the AIs.

"The outrageous persistent refusal of doomers to ever just fucking say NO" ummm, i think you're reacting to something from your own experience, not against me. i'm saying we SHOULD "just say no" and ban the shit now.

so... ummm... was there a misunderstanding here?

i still think it is worth reading the arguments that we should definitely stop AI research, which i linked above

Comment Re:this is an anti-science uninformed opinion piec (Score 1) 183

cherry-picking idiots from the signatory list and taking that as evidence that everyone but you is an idiot is a straw man, not .... a valid argument.

i know you know that there ARE credentialed scientists on that list, and i know you know that that's what I was talking about. Changing the subject to hand-wave about someone on the list who doesn't meet your criteria for validity is a misrepresentation at best, and willful ignorance at worst.

I challenge you to read https://www.lesswrong.com/post... and tell me if you even UNDERSTAND any of those arguments?

Comment this is an anti-science uninformed opinion piece (Score 1) 183

the guy obviously has not read https://pauseai.info/
nor has he read https://ifanyonebuildsit.com/
nor (most critically) read: https://www.lesswrong.com/post...
nor: https://wiki.aiimpacts.org/dok...

he clearly hasn't seen this: https://superintelligence-stat... signed by hundreds of EXPERTS IN THE FIELD

seriously? a political issue? no, this is a science issue. these scientists, who make it THEIR LIFE'S WORK to understand this way WAY better than this O'Sullivan hack, have a CONSENSUS that AGI is inevitable and carries a non-negligible chance of causing an extinction-level event.

you don't leave that kind of debate to facebook-level argument, you let the scientists who KNOW WHAT THEY'RE TALKING ABOUT take over.

Comment meross has a serverless solution (Score 2) 126

this thing:
https://www.amazon.com/dp/B084...
comes with the option of just buying another literal remote button for your door. newer doors have "encrypted" remote buttons, so you can't just wire in a new contact button like the one on the wall. instead, you "wire in" a new, encrypted remote button. the button sits inside the garage as a functional remote, hard-wired to the meross homekit device. when you use your phone, the device sends the "press me" signal to the remote, and the remote sends the encrypted signal to the opener.

works like a charm, available now, no hassle.

Comment robot parking lot: no need for lights, sounds? (Score 4, Insightful) 64

if all the cars in the lot are robotaxis, then why not just have them turn off the lights (they use lidar, after all; no lights needed), and also turn off the "back up beep beep beep" audio? no need for that when no human drivers are around.

there, problem solved.

i'm sure someone will step in and correct my misunderstanding, here. i AM pretty sure i must be missing something

Comment AI makes this problem moot (Score 1) 99

a good general software engineer with an agentic-ish AI IDE like Cursor will have no problem fixing COBOL. I don't know PHP, or WordPress, or SQL, or how to mitigate a DDoS, or how to write an MCP plugin for Qt Creator, or how to use Cloudflare to do the five separate things I need it to do, or any of the other DOZEN things that I accomplished last week, but with Claude and several "make sure it passes the pinning and unit tests" prompts, I got it all working.
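For readers unfamiliar with the term: a "pinning" (characterization) test records what legacy code currently does, so the AI can't silently change behavior while "fixing" it. A hypothetical sketch:

```python
def legacy_discount(total):
    """Stand-in for an inherited function nobody fully understands."""
    return round(total * 0.9, 2) if total > 100 else total

# Pin the current behavior -- values recorded from real runs, not from a spec.
PINNED = {50: 50, 100: 100, 101: 90.9, 200: 180.0}

for amount, expected in PINNED.items():
    assert legacy_discount(amount) == expected, amount
print("pinned behavior unchanged")
```

Only after the pins pass do you let the AI refactor; if a pin breaks, either the change is a bug or you consciously re-record the pin.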
