Comment Re:GIGO... (Score 1) 78

How about an interview where one is asked to code something to fulfill a task, and who cares how it is done, but the interviewers want to see the process? Or is the addiction to cheap body-shop contracting firms too great to ask for this?

Did you read past the headline? (Of course not—this is Slashdot.)

Because what you’re describing—an interview where the candidate solves a real task and the interviewers observe how they solve it—is exactly what Canva has implemented. They retired the “invert a binary tree on the whiteboard and then find the RREF of this matrix” charade. Now they present open-ended, ambiguous problems that can’t be solved by copy-pasting a prompt into ChatGPT. The goal isn’t to see who can type the fastest—it’s to see who can guide an AI assistant with judgment, debug flawed suggestions, ask clarifying questions, and take ownership of the final product. In other words: process over parroting. You cite “GIGO” like it’s some kind of mic drop, but Canva bakes that reality into the test. They’re watching to see who can catch and correct garbage when it inevitably comes out.

This isn’t addiction to outsourced coding contractor mills. It’s the opposite: a deliberate filter for people who can think critically while using modern tools. That’s called engineering.

Honestly, your tone here suggests your job has been outsourced to a code mill in the past, and now your current job is going to be outsourced to an LLM. You are in denial, and you aren't hiding it very well.

Comment Canva gets it (Score 1) 78

I’m glad to see Canva flipping the script on technical interviews. If half your engineering team already uses LLMs like Copilot or Claude daily, why pretend otherwise when hiring? Their new approach—evaluating how candidates collaborate with AI, not just how they code in isolation—isn’t just a gimmick. It’s a course correction.

This is what it looks like when a company stops treating AI like a cheat code and starts treating it like a tool. I remember the first time I invoked EMACS syntax mode for a C homework assignment in an undergrad CS course more than three decades ago -- it felt like legal cheating. We’ve already lived through this movie before. Syntax highlighting begat IDEs, which in turn begat LSPs. The tools got smarter, the engineers got faster—and the sky didn’t fall. Shit -- I remember asking *slashdot* how to fix weird-ass corner cases when I was a sysadmin at a large defense contractor twenty years ago. The tools change, but the idea of collaboration is a constant. Instead of the collaboration being with a community of professional peers at stack overflow or reddit or quora or (yes, even) slashdot, it's with an LLM that happens to speak in natural language and write code. That doesn’t make it a threat. It makes it a force multiplier.

Canva isn’t throwing out fundamentals. They’re just testing them in the context candidates will actually work in: messy, ambiguous problems that demand engineering judgment plus tool fluency. You don’t evaluate a carpenter by taking away the power tools. Take the tools away and you all but guarantee finding the kind of carpenter you'd want to be shipwrecked with on a desert island—but not the one you'd trust to finish the C-suite in a Dubai high-rise. You don't want somebody who can invent the wheel from scratch; you want somebody who can bolt it to a MotoGP Ducati and win.

And that’s the deeper shift here. AI isn’t the rival. It’s the collaborator. If you want to know what kind of engineer someone is, ask: can they use these tools without outsourcing their thinking? Can they debug what the AI gets wrong? Can they guide the assistant with clarity and purpose? Those are the real questions now.

I’ve said it before: the future isn’t humans versus AI. It’s humans with AI versus humans without it. Canva seems to understand that. Good on them for hiring accordingly.

Comment Dangling A records are a known problem (Score 1) 17

Former sysadmin here. This is primarily a case of DNS neglect intersecting with IPv4 scarcity and cloud IP recycling. When there are only so many IP addresses and every marketing team on Earth wants a vanity subdomain for a two-week campaign, something’s gonna give. The problem is broken DNS hygiene and lax subdomain lifecycle management, so the solution is cultural and procedural.

The root cause is pretty straightforward: when an A record—or a CNAME chain—still points to an IPv4 address that’s been reassigned or left unmanaged, it becomes a live wire for abuse. Thanks to IPv4 exhaustion, cloud providers like AWS aggressively recycle IPs—there are no more unclaimed /8 blocks, so yesterday’s marketing splash page might resolve today to an EC2 instance spun up in a totally different context. Sometimes, scammers stumble across these abandoned links by accident—inherit the IP, see inbound traffic, and realize they’ve been handed a subdomain on a silver platter. But it isn't all luck -- subdomain takeover is a known, well-documented exploit, and there are open-source scanners like Subjack, plus reference lists like Can I Take Over XYZ, built explicitly to map exactly this attack surface. TBH, this problem has been sitting there in plain sight since the day the last block of unassigned IPv4 got hoovered up by the cloud hosting gold rush. If a company is not auditing its DNS and tracking where its subdomains point, then sooner or later someone else will, and they'll exploit any dangling A records they can surface.

Subdomains get spun up for hackathons, campaigns, and one-off projects—then left to rot. In a sane world, they’d be registered in DNS monitoring tools with a lifecycle process wrapped around them. But here in reality? Marketing pings IT for cats.events.company.com, gets their landing page, and moves on. Meanwhile, company.com's infosec team forgets to scrub the zone file once the confetti settles. Fast-forward a year: some SEO sweatshop joker lights up a container in Singapore, gets handed that recycled IP, and suddenly company.com's abandoned cats subdomain is serving AI-generated slop about lash salons and anatomically correct furry cosplay. And if a wildcard cert still covers that hostname? The slop gets served over HTTPS, under company.com's brand. That’s not on the scammer. That’s on company.com's sysadmins. They shouldn't act shocked when cats.events.company.com becomes ground zero for “Top 10 Yiff Sims of 2025.”

The fix is pretty straightforward too, though. You don’t need AI to fix this—just a sysadmin with a functioning neuron and some shell-fu. Grep the zone files, ping every A record, curl for a 200, and awk out anything that doesn’t resolve or returns garbage. Toss in a reverse DNS check at the same time -- if it points to AWS, GCP, or anything outside company.com's ASN, flag it. Bonus points for a diff against IPAM or lease records. This problem is squarely in the fixable category. It isn’t rocket science; it's basic fucking net hygiene. Do it, or be prepared for an uncomfortable meeting in the CEO's office, trying to explain how company.com's brand got associated with furry porn.
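For the morbidly curious, here's roughly what that audit loop looks like in Python instead of a shell one-liner. It's a minimal sketch, not a compliance tool: it assumes you've already dumped your zone into a flat file of hostnames, and the subdomains.txt filename and the cloud PTR suffixes below are placeholders I made up, not anything sacred.

```python
#!/usr/bin/env python3
"""Rough dangling-subdomain audit: resolve, reverse-resolve, and probe each name."""
import socket
import urllib.error
import urllib.request

# PTR suffixes that hint the IP now lives in someone else's cloud pool (placeholder list).
CLOUD_PTR_HINTS = ("amazonaws.com", "googleusercontent.com", "cloudapp.azure.com")

def check(hostname: str) -> str:
    # 1. Does the A record still resolve at all?
    try:
        ip = socket.gethostbyname(hostname)
    except socket.gaierror:
        return "NXDOMAIN (stale record -- delete it)"

    # 2. Reverse DNS: does the IP point back into a cloud provider's recycled pool?
    try:
        ptr = socket.gethostbyaddr(ip)[0]
    except (socket.herror, socket.gaierror):
        ptr = "no PTR"
    recycled = any(ptr.endswith(hint) for hint in CLOUD_PTR_HINTS)

    # 3. Does anything answer on HTTP, and with what status?
    try:
        with urllib.request.urlopen(f"http://{hostname}/", timeout=5) as resp:
            status = resp.status
    except urllib.error.HTTPError as err:
        status = err.code        # something answered, just not with a 200
    except OSError:
        status = None            # nothing listening, refused, or timed out

    flags = []
    if recycled:
        flags.append(f"PTR={ptr} (recycled cloud IP?)")
    if status is None:
        flags.append("no HTTP response")
    elif status != 200:
        flags.append(f"HTTP {status}")
    return f"{ip} " + ("; ".join(flags) if flags else "looks OK")

if __name__ == "__main__":
    with open("subdomains.txt") as fh:
        for name in (line.strip() for line in fh if line.strip()):
            print(f"{name}: {check(name)}")
```

Cron it weekly, diff the output against last week's run, and anything that flips from "looks OK" to flagged gets a ticket before some SEO joker finds it first.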

Comment Midjourney lawsuit - both necessary and inevitable (Score 1) 87

It was only a matter of time. With Disney and NBCUniversal now suing Midjourney for training on their IP and outputting near-replicas of characters like Aladdin and the Minions, we’ve officially entered the next phase of the AI copyright wars. This isn't a fan-fiction dispute or a YouTube takedown. This is major-league litigation—backed by companies who understand copyright law better than anyone because they’ve weaponized it for decades.

And you know what? On this point, I’m with them.

I’ve argued before—and still believe—that creators, whether they’re indie artists or billion-dollar studios, deserve compensation when their work is harvested as fuel for someone else’s generative model. This applies to Midjourney, just as it does to Meta and OpenAI. Remember, Meta’s Llama has also come under fire for training on copyrighted books. The “but it was on the internet” defense doesn’t hold up when your model learns to replicate the style, structure, and soul of other people’s work. You’re not building from scratch—you’re remixing without consent or credit.

Yes, copyright law needs modernization. Yes, fair use is important. But let’s not pretend this is fair use. If you train a model on The Lion King, then ask it to draw a lion with big eyes in a sunset and get something that’s 95% Simba, you’ve crossed a line. That’s not transformative—that’s substitution.

To be clear: Hollywood wordsmiths are already using Midjourney, Sora, and open-source Hugging Face models to generate visuals for the shows they’re hired to write. When it comes to generating locations, atmospheres, and character sketches, these tools are astonishingly good. Being able to see the scene, or generate a beat sheet for the emotional arc they are trying to capture, is beyond useful. I think generative models have real power to augment human creativity.

But augmentation doesn’t mean expropriation. And that power doesn’t excuse how these models were built. If your model can conjure a close approximation of a Disney character, that’s not fair use—it’s mimicry at scale. If it can generate Minions on demand because it was trained on millions of frames of Minions without paying Universal, you’re not in a legal gray zone. You’re in infringement territory.

Studios suing AI platforms doesn’t automatically make the studios the good guys. (Disney crying foul over creative overreach is rich.) But that doesn’t make them wrong. If you're going to claim your model learns like a human, then it’s time to follow the human rules: using someone else’s work without permission or payment isn’t innovation—it’s theft.

Comment Re:Weird (Score 1) 109

It's so weird that so many people are ignoring the massive accuracy issues of LLMs and have this misguided idea that you can just trust the output of a computer because... well, it's a computer.

What’s actually weird is pretending anyone in AI development is saying “just trust the computer.” Nobody is advocating blind trust—we’re advocating tool use. You know, like how compilers don’t write perfect code, but we still use them. Or how your IDE doesn’t understand your architecture, but it still catches your syntax errors.

Even weirder? Watching people whose jobs are 40% boilerplate and 60% Googling suddenly develop deep philosophical concerns about epistemology. Everyone who isn’t rage-posting their insecurity over imminent obsolescence is treating LLMs like any other fallible-but-useful tool—except this one just happens to be better at cranking out working code than half of GitHub.

You're not warning us about trust. You're panicking because the tool is starting to do your job—and it doesn’t need caffeine, compliments, or a pull request approval.

It's literally using random numbers in its text generation algorithm.

Translation: randomness isn’t the problem. It’s your discomfort with why it still works anyway that has your knickers in a twist.

That sentence is doing a lot of work to sound like it understands probabilistic modeling. Spoiler: it doesn’t. Claiming LLMs are invalid because they use randomness is like claiming Monte Carlo methods in physics or finance are junk science. Randomness isn’t failure—it’s how we explore probability spaces, discover novel solutions, and generate diverse, coherent outputs.

If you actually understood beam search, temperature, or top-k sampling, you’d know “random” here means controlled variation, not “magic 8-ball with delusions of grammar.” Controlled randomness is what lets LLMs generate plausible alternatives—something you’d know if you’d ever tuned a sampler instead of just rage-posting from the shallow end of the AI pool.
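If you've never actually poked at a sampler, here's a toy sketch in Python of what that controlled variation means. The vocabulary and logits are made up purely for illustration; this isn't any particular model's decoding code, just the shape of temperature plus top-k.

```python
import numpy as np

def sample_next_token(logits: np.ndarray, temperature: float = 0.8, top_k: int = 5) -> int:
    """Toy sampler: temperature rescales confidence, top-k prunes the long tail."""
    scaled = logits / temperature              # <1 sharpens, >1 flattens the distribution
    top_idx = np.argsort(scaled)[-top_k:]      # keep only the k most likely tokens
    probs = np.exp(scaled[top_idx] - scaled[top_idx].max())
    probs /= probs.sum()                       # softmax over the survivors
    # "Random" here means: draw from this pruned, reweighted distribution.
    return int(np.random.choice(top_idx, p=probs))

# Made-up vocabulary and logits, just to show the mechanics.
vocab = ["the", "cat", "sat", "on", "mat", "quantum", "turnip", "zebra"]
fake_logits = np.array([2.5, 2.3, 1.9, 1.7, 1.6, -1.0, -2.0, -3.0])

for _ in range(5):
    print(vocab[sample_next_token(fake_logits)])
```

Run that and you get plausible variation among the high-probability tokens and exactly zero "zebra", which is the whole point: the randomness is bounded by the model's learned distribution, not a magic 8-ball.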

If your job is threatened by a model that uses weighted randomness, I have bad news: your stackoverflow-to-clipboard ratio was higher than you thought. Time to read the GPT-text on the wall and start plotting your next career pivot.

Why not just use astrology?

Because astrology never passed the bar exam, debugged a microservice, or explained value vs. reference semantics in C without making it worse. LLMs have (and their deep-learning cousins beat a world-class Go champion while they were at it). Hell, you probably leaned on one the last time your regex failed and you didn’t want to ask in the group chat.

But sure—let’s pretend astrology is the same as a transformer architecture trained on hundreds of billions of tokens and fine-tuned across dozens of domains to produce results you can’t replicate without five open tabs and a panic attack.

Want to know the real difference? Nobody ever replaced a software engineer because they wanted a Capricorn instead of an Aquarius.

You’re not mad because LLMs are inaccurate. You’re mad because they’re accurate enough, cheap enough, and scalable enough for management to finally put a price tag on your replaceability.

The AI doomsday clock for coders is ticking. And you just realized it's set to UTC.

Comment MAGA FDA: Deregulation Disguised as Innovation (Score 1) 109

The FDA just unveiled a sweeping set of policy shifts—faster drug approvals, tighter industry "partnerships," AI-assisted review pipelines, and a renewed focus on processed food additives. On the surface, it reads like a long-overdue modernization push. But dig a little, and it starts to reek of MAGA. When an administration this allergic to science starts promising "gold-standard science and common sense," what they usually mean is less science, more business. Replacing randomized trials with curated real-world data (who is doing the curation, I wonder... surely not Big Pharma?), cutting pre-market testing, and shaving safety review timelines down to national public health emergency levels? That’s not reform; that’s regulatory cosplay. And when these policy proposals are coming from a MAGA-approved physician who is on the public record denouncing school closures during the COVID-19 pandemic, you have to be... skeptical. Makary was confirmed on a near-party-line vote, with only three Democratic senators crossing over. This isn't public health policy -- it's just more MAGA dumb-fuckery, hiding in plain sight.

That said, I’m not completely dismissive—particularly when it comes to the FDA’s use of AI. If there’s a defensible, low-risk entry point for generative AI in public health, it’s exactly where the agency is putting it: first-pass reviews of half-million-page submissions, table generation, and low-level document triage. Nobody’s pretending this replaces human judgment (yet), and unlike autonomous vehicles or predictive policing, the harm of a hallucinated table of contents is... manageable.

Still, this policy bundle isn’t just about AI. It’s about redefining what constitutes a sufficient standard of proof, under the guise of efficiency. And in that broader context, even the good ideas—like using causal inference from big datasets to monitor post-market outcomes—risk being co-opted as excuses to approve products faster and cheaper, not better. If the food dye bans and ultraprocessed food warnings survive this policy wave, great. But I wouldn’t count on it. The rest feels like an industry wishlist endorsed by MAGA and fed to their pet FDA commissioner.

Comment Re:It makes sense. (Score 1) 71

Complex puzzles require deep reasoning.

True in spirit, but misleading in implication. "Deep reasoning" isn’t synonymous with explicit, stepwise logic. Much of human problem-solving relies on heuristics, pattern recognition, and compressed experience. We often simulate solutions rather than derive them. The complexity of a puzzle doesn’t necessarily demand conscious logic—it demands a good internal model that can make the right inferences efficiently, which is a broader and deeper capability than just reasoning.

As humans, we are programmed to use our brains and multi-paradigm experience to quickly trim down the decision tree of obviously-wrong solutions.

That’s not how cognition works. There is a wall a few hundred milliseconds wide between reality and our perception of it. That’s roughly how long it takes for a photon striking the retina, or for vibrations hitting the tympanic membrane, to be transduced into neural signals, processed through the thalamus and primary sensory cortices, and integrated into our perception of the world. This gap is real, and not controversial, and any theory of cognition has to account for it. Yours does not.

Our brains do not perceive reality in real time. Instead, they have a model of reality, and then update that model with predictions based on the current model. Our brains don’t wait for options to trim; they’re constantly generating predictions about what we’ll see, feel, and do next. For example, when you reach for your coffee cup, you don’t scan a decision tree of possible cups or grasp points. Your brain already predicts where the cup is, how it feels, and how your hand will move. Action flows from a continuous simulation, not from post hoc evaluation. You “just do it” because the model is already in place.

As we go down the complexity depth, we prune more silly solutions and just refine the end outcome; we become better at homing in on the solution.

This is a neat narrative, but not supported by cognitive science. Humans actually get worse at solving deeply complex problems unless they can offload structure to external tools (math, diagrams, language). We aren’t exhaustive tree-searchers—we’re satisficers, pattern matchers, and model-builders. Pruning silly options, as you put it, makes sense after you've internalized the structure of a domain, not as a general-purpose heuristic for all complex tasks.

AI models are different in this regard. They are just statistical probability machines.

This assertion is true, in a reductionist sense, and like all reductionist arguments against AGI, it misses out on emergence. Modern language models are trained as probabilistic predictors, yes—but what emerges from that process are latent internal representations that encode abstract relationships, causal inference patterns, and even planning behaviors. Saying they’re just statistical is like saying the brain is just a pile of neurons firing. True in a reductionist sense, but profoundly uninformative.

The greater the complexity depth, the more variables they need to consider in the equation, and without actual intelligence and perception of the problem, they are fundamentally unable to accurately and efficiently discriminate against obviously wrong solutions;

Hmmm. Did you read the paper? The paper shows that even advanced LLMs fail at deeper compositional problems not because they can’t process more variables, but because their reasoning effort actually decreases as complexity increases. That suggests a misalignment between internal representations and inference strategies—not a hard ceiling on intelligence. A good analogy is a student taking a math test. On familiar problems, they work step-by-step and usually get it right. But when the problem is longer or phrased differently—something novel—they sometimes give less effort, not more. They rush, guess, or bail out, even with time left on the clock. Not because they’re incapable, but because their usual strategy doesn’t apply, and they haven’t internalized how to adapt. The paper shows that something similar is emerging in LLM behavior. When tasks get harder, they don’t dig deeper—they think less. It’s not a failure of compute; it’s a failure of alignment between what the model knows and how it applies that knowledge under strain.

paralysed and requiring more and more computational power with no guarantee of a good outcome.

Paralysis implies indecision; these models don’t dither—they confidently return incorrect answers, just like the math student above who circles “C” and moves on. That’s arguably worse than hesitation, because it hides failure behind fluency. And yes, deeper problems demand more compute—but that’s just as true for humans (we’re just better at concealing when we’re lost). Importantly, scaling does yield gains—until the model’s internal representations can no longer scaffold the task. That doesn’t mean AGI is doomed. It means we need architectures that can simulate structure, not just generate sequences—and we need to measure emergent behaviors, not just final answers.

Comment Re:I have a sneaking suspicion... (Score 1) 71

This is what I used to think, but I changed my mind. I think the missing ingredient is evolutionary pressure. That is, complexity alone is not sufficient; you have to have selective pressures for self-organization to manifest itself.

You're absolutely right that evolutionary pressure played a central role in shaping human cognition—but it's not clear that such pressures are either necessary or relevant for AGI. Evolution optimizes across generations via death, mutation, and selection under scarcity. AGI, by contrast, is engineered and optimized across iterations via gradient descent, reward shaping, and architectural tuning—deliberately, and often under conditions of abundance (data, compute, replication). The two systems can converge on superficially similar behaviors (e.g., planning, abstraction, communication), but they don’t require the same mechanisms to do so. Emergent properties like compositional reasoning or tool use may appear in AGI via complexity and feedback, but not through survival-based selective pressure. Selection pressure was sufficient to create minds like ours, but it may not be necessary for other kinds of minds to emerge. I agree that complexity, constraint, and feedback may suffice to induce cognitive structure, but that is just one (very old, very slow) way of getting there. I think Darwin, Dawkins, and Dennett might remind us that while evolution explains a lot, it isn’t the only game in town—or the only way to get minds. Honestly, I think treating evolutionary pressure as a prerequisite risks turning AI research into a dogma—or worse, a theology. Just look at how Ptolemaic geocentrism and Aristotelian physics dominated for centuries, until Galileo had the temerity to point a telescope at Jupiter.

Comment Re:more garbage comments from non-experts (Score 1) 42

Your comment reeks of insecurity. NumPy exists precisely because Guido van Rossum—Python’s BDFL—understood over 30 years ago that extensibility was the future of programming. The same way Bjarne Stroustrup built C++ by extending the brilliance of Kernighan and Ritchie’s C, Python's power comes from being a foundation for libraries like NumPy, pandas, and TensorFlow. Your issue isn’t with a data scientist’s wording—it's with your own inability to handle a world that doesn’t revolve around your preferred definition of "real programming." Teaching kids Python with modern libraries isn't “sloppy”—it's practical, current, and empowering. Pretending otherwise isn’t insight—it’s fear dressed up as authority.

This is false. Python has 3rd party libraries that handle numbers well. Those libraries are not Python and learning Python does not mean learning those libraries.

You’re trying to draw a hard line between “Python” and “Python libraries” as if the language and its ecosystem exist in isolation. They don’t. Python’s real-world value is its ecosystem. NumPy, pandas, scikit-learn, TensorFlow—these aren’t exotic add-ons. They are Python in applied settings. Suggesting otherwise is like claiming you can teach JavaScript while ignoring the DOM or Node.js. This idea that “those libraries are not Python” is nonsense. They’re written for Python, run within Python’s syntax and semantics, and require understanding Python to use effectively. Your distinction isn’t insightful—it’s pedantic, and worse, it’s educationally useless. This is exactly the kind of claim made by overconfident autodidacts who skipped Chapter 5 in K&R because pointers were hard, then spent the next decade compensating by lashing out at people with actual credentials. You’re not defending programming rigor—you’re defending your turf. And it shows.
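To make it concrete, here's a trivial, made-up snippet. Same interpreter, same syntax, same semantics; the only difference between the "pure Python" half and the "not Python, apparently" half is one import statement.

```python
import numpy as np

# Plain stdlib Python: list comprehension plus builtin sum().
temps_f = [68.0, 72.5, 75.1, 80.3]
temps_c = [(t - 32) * 5 / 9 for t in temps_f]
print(sum(temps_c) / len(temps_c))

# The "not Python" version: identical language, identical semantics, vectorized.
temps = np.array(temps_f)
print(((temps - 32) * 5 / 9).mean())
```

Same arithmetic, same answer; the second version just happens to scale to a hundred million rows without breaking a sweat. Pretending the kid who writes the second one isn't learning Python is pure gatekeeping.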

if this "Walmart data scientist" cannot get language right, why are we interested in his comments regarding educating of children?

This data scientist isn’t writing a compiler; he’s explaining complex abstract concepts to a curious kid, not delivering a paper to an ACM SIG. Absolute technical precision is not the gold standard for outreach communication—clarity and accessibility are. And frankly, the speaker’s points were directionally accurate: Python is widely used for numerical computing, and its libraries are the reason it handles large datasets and databases well. What's more, the scare quotes around “Walmart data scientist” tell me everything I need to know about you as a person and a coder. You aren't critiquing educational rigor; you're having a status-anxiety tantrum. You see a world in which people from non-traditional coding backgrounds (data scientists, educators, even students!) are using AI tools and high-level libraries to do work that used to be the domain of “real programmers” like you. And it terrifies you.

Comment Re:Confused? (Score 4, Insightful) 79

This is the modern equivalent of "You can hear a cellphone conversation from down the street so why can't the NSA collect all conversations?" They put pride stickers and BLM stickers on the boot so that people like you will lick it extra clean.

Oh, FFS. Yet another right-wing populist Bakunin strikes again—lobbing cultural Molotovs into every conversation, not because he cares where they land, but because fire draws a crowd. Take your anarchist cosplay elsewhere, troll.

You’re not defending privacy. You’re not defending due process. You’re performing a theatrical sneer for the cheap seats, dressing up contempt as insight. You saw someone ask a basic constitutional question about aerial surveillance and answered with a parody of yourself: NSA! Bootlickers! Pride flags on jackboots! It’s all so conveniently interchangeable, isn’t it?

Here’s what actual civil libertarians care about: the means of government intrusion, the legal thresholds for surveillance, and the technological scaling of state power. The Sonoma case is about warrantless drone surveillance of homes at low altitude, using zoom lenses capable of peering through windows and fences. That’s qualitatively different than taking pictures from the street. It's also exactly the kind of mission creep that the California Constitution—yes, we have one—explicitly guards against.

But none of that matters to you, because you're not here to argue law or liberty. You're here to moralize with napalm, to call everyone who doesn’t speak in your register a dupe or a stooge. Ironically, that's the same authoritarian impulse you claim to hate—just with better memes.

Try making a civil liberties argument next time. Or don’t. Just don’t pretend your firestarter cosplay is a substitute for principle.

Comment Re:All in on what? (Score 2) 112

It’s fair to be skeptical. But for most Britons, it’s not just about the electrons showing up when they put the kettle on—it’s about whether they can trust the system delivering them to make sense in ten years, not just tomorrow’s news cycle.

France won't start up EPR2 till 2038, and they are unlikely to have the capacity to do the same in Britain.

Fair. The EPR2 schedule is long, conservative, and shaped by France's need to rebuild public trust after Flamanville. But the UK's problem isn't whether EDF can replicate its French EPR2 builds in Britain—it's that Miliband and Starmer are still letting the choice default to “do nothing” or “wait for France to figure it out.” That’s not strategy. That’s paralysis. If they’re serious about energy security and net zero, then the upcoming spending review is where words turn into commitments—or get exposed as theater.

SMRs are toys.

This is where I have to push back. SMRs aren't toys—they’re a response to the Brexit-amplified financial and logistical train-wreck of Hinkley C. Are most SMRs still theoretical? Yes. But so was commercial nuclear, once. What makes SMRs viable isn’t the tech alone—it’s that they can be manufactured, modularized, and deployed incrementally, instead of betting the entire grid on a single 15-year gamble. Call them immature if you want, but dismissing them as toys is how you end up a decade from now with no new capacity and no plan B.

UK will dedicate some paltry funds to SMRs for show and wait some more till EPR2 can start construction.

If that’s what happens, then yes—it will be failure by design. But that’s not inevitable. The June spending review is the first real moment since Brexit where nuclear funding could get locked in with teeth. Going “all in” doesn’t mean throwing cash at every vendor with a reactor doodle. It means committing to two viable designs, standing up a UK-centered regulatory and fabrication ecosystem, and getting the first unit into the ground before the lights go out in 2030.

There is nothing good to go all in on at the moment ...

There’s nothing perfect. That’s true. But waiting for perfect is what got them Hinkley C—six years late, triple the budget, and still not online. “All in” doesn’t mean betting everything on one reactor. It means finally acting like a country that takes energy security seriously—even if the options on the table are flawed, fragmented, or unfinished.

Because doing nothing? That’s been the plan for 15 years. And look where it got them.

Comment Re:Will Net Zero Strategy in Limbo? (Score 3, Insightful) 112

I share some of your skepticism, but...

Took them long enough.

No argument there. The UK’s nuclear program has moved at a glacial pace, and it’s fair to call out both parties for years of dithering. But delay doesn’t equal failure—it magnifies the cost of inaction. That’s why urgency now isn’t ideological posturing—it’s belated damage control.

Honestly, at this rate, they’ll probably end up delaying—or possibly quietly scrapping—the whole net zero push.

Possible, yes. But increasingly unlikely. Net zero is now hardwired into multiple levels of UK law, finance, and international credibility. Backing out would require not just economic cowardice, but diplomatic self-harm. Even Sunak’s tepid rollback faced backlash—not just from climate activists, but from investors and insurers who’ve already priced in the transition.

They need to sort out their own economic mess first before chasing big ideological targets.

Here’s where I push back. Net zero isn’t a vanity project—it’s an industrial strategy. Countries that delay now will be importing energy tech from those that didn’t. The “mess” isn’t just fiscal—it’s structural. And part of sorting it out means building resilient energy systems that don’t collapse when gas prices spike or geopolitical tensions flare.

Fix the balance sheet, then talk ambition.

I get the impulse. But if we wait for the fiscal books to be perfect, we’ll be building desalination plants in Kent while bidding against Saudi Arabia for solar panels. The net zero transition is the balance sheet fix—done right, it creates jobs, modernizes infrastructure, and reduces volatility. The real risk is treating the future as a luxury.

Comment Re:It's just AI (Score 3, Insightful) 112

This wasn’t an argument. It was a vibe dump. And the vibe is “I skim headlines and call it insight.” Threads like this deserve better.

Everybody's going to be building nuclear power plants everywhere to power AI.

No, they're not. But thank you for opening with a reheated Black Mirror premise masquerading as energy policy analysis. Nuclear buildout is being driven by retiring plants, climate commitments, and energy security, not because Sam Altman has a GPU fetish. The UK isn't deploying fission reactors like WiFi routers—it’s racing to avoid a 2030 capacity collapse. AI is incidental. The crisis is structural.

AI also guzzles water. Yes there are ways to avoid that but they are pricey and AI is already unprofitable.

Please pick a lane. First it’s nuclear reactors choking rivers dry, now it’s datacenters. The reality: every major industrial process uses water, and datacenters are trivial compared to agriculture, mining, and fossil fuel cooling. Also, calling AI “already unprofitable” is laughable—it’s being subsidized, like every transformative technology. By your logic, we should’ve shut down the internet in 1999 because Pets.com wasn’t cashflow-positive.

So we're going to have even worse water shortages and we're going to have a fuck ton of poorly maintained nuclear power plants all over our cities

This is a techno-dystopian Mad Lib, not an argument. Nuclear plants in cities? Really? Please show us on the map where the UK is dropping SMRs in the middle of Manchester. Poorly maintained? That’s a projection—the UK’s regulatory bottlenecks are precisely because they over-index on risk aversion. Maintenance isn’t the problem. Your grasp of siting policy is.

...so that we can have a shitload of white collar workers replaced by chatbots.

Ah, there it is: the Luddite crescendo, where vague resentment gets stapled to energy infrastructure policy. If you think the goal of modern civilization is to keep you employed doing inbox triage instead of automating boilerplate, I have bad news. Progress doesn't wait for your comfort zone. And AI isn’t stealing jobs because of power plants—it’s doing it because people like you are easier to replace than you think.

I am pretty sure this is why the Fermi paradox happens. A species this dumb can't possibly survive long enough to make it to the stars. Carl Sagan is rolling in his grave...

You don’t get to cite Sagan while ignoring the actual science. Sagan supported nuclear power as part of climate mitigation. He also understood nuance—something your Reddit-flavored nihilism sorely lacks. The Fermi Paradox isn’t about species being “dumb.” It’s about civilizations that become too noisy, too fragile, or too late to matter. Exactly the kind of future we court when trolls hijack serious policy debates with cosmic doomposting.

Comment Nuclearis Quondam, et Futurus si non fuisset Brexit (Score 1) 112

The UK’s nuclear energy program is in trouble—badly behind schedule, wildly over budget, and staring down a 2030 cliff when most of its existing reactors go offline. These are known problems: nuclear is slow, expensive, and politically fraught everywhere. But Britain’s real failure wasn’t in struggling with those challenges. It was in choosing to face them alone.

Brexit didn’t invent the flaws in Hinkley Point C or the delays around Sizewell C. But it amplified them, then multiplied them, then institutionalized them. By leaving Euratom, the UK voluntarily discarded a functioning nuclear regulatory framework and forced itself to rebuild one from scratch. It fragmented the supply chains required for SMRs, restricted access to nuclear-grade talent, and added friction to every international collaboration. All in the name of sovereignty.

The irony? Nuclear energy is the ultimate international project. It can’t scale without trust, shared standards, and multilateral oversight. And yet, post-Brexit, the UK is now trying to bootstrap an SMR pipeline using bespoke licensing, bespoke supply chains, and bespoke financing—all while staring down a decade-long energy gap.

This didn’t have to happen. The challenges were real but tractable. What turned a difficult project into an incoherent one was the decision to decouple from systems explicitly designed to support it.

Brexit was sold as an assertion of national strength. But in sectors like nuclear—where success depends on cooperation, not slogans—it’s proving to be a self-inflicted wound.

Comment Embrace, Extend, Extinguish comes for USB-C (Score 1) 97

When Microsoft says it wants USB-C to 'just work,' what they mean is: just work with Microsoft’s drivers, certified hardware, and update pipeline—or else.

Microsoft’s latest pledge to “end USB-C confusion” via the Windows Hardware Compatibility Program (WHCP) sounds great—until you remember the company's long history of using standards enforcement as a means of channel control.

Let’s not pretend this is new territory. Microsoft has been running the “embrace, extend, extinguish” playbook since the browser wars.

Remember when Sun’s Java promised “write once, run anywhere”—a cross-platform utopia that threatened to make Windows just another runtime? Microsoft couldn’t let that stand. So they countered with ActiveX, a Windows-only Trojan horse built on top of COM, trading sandboxed safety for raw system access. It looked like empowerment but reeked of desperation. The security model? A joke. Why dig a moat if you’re going to hand intruders a drawbridge and a HOWTO for lowering it? ActiveX handed bad actors native-code execution with the user’s privileges (and back then, the user was usually an administrator). It wasn’t just a leaky abstraction—it was a siege ladder built into the walls of your own fortress.

And that was an early example. There is a long line of precedent. Remember when Internet Explorer 6 mangled web standards so badly that an entire generation of web developers had to write IE-specific hacks just to make their sites functional? Or, more recently, the way they’ve embraced Kubernetes and containerization? Redmond extended it through tooling it incubated or acquired (Dapr, and Helm by way of buying Deis), plus Azure-first integrations, and nudged the entire ecosystem subtly but surely toward their preferred abstractions. And while OpenDocument wasn’t outright killed, it never stood a chance once Microsoft pushed “Open XML” as a supposedly open but functionally proprietary Office standard. And Slack? Never had a chance. Forget fair competition—Teams came pre-installed, baked into Office, glued to Windows like all the other barnacles that the happy tourists up on the sun deck never see.

Not every attempt worked. Silverlight died a slow, quiet death. But most of these plays didn’t succeed because Microsoft offered the best technology—they succeeded because Microsoft controlled the channel.

And that’s exactly what this USB-C maneuver is about.

They’re not about to become a hardware company in any serious sense—Surface is a branding exercise, not a verticalization strategy. But they’re becoming extremely effective at controlling the hardware pipeline from the outside in.

Now, if you want to be WHCP-certified, you’ve got to use Microsoft’s USB driver stack. Your silicon has to be USB-IF certified. You no longer get to define your own USB-C port behavior—every port must follow Microsoft’s rules: data, charging, and display functionality on all ports, with full compliance with USB4 and Thunderbolt 3 support as defined and validated by Redmond. Updates? Delivered through Windows Update, thank you very much.

This isn’t about helping grandma plug in her monitor. It’s about eliminating ambiguity on Microsoft’s terms, forcing OEMs to become extensions of Microsoft QA, and making sure the entire PC ecosystem snaps ever tighter to the Windows mold.

Apple-lite isn’t just an insult anymore—it’s a strategic direction. Microsoft saw what Apple gained by locking down the channel, and they’ve decided they want in. Not by building better laptops. But by owning the rules for how yours are allowed to behave.

Cynical? Sure. But if you think this is really about solving USB-C confusion, I’ve got a Zune to sell you.
