Comment Re:Arizona? (Score 1) 41

I get why Arizona sounds good on a checklist; I live here, and I know what we have to offer. But I also know how to calculate the carrying capacity of our desert paradise, and we've exceeded it. A while ago, actually. This isn’t a SimCity tile—it’s a real state with real limits. What looks good in a slide deck often breaks when scaled to reality. And Arizona is already failing under the weight of its current expansion. You make the case that Arizona is an ideal location for a trillion-dollar semiconductor hub, citing business-friendly policies, a skilled workforce, favorable weather, infrastructure stability, and general livability. On paper, it checks all the boxes. In practice? Not so much.

1. A favorable business climate.

That’s a euphemism for “massive taxpayer-funded subsidies.” Favorable for whom, exactly? Because when the bill comes due for roads, water infrastructure, power upgrades, and housing booms, it’s the public that pays—not the shareholders cashing in on Son's vision quest.

2. Skilled workforce, specifically in semiconductors

Arizona has some talent, sure—but not Shenzhen levels. We’re already seeing labor shortfalls in existing fabs. Where are the tens of thousands of additional engineers and technicians supposed to come from? Are we assuming they’ll teleport in fully trained and acclimated to 110F summers?

3. Low humidity

True. Also true: a low water table and evaporating reservoirs. We haven't hit the level of urban water rationing that California is undergoing, at least not yet, but we are already mandating tiered reductions in agricultural water deliveries from the Colorado River. The writing is on the wall. Dry air is great for clean rooms. It’s less great when the state’s aquifers are collapsing under the weight of industrial drawdown. Dropping a water-hoovering, trillion-dollar fab into our already strained water supply is catastrophically shortsighted.

4. Stable geology. No earthquakes. No hurricanes. No blizzards.

Those are all more or less true, but they’re table stakes, not compelling reasons. No earthquakes? Mostly, but subsidence induced by aquifer depletion is happening right now in the Casa Grande valley, and it will only accelerate as more fabs inhale our already inadequately recharged aquifers. And not being hit by a hurricane is a low bar for a trillion-dollar investment. Meanwhile, the real threats—water, energy, housing—are already here. No blizzards is nice, but it is not a compelling pitch when you’re about to overload the power grid, drain the aquifers, and price locals out of the housing market.

5. Stable utilities. No power blackouts.

Not yet, anyway. But they will come. TSMC’s plant alone pulls enough juice for 300,000 homes. Add a trillion-dollar fab hub, and now you’re playing Jenga with the grid during summer AC season. Grid stability is a fragile thing, especially during a summer that stretches from May to October, when triple-digit temperatures are a daily occurrence. And, just to be clear, Arizona already imports roughly a third of its electricity annually, with spikes over fifty percent during that very long summer—hardly what you'd call grid independence. Pile a trillion-dollar fab hub onto that load, and you're not planning an industrial boom. You're planning brownouts subsidized by taxpayers, with the corporations causing them getting a free ride.
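For a rough sense of scale, here's a back-of-envelope sketch of that 300,000-homes figure. The household consumption and fab-count numbers are my own assumptions for illustration, not figures from any published plan:

```python
# Back-of-envelope sketch of the grid-load claim above.
# All numeric inputs are assumptions for illustration, not reported figures.

HOMES_EQUIV = 300_000        # homes' worth of power for one TSMC-class plant (claim above)
KWH_PER_HOME_YR = 12_000     # assumed average AC-heavy Arizona household usage
HOURS_PER_YEAR = 8_760

# Average continuous draw of one fab, in megawatts
fab_mw = HOMES_EQUIV * KWH_PER_HOME_YR / HOURS_PER_YEAR / 1_000
print(f"One fab: ~{fab_mw:.0f} MW continuous")

# A trillion-dollar hub implies several such fabs; 8 is an arbitrary illustration
hub_fabs = 8
hub_gw = fab_mw * hub_fabs / 1_000
print(f"Hypothetical {hub_fabs}-fab hub: ~{hub_gw:.1f} GW of new baseload")
```

Even under these conservative assumptions, that's gigawatts of new around-the-clock load landing on a grid that already imports power every summer.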

6. Located close to labor in Mexico for packaging. Direct flights to Asia.

So... we’re just going to drop a megacomplex into a desert because the flights are good? NAFTA lanes and proximity to packaging labor don’t magically conjure up water, housing, or energy capacity. That's logistics cosplay, not site planning.

7. Decent universities

Sure. Proud UofA grad, here. But let’s not pretend ASU and UofA are churning out tens of thousands of fab-ready engineers every year. The talent pipeline is real—but it’s not bottomless. And unless you start handing out EE degrees with every iced horchata at Eegee’s, you're still going to need to import a significant workforce.

8. An attractive location for new employees to relocate to. Arizona is a nice place to live with affordable housing.

Just...no. The idea that Phoenix is a low cost-of-living city hasn’t been true since before the turn of the century. Yes, it’s cheaper than California’s worst offenders, but that’s a low bar, not a compelling argument. According to sources like Payscale and RentCafe, Phoenix’s overall cost of living now sits above the national average, driven largely by housing: median home prices have more than doubled since 2015, jumping from around $200,000 to over $450,000, while wages haven’t kept pace. Utilities aren’t exactly a bargain either—keeping your AC running six months out of the year isn’t cheap, no matter how stable the grid is. The city’s growth has already priced out many of its own residents, and a trillion-dollar fab hub would only accelerate the squeeze. Phoenix is walking the same path San Jose did forty years ago, and we know how that ended: tech wealth on one end, housing collapse on the other. Drop a trillion-dollar project on top of that and “affordable” becomes a historical footnote. Just ask the people already being priced out of Mesa and Chandler.

Comment Shenzhen in Phoenix? A Trillion Dollar Mirage (Score 1) 41

So SoftBank’s CEO wants to turn Arizona into the next Shenzhen by building a trillion-dollar semiconductor manufacturing hub. Ambitious? Sure. But it’s also jaw-droppingly irresponsible.

Let’s start with the obvious: water. Semiconductor fabrication is brutally water-intensive. Even with 90% recycling—and that’s the optimistic number—a 24/7 fab still gulps down staggering amounts of water. Arizona is already running on empty. We’re seeing real consequences from Colorado River cuts. Groundwater tables are sinking. Land subsidence is real and increasingly common. And even if fabs recycle, they still need enormous volumes for initial fill-up and top-offs, which come straight from the same municipal sources serving homes and farms. In a state where water is already rationed, that’s not a sustainable tradeoff—it’s recklessness in a bucket.
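To see why even 90% recycling doesn't make the problem go away, here's a toy calculation. The gross-demand and per-person figures are hypothetical placeholders chosen for easy arithmetic, not reported numbers:

```python
# Why "90% recycling" still leaves a large draw: net makeup water scales
# with the 10% that is lost. gross_gpd and the per-person figure are
# hypothetical, for scale only.

gross_gpd = 10_000_000      # assumed gross process-water demand, gallons/day
recycle_rate = 0.90         # the optimistic number cited above

net_gpd = gross_gpd * (1 - recycle_rate)   # makeup water needed every day
people_equiv = net_gpd / 100               # at an assumed 100 gal/person/day

print(f"Net daily draw: {net_gpd:,.0f} gallons")
print(f"Roughly the daily household use of {people_equiv:,.0f} people")
```

The point: recycling multiplies the gross figure by 0.1, but the gross figure for a cluster of fabs is so large that 10% of it is still a small city's worth of water, every day, forever.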

Then there’s the workforce. It’s true Arizona has some semiconductor presence already, but scaling to a Shenzhen-level hub isn’t just copy-pasting a few fabs. You’d need a dramatic ramp-up in highly skilled engineers and technicians—people those existing fabs are currently struggling to recruit and retain. Importing talent from out of state isn’t easy when housing prices are climbing and the MAGA nutbars who make up a significant number of your neighbors don’t want newcomers here in the first place. Taiwanese workers at existing fabs in Chandler and Mesa have faced serious integration and retention challenges, leading to one of the highest churn rates in the sector. We’ve managed to chase off or purge most of the MAGA nutbars who infected our politics at every level, but the racism and xenophobia of the MAGA base is still a factor. A big one. The idea that Arizona can just attract talent and it’ll all go smoothly ignores that very real, very present population.

And let’s talk about what a favorable business climate really means. It means taxpayers eat the costs —tax breaks, subsidies, infrastructure upgrades, the whole kit. A trillion-dollar hub won’t just ask for incentives; it will demand an unprecedented level of public financial support. That means less money for schools, hospitals, and basic infrastructure. Speaking of which—have you looked at Arizona’s infrastructure lately? A project this size would stress everything from power and wastewater systems to roads and housing. And that brings us to the grid.

TSMC’s Phoenix plant alone is expected to draw as much electricity as 300,000 homes. Multiply that by whatever SoftBank has in mind and the probability of grid instability begins to approach unity, especially during peak summer demand when everyone’s cranking the AC. Meeting those energy needs means doubling down on existing fossil sources—because renewables can’t scale that fast—which completely undercuts any of Son's environmental PR bullet points.

And even beyond water and energy, fabs generate hazardous waste. PFAS, solvents, industrial gases—you name it. Yes, there are disposal plans. No, they’re not perfect. Now scale that to a trillion-dollar cluster and you’ve got a region-wide risk management nightmare. Air quality takes a hit, too. Even with scrubbers, we’re talking about a dense cluster of industrial emissions in an area already prone to dust and particulates.

Lastly, this will put an even bigger strain on Arizona’s already-stressed housing market. Thousands of high-paying jobs flooding into cities with limited affordable housing stock? That’s a recipe for displacement, not opportunity. Public services will be overwhelmed unless there’s a massive up-front investment.

Arizona has some of the pieces for semiconductor manufacturing. But building a full-blown Shenzhen-scale hub in the desert isn’t just stretching the limits—it’s ignoring them. The short-term gains look shiny, if you squint hard enough, but the long-term costs—in water, in power, in housing, in environmental damage—are staggering. Son's prospective investors need to walk around in Paradise Valley or Scottsdale for a few weeks in mid-July or August, and think long and hard before buying into Son's vision, because what he's trying to sell is a mirage, not a fab hub.

Comment Biometrics on the Barbie: Age-gating Down Under (Score 1) 45

When I was 16 (full disclosure: that was during the Carter administration), I managed to fake my driver’s license just well enough to pass. All it took was a little patience, a magic marker, some cheap plastic from the hobby shop, and five minutes alone with the Xerox 6500 color copier in the graphic design office where I worked summers.

Sure, the tech landscape has changed—but the ease of identity spoofing has only increased. In the internet age, tech-savvy teens don’t need art supplies; they need a VPN, a burner email, a crypto wallet—or some combo thereof. Voilà: instant adult.

The so-called “tech trials” mentioned in the article aren’t about enforcement—they’re exploratory. They’re trying (and mostly failing) to figure out whether any of these age-verification tools can reliably and safely work at scale.

Even calling them "tech trials" is a stretch. They’re more like a series of feasibility studies already showing cracks. The same dynamic that law enforcement faced during my Carter-era beer runs—adolescent ingenuity vs. brittle enforcement mechanisms—is still in play. It’s just been rebranded with AI gloss: biometrics will fix everything, don’t you know?

All these systems have known vulnerabilities and documented exploits. Teens will dodge them the same way they always have—by borrowing IDs, lying about their age, using someone else’s credentials, or just bribing an older sibling. The cat-and-mouse game hasn’t changed. The only thing that’s evolved is the user interface.

Comment Re:Policies should specify "whats" not "hows" (Score 1) 52

You raise some valid-sounding points, but your throughline is clear: you’re less interested in regulating AI responsibly than in resisting its integration altogether. This isn’t a critique of how we regulate—it's a veiled argument for not trusting AI at all. That’s a position worth debating, but let’s not pretend it’s neutral.

Just like project requirements, policies should specify "whats" and not "hows".

That’s a nice slogan, but it falls apart the moment you're dealing with high-risk technology. When the potential harms include synthetic bioagents or automated cyberweapons, regulators must get into the “how.” Specifying only the “what” in AI safety is like saying “we want bridges that don’t collapse” without requiring anyone to do stress testing.

Regulatory frameworks that specify the "hows" are more likely to result in meaningless compliance as game-playing organizations seek to maximize returns under the rules.

Yes, some orgs will game the system. I spent four decades in the military and corporate America—I’ve seen what supply sergeants and contractors try to slip through by pencil-whipping a compliance checklist. But that’s not a reason to avoid specifying how to do safety—it’s a reason to specify it better. Red-teaming, interpretability tests, and risk thresholds make compliance meaningful because they’re grounded in behavior, not bureaucracy.

Regulatory frameworks created mainly using input from major players... are more likely to align with how those major players want to do business...

True—and that’s why California’s report explicitly calls for transparency, independent red teaming, and training data disclosures. It’s trying to prevent exactly that capture by the big players by mandating external oversight.

One major concern I see with "AI" is the potential for harmful behavior that is excused because "the AI did it".

You’ll get no argument there. But guess what? The report addresses this too. It treats developers and deploying orgs as accountable regardless of whether the harm came from a person or a system. That’s not a loophole—it’s liability with teeth.

Another major concern... is the creation of dramatically unequal juxtapositions of people/human effort against... computationally driven [AI]...

Yes, imbalance is a real issue. But this is a deployment context problem, not a model alignment problem. The report focuses on frontier systems—those with scale and capability enough to cause real-world damage. Flooding forums with plausible BS may be annoying, but it’s not existential. Misuse of frontier models is.

If someone is going to really develop a policy framework... a substantial amount of original thinking based on first principles and identification of the "whats" of actual harms needs to be undertaken.

That’s exactly what this report does. It starts with clearly identified high-risk outcomes—biohazard misuse, cyber exploitation, runaway autonomy, and model misalignment—and works backward to define concrete, testable safeguards. That is first-principles thinking, the kind you’d recognize from any decent undergraduate systems course. You may not agree with their conclusions, but claiming they didn’t think from fundamentals is just wrong. If you’ve got better proposals, bring them—with evidence. The conversation’s better when it’s informed.

Telling an organization clearly that "if your AI kills someone... you will be held responsible" is much better than telling that organization "you must reduce risk by using red teams..."

Why not both? Accountability and prevention aren’t mutually exclusive. Telling someone they’ll be liable after the fact is useful—but telling them how to reduce the risk of the harm happening at all is how you build a functioning safety regime. Red-teaming is not a bureaucratic distraction. It’s how we find faults before the bodies start hitting the floor.

Comment Re:What "safeguards" are proper? (Score 1) 52

That is the billion-dollar question.

You know, trolls often open their posts with a loaded rhetorical question, like you just did. Since you didn’t immediately follow it with a bunch of strawman assertions and lame-ass whataboutisms, I’m going to give you the benefit of the doubt and try to address it in good faith.

So...“what safeguards are proper?” This question was asked and answered in the policy paper. The commission defines proper safeguards as concrete, enforceable measures grounded in engineering reality, and it lays them out clearly:

- Mandatory risk assessments before deployment to test for catastrophic misuse, misalignment, or autonomous behavior
- Clear compute-based thresholds (10^26 FLOP for training, 10^25 for fine-tuning) that trigger regulatory oversight
- Independent red teaming to expose vulnerabilities like jailbreaks, deception, or exploit generation
- Alignment testing to ensure the model behaves as intended—and proof of how that was verified
- Required incident reporting to a new oversight body (CAIRO) for any serious safety failure
- Transparency about where the training data came from and how it was handled
- Post-deployment monitoring to catch unexpected shifts
- Kill-switch mechanisms to halt models if they go off the rails

That’s a blueprint detailing exactly what the commission thinks proper safeguards are and how they can be implemented. If you agree or disagree with any of that, fine. Let’s have a discussion.

Comment Re:Logical outcome of a tech diaspora (Score 1) 39

The wounded shill doubles down—faux detachment, false equivalence, and a wall of whataboutist non sequiturs. Now you’re a shill and a troll. Congrats, dude. Now go away.

I don't know anything about the politics surrounding open source with Chinese characteristics, full disclosure.

That explains most of your original comment—and your follow-up.

If you're going to comment on a thread about China's use of open source as a strategic tool, maybe don’t start by pretending it’s all just neutral code and developer goodwill. This isn’t a philosophical debate about whether source code can be downloaded in a zip file. It’s about the political and infrastructural context that determines who gets to collaborate, how, and under what conditions. Yes, open source means you can audit the code. That’s not the point. The point is: who controls the repo, who can contribute without risking blowback, who decides which forks get visibility and which get throttled, and what happens when a contributor crosses an ideological line and offends some mid-level mandarin at the CAC tasked with detecting thoughtcrime.

Don't get your panties in a knot because I might have different ideas or admittedly don't know everything.

I didn’t call you out for having different ideas. I called you out for (a) having no grasp of the actual discussion, and (b) for the disingenuous way you tried to wrap a PRC talking point in a lazy, faux-naïve question. It’s not that you don’t know everything—anyone reading your post already got that memo. It’s that you’re doing it in a way that suggests you’re either practicing for your Wumao certification exam, or you’re already on the payroll.

Getting a good exchange rate on those propaganda points? Or are they paying you directly in digital Renminbi these days?

Comment Re: You Can Fork It—If Xi Lets You (Score 1) 39

Your description sounds an awful lot like what the USA is trying to do to China.

No, it really doesn’t. You’re conflating two fundamentally different things: export control and political control over the production process.

- The U.S. restricts exports of specific technologies—primarily advanced semiconductors and AI chips—for national security reasons.
- China restricts ideas. And access. And forks. And source code. And people.

There’s no U.S. equivalent to MIIT drafting ideological compliance standards for GitHub repos. No FCC blacklist for GPLv3 projects. And certainly no American version of the CAC combing through commits for subversive thoughtcrime. The PRC doesn’t just shape market behavior—it polices alignment with the Party. That’s not regulation. That’s control.

Export controls are geopolitical friction—tools of the trade. The U.S. uses them. So does China. So does the EU. But that’s not what this thread is about. It’s about open source cosplay by an authoritarian regime. What China does to its developers isn’t about trade policy—it’s about political obedience.

Comment Audiences want a dopamine payoff, not a plot. (Score 2) 180

James Gunn gave a revealing interview this week about the state of Hollywood, blaming "output mandates"—studio demands to meet yearly content quotas regardless of script readiness—for the industry's ongoing collapse. He’s not wrong, but let’s be honest: this isn’t a new problem. It’s a return to form. Hollywood has always been about quantity over quality. What’s different now is that the illusion of quality no longer holds.

The average audience member isn’t demanding emotional nuance or structural elegance. They’re chasing momentum, coherence, and dopamine payoffs. And for a long time, studios delivered that just fine. The MCU in particular perfected the formula: three acts, some snark, and a beam fight at the end. It worked because it felt like it meant something, even when it didn’t. Lore depth was simulated with callbacks. Character arcs were faked with a sad line before the CGI showdown. The illusion held.

What’s changed isn’t public taste. It’s studio execution. They’re not even bothering to fake it anymore. Projects are greenlit without finished scripts, without storyboards, without a coherent vision—because the Q3 roadmap demands four movies and two shows. This isn’t just creative failure. It’s an institutional surrender: a quiet admission that it’s cheaper to assume the audience is tasteless and stupid than to keep pretending they’re not.

Gunn isn’t lamenting a fall in audience standards—he’s pointing out that even the illusion of depth takes work. If you stop simulating quality, the whole scaffolding collapses—and so does the box office. The problem isn’t that the audience is stupid. It’s that studios are now acting like it’s safe to bet that they are. Hollywood isn’t chasing artistry anymore; it’s trying to be Netflix: frictionless, forgettable, and fed on a subscription.

Hollywood didn’t invent tastelessness. It exploited it. Then it refined it. Now it’s trying to automate it. And I think Gunn’s right—the illusion, finally, is wearing thin.

Comment Not bad - a clear, tech-literate policy proposal (Score 2) 52

California’s new Report on Frontier AI Policy (June 17, 2025) is a rare thing: a clear, technically literate framework for AI risk without the usual hysteria. No doom-laden speculation, no calls to ban technology, and—refreshingly—no breathless invocation of terrorism. That word—“terrorism”—does not appear once in the report. I checked.

Instead, this policy leans heavily on the kind of safeguards engineers can actually implement: pre-deployment evaluations, red teaming, transparency on training data, and risk thresholds tied to compute. They’re not trying to stop progress—they’re trying to make sure someone hits the brakes before a fine-tuned model starts confidently generating CRISPR exploits or accidentally-on-purpose reverse-engineers a US or PRC cyberweapon.

Yes, it focuses on high-compute models (10^25 FLOP and up), which you could argue is a crude proxy for capability. But the point is to establish a regulatory floor, not to kneecap the field. There’s no attempt here to ban local LLMs, outlaw open weights, or panic over generative art. In fact, copyright isn’t even the main frame—training data provenance is discussed, but not weaponized.
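For what it's worth, a FLOP threshold is at least checkable on paper before a run: training compute is commonly estimated with the rough 6 × parameters × tokens rule of thumb. A minimal sketch of that check, with model sizes that are my own illustrations rather than anything from the report:

```python
# The ~6 * params * tokens rule of thumb makes a FLOP threshold checkable
# on paper before a training run. Model sizes below are illustrative.

THRESHOLD_FLOP = 1e25   # the floor the report uses for frontier models

def training_flops(params: float, tokens: float) -> float:
    """Rough estimate: ~6 FLOPs per parameter per training token."""
    return 6.0 * params * tokens

def needs_oversight(params: float, tokens: float) -> bool:
    return training_flops(params, tokens) >= THRESHOLD_FLOP

# A hypothetical 400B-parameter model on 15T tokens: 3.6e25 FLOP, over the floor
print(needs_oversight(400e9, 15e12))   # True
# A hypothetical 70B model on 15T tokens: 6.3e24 FLOP, under it
print(needs_oversight(70e9, 15e12))    # False
```

That is exactly why compute is a crude proxy: it measures training effort, not what the model can actually do. But crude and measurable beats precise and unmeasurable when you're writing a regulatory floor.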

What’s most striking is what’s absent. This is not another case of “but criminals!” moral panic—the same kind that nearly sank the early internet. I remember when some congressional staffer stumbled across a USENET post with a zipped up copy of the Anarchist Cookbook, and lost their damn mind. This isn’t politicians circa 1996 trying to smother a transformative technology because they don’t understand it. It’s an honest effort to thread the needle: prevent catastrophic misuse without killing the tools.

Comment You Can Fork It—If Xi Lets You (Score 3, Insightful) 39

For those unfamiliar with how technology is managed in the PRC, it works like this: the Ministry of Industry and Information Technology (MIIT) sets the technical and compliance standards; the Cyberspace Administration of China (CAC) monitors and censors anything that drifts outside ideological boundaries; and the Ministry of Science and Technology (MOST) funds only what aligns with Communist Party goals. The PRC is the antithesis of true open source—these agencies ensure it stays “open” only so long as it serves the geopolitical agenda of Communist dictator-for-life Xi Jinping.

Yes, tech diasporas create opportunity. Yes, Huawei got kneecapped and responded by pivoting. But let’s not pretend this is some kind of righteous rise of the East against decadent Western hypocrisy. China’s embrace of open source is a contingency plan, not a philosophical awakening. Control in the PRC isn’t always overt, but it is always present. It’s about alignment. As long as open source supports Party objectives—self-reliance, global influence, AI parity—it’s permitted. When it doesn’t—labor organizing, cross-border collaboration, uncensored tools—the hammer (and sickle) comes down.

Comment Re:Logical outcome of a tech diaspora (Score 1) 39

Triggers organic growth in any welcoming community. Sleeping with the enemy is profitable until it isn't, then forces converge to forge an alternative.

What forces are you referring to? Surely not the market forces that thrive in open societies. This is China we are talking about, not the US or EU. In communist China, MIIT, CAC, and MOST will make certain that any initiative at collaboration, cross-fertilization, or tech transfer stops as soon as it interferes with the goals of the communists running the country. You know this, yet you deliberately left it out of your comment. I wonder why?

As long as the source is open... I'm having a hard time thinking of anything to complain about.

Really? You can't be that out of touch. Let me give you a dose of reality, comrade. Chinese open-source projects are already showing signs of state pressure and censorship (see the 996.ICU GitHub saga). Major PRC tech firms contribute code, but they also gatekeep access when it suits their commercial or political interests. The same government backing OpenAtom also blocks GitHub mirrors, throttles Tor, and arrests technologists for unsanctioned collaboration.

Open source thrives in environments where freedom of expression, forkability, and transparency are protected—not tolerated until inconvenient. If you can’t see the contradiction in "authoritarian open source," you’re not thinking hard enough, or (more likely) you're just another PRC shill.

Comment When News Becomes a Dopamine Hit (Score 1) 169

Traditional media ran like a train schedule—predictable, slow, and mildly paternalistic. Someone in a suit told you what mattered and when to care. Social media nuked that model and replaced it with a variable-rate dopamine dispenser—basically a slot machine jacked into your limbic system. Instead of “the news at 6,” you get an infinite scroll engineered to stimulate your brain's reward system with novelty, outrage, or cleavage—sometimes all three at once. It’s parasocial conditioning masquerading as a news feed.

AI just supercharges the cycle. Platforms like TikTok, YouTube Shorts, and Instagram Reels don’t inform you; they entrain your brain. They exploit the same reinforcement loops that drive gambling addiction, only now they're optimized with real-time engagement metrics and multi-modal targeting. After an hour of doomscrolling hyper-edited conspiracy reels, nobody’s going to downshift into a 1,200-word New York Times piece fact-checking them. That’s not how the brain’s salience subsystem works—it’s been hijacked.

Social platforms didn’t just siphon clicks—they rewired the attention economy, killed the homepage, outsourced editorial authority to engagement metrics, and now function as both aggregators and arbiters of truth. This isn't just a content war—it's an epistemic war. Social media doesn't just redistribute attention; it redefines what counts as news.

Here's what I mean. BBC headline: Russia invades Ukraine. But on TikTok? It's a missing influencer in Kyiv. And on Reddit? It's how Taylor Swift attendance patterns reflect geopolitical reality. The sad part? Each audience believes their personalized feed reflects the world. This is epistemic closure in a bucket.

Platforms like X (Twitter), Reddit, and Facebook function as real-time news aggregators, but critically, they do so while disclaiming any traditional editorial responsibility. They wield immense, asymmetric influence over what information reaches billions, yet they operate without the infrastructure, ethics, or journalistic checks that underpin traditional media. They parasitize the legitimacy of news sites while siphoning the traffic, benefiting from the content without bearing the cost or accountability of its creation or verification.

The concept of a “home page” or a “front page” is mostly dead. People don’t go to BBC.com to see what’s happening—they get links, fragments, screenshots, or kludged-together memes on Signal or WhatsApp. I've seen this happen in real time in every group chat I'm in.

The irony? Static and linear media publishers optimized their headlines and thumbnails to feed that group chat beast. In doing so, they trained their audiences to never visit them directly. It's no surprise they are losing the attention war.

This is killing even news-forward websites. Why click when you can skim the Twitter summary, hear a two-minute YouTube hot take, or get your facts filtered through a Discord chat? Social media stole the audience and didn't look back. CPMs are awful unless you're Meta or Google. Subscription models work only for the top 1%, and even those news orgs are in trouble and bleeding money, because they’re paying to produce real journalism but getting paid like mid-tier influencers reviewing toasters on YouTube.

Something is going to have to give, and it may be objective journalism itself. The center cannot hold: the beast slouching towards Bethlehem is following a dopamine trail. Yeats saw it coming, and we are watching it arrive.

Comment Re:Duke Ellington (Score 1) 134

"If it sounds good it is good." - Duke Ellington

That quote hits even harder in an era where we’ve finally passed Turing’s other test—the one he never wrote down: not whether a machine can fool a judge in a sealed room, but whether it can write a hook catchy enough to make you not care who wrote it. :)

Duke was right: if it sounds good, it is good. But the question we’re grappling with now—culturally, economically, cognitively—is good for whom?

In the world we're heading toward (or honestly, already living in), the thing doing the performing isn’t a person, and the thing being sold isn’t just music—it’s a feeling of connection, repackaged and resold by a platform that owns neither the song nor the soul. That’s not artistry. That’s monetized imitation.

Comment Re: "increases people's creativity" (Score 1) 134

Drum machines and sequence generators are hardly new and player pianos have been around since forever. So why are they moaning about music generators now?

Let me walk you through this slowly, because you seem stuck in 1971.

A player piano doesn’t write music. Someone composed it. Someone else transcribed it onto a medium the piano could read. It’s a playback device.

A drum machine doesn’t spontaneously generate a beat. It loops what you tell it to loop. It’s a tool—like a guitar pedal or a mixing board.

An AI music model, by contrast, is trained on thousands of hours of human music, recombines it probabilistically, and generates outputs that simulate originality—often with invented backstories, synthetic “personas,” and no human disclosure. That’s what the El País article is dissecting: Concubanas is enjoyable music generated by an LLM, presented as the work of a band that never existed, and promoted by “fans” who were just as fake as the band itself.

The reason people are “moaning” now is because the authorship, attribution, and intent behind what you hear are being deliberately obscured, and platforms are more than happy to monetize that confusion. We’re not talking about tools anymore. We’re talking about deception by design.

But sure, go ahead and pretend your Spotify playlist is just the 21st-century equivalent of a Chuck E. Cheese animatronic mash-up. That take’s only about fifty years out of date.

Comment From the Monkees to Miku: it's all synthetic (Score 1) 134

I’m dating myself, but I remember when people made fun of the Monkees for being a fake band. They had all the scaffolding needed to be popular—catchy songs, TV exposure, manufactured charisma—but none of the history, none of the struggle, none of the dues-paying that “real” artists were expected to endure. And that was over fifty years ago. Fast forward to 2007 and the debut of Hatsune Miku, the first mass-market virtual idol—no flesh, no scandal, just a voicebank and a teal-haired avatar. And now? We’ve got imaginary Cuban jazz quartets with Cold War backstories and AI-generated albums surfacing on YouTube. The El País article making the rounds features Concubanas, a fabricated ensemble whose “1973” album fuses Cuban and Congolese rumba. It never existed. The band didn’t either. But the vibes? Spot on. Millions of views. Zero humans. Welcome to the era of AI-native music, where the fiction is deeper than ever—and the band never had to exist at all.

This isn’t the first time pop culture embraced the synthetic. The Monkees were Beatles cosplay for American TV. Japan took it further with Hatsune Miku—software as pop star, holographic concerts and all. Korea industrialized it with K-pop—idols trained from childhood, every gesture brand-managed. These weren’t just acts; they were interfaces—designed to tap directly into fans’ parasocial insecurities and convert them into revenue.

The uncomfortable truth is that all art is synthetic. It has to be. Even the most personal, heartfelt expression is shaped by tools, training, influence, and intent. What we’ve historically cherished is the belief that there’s a person on the other side—someone who meant it. Now we’re forming emotional connections with works that have no author, only tensor transformations over vector fields. You hear something that moves you, only to find out it was generated by prompt and pattern-matching, not by blood, sweat, or tears. That matters—at least, it should.

And here’s the part no one wants to say out loud: parasociality is no longer just a side effect of pop fandom. It’s the product. Entire industries now revolve around monetizing emotional attachment to things that aren’t people—VTubers, AI streamers, even playlist generators tuned for retention, not taste. We’re not just consuming music anymore. We’re subscribing to a simulation. And platforms love it that way—because simulations don’t sue for royalties.

The danger isn’t that we’ll enjoy fake bands. We always have. Spinal Tap was as real to me as the bands they parodied. I would have happily paid to see them live, and enjoyed it as much as a Zeppelin or Floyd show. The danger is that we’ll stop asking who made the music—and stop caring when the answer is “nobody.” AI music isn’t a novelty anymore. It’s trained on copyrighted catalogs, crafted by prompt engineers, and fed into streaming platforms with zero transparency. Some uploaders are honest. Most aren’t. Spotify hasn’t committed to labeling AI content. YouTube only adds a disclaimer if they think there’s an actionable copyright violation.

This isn’t a call to ban AI music. Nobody’s saying you shouldn’t dance to synthetic salsa if it hits right. But there’s a difference between feeling something and being misled into believing that feeling came from another human being. When the art is machine-made and the artist is a mirage, we’re not just erasing authorship—we’re turning that erasure into a feature.

We’re going to need a new kind of literacy to navigate this. Not just musical, but emotional. Because when the next great concept album drops—say, a generative AI remix of Tales of Mystery and Imagination reimagined through Stranger in a Strange Land—you might not care whether it’s “real.” But the people monetizing your reaction absolutely do.
