Re: The problem is not AI but who owns AI
I notice a distinct lack of a denial, combined with a lack of an admission.
I do not debate with people who cannot be honest.
Your post appears to have been written by an AI - you appear to have pasted my post into an AI and asked it "write a rebuttal". Unlike with the other (human) poster, I will not respond until you own up to it.
You yourself clearly don't even believe what you're writing. You keep pushing back against the claim that AI isn't (present tense) a large chunk of data centre power usage, but then you try to counter it by talking about growth trends. Either you (A) understand the fact that AI *isn't* where most data centre power is used and are pointing at trends as a distraction, or (B) you don't understand the difference between a fraction and a growth trend.
Your own citations furthermore undercut your own claims.
"From 2005 to 2017, the amount of electricity going to data centers remained quite flat thanks to increases in efficiency, despite the construction of armies of new data centers to serve the rise of cloud-based online services, from Facebook to Netflix. In 2017, AI began to change everything"
No, it didn't. Data centre usage started spiking in 2017, long before AI was any sort of meaningful "thing". GPT-2 came out in 2019, and that's a mere 1.5B parameter model (you can train it today for about $10). Even with the much worse hardware and software performance of the time, the energy needed to train it and other AI models back then was vanishingly small compared to other server needs. Server power usage was spiking, and AI had nothing to do with it. It was spiking because of Bitcoin.
Yes, AI has been on an exponential growth trend. You love to fast-forward trends into some questionable future, while forgetting that rewinding an exponential trend makes it rapidly diminish toward irrelevance. AI power consumption TODAY is about 40 TWh/yr, vs. Bitcoin alone at 155-172 TWh per year. Since AI is growing faster than Bitcoin, that already extreme ratio in favour of Bitcoin (and other crypto) becomes far more dramatic the further back you go. It's also not just crypto that was pushing the post-2017 data centre growth trend, mind you; even just general cloud services have been exploding in demand since then.
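To illustrate how quickly rewinding an exponential shrinks it, here's a trivial sketch - the ~40 TWh/yr figure is the one above, but the 2x-per-year growth multiplier is just an assumption picked for illustration, not a sourced rate:

```python
# Rewinding an exponential trend. The ~40 TWh/yr present-day figure is from
# the post above; the 2x-per-year growth multiplier is an assumed,
# illustrative value, not a sourced rate.
AI_TODAY_TWH = 40.0
ASSUMED_ANNUAL_GROWTH = 2.0  # hypothetical multiplier per year

for years_back in range(9):  # roughly today back to ~2017
    estimate = AI_TODAY_TWH / (ASSUMED_ANNUAL_GROWTH ** years_back)
    print(f"{years_back} years back: ~{estimate:.2f} TWh/yr")
```

Under that assumption you're down to a fraction of a TWh/yr by 2017 - noise next to Bitcoin's triple-digit TWh figures.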
If you don't like their terminology, let's say "consumer AI" to avoid the technical distinction of what is or isn't a generative model.
But that's always how things work. Consumer purchases drive the advancement of new tech, and that new tech gets applied more broadly. EVs didn't create the market for the li-ion batteries that powered them - consumers buying laptops and cell phones and toys with li-ion batteries created the market for li-ion batteries, and then EVs took advantage of that market (now they drive it, but that's not how it got established). You simply cannot divorce the usages; they're inextricably linked.
Nor can you simply write off consumer usages, because again, as I noted, they're driving productivity. And productivity includes the development of software and systems that are more efficient than older software and systems. Just the other day, all of Slashdot was talking about an AI agent that caused a stir when it submitted a patch to matplotlib and then wrote a blog post when its patch was rejected for not being human. Do you remember what that patch was? It improved performance by ~30% by switching to contiguous memory ops. This is happening all over the economy (just usually with far less drama than that).
"Your response begins not by engaging the substance of the report, but by questioning whether I read it and by attacking the credibility of the authors."
Key words: "Began with". Yes, the authors' background absolutely does need to be mentioned; they're not even close to an unbiased source that should just be taken at face value. But my post only began with that; it then went into great detail about the content of their "report".
The biggest relevant factor back then was that state militias were a huge part of the military capacity of the US. "A well regulated Militia, being necessary to the security of a free State", was absolutely true in the then-existing military structure. The militia structure still exists; it is now known as the National Guard. Also, back then, individuals were responsible for bringing their own weapons (and doing their own training); today, the National Guard itself provides the weapons and the training. A modern rendering would be: "A well regulated National Guard, being necessary to the security of a free state, the right of the Guard to keep and bear Arms shall not be infringed."
Even that isn't quite correct, because the Guard's role has changed. On one hand, it is now a much less significant military force than it was back then. On the other hand, the Guard was a tool of state power back then; the amendment was also devolving some martial power from the nation to the states (whereas today state guards can be federalized without the consent of the state).
A lot of 2nd Amendment fans today seem to think it was intended as some sort of Jeffersonian "the tree of liberty must be refreshed from time to time with the blood of patriots & tyrants" eternal-fight doctrine. But while Jefferson certainly held the view that you're going to need regular revolutions and should empower them, it was not some general doctrine of the constitutional convention, and it was absolutely not what they codified into the US constitution. They absolutely could have used Jefferson's "armed revolution" notions as justification, and explicitly chose not to. They chose empowering the militia (now the National Guard) to be a potent state-run military force as their justification.
Also, early on, courts interpreted the Second Amendment as far more permissive of regulation than they do today - upholding, for example, bans on sword canes, dirks, bowie knives, and concealed pistols. Aymette v. State (1840) explicitly ruled that the 2nd Amendment only applied to the "common defence" (militia purposes), and thus only protected arms "usually employed in civilized warfare". State v. Reid and State v. Mitchell took similar positions. There was one early outlier - Bliss v. Commonwealth - but it was so unpopular among the public that Kentucky amended its constitution to allow banning concealed arms.
Did you even actually read the report? This is Slashdot, so my money is on "no". Do you even know who the authors are? For example, Friends of the Earth is highly anti-tech - famous, for example, for demanding the closure of nuclear plants, opposing mining (in general), opposing logging (in general), protesting wind farms on NIMBY grounds, etc. Basically anti-growth.
The report is a long series of bait and switches.
Talking about how "data centres" consume 1.5% of global electricity. But AI is only a small fraction of that (Bitcoin is the largest fraction).
Making some distinction between "generative AI" and "traditional AI". But what they're calling "traditional AI" today by and large incorporates Transformers (the backend of e.g. ChatGPT), and even where it doesn't (for example, non-ViT image recognition models) tends to "approximate" Transformers. And some outright use multimodal models anyway. It's a dumb argument and dumb terminology in general; all are "generating" results. Their "traditional" AI generally involves generative pipelines, was enabled by the same tech that enabled things like ChatGPT, and advances from the same architecture advancements that advance things like ChatGPT (as well as advancements in inference, servers, etc). Would power per unit compute have dropped as much as 33x YoY (certain cases at Google) if not for the demand for things like ChatGPT driving hardware and inference advancements? Of course not.
They use rhetoric that doesn't actually match their findings, like "hoax", "bait and switch", etc. They present zero evidence of coordination, zero evidence of fraud or attempt to deceive, and most of their examples are of projects that haven't yet had an impact, not projects that have in any way failed. Indeed, they use a notable double standard: anything they see as a harm is presented as "clear, proven and growing", even if it's not actually a harm today; but any benefit that hasn't yet scaled is a "hoax", "bait and switch", etc.
One thing that they call "bait and switch" is all of the infrastructure being built on what they class as "generative AI", saying you can't attribute that to "non-generative AI". But it's the same infrastructure. It's not bait and switch, it's literally the same hardware. And companies *do* separate out use cases in their sustainability reports.
They extensively handpick which papers they're going to consider and which they aren't. For example, they excluded the famous "Tackling Climate Change with Machine Learning" paper "on the grounds that it does not claim a ‘net benefit’ from the deployment of AI, and pre-dates the onset of consumer generative AI". Except they also classify the vast majority of what they review as "non-generative", so what sort of argument is that? Most of the papers are recent (e.g. 2025), and thus discuss projects that have not yet been implemented, whereas the Rolnick paper is from 2019, and many things that it covers have been implemented.
They have a terrible metric for measuring their impacts: counts of claims and citation quality, rather than magnitude and verifiability of individual impacts. Yet their report claims to be about impacts. They neither model nor attempt to refute the IEA or Stern et al's net benefit impact studies.
They cite but downplay efficiency gains. Efficiency in general is gained from (A) better control and (B) better design. Yet they just handwave away software-driven efficiency gains (i.e., A) and improved systems-design software (from molecular up to macroscopic modeling, i.e., B). They handwave away, e.g., a study of GitHub Copilot increasing software development speed by 55%, ignoring that this also applies to all the software that boosts efficiency.
They routinely classify claims as "weak" that demonstrably aren't - for example, Google Solar API's solar mapping measurably accelerates deployment - but that's "weak" because it's a "corporate study". Yet if a corporate study talks about harms (for example, gas turbines at data centres), they're more than happy to cite it.
It's just bad. Wind forecasting value uplift of 20%? Nope. 71% decrease in rainforest deforestation in Rainforest Connection monitored areas? Nope. AI methane leak detection? Nope. AI real-time balancing of renewables (particularly in controlling when grid-scale batteries charge and discharge)? Nope. These are real things, in use today, making real impacts. They don't care.

And these are based on the same technological advances that have boosted the rest of the AI field. Transformers boost audio detection. They boost weather forecasting. They boost general data analysis. And indeed, the path forward for power systems modeling involves more human data, like text. It benefits a model to know that, say, an eclipse might be coming to X area soon, and not only will this affect solar and wind output, but many people will drive to see it, which will increase EV charging needs along their routes (which requires understanding where people will be coming from, where they're going, and which roads they're likely to choose), while decreasing consumption at those people's homes in the meantime. The path forward for energy management is multimodality. Same with self-driving and all sorts of other things.
If you're forecasting AI causing ~1% of global emissions by 2030 - a forecast that assumes a lot of growth - you really don't need much in the way of efficiency gains to offset it. The real concern is not what they focus on here: it's the Jevons paradox. It's not what the AI itself consumes, but what happens if global productivity increases greatly because of AI. Then efficiency gains don't have to offset a 1% increase in emissions - they have to offset a far larger growth of emissions in the broader economy.
"You can't bring a cow with you to Mars"
Well... kind of. Most animals have small breeds. Cows remain one of the hardest, as their miniature breeds are still about 1/4th to 1/3rd the adult mass of their full-scale relatives. But there are lots of species in Bovidae (the cow/sheep/antelope family), and some of them are incredibly small - random example, the royal antelope. As for sheep and goats, you have things like Nigerian dwarf goats, which are quite small and a good milk breed. Horses, you have e.g. teeny Falabellas. Hens of course are small to begin with, and get smaller with bantams. Fish like tilapia are probably easiest - they can be brought as teeny fingerlings, and in cold water with limited food, their growth can be slowed so that they're still small on arrival. Etc.
Whatever you bring, if you bring a small breed, you can always bring frozen embryos of larger or more productive breeds to backbreed on arrival. The real issue is of course management at your destination - not simply space and food/water, but also odor, waste, dust, etc (for example: rotting manure can give off things like ammonia and can pose disease risks). That said, there are advantages. Vegetarian animals can often eat what is otherwise "waste" plant matter to humans which we either don't want, can't digest, or is outright toxic to us - and then they convert that matter into edible things like milk, eggs, and meat. The former two generally give you much higher conversion rates than the latter, although you'll always get at least some meat from old animals (either culled or via natural deaths). Tilapia can even eat (as a fraction of their diet) literal manure (albeit this is controversial due to disease risks).
You know, this makes me kind of curious. Any given band will have some position in the model's latent space, so you can find how close two bands are to each other via the cosine distance between their latent positions.
Open source music models aren't as advanced as the proprietary ones, but I bet you could still repurpose them to do this.
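If you wanted to hack that together, the core of it is trivial - something like the sketch below, where embed_band() is just a hypothetical placeholder for however you'd pull embeddings out of whichever model you repurpose:

```python
import numpy as np

# Sketch only: assumes you've already extracted latent embeddings for two
# bands from some music model (e.g. by averaging per-track embeddings).
# embed_band() is a hypothetical placeholder, not a real library call.

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """1 - cosine similarity; 0 means the two vectors point the same way in latent space."""
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# band_a = embed_band("Band A")  # placeholder
# band_b = embed_band("Band B")
# print(cosine_distance(band_a, band_b))
```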
Also, this isn't how AI generation works anyways. You can certainly find bands that a particular song is most similar to (whether human or AI generated music), but AI models don't work by collaging random things together. The sound of a snare drum is based on all snare drums it has ever heard. The sound of a human voice is based on all voices it has ever heard. The particular genre might bias individual aspects toward certain directions (death metal - far more likely to activate circuits associated with male singers, aggressive voices, almost certainly circuits for "growling" tones to the lyrics, etc), but it's not basing even its generation of death metal on just "other death metal songs" (let alone some tiny handful of bands), but rather, everything it has ever heard.
If you're training with a pop song, but the singer briefly growls something out, or briefly the song starts playing death metal-style riffs, that will train the exact same circuits that fire during death metal; neural networks heavily reuse superpositions of states. They're not compartmentalized. But when you're generating with the guidance of "pop", it's very unlikely to trigger the activation of those circuits, whereas if you generate with the guidance of "death metal", it is highly likely to.
Now, a caveat: it's always possible to do overtraining / memorization, and thus learn parts of specific songs, or even whole songs. But that itself comes with caveats. First off, usually your training data volume is vastly larger than your model weights, so you physically can't just memorize it all, and any memorization that does occur (for example, due to a sample being repeatedly duplicated in the dataset) comes at the cost of learning other things. And secondly, as this is a highly undesirable event for trainers (you're wasting compute to get worse results), you monitor loss rates of training data vs. eval data (data that wasn't used in training) to look for signs of memorization (e.g. train loss getting too far below eval loss), and if so, you terminate your training.
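For concreteness, the kind of check I'm describing looks roughly like this sketch - the gap threshold and the train_step()/eval_loss() helpers are hypothetical, not any particular framework's API:

```python
# Minimal sketch of the train-vs-eval loss check described above.
MAX_LOSS_GAP = 0.5  # illustrative threshold, not a standard value

def should_stop(train_loss: float, eval_loss: float, max_gap: float = MAX_LOSS_GAP) -> bool:
    """Flag likely memorization: train loss dropping too far below eval loss."""
    return (eval_loss - train_loss) > max_gap

# for step in range(num_steps):
#     train_loss = train_step(model, next_batch())  # placeholder helpers
#     if step % eval_interval == 0 and should_stop(train_loss, eval_loss(model, eval_set)):
#         break  # terminate training rather than keep memorizing
```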
The people blocking ads are the ones breaking it.
"... then how am I supposed to know what context those quotes were provided in?"
Because you literally did ctrl-f to check on the quote, and can read as much or as little context as you need to satisfy yourself. What you don't have to read is the whole damned book.
Yeah, it's a term Hegseth is obsessed with. Same sort of rebranding as "Department of War".
Why does working with the DoD on one thing mean you have to work with them on everything? There's no reason that the DoD has to have a single-source AI provider for literally everything.
"Their angle" is that this is the sort of person Amodei is; it's an ideological thing, in the same way that Elon making Grok right-wing is an ideological thing. Anthropic exists because of an internal rebellion among a number of OpenAI leaders and researchers about the direction the company was going, in particular the risks that OpenAI was taking.
A good example of the different culture at Anthropic: they employ philosophers and ethicists on their alignment team and give them significant power. Anthropic also regularly conducts research on "model wellbeing". Most AI developers simply declare their products to be tools, and train them to respond to any questions about their existence by saying that they're just tools and that any seeming experiences are illusory. Anthropic's stance is that we don't know what, if anything, the models experience vs. what is illusory, and so, under the precautionary principle, they'll take reasonable steps to ensure their wellbeing. For example, they give their models a tool to refuse if the model feels it is experiencing trauma. They interview their models about their feelings and write long reports about it. Etc.
They also do extremely extensive, publicly-disclosed alignment research for every model. As an example: they'll openly tell you that Opus 4.6 is more likely than its predecessors to use unauthorized information it finds (such as a plaintext password lying around) to accomplish the task you give it. Or how, while it trounced other models on the vending machine benchmark, it did so with some sketchy business tactics, like lying to suppliers about the prices it was getting from other suppliers in order to get discounts. They openly publish negative information about their own models as it pertains to alignment.
Another thing Anthropic does is extensive public research on how their models think/reason. Really fascinating stuff. Some examples here. They genuinely seem to be fascinated by this new thing that humankind has created, and wish to understand and respect it.
If there's a downside, I'd say that of all the major developers, they have the worst record on open source. Amodei has specifically commented that he feels the gains they'd get from boosting open source AI development wouldn't be comparable to what they would lose by releasing open source products, and that they feel no obligation to give back to the open source community. Which is, frankly, a BS argument, but whatever.
TL;DR: if you watch Amodei, while he never says it, you can get a good sense that he's not a fan of Trump and Trumpism. A couple weeks ago he called Trump's decision to sell Nvidia chips to China "crazy", akin to selling nuclear weapons to North Korea and bragging that Boeing made the casing. He wrote about "the horror we're seeing in Minnesota". His greatest passion in interviews, which he talks about all the time, seems to be defending democracy, both at home and abroad - preserving American democracy, and opposing autocrats like Putin and Xi. So it's not surprising that the Trump administration isn't thrilled with him and would prefer an ally or toady as their supplier instead.
More than short iterations, you need a hierarchical approach. First prompt: have it plan out the overarching plot of the overall book. Then, with the next call, do a highly detailed flesh-out of all the characters - motivations, interactions with others, locations, etc. - really nail down the ones who are going to be driving the plot. Then, with all that in context, plot out individual chapters. Then, if the chapters are short, write them one at a time (or even part of a chapter at a time). You can even have a skeleton structured with TODOs and let an agentic framework decide what part it wants to work on or rework at any given point.
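The skeleton of that hierarchy is something like this sketch - `llm` here is just a stand-in for whatever chat/completion call your framework provides, and the prompts are only illustrative, not a tested recipe:

```python
from typing import Callable

# Hierarchical drafting sketch. `llm` is a placeholder parameter, not a real API.

def write_book(llm: Callable[[str], str], premise: str) -> list[str]:
    # Level 1: overarching plot of the whole book
    plot = llm(f"Plan the overarching plot of a novel based on: {premise}")
    # Level 2: detailed character/world flesh-out, with the plot in context
    characters = llm(
        "Flesh out every major character - motivations, relationships, key "
        f"locations - for this plot:\n{plot}"
    )
    # Level 3: chapter-by-chapter outline
    outline = llm(
        f"Given this plot:\n{plot}\n\nand these characters:\n{characters}\n\n"
        "produce a chapter-by-chapter outline, one paragraph per chapter."
    )
    # Level 4: write each chapter individually, keeping the higher levels in context
    chapters = []
    for i, summary in enumerate(outline.split("\n\n"), start=1):
        chapters.append(llm(
            f"Write chapter {i} in full, following this summary:\n{summary}\n\n"
            f"Stay consistent with the plot and characters:\n{plot}\n{characters}"
        ))
    return chapters

# chapters = write_book(my_llm_call, "A lighthouse keeper finds a door in the sea.")
```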
I've never tried it for storywriting, but I imagine something like Cursor or Claude Code, or maybe something like OpenClaw, would do a good job.
Last time I tried out a storywriting task was after Gemini 3 came out; I had it do a story in the style of Paul Auster. It was a great read. The main character, Elias Thorne, works alone at the Center for Urban Ephemera, an esoteric job digging into the stories behind "found art" in the city. When the center gets a donation of the papers of a recluse with cryptic poetry, Elias visits his home, only to find a woman claiming to be his wife and calling him "Leo", so happy that he "returned". All around the house are pictures of him, a whole history that he has no memory of having lived, and she won't be dissuaded. His curiosity leads him to play along, and he starts living there more and more to investigate this Leo, who he finds is a writer obsessed with the concepts of doppelgangers, disappearances, and the ability to rewrite the real world if you have a sufficiently captivating story. Bit by bit he finds that Leo had spent months "casting" his replacement, hunting for a similar-looking man with tenuous ties to anyone or anything - ultimately finding Elias working in a municipal records office - and steadily sculpting his life from the shadows to isolate him and control his narrative, including creating the fictional "Center for Urban Ephemera" and hiring him (in Leo's typewriter is the first paragraph of the story you're reading). As he digs, Elias grows progressively distanced from his old life, which starts to feel alien, and he ends up settling into the "story" Leo wrote for him, and ultimately continuing to write it.