Comment Re:The problem is not AI but who owns AI (Score 2) 23

Did you even actually read the report? This is Slashdot, so my money is on "no". Do you even know who the authors are? Friends of the Earth, for example, is highly anti-tech: famous for demanding the closure of nuclear plants, for opposing mining (in general) and logging (in general), and for protesting wind farms on NIMBY grounds. Basically anti-growth.

The report is a long series of bait and switches.

Talking about how "data centres" consume 1.5% of global electricity, when AI is only a small fraction of that (Bitcoin is the largest fraction).

Making some distinction between "generative AI" and "traditional AI". But what they're calling "traditional AI" today by and large incorporates Transformers (the backend of e.g. ChatGPT), and even where it doesn't (for example, non-ViT image recognition models) tends to "approximate" Transformers. And some outright use multimodal models anyway. It's a dumb argument and dumb terminology in general; all are "generating" results. Their "traditional" AI generally involves generative pipelines, was enabled by the same tech that enabled things like ChatGPT, and advances from the same architecture advancements that advance things like ChatGPT (as well as advancements in inference, servers, etc). Would power per unit compute have dropped as much as 33x YoY (certain cases at Google) if not for the demand for things like ChatGPT driving hardware and inference advancements? Of course not.

They use rhetoric that doesn't actually match their findings, like "hoax", "bait and switch", etc. They present zero evidence of coordination, zero evidence of fraud or attempt to deceive, and most of their examples are of projects that haven't yet had an impact, not projects that have in any way failed. Indeed, they use a notable double standard: anything they see as a harm is presented as "clear, proven and growing", even if it's not actually a harm today; but any benefit that hasn't yet scaled is a "hoax", "bait and switch", etc.

One thing that they call "bait and switch" is all of the infrastructure being built on what they class as "generative AI", saying you can't attribute that to "non-generative AI". But it's the same infrastructure. It's not bait and switch, it's literally the same hardware. And companies *do* separate out use cases in their sustainability reports.

They extensively handpick which papers they're going to consider and which they aren't. For example, they excluded the famous "Tackling Climate Change with Machine Learning" paper "on the grounds that it does not claim a ‘net benefit’ from the deployment of AI, and pre-dates the onset of consumer generative AI". Except they also classify the vast majority of what they review as "non-generative", so what sort of argument is that? Most of the papers are recent (e.g. 2025), and thus discuss projects that have not yet been implemented, whereas the Rolnick paper is from 2019, and many things that it covers have been implemented.

They use a terrible metric for measuring impacts: counts of claims and citation quality, rather than the magnitude and verifiability of individual impacts. Yet their report claims to be about impacts. They neither model nor attempt to refute the net-benefit impact studies from the IEA or Stern et al.

They cite but downplay efficiency gains. Efficiency, in general, is gained from (A) better control and (B) better design. Yet they just handwave away software-design efficiency gains (i.e., A) and improved systems-design software (from molecular up to macroscopic modeling). They handwave away, e.g., a study of GitHub Copilot increasing software development speed by 55%, ignoring that this boost also applies to all the software that itself improves efficiency.

They routinely classify claims as "weak" that demonstrably aren't - for example, Google Solar API's solar mapping does demonstrably accelerate deployment - but that's "weak" because it's a "corporate study". But if a corporate study talks about harms (for example, gas turbines at datacentres), they're more than happy to cite that.

It's just bad. Wind forecasting value uplift of 20%? Nope. 71% decrease in rainforest deforestation in Rainforest Connection monitored areas? Nope. AI methane leak detection? Nope. AI real-time balancing of renewables (particularly in controlling grid-scale batteries' decisions on when to charge and discharge)? Nope. These are real things, in use today, making real impacts. They don't care. And these are based on the same technological advances that have boosted the rest of the AI field. Transformers boost audio detection. They boost weather forecasting. They boost general data analysis. And indeed, the path forward for power systems modeling involves more human data, like text. It benefits a model to know that, say, an eclipse might be coming to X area soon, and not only will this affect solar and wind output, but many people will drive to see it, which will increase EV charging needs along their routes (which requires understanding where people will be coming from, where they're going, and which roads they're likely to choose), while decreasing consumption at those people's homes in the meantime. The path forward for energy management is multimodality. Same with self-driving and all sorts of other things.

If you're forecasting AI causing ~1% of global emissions by 2030 - a forecast that assumes a lot of growth - you really don't need much in the way of efficiency gains to offset it. The real concern is not what they focus on here; it's Jevons paradox. It's not what the AI itself consumes, but what happens if global productivity increases greatly because of AI. Then efficiency doesn't just have to offset a 1% increase in emissions - it has to offset far larger emissions growth in the broader economy.
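To put rough numbers on that offset arithmetic (all figures here are hypothetical, just to illustrate the break-even point):

```python
def efficiency_needed_to_offset(ai_share_of_emissions, ai_affected_share):
    """Economy-wide efficiency gain (as a fraction) needed in the
    AI-influenced part of the economy to offset AI's own emissions.

    ai_share_of_emissions: AI's direct emissions as a fraction of total
    ai_affected_share: fraction of total emissions AI tooling can influence
    """
    return ai_share_of_emissions / ai_affected_share

# If AI itself emits ~1% of global emissions and its tools touch half of
# the economy's emissions, a ~2% efficiency gain in that half breaks even.
print(efficiency_needed_to_offset(0.01, 0.5))  # 0.02
```

The Jevons-paradox worry is exactly that the denominator grows: if AI boosts total economic output, the emissions it must offset are no longer a fixed 1%.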

Comment Re:Bad title, bad summary: missing key information (Score 1) 136

The actual problem: the buses need replacement batteries because the current ones pose a fire risk. To allow the buses to keep operating with the existing batteries, software restrictions were installed that don't allow the buses to charge below 41°F. The software could be changed, or the buses could get their replacement batteries sooner. The summary makes it seem like there are zero solutions to the issue.

Another solution: Get buses with proper thermal management systems in their batteries. The batteries should be able to warm themselves to the safe charging temperature using stored power, while driving to the charger.

They may actually have that, and it's some bug in the thermal management system that creates the fire risk, or something similar. But if they don't, that's an actual issue that should be looked into: Why did Vermont buy buses without such a critical cold-weather feature? If that's what's going on, there is an issue but it's a governance issue, not an EV issue. It would be like buying diesel buses in North Dakota without block heaters.

Comment Re: Working in Canada (Score 1) 136

The point is perhaps we should listen to the people who stated total EV tech might never reach the predicted nirvana in some climates.

"never" and "nirvana" are both excessively strong.

There will undoubtedly be teething issues as we learn how to build EVs for different environments. In this case, it sounds like these buses really need proper battery thermal management systems that are capable of warming the batteries to the required temperature for charging. Done right, that increases charging time only trivially, because the batteries should use stored energy to warm themselves on the way to the charging station. My car (a Tesla) does this, so it's not like the technology is in any way unusual.
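For a ballpark of what that preheating actually costs, a quick sensible-heat estimate (pack mass, specific heat, and temperatures below are all hypothetical, just illustrative of the order of magnitude):

```python
def preheat_energy_kwh(pack_mass_kg, delta_t_k, specific_heat=1000.0):
    """Sensible heat needed to warm a battery pack by delta_t_k kelvin.

    specific_heat is in J/(kg*K); ~1000 is a rough figure for Li-ion cells.
    """
    joules = pack_mass_kg * specific_heat * delta_t_k
    return joules / 3.6e6  # convert J -> kWh

# Hypothetical bus pack: 3000 kg of cells warmed 15 K, from -10 C up to
# +5 C (41 F, the charging cutoff mentioned above).
print(round(preheat_energy_kwh(3000, 15), 1), "kWh")  # 12.5 kWh
```

On a ~400 kWh bus pack that's roughly 3% of capacity, which is why warming en route to the charger is a trivial cost rather than a showstopper.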

On the other hand, there will be some ways in which EVs are forever inferior to ICEVs, just as there are ways in which ICEVs are forever inferior to EVs. In both cases, the solution is to structure the system around the strengths and weaknesses of the technology, and in some cases to choose the otherwise less-desirable technology. "Nirvana" will never be achieved with any tech.

Comment Re:This is all so stupid (Score 1) 38

What matters is that LLMs reliably and dependably say how fucking awesome I am

LOL.

FYI, if that's not actually what you want, it's fairly easy to fix. All of the models allow you to specify a "personal preferences" prompt that is automatically applied to all conversations. For example, my preferences prompt for Claude says:

Ask clarifying questions when necessary and avoid trying to confirm my biases or opinions, or lauding my insights or views. Avoid calling me "astute", "shrewd", "incisive" or similar, or describing my comments with those sorts of superlatives. Take a neutral and fact-based position and err on the side of being critical of my ideas and positions.

I find this mostly fixes LLM obsequiousness, including to counter the tendency of the models to be easily convinced to agree with me.

Comment Re:Deeper than food safety (Score 2) 196

You can't bring a cow with you to Mars

Well... kind of. Most animals have small breeds. Cows remain one of the hardest, as their miniature breeds are still about 1/4th to 1/3rd the adult mass of their full-scale relatives. But there are lots of species in Bovidae (the cow/sheep/antelope family) and some of them are incredibly small - random example, the royal antelope. As for sheep and goats, you have things like Nigerian Dwarf goats, which are quite small and a good milk breed. Horses, you have e.g. teeny Falabellas. Hens of course are small to begin with, and get smaller with bantams. Fish like tilapia are probably easiest - they can be brought as teeny fingerlings, and in cold water with limited food their growth can be slowed so that they're still small on arrival. Etc.

Whatever you bring, if you bring a small breed, you can always bring frozen embryos of larger or more productive breeds to backbreed on arrival. The real issue is of course management at your destination - not simply space and food/water, but also odor, waste, dust, etc (for example: rotting manure can give off things like ammonia and can pose disease risks). That said, there are advantages. Vegetarian animals can often eat what is otherwise "waste" plant matter to humans which we either don't want, can't digest, or is outright toxic to us - and then they convert that matter into edible things like milk, eggs, and meat. The former two generally give you much higher conversion rates than the latter, although you'll always get at least some meat from old animals (either culled or via natural deaths). Tilapia can even eat (as a fraction of their diet) literal manure (albeit this is controversial due to disease risks).

Comment Re:Upgrading Memory on Second Hand Laptops (Score 1) 34

But if you install Linux, it will cut disk usage in half and RAM use by perhaps 25%.

True, for workloads that don't involve a lot of web browsing. For web browsing, I've seen a single article on Ars Technica open 40 or more Firefox content processes: one for each origin that is running its scripts in the document.
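If you want to check that process count yourself, Firefox content processes carry a distinctive flag on their command line. A small sketch (the helper name is mine; on Linux you'd feed it the output of `ps -eo args`):

```python
def count_content_processes(ps_lines):
    """Count Firefox content (site-isolation) processes in `ps` output.

    Firefox spawns one content process per origin group; each shows up
    with a -contentproc flag on its command line.
    """
    return sum(1 for line in ps_lines if "-contentproc" in line)

# Illustrative sample of `ps -eo args` output:
sample = [
    "/usr/lib/firefox/firefox",
    "/usr/lib/firefox/firefox -contentproc -childID 1 -isForBrowser",
    "/usr/lib/firefox/firefox -contentproc -childID 2 -isForBrowser",
]
print(count_content_processes(sample))  # 2
```

On a real system: `count_content_processes(subprocess.run(["ps", "-eo", "args"], capture_output=True, text=True).stdout.splitlines())`.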

Comment Re:Price (Score 2) 196

Slightly different, but a few years ago in Canada there was a push for plant-based meat replacements. The problem was not that I wouldn't be willing to eat it; it was the price. In fact, I was curious, as one of my siblings is a vegan, so it would be nice if there was something we both could enjoy. "Beyond Meat", for example, would sell 4 burger patties for $18, whereas I could buy 8 ground beef patties for $15. When the company starts by charging double the price for a "meat substitute", it's hard to get people on board.

When lab-grown or plant-based meat substitutes taste the same and cost half as much as real meat, people will find that their concerns about it not being "natural" subside and their concerns about the morality of eating "real" meat increase. Motivated reasoning FTW. Oh, there will still be some qualms for a while about whether it might not be as good as the real thing, but those will subside over time.

The real question is whether the stuff can be made and sold cheaply enough without economies of scale. I think most likely it'll follow a typical sigmoid adoption curve. At the beginning, only people with strong moral motivations to avoid real meat will pay the high prices, but that will provide enough scale to bring the price down a little, which will increase demand a little, and so on. At some point it will become cheaper than "real" meat and adoption will skyrocket. States that banned it to protect their livestock industries (which is the real reason they're doing it) will find public pressure pushes them to reverse those bans even as the livestock industry's power wanes due to revenues lost in states that allow cultivated meat. Eventually, "real" meat will become a somewhat-distasteful luxury product that gets more and more expensive as the livestock industry scales back.
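The adoption dynamic described above is just a logistic curve. A quick sketch (the midpoint and steepness are made-up parameters, purely illustrative):

```python
import math

def adoption(t, midpoint, steepness=1.0):
    """Logistic (sigmoid) adoption curve: fraction of the market using the
    substitute at time t, crossing 50% at `midpoint`."""
    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

# Hypothetical curve with the 50% crossover ~40 years out: slow early
# uptake by the morally motivated, then a steep middle, then saturation.
for year in (0, 20, 40, 60):
    print(year, round(adoption(year, midpoint=40, steepness=0.15), 2))
```

The "upper inflection point" in the prediction below is where this curve starts flattening out toward saturation.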

Assuming AI doesn't kill us all first, I predict that it will take about two generations for the curve to reach the upper inflection point.

Comment Re: Is it true? (Score 1) 106

I'm not saying those tools are not useful, or effective, only questioning the legality.

And I'm saying that effectively the whole software industry is using LLMs to write software approximately the way I am. Some a little less so, some more. If the courts were to decide five years from now (it takes that long for courts to decide anything) that AI-produced code is not copyrightable, it would be an incredible rug pull. It would throw years of work by hundreds of thousands of developers into legal limbo. Worse, it would be impossible even to tell what the legal status of that code was, because there is no reliable way to distinguish LLM-written code from human-written code, even when the code is entirely one or the other - and in fact it rarely is, except for vibe-coded products produced for people who don't have the ability to write them themselves.

If the outcome of a legal decision would be incredibly disruptive, courts don't make that decision. Programmers tend to think of laws the way we think of code: instructions to be followed exactly, with no reflection about their impact. But that's not how courts work. Courts exercise judgment, and if the outcome of interpreting the law in one way is too bad, they find a different interpretation that is not so bad. This is particularly true in the case of copyrights, whose legal basis is rooted in a clause in the constitution that is explicitly focused on promoting progress in the useful arts and sciences. Interpretations (and even laws) that specify a view of copyright that clearly harms progress are unconstitutional. "Clearly harms progress" is a high bar, of course.

As an example, consider Oracle v. Google, the case over whether Google violated Oracle's copyrights by reimplementing the Java APIs. The ultimate resolution was on Fair Use grounds, not copyrightability, but the initial district court ruling found that APIs could not be copyrighted, explicitly based on the argument that allowing APIs to be copyrighted would be too harmful to the software industry (both that ruling and the Fair Use ruling were overturned on appeal, but SCOTUS upheld the Fair Use argument and sidestepped the copyrightability argument because they didn't actually need to decide it).

So, if it comes up, courts will decide that AI-written code is copyrightable, and this will happen precisely because so much commerce today and in the near future is based on the assumption that it is.

In the longer run, AI may make this question moot, not by rendering code not worthy of copyright protection in legal terms, but by reducing the value of software to zero. Things that have no monetary value are generally not valid subjects for legal disputes, that is, not justiciable, because civil remedies are largely limited to monetary awards.

Comment Even netbooks have had x86-64 for 16 years (Score 1) 34

Even Linux Distros have "Arbitrary support dates". I guess there are not too many distros that will support a version released in 2021 beyond 2032 without making you go to the next version of the distro

Most well-known GNU/Linux distributions aren't charging for the next version, nor increasing the system requirements quite as sharply as Microsoft did from Windows 10 to Windows 11. The system requirements of Windows 10 differed little from Windows Vista's recommended specs. Windows 11 began to require much newer features in the CPU, particularly mode-based execution control (to limit the damage that an old vulnerable driver can cause) and an ongoing commitment from the CPU manufacturer to update microcode with new Spectre mitigations. (See williamyf's post on Ars OpenForum.)

There's also the small issue of Linux dropping 32-bit support going forward.

Some GNU/Linux distributions are indeed ending i686 kernels. But by 2010, practically all desktop and laptop CPUs supported x86-64, even netbook CPUs such as the Atom N450 in the Dell Inspiron mini 1012 that I used to have. So that's at least 16 years' worth of used PC hardware that you can repurpose. Anything older than that probably has sockets for 2 GB or less of DRAM, and Wirth's law has bloated the websites that people are required to use for work or for life administration for so long that 2 GB is inadequate.
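On Linux you can check for x86-64 support yourself: on x86 CPUs, /proc/cpuinfo lists the `lm` ("long mode") flag when the chip is 64-bit capable. A small sketch (the helper name is mine):

```python
def has_x86_64(cpuinfo_text):
    """Check /proc/cpuinfo contents for the 'lm' (long mode) flag,
    which indicates x86-64 capability on x86 CPUs."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return "lm" in line.split(":", 1)[1].split()
    return False

# Real usage: has_x86_64(open("/proc/cpuinfo").read())
sample = "processor : 0\nflags : fpu vme de pse lm constant_tsc\n"
print(has_x86_64(sample))  # True
```

A 32-bit-only chip (no `lm` in its flags line) returns False, so this is a quick way to triage a pile of old hardware for 64-bit distros.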

Comment Re:Yes and no (Score 1) 34

But you have others like some mini-boxes, where there is RAM soldered to the motherboard, but they still have DIMM sockets to add extra RAM, if needed

I suspect that in the long term, after the memory crunch, this sort of design is the way to go. Instead of swapping to the soldered SSD's SLC intake buffer, as Macs appear to do, they could swap to a RAM disk in a CAMM socket, the sequel to SODIMM.

Comment Timing of Windows 10 end of support (Score 1) 34

Unlike in the 90s, when there was a rapid growth in the demand for computing resources, today w/ multi-core CPUs, 64-bit computing and 8GB of RAM and higher, most laptops are likely to last longer

That's not the impression that I got from doomers griping about Windows 10 end of support coinciding with the memory shortage.

Comment Re:Mazda was correct (Score 1) 47

Only tactile feedback has any hope of keeping your eyes on the road while using the dash.

Definitely not the way Mazda did it.

With their interface you had physical wheels and buttons, sure, but they were used to move around on a screen. So rather than just a quick glance to tap the screen icon you wanted, you had to watch the screen as you moved over to the selection and "clicked" it.

It's the worst of both worlds.

Honestly, I don't mind touch-based UIs for infotainment as long as they're well-organized and keep the important things in fixed locations, and make the buttons big enough.
