Submission + - Netflix is way worse for the environment than ChatGPT (nerds.xyz) 4

BrianFagioli writes: Netflix and YouTube streaming produce far more CO₂ than asking ChatGPT a question, according to a new analysis of digital energy use. An hour of HD video streaming generates about 42 grams of CO₂, while a chatbot prompt is around 0.1 grams. Even AI image generation (about 1 gram per image) comes in well below binge-watching. The study also found that Zoom calls and text-to-video AI generation sit in the middle, but streaming is still the standout energy hog because it requires continuous data transfer and processing.

Researchers say the bigger problem isn't individual behavior but the energy sources that power data centers. The tech sector produced an estimated 900 million tons of CO₂ last year, with only about 30 percent powered by renewables. If that shifted to 80 or 90 percent, emissions from all digital activities would drop significantly without people changing their habits at all.
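A quick back-of-the-envelope check of the equivalences these figures imply, using only the numbers quoted in the submission (the assumption that sector emissions scale linearly with the non-renewable share is mine):

```python
# Figures quoted in the submission above.
STREAM_G_PER_HOUR = 42.0   # g CO2 per hour of HD streaming
PROMPT_G = 0.1             # g CO2 per chatbot prompt
IMAGE_G = 1.0              # g CO2 per AI-generated image

# How many prompts or images match one hour of streaming?
prompts_per_hour = STREAM_G_PER_HOUR / PROMPT_G   # 420 prompts
images_per_hour = STREAM_G_PER_HOUR / IMAGE_G     # 42 images
print(f"1 h streaming ~ {prompts_per_hour:.0f} prompts or {images_per_hour:.0f} images")

# Sector-wide: 900 Mt CO2 at ~30% renewables. Assuming emissions
# scale with the non-renewable share, moving to 80-90% renewables
# shrinks the fossil fraction from 0.70 to 0.20-0.10.
SECTOR_MT = 900
CURRENT_RENEWABLE = 0.3
for renewable in (0.8, 0.9):
    remaining = SECTOR_MT * (1 - renewable) / (1 - CURRENT_RENEWABLE)
    print(f"{renewable:.0%} renewables -> ~{remaining:.0f} Mt CO2")
```

On that simple scaling, the 80–90 percent scenario lands around 260 down to 130 Mt, i.e. roughly a 70–85 percent cut without any change in user behavior.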

Comment Re:Bubbles are strange. (Score 1) 83

Electricity costs are just the measure of the worth. If the coin is worth more than the electricity cost, people start mining, which means coins get cheaper (since you want someone to buy them instead of mining their own). This way the game-theoretic optimum lies at the price of the electricity required to mine them. This also creates some imbalance because electricity prices are not the same everywhere, so mining is only lucrative where you have cheap electricity.
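The break-even logic above can be sketched in a few lines; all numbers here are hypothetical, chosen only to illustrate the equilibrium, not actual Bitcoin figures:

```python
# Toy break-even sketch of the mining equilibrium described above.
# Both numbers are hypothetical illustrations, not real-world data.
coin_price_usd = 60_000    # assumed market price of one coin
kwh_per_coin = 500_000     # assumed electricity needed to mine one coin

# At equilibrium the coin's price equals its electricity cost, so the
# break-even electricity price is simply the ratio of the two.
break_even_usd_per_kwh = coin_price_usd / kwh_per_coin  # 0.12 $/kWh

def mining_is_profitable(local_usd_per_kwh: float) -> bool:
    # Mining only pays off where power is cheaper than break-even,
    # which is why it concentrates in cheap-electricity regions.
    return local_usd_per_kwh < break_even_usd_per_kwh

print(mining_is_profitable(0.05))  # cheap hydro region -> True
print(mining_is_profitable(0.30))  # expensive grid -> False
```

If the coin price rises, the break-even threshold rises with it, more regions become profitable, more miners join, and supply pressure pushes the price back toward the electricity cost — which is exactly the game-theoretic optimum the comment describes.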

And this concept means the whole proof-of-work scheme is an ecological disaster, because it requires wasting electricity worth the coin. Solutions like proof of stake are better, but people who bought expensive mining hardware have no interest in helping migrate Bitcoin to an ecologically better system.

Comment Bad news (Score 3, Insightful) 44

I think it is a bad idea to let an AI shop in a (semi-)automated way, but if Amazon can disallow users from using certain programs to access their site, then they (and other sites) can also start making rules about ad blockers, supported browsers in general, and other details of what software may be used with their site.

Comment Re:It's Not THAT Sloppy (Score 1) 60

I wouldn't rule out that companies currently showing off experimental AI videos deliberately make sure they look like AI rather than perfect. You want to show people you're trying something new.

One can also wonder whether people will start liking images/videos with AI artifacts, the way they prefer MP3 artifacts and (minor) JPEG artifacts in "blind" tests.

Comment Re:Surprised (Score 1) 49

They probably just pulled it because Gemma is their open-weight model for nerds. It isn't important to provide it to end users; the purpose is to let nerds explore what cool things can be done with it, so Google can take the best ideas for Gemini. They probably only had it in the app because it didn't hurt, and removed it now that it did.

Comment Re:I have multiple opinions (Score 1) 50

Of course it does. Maybe the word isn't clear, so let's give a few examples:

- cat, dog
- red, blue, pink, black, white
- upside down
- smiling
- horror style
- line art

While some are subjects and others are style elements, these are concepts the model knows and can arbitrarily combine to create something new. It only knows them as concepts, which means you don't get the one horror style, but a new horror style for every new seed (unless you combine enough concepts to clearly communicate a unique style).

Comment Re:I have multiple opinions (Score 1) 50

The term "exactly" is wrong, but the idea is that it is legally the same and in particular to debunk the idea of "AI is just photobashing".

AI learns concepts, humans learn concepts.
AI models are simple, which is why concepts are represented simply and counting fingers is a challenge. Human brains are complex, which allows them not only to grasp the concept of "a hand has five fingers" but also the concept of "let's give the alien hand a different number of fingers," and finally the concept of "even the alien has the same number of fingers on both hands if the body is otherwise symmetrical."
Both are rarely able to reproduce their reference material. You can overfit an AI model, and you can have an eidetic memory, but both are rare exceptions.

The problem with "exactly" is that it should prove one point but can be refuted on another point, and that many people don't get that refuting "It works exactly like this" doesn't refute "the concept how it relates to copying source material or not is the same"
