Comment What's your beef with Betteridge? (Score 1) 100

There was no need to phrase this as a question: if you have ever used YouTube subtitles, you'll know they are bad.

The real question is why. While I think AI is overhyped in general, I don't believe this is the best Google could do if they put in some effort. It seems the model just picks the most likely word at the individual-word level, without considering any context: the video title or description (which can contain the very names it gets wrong), the type of content the channel typically publishes, or even the previously transcribed text. As evidence of the latter, a name can get transcribed incorrectly in different ways within the same video.

While names are probably the #1 thing it gets wrong, it also fails on ordinary English words quite often, in ways that seem very avoidable. Machine learning is all about detecting likely combinations, and their transcription model should be able to guess much better if it were given more context than just the audio. I wouldn't be surprised if you fed a YouTube transcript to an LLM that has no access to the audio, asked it to correct unlikely words, and got a better result on average.
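As a rough sketch of what I mean (hypothetical prompt and placeholder model name, using the standard OpenAI Python client):

    # Hypothetical sketch: ask a text-only LLM to clean up an auto-generated
    # transcript, giving it the context the speech model apparently ignores.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def correct_transcript(transcript: str, title: str, description: str) -> str:
        prompt = (
            "This auto-generated video transcript may contain misrecognized "
            "words, especially names. Using the title and description as "
            "context, replace unlikely words with what was probably said. "
            "Return only the corrected transcript.\n\n"
            f"Title: {title}\nDescription: {description}\n\n{transcript}"
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any capable text model
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content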

Comment Re:Local? (Score 3, Interesting) 66

The Mac Mini is a popular device for local inference: it has a built-in GPU that shares the system RAM, so you can run relatively large models for the price.

I assume Clawdbot/Moltbot can work with any inference backend that exposes an OpenAI-compatible API, so it's up to the user to choose between local inference and a subscription to someone else's LLM service.
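For instance, with the standard OpenAI Python client the only real difference between a local backend and a hosted one is the endpoint (the model name below is an example; Ollama does expose an OpenAI-compatible API at /v1):

    # Minimal sketch: the same OpenAI-compatible client code works against a
    # local inference server or a hosted service; only the endpoint differs.
    from openai import OpenAI

    # Local backend, e.g. Ollama running on a Mac Mini.
    client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")
    # Hosted alternative: client = OpenAI(api_key="sk-...")

    reply = client.chat.completions.create(
        model="llama3.1:8b",  # whatever model the local server has loaded
        messages=[{"role": "user", "content": "Summarize my notes."}],
    )
    print(reply.choices[0].message.content)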

What value there is in running an agent locally when all your data sits in cloud services is a good question, but I guess it could also use self-hosted data sources, if you have those.

Comment Re:If all of AI went away today (Score 1) 149

NVIDIA is bathing in money right now, so unless they over-extend, they should be fine.

The most useful company I've seen invest in OpenAI would be AMD, but I'm not a financial expert, so I don't know how badly it would hurt them if their investment ended up losing its value.

I think Microsoft is large enough to be able to take the blow.

Oracle has announced a ridiculously large data-center deal, which is unlikely to ever be realized, but I suspect they know that and wrote the contract in such a way that they'll be fine whatever happens.

But that is just the current situation: while the AI hype seems past its peak among developers, investments are still increasing for some reason.

Comment Re:If all of AI went away today (Score 1) 149

The question isn't whether we can do without OpenAI's products: as you point out, it would be an inconvenience but not critical. The real risk is that, with all those huge financial deals, when OpenAI fails it will take other companies down with it: companies that produce goods and services we cannot easily do without.

Comment Re:Indeed (Score 3, Insightful) 65

The UE5 games I've played are Satisfactory, The Talos Principle 2, and Ender Magnolia, all of which ran fine on my aging mid-range GPU (GeForce RTX 3060 12GB). I haven't played Expedition 33 yet, but that's a UE5 game that reportedly runs smoothly as well. So it's certainly possible to release a well-performing game with UE5.

One theory I've heard is that early releases of UE5 performed poorly and later releases improved a lot, but some developers didn't want to risk upgrading the engine after a certain point in development. I lack the knowledge to say how credible that theory is.

Comment Re:I get the point... (Score 1) 58

The #1 question when buying a game is whether it's worth your time. Only if that answer is yes does the price matter. There are a lot of games out there, so a game must either be really good or do something unique to stand out. A metroidvania that's merely okay isn't going to be commercially successful, no matter what it's priced at; the competition is just too stiff.

I don't think the $20 price point matters much for the competition. People into this genre are going to play Silksong first and only look for other games once they've either completed it or given up. And Silksong clearly over-delivers in content and polish for its price, so I don't think it will shift expectations for people who regularly buy games.

I think they had a clear grasp of their goals: earn enough money to be able to continue making games, while also getting a huge number of players to start playing immediately instead of waiting for reviews and/or a sale. Maybe they would have earned more money launching at $50, but then they wouldn't have had half a million concurrent players on Steam alone. What's the point of being indie if you're not allowed to care more about people engaging with your art than about making money?

Comment Re:Influencers != reviewers (Score 4, Insightful) 66

If it's marked as sponsored content, then it's fine for them to attach conditions on how the device can be presented. Even if no money is paid, these are expensive devices, so something of value was transferred.

If it's not marked as sponsored content, the device should be considered a gift and the receiver should be able to do with it as they see fit. I don't think it matters much whether the receiver is a reviewer or influencer.

It seems to me that Google is trying to have its cake and eat it too: use influencers to reach potential customers in a way that feels more spontaneous and trustworthy than advertising, but without accepting that the company is then no longer in control of the message, even though that lack of control is exactly what makes influencers feel trustworthy.

Comment Re:Mass Revolt by Senior Researchers? (Score 2) 49

"Move fast and break things" is a Silicon Valley motto. There is no legal issue as long as the investors are aware of what they're getting into. The problem with SBF was that he told customers that their money was safe at FTX, while in fact they were making risky investments with it at Alameda.

Comment Re:good for them (Score 2) 43

With the entire video team (*) following Nick after his departure, I think those higher-ups will soon realize they shot themselves in the foot with this decision. A brand doesn't have lasting value once whatever people liked about it is no longer there.

(*) TFA says "a number", but after hearing the names in the Second Wind announcement stream, I think it's pretty much everyone on the video team. In any case, no new videos have been posted on The Escapist's channel for a few days.

Comment Re:Link to the actual paper (Score 1) 78

None of that has any chance of happening in the near future.

Machine learning takes a huge amount of computation. In particular, while larger-capacity networks become more powerful, each step up in capability requires a disproportionately larger network. For example, Microsoft has already admitted that while GPT-4 performs well, it is too computationally expensive to deploy at large scale. Any AI with superhuman levels of intelligence would require so much compute power that it would be easy to detect and shut down: you could literally pull the plug on it. This might not be true forever, but it will take many advancements in both ML training and hardware to change it in any significant way.
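To illustrate the scaling problem (my numbers, not something from TFA): the Chinchilla scaling law (Hoffmann et al., 2022) fits model loss as a power law in parameter count N and training tokens D:

    L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
    \qquad \alpha \approx 0.34, \quad \beta \approx 0.28

With alpha around 0.34, halving the parameter-dependent part of the loss takes roughly 2^{1/0.34}, i.e. about 8x, more parameters, so each fixed improvement gets multiplicatively more expensive.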

The training advancements might be accelerated by the AI itself, but at the same time there will be diminishing returns on each advancement. There is a limit on how much intelligence you can get out of a given amount of hardware and power. Hardware advances might stretch that further, but they are going to be much slower and much more dependent on human cooperation. So it's not likely that AI jumps to superintelligence suddenly and unnoticed.

Also, I'm not certain we're all that close to AI actually becoming intelligent. While the improvements in image recognition and language processing are very impressive, AI's ability to reason is still very weak. An LLM can produce good prose, but if it's writing about two people, it's very likely to mix them up, because it has no mental model of the world it is writing about.

The extinction of all Earth-born life sounds highly unlikely: if a super-intelligent AI has no sense of self-preservation, it would be easy to get rid of. If it does, it wouldn't eliminate humans as long as it depends on human activity to keep the infrastructure that hosts it running. By the time AI and robots no longer depend on humans at all, I'd argue that they have become a new life form.

Comment Re:Link to the actual paper (Score 1) 78

Thanks for digging that up. It's rather pointless to discuss the risks without actually naming those risks.

It seems to me that all of the risks they mention are things that humans are already doing but might be boosted by AI. Is AI really the problem here?

Economic inequality is a problem, but in our current economic system it's going to get worse over time whether there is AI or not. The cynic in me wonders whether slowing down AI is just a way to stretch the status quo a bit longer, slow-boiling us frogs.

The people deliberately spreading misinformation aren't going to be held back by regulations. People who want to use AI to combat misinformation might be, though. It reminds me of the crypto export bans, which were a huge hassle but did little to make the world a safer place. In fact, protocols and systems from that era that are still in use put infrastructure at risk, so I'd argue that even on the safety front it was a net loss.

Intended or unintended biases leading to social injustice happen with statistical models as well, unless care is taken to avoid them. There might be more crime in a particular zip code, but that should not be a reason to automatically flag everyone living there as a potential criminal.
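A toy example of that proxy problem (made-up data, scikit-learn just for illustration):

    # Toy sketch: a classifier trained on historical data happily learns zip
    # code as a proxy and flags everyone from the "bad" area.
    from sklearn.tree import DecisionTreeClassifier

    # Features: (zip_code, prior_record); label: flagged as a risk.
    X = [[10001, 0], [10001, 0], [10001, 1],
         [10002, 0], [10002, 1], [10002, 0]]
    y = [1, 1, 1, 0, 1, 0]

    model = DecisionTreeClassifier().fit(X, y)
    # A person from 10001 with no prior record is still flagged.
    print(model.predict([[10001, 0]]))  # -> [1]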

I think there is a solution: regulate the outcomes, not AI itself. Hold organizations responsible for the actions AI takes in their name, just as they are responsible for the actions of their human employees.

Comment Re:I have yet to see evidence of capability (Score 1) 78

LLMs are stupid in the sense that they are good at working with language but terrible at understanding the world. Which only makes sense, since they were trained on language. It's frankly amazing that being good at language yields correct results outside of the language domain as often as it does. But if your job is threatened by an algorithm that gets it right 70% of the time, your job wasn't very secure to begin with.

The comparison to Eliza is unfair, though: Eliza only works with the information in the conversation, while LLMs can draw on a huge amount of information lossily compressed into the model. For example, Eliza cannot translate one language into another or generate code.
