
Comment Re:Soo, who to trust? (Score 2) 34

But Perplexity is basically admitting it:

"If you can't tell a helpful digital assistant from a malicious scraper, then you probably shouldn't be making decisions about what constitutes legitimate web traffic."

It really doesn't matter if Perplexity thinks they are "a helpful digital assistant". That's not what the robots.txt file says. There's no flag in there to allow only the "helpful" ones to scrape. Just don't scrape, m'kay?
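The format itself bears this out: robots.txt has only two real knobs, User-agent matching and Allow/Disallow path rules, so there is no way to express "scrape only if you're helpful." A minimal sketch using Python's standard urllib.robotparser (the robots.txt content and the bot name are hypothetical):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: the wildcard agent is told not to touch /private/.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# Every agent -- "helpful" or not -- gets the same answer:
print(rp.can_fetch("PerplexityBot", "https://example.com/private/page"))  # False
print(rp.can_fetch("PerplexityBot", "https://example.com/public/page"))   # True
```

A bot that wants an exception has to be named in its own User-agent block by the site owner; it cannot grant itself one.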

Comment One thing that bothers me about drone technology (Score 1) 50

The same technology that enables drone package delivery also enables hit men to steal a drone, attach a handgun to it, go to your address, wait for you to come outside, and shoot you... all anonymously. Ukrainians have already demonstrated the cost effectiveness of drones as killing machines.

Comment Re:Yet they have 6 million slop articles (Score 1) 27

"This doesn't seem right. So some obscure language might not have an article at all because someone hasn't written it in that language, or facts are different or missing from one language to the next?"

Despite how it seems to you, that is both right and correct.

It's correct because that's how it works, and it's right because requiring that articles in a given language be written by someone who speaks that language is what makes it possible to know whether they are slop.

"Seems Wikipedia should be taking all these different language articles, merging the most factual details from each into a master article, and then creating translated articles"

If you want translations, use a translation tool.

If you want details to be propagated from articles in languages you don't speak into the articles in languages you do speak, then make that happen.

If you don't want to put in the time to account for the barriers in place to prevent slop articles, Wikipedia doesn't want your input. Make your own encyclopedia. You may use Wikipedia articles as your starting point. GLWT!

Comment Re:Going for gold... (Score 1) 104

"...in the olympics of terrible ideas
I don't want to talk to my OS or have my OS talk to me
I don't want my OS to be any kind of agent
I want my OS to be a functional, reliable, stable OS"

I've had some great "discussions" with ChatGPT's voice interaction on topics where I needed a sounding board to throw ideas at. It worked much better than trying to type on a phone keyboard while I was taking a walk, and I think it provides a better way to just talk out ideas when I don't have a human around.

That said, I trained speech-to-text on a Windows machine a decade or so ago and it became highly accurate. Even so, I found keyboard and mouse a much more efficient way of interfacing with the computer, so it didn't take long before I discarded it.

Comment Re:Onsite generation (Score 1) 49

"Grid conditions are highly variable, and if you're in the AI biz, you aren't gonna want to shut down your LLMs for a heat wave."

There's plenty of "AI"-related processing that could be delayed without anybody noticing. Training of new models, for example: you get [access to] the new model a couple of days later and never notice, because you get it when you get it anyway. Google is also sufficiently distributed that it can simply move this processing to another location; both the queries and the results are very small, so there is no appreciable delay in doing the processing far away.
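The split the paragraph draws can be sketched as a toy scheduler. Everything here (the Job class, the grid_stressed flag, the job names) is a hypothetical illustration of deferrable versus latency-sensitive work, not anything Google actually runs:

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    deferrable: bool  # training runs can wait; live user queries cannot

def schedule(jobs, grid_stressed: bool):
    """Defer batch work during grid stress; keep serving (or rerouting)
    latency-sensitive jobs, since their payloads are small."""
    run_now, defer = [], []
    for job in jobs:
        if grid_stressed and job.deferrable:
            defer.append(job.name)     # nobody notices a delayed training run
        else:
            run_now.append(job.name)   # queries still get answered
    return run_now, defer

jobs = [Job("train-new-model", True), Job("user-query", False)]
print(schedule(jobs, grid_stressed=True))  # (['user-query'], ['train-new-model'])
```

The same logic underlies real demand-response schemes: only the work whose deadline is soft gets shed.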

Comment Re:Going for gold... (Score 2) 104

Focus group results are subject to two pretty obvious problems. One is that the kind of people who want to do them and have time to do them are not usually the people you actually want input from. Two is that the criteria for selecting focus group members can be chosen to get a desired result: you read research that says certain types of people want certain things, and then you select people like that to give positive feedback on your shitty ideas.

Comment That was not inadvertent (Score 1) 12

Inadvertent? I do not think you know what that word means.

"A researcher has scraped nearly 100,000 conversations from ChatGPT that users had set to share publicly and Google then indexed, creating a snapshot of all the sorts of things people are using OpenAI's chatbot for, and inadvertently exposing."

USERS SET THEM TO SHARE PUBLICLY
THAT IS NOT INADVERTENT
IT IS A CHOICE

TL;DR: GFY clickbait clowns
