Google

Did ChatGPT Conversations Leak... Into Google Search Console Results? (arstechnica.com) 12

"For months, extremely personal and sensitive ChatGPT conversations have been leaking into an unexpected destination," reports Ars Technica: the search-traffic tool for webmasters , Google Search Console.

Though it normally shows the short phrases or keywords typed into Google which led someone to their site, "starting this September, odd queries, sometimes more than 300 characters long, could also be found" in Google Search Console. And the chats "appeared to be from unwitting people prompting a chatbot to help solve relationship or business problems, who likely expected those conversations would remain private."

Jason Packer, owner of analytics consulting firm Quantable, flagged the issue in a detailed blog post last month, telling Ars Technica he'd seen 200 odd queries — including "some pretty crazy ones." (Web optimization consultant Slobodan Manić helped Packer investigate...) Packer points out that nobody clicked "share" or was given an option to prevent their chats from being exposed.

Packer suspected that these queries were connected to reporting from The Information in August that cited sources claiming OpenAI was scraping Google search results to power ChatGPT responses. Sources claimed that OpenAI was leaning on Google to answer prompts to ChatGPT seeking information about current events, like news or sports...

"Did OpenAI go so fast that they didn't consider the privacy implications of this, or did they just not care?" Packer posited in his blog... Clearly some of those searches relied on Google, Packer's blog said, mistakenly sending to GSC "whatever" the user says in the prompt box... This means "that OpenAI is sharing any prompt that requires a Google Search with both Google and whoever is doing their scraping," Packer alleged. "And then also with whoever's site shows up in the search results! Yikes."

To Packer, it appeared that "ALL ChatGPT prompts" that used Google Search risked being leaked during the past two months. OpenAI claimed only a small number of queries were leaked but declined to provide a more precise estimate. So, it remains unclear how many of the 700 million people who use ChatGPT each week had prompts routed to Google Search Console.
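A rough illustration of the mechanism Packer describes (the snippet below is a hypothetical sketch, not OpenAI's actual code, and the example prompt is invented): if a service forwards a user's full prompt as the query string of a Google search, that exact text is what Google records as the search query, and Search Console later surfaces it to the owner of any site that ranked in the results for it.

from urllib.parse import urlencode

def google_search_url(prompt: str) -> str:
    # Whatever lands in the "q" parameter is what Google records as the
    # query -- and what Search Console later shows to the owners of sites
    # that appeared in the results for it.
    return "https://www.google.com/search?" + urlencode({"q": prompt})

print(google_search_url(
    "my partner and I keep fighting about money, what should I do"))
# https://www.google.com/search?q=my+partner+and+I+keep+fighting+about+money%2C+what+should+I+do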

"Perhaps most troubling to some users — whose identities are not linked in chats unless their prompts perhaps share identifying information — there does not seem to be any way to remove the leaked chats from Google Search Console.."
Television

'Breaking Bad' Creator Hates AI, Promises New Show 'Pluribus' Was 'Made By Humans' (variety.com) 30

The new series from Breaking Bad creator Vince Gilligan, Pluribus, was emphatically made by humans, not AI, reports TechCrunch: If you watched all the way to the end of the new Apple TV show "Pluribus," you may have noticed an unusual disclaimer in the credits: "This show was made by humans." That terse message — placed right below a note that "animal wranglers were on set to ensure animal safety" — could potentially provide a model for other filmmakers seeking to highlight that their work was made without the use of generative AI.
In fact, yesterday the former X-Files writer told Variety, "I hate AI. AI is the world's most expensive and energy-intensive plagiarism machine...." He goes on about how AI-generated content is "like a cow chewing its cud — an endlessly regurgitated loop of nonsense," and how the U.S. will fail to regulate the technology because of an arms race with China. He works himself up until he's laughing again, proclaiming: "Thank you, Silicon Valley! Yet again, you've fucked up the world."
He also says "there's a very high possibility that this is all a bunch of horseshit," according to the article. "It's basically a bunch of centibillionaires whose greatest life goal is to become the world's first trillionaires. I think they're selling a bag of vapor."

And earlier this week he told Polygon that he hasn't used ChatGPT "because, as of yet, no one has held a shotgun to my head and made me do it." (Adding "I will never use it.")

Time magazine called Thursday's two-episode premiere "bonkers." Ironically, though, that premiere hit its own dystopian glitch. "After months of buildup and an omnipresent advertising campaign, Apple's much-anticipated new show Pluribus made its debut..." reports Macworld. "And the service promptly suffered a major outage across the U.S. and Canada." As reported by Bloomberg and others, users started to report that the service had crashed at around 10:30 p.m. ET, shortly after Apple made the first two episodes of the show available to stream. There were almost 13,000 reports on Downdetector before Apple acknowledged the problem on its System Status page. Reports say the outage was brief, lasting less than an hour...

[T]here remains a Resolved Outage note on Apple TV (simply saying "Some users were affected; users experienced a problem with Apple TV" between 10:29 and 11:38 p.m.), as well as on Apple Music and Apple Arcade, which also went down at the same time. Social media reports indicated that the outage was widespread.

Comment Plutocrats always get a slap on the wrist (Score 2) 50

Meta has internally acknowledged that regulatory fines for scam ads are certain, and anticipates penalties of up to $1 billion... But those fines would be much smaller than Meta's revenue from scam ads, a separate document from November 2024 states. Every six months, Meta earns $3.5 billion from just the portion of scam ads...

The fines should multiply sharply with each infraction. If the first infraction costs $1 billion, as suggested here, then the second should cost on the order of $10 billion, the third $100 billion, and so on. A first infraction should also place them under heightened scrutiny, and they should compensate the government for the cost of that scrutiny.
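A back-of-envelope sketch of that escalation (the 10x multiplier is this comment's proposal; the only figures taken from the story are the ~$1 billion anticipated penalty and the $3.5 billion per six months in scam-ad revenue):

# Escalating fines vs. the ~$3.5B per six months (~$7B/year) the
# documents say Meta earns from scam ads. The 10x step is the
# commenter's proposal, not anything Meta or regulators have said.
SCAM_AD_REVENUE_PER_YEAR = 7e9   # $3.5B every six months
BASE_FINE = 1e9                  # the ~$1B first penalty Meta anticipates

def fine(n: int) -> float:
    """Fine for the n-th infraction, multiplying 10x each time."""
    return BASE_FINE * 10 ** (n - 1)

for n in range(1, 5):
    print(f"infraction {n}: ${fine(n) / 1e9:,.0f}B "
          f"({fine(n) / SCAM_AD_REVENUE_PER_YEAR:.1f}x annual scam-ad revenue)")
# A flat $1B is roughly a seventh of one year's scam-ad revenue;
# by the third 10x step the fine dwarfs it, which is the point.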

Comment Re: Not AI (Score 2) 127

If you want a functioning society, everybody needs to be able to participate.

But I do not think people like you understand that. You just want to be inhumane and cruel to groups you perceive as subhuman. And that makes you society-destroyers. Not that any of you is smart enough to understand that. Or moral enough to see that being inhumane is maybe not a thing good people do.

Comment Re: Tip of the iceberg. (Score 1) 46

In truth, AI does mostly-great work and will replace many jobs over the next decade

I do not agree, and there is evidence. For example, about half of the larger code samples generated by AI contain vulnerabilities. That is not "mostly great", that is "pretty bad". If these vulnerabilities are systematic (i.e. somewhat predictable for attackers, who will also use AI), then this becomes "abysmally bad". A second result is that LLM use makes coders around 20% slower (!) on average, with most mistakenly believing they are faster. The cost of worse code is not included in that.

As for job replacement, we will have to see. Many jobs do not require much insight or skill; those may get reduced. Or not. A key problem (besides hallucination and a total lack of insight) is that the true cost of running and maintaining LLMs is still kept hidden. What we do know does not look good at all. It may turn out that the actual cost is, say, more than $1000 per month per professional user. That will not cut it at all. It may also turn out that general LLMs become, or already are, impossible to update or train from scratch because there is too much AI slop out there already.
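To make the cost point concrete, here is a back-of-envelope calculation under stated assumptions: the hypothetical "more than $1000 per month" price from this comment, the roughly 20% slowdown mentioned above, and an assumed $15,000/month fully loaded developer cost (an illustration, not a measured figure).

# All inputs are assumptions for illustration, as noted above.
MONTHLY_DEV_COST = 15_000.0   # assumed fully loaded cost of one developer
SLOWDOWN = 0.20               # ~20% slower, per the study cited above
TOOL_PRICE = 1_000.0          # the hypothetical ">$1000/month" scenario

lost_output = MONTHLY_DEV_COST * SLOWDOWN
total_cost = lost_output + TOOL_PRICE
print(f"lost output from ~20% slowdown: ${lost_output:,.0f}/month")
print(f"plus tool price: ~${total_cost:,.0f}/month per developer")
# Around $4,000/month per developer under these assumptions, before
# counting the cost of buggier code.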
