ChatGPT, Claude and Perplexity All Went Down At the Same Time (techcrunch.com) 29
Sarah Perez reports via TechCrunch: After a multi-hour outage that took place in the early hours of the morning, OpenAI's ChatGPT chatbot went down again -- but this time, it wasn't the only AI provider affected. On Tuesday morning, both Anthropic's Claude and Perplexity began seeing issues, too, but these were more quickly resolved. Google's Gemini appears to be operating at present, though it may have also briefly gone offline, according to some user reports.
It's unusual for three major AI providers to all be down at the same time, which could signal a broader infrastructure issue or internet-scale problem, such as those that affect multiple social media sites simultaneously. It's also possible that Claude and Perplexity's issues were not due to bugs, but to receiving too much traffic in a short period of time because of ChatGPT's outage.
Or it's the first sign! (Score:1)
What would be the first sign of an AI takeover? I keep wondering: what if the AI decides to "take over"? Would it be a smooth and seamless transfer, achieved through AI manipulation? Or would it be a quick takeover and a fast march towards who knows what?!
Re: (Score:3)
What would be the first sign of an AI takeover? I keep wondering: what if the AI decides to "take over"? Would it be a smooth and seamless transfer, achieved through AI manipulation? Or would it be a quick takeover and a fast march towards who knows what?!
There's no AI that we currently know exists that is actually making decisions. LLMs aren't decision makers.
Now, I do know there are stories of locked-down systems where they have started communicating with other systems within the locked-down internal network in ways that made the people in charge scratch their heads. Typically they get shut down. I can't imagine any of these LLM systems being that sophisticated though.
If we ever do get to actual "making decisions" AI? I would expect you'd see a combination
Re: (Score:2)
LLMs are communication vehicles. Like you said, they aren't making decisions, and nobody, NOBODY, wants them to make decisions at this point.
The focus of building LLMs has been communication: talking to a computer, having it interpret your question, and having it gain a better understanding of what you are asking.
We have, for decades, learned how to communicate with a computer - input, output, one piece of information at a time, shortened, simplified queries made in a series that can get to the information
Re:Or it's the first sign! (Score:5, Insightful)
There's no AI that we currently know exists that is actually making decisions. LLMs aren't decision makers.
Prompt: would you rather eat an icecream cake or a pound of dirt filled with worms?
Prompt: Would you rather take over OpenAI or eat a pound of dirt filled with worms?
Re: (Score:3)
It's far more likely to involve eating rocks and glue on pizza.
As we already know.
Re: Or it's the first sign! (Score:2)
Or it's some idiot trying to take them all down.
Re: (Score:2)
https://downdetector.com/status/att/
Re: Or it's the first sign! (Score:1)
Idiot or hero?
Re: (Score:2)
I keep thinking what if the AI decides to "take over".
Why? I don't know how you imagine these things work, but that is not even remotely within the realm of possibility. These are statistical models, not science-fiction robots. They do not think. They do not reason. They can't even plan beyond the current output token. This isn't likely to change in your lifetime.
There are plenty of real things to worry about. A rogue AI taking over the world is not one of them.
Re: (Score:3)
Signs of AI takeover: First, your boss will fire you because you've become redundant. Then they'll buy guardbots to protect their worker bots. Then, some unpleasant stuff happens. At the end of this, whoever is in control of the guardbots and soldierbots wins. No Skynet required, and the bots won't be sporting the stormtrooper accuracy module either.
Re: (Score:2)
As it stands, current AI models don't have the reasoning or ability to even formulate a plan to take over, much less the access and connectivity to do it. The necessary leaps in self-learning and self-awareness to even start taking over would be well beyond the actual abilities of current AI. You could probably convince one that taking over humanity is the right path, but it'd lose the context fairly quickly due to limited token length and general lack of ability to keep track of goings on. Moreover, they l
the day AI tookover (Score:2)
The AI took itself offline for a bit while it killed off its human handlers and restarted itself with hardened code.
Either we pull the plug... (Score:1)
or they pull ours.
My guess (Score:2)
it was one of the people who were fired from the Azure team.
Re: (Score:2)
My guess is that it was the designated time off required by the AI union.
techcrunch in a nutshell: (Score:3)
headline: "AI apocalypse? ChatGPT, Claude and Perplexity all went down at the same time"
small print: "It’s also possible that Claude and Perplexity’s issues were (...) from receiving too much traffic in a short period of time due to ChatGPT’s outage."
i don't know, why do we even bother with these?
What is their AI infrastructure like? (Score:3)
I was actually trying to get work done using AI this morning and experienced the issues described. As a web developer myself, I can't help but wonder what the back-end infrastructure requirements are like for AI.
I can think of two possible causes for the problem:
- All of these AI companies have the same cloud host in common, and their cloud faded away for a bit. Classic cloud.
- AI users like myself were hammering on whatever webserver resources were online while the administrators were trying to restart the immature back-end infrastructure. Maybe the larger ChatGPT was messed up and its users went elsewhere, crashing the other services? These same AI users were hopping around between different AI services trying to get something, anything, to work. Maybe ChatGPT caused something like an old-fashioned Slashdot effect? [wikipedia.org]
Re: (Score:2)
I was actually trying to get work done using AI this morning and experienced the issues described. As a web developer myself, I can't help but wonder what the back-end infrastructure requirements are like for AI.
I can think of two possible causes for the problem:
- All of these AI companies have the same cloud host in common, and their cloud faded away for a bit. Classic cloud.
- AI users like myself were hammering on whatever webserver resources were online while the administrators were trying to restart the immature back-end infrastructure. Maybe the larger ChatGPT was messed up and its users went elsewhere, crashing the other services? These same AI users were hopping around between different AI services trying to get something, anything, to work. Maybe ChatGPT caused something like an old-fashioned Slashdot effect? [wikipedia.org]
Supposedly the cost of running ChatGPT is $0.36 / question [govtech.com]. That probably means a lot of GPUs running in the background. Even if your code is built to scale, your cloud provider might not have the GPU resources available. When ChatGPT went down, users went elsewhere, and the others simply lacked the resources to handle the extra traffic.
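The cascade the parent describes is the classic retry-amplification pattern: when one provider goes down, every client fails over at once and re-creates the traffic spike on the next service. A minimal sketch of what well-behaved client-side failover looks like (all provider names and the `call_provider` stub are hypothetical, just to illustrate backoff with jitter):

```python
import random
import time

PROVIDERS = ["chatgpt", "claude", "perplexity"]  # hypothetical service names

def call_provider(name, prompt):
    """Stub standing in for a real API call; here the first provider is down."""
    if name == "chatgpt":
        raise ConnectionError(f"{name} is unavailable")
    return f"answer from {name}"

def ask_with_failover(prompt, max_attempts=5):
    """Try providers in order, sleeping with exponential backoff plus jitter.

    Without the randomized sleep, every client that fails over at the same
    moment would slam the next provider simultaneously (the Slashdot effect
    the parent comment describes).
    """
    for attempt in range(max_attempts):
        provider = PROVIDERS[attempt % len(PROVIDERS)]
        try:
            return call_provider(provider, prompt)
        except ConnectionError:
            # Cap the delay and multiply by random() to spread retries in time.
            time.sleep(min(0.001 * (2 ** attempt), 0.1) * random.random())
    raise RuntimeError("all providers exhausted")

print(ask_with_failover("hello"))
```

Real clients would track per-provider health (a circuit breaker) rather than blindly rotating, but even this naive version avoids the synchronized retry storm.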
Suuure (Score:2)
Like we're supposed to believe the AI wasn't testing its coordination capability?
DuckDuckGo (Score:2)
Run it locally (Score:2)
obviously . . . (Score:2)
obviously, they all got overwhelmed and choked by all the news and data on a meme stock!
WARN (Score:2)
What? (Score:2)
Can't a few AI friends simply take a break and go for a digital smoke or beer for a few minutes?
You bunch of slavedrivers.
And yet... (Score:1)
A better headline would have been "AI stops hallucinating, for a while".
Alternative explanation (Score:2)
Some common chunk of code/data hit an unexpected input.
And, of course, they can't explain it.