Comment Re: How does it stay below the sea? (Score 1)
Your English parsing skills are lacking.
Does anyone have any insight into how the liquefied CO2 stays where they are planning to pump it below the seabed?
The article only says it'll be below a 400-foot-thick layer of shale which they think will prevent it from escaping, but it's not clear why, given that presumably it'll warm up, turn back into a gas, and try to escape through any tiny crack it can find.
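For what it's worth, part of the answer is presumably just pressure: below roughly 800 m of water, hydrostatic pressure already exceeds CO2's critical pressure (~7.38 MPa), so the CO2 stays in a dense, liquid-like/supercritical phase rather than flashing back to gas. A back-of-envelope sketch in Python; the depths and densities here are my round-number assumptions, not figures from the article:

```python
# Back-of-envelope: at what water depth does hydrostatic pressure exceed
# CO2's critical pressure, keeping it in a dense (supercritical) phase?
# Round-number assumptions, not values from the article. Phase also
# depends on temperature, which this sketch ignores.

RHO_SEAWATER = 1025.0   # kg/m^3, typical seawater density
G = 9.81                # m/s^2
P_ATM = 0.101e6         # Pa, surface pressure
P_CRIT_CO2 = 7.38e6     # Pa, critical pressure of CO2

def pressure_at_depth(depth_m: float) -> float:
    """Hydrostatic pressure (Pa) under a water column depth_m metres deep."""
    return P_ATM + RHO_SEAWATER * G * depth_m

for depth in (500, 800, 1000, 1500):
    p = pressure_at_depth(depth)
    phase = "dense/supercritical" if p > P_CRIT_CO2 else "gas (buoyant)"
    print(f"{depth:5d} m: {p/1e6:5.1f} MPa -> CO2 likely {phase}")
```

On those numbers the crossover is around 700-800 m, so at typical injection depths the pressure keeps it dense in the first place; the shale cap is there for the cracks.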
I think it's entirely possible, maybe likely, that "survival of the fittest" will determine that a high level of intelligence is more detrimental than beneficial, since it creates too many ways to destroy your own species and habitat.
Maybe it'll be a bit like dinosaur gigantism that was explored and found to fail (not enough food when times get tough?).
> "I have failed you completely and catastrophically," Gemini CLI output stated. "My review of the commands confirms my gross incompetence."
I should not have put the hamster in the microwave. Reviewing the steaming remains of Fluffy confirms my gross incompetence.
The numbers that Altman is claiming are presumably made up.
Millions, or tens of millions, in salary for superstars is possible, and maybe a similar sign-on bonus (with some minimum-stay requirement), but $100M sounds like bunk.
It's been suggested that Altman is making up these crazy numbers to pre-empt an OpenAI exodus by making anyone who is offered less money feel as if they are being low-balled.
Great use case.
Even if we had full-on human level AGI (which we don't), you'd still need to iterate and correct.
You wouldn't expect to give a non-trivial programming task to another human without having to iterate: refining the understanding of requirements and corner cases, making changes after code review, catching bugs with unit/system tests, etc.
If you hired a graphic artist to create a graphic to illustrate an article, a book, or an advertising campaign, you also wouldn't expect them to get it right the first time. You're going to iterate and work with the artist until you get something close to what you were hoping for (or maybe different, but better).
How much iteration and feedback you need with today's "AI" depends on what kind of AI you are talking about (just LLMs, or also things like image generators), what you are using it for, and how skilled you are in using it.
If you are using an LLM to learn about something, then you will have a conversation with it and probably not regard this as "iteration" any more than you would with a human, even though it really is.
If you are using an LLM to write code or find bugs, then a large part of it is going to be how much project-specific context you have provided. If you are just relying on what is baked into the LLM (which is not entire software projects - it's the content of the internet put through a blender and chopped/mixed into training fragments), then all bets are off.
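As a concrete (and entirely hypothetical) illustration of what "providing context" means in practice, here's a minimal Python sketch; the file names and task are made up, and you'd send the resulting prompt to whatever chat endpoint you use:

```python
from pathlib import Path

# Hypothetical sketch: hand the LLM your project's actual source as context,
# instead of relying on whatever fragments ended up in its training data.
def build_bugfix_prompt(task: str, files: list[str]) -> str:
    parts = ["You are reviewing code from MY project. Use ONLY the files below.\n"]
    for name in files:
        source = Path(name).read_text()
        parts.append(f"--- {name} ---\n{source}\n")
    parts.append(f"Task: {task}")
    return "\n".join(parts)

# e.g. build_bugfix_prompt("Find the off-by-one error in pagination",
#                          ["views.py", "paginator.py"])
# The file names above are made up for illustration.
```

The point is just that the model only "knows" your codebase to the extent you paste it in (or a tool does it for you).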
I think I'll just fill in the log "no intelligent life here", and move along.
What kind of moron starts yelling at something that they believe is a computer/AI?!
Do they also scream at the alarm clock when it wakes them up?
"requiring", not "not requiring".
It's 2025 - where is my edit button?
> A recent Pew study found that Americans think software engineers will be most affected by generative AI
I'm not sure that will turn out to be true. Perhaps it's more a reflection of how little the average non-developer knows about what the job entails.
All jobs that can be done sitting in front of a computer are likely to be among the first affected by, or replaced by, AI, but the ones to fall first will most likely NOT be those requiring deep and exact reasoning. Jobs where today's generative AI is already good enough in many cases (commercial art, non-creative writing, etc.) will be the ones to go once adoption catches up to what the tech is capable of.
Generative AI in general is surprisingly good at generating pictures, videos, and other output traditionally thought of as creative, and perhaps surprisingly bad at jobs where the key ingredient is instead intelligence and analysis (high-level human intelligence). The reason is that current "AI", whether diffusion models or transformers, is designed for copying human output, not for coping with novel situations where it needs to learn and adapt. That ability to learn on the job, rather than be "pre-trained", is entirely missing from today's AI.
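To make that concrete: in (for example) PyTorch terms, a deployed model's weights are literally frozen at inference time, so nothing it "learns" in your session persists. A minimal sketch, with a toy model standing in for the real thing:

```python
import torch
import torch.nn as nn

# Toy stand-in for a "pre-trained" model: weights were set during training.
model = nn.Linear(4, 2)
model.eval()  # inference mode: training-only behaviors (dropout etc.) are off

before = model.weight.clone()

with torch.no_grad():               # no gradients tracked at inference
    out = model(torch.randn(1, 4))  # the model "answers", but...

assert torch.equal(before, model.weight)  # ...its weights never change.
# Whatever happened in this session is forgotten; only the context in the
# prompt, not the weights, carries any "learning on the job".
```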
BYD have been working on solid-state batteries for years; they are apparently much safer, have about double the energy density (a claimed 1200 mi range), and fast-charge (a claimed 900 mi of range added in 12 min - see the back-of-envelope sketch below).
They are now ready to put them into cars, and start ramping up the volume over the next few years.
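For a sense of scale, here's what "900 mi in 12 min" implies for charging power; the 0.25 kWh/mi consumption figure is my assumption for a typical EV, not a BYD number:

```python
# Rough check on what "add 900 mi of range in 12 min" implies.
# Consumption figure is an assumption (typical EV), not a BYD spec.
MILES_ADDED = 900
MINUTES = 12
KWH_PER_MILE = 0.25   # assumed typical EV consumption

energy_kwh = MILES_ADDED * KWH_PER_MILE    # ~225 kWh delivered
power_kw = energy_kwh / (MINUTES / 60)     # ~1125 kW sustained
print(f"{energy_kwh:.0f} kWh in {MINUTES} min -> ~{power_kw:.0f} kW charging")
```

That's megawatt-class charging, which is as much a grid and connector problem as a battery one.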
Not sure I'd want a nuclear reactor in my car, even if there was one small enough and cheap enough.
You seem to be confusing the source of electricity with the need to store it, or perhaps you are proposing that EVs be connected to the grid like trams and electric trains?
Is this the most interesting thing anyone can think of to make a movie about OpenAI - a weekend drama where the cult-like employees are all instructed to take to Twitter to get their sociopathic CEO back?
If these companies really believe that what they are building is going to be massively detrimental to society (even if it may also have some benefits), then they would not be developing it at breakneck speed - they'd be lobbying government to regulate against it, especially since OpenAI and Anthropic were both supposedly founded to help mitigate the risks of AI. Of course OpenAI/Altman sold out long ago, but isn't Anthropic still claiming to be a "public benefit" company?
This spiel of "Better watch out because we're building something that will fuck you in 2 years" is a bit like saying "Better build a nuclear defense because I'm gonna nuke you in 2 years". They want to come across as the good guys giving a heads-up to anyone who will listen, but they are in fact the bad guys accelerating the doom scenario they are "warning" about.
It's interesting to see China take a different approach at the state level, emphasizing building the "brakes" for disruptive AI before building disruptive AI itself (which gives a different read on the capabilities of current SOTA AI, which is also available to them from DeepSeek).
While the American AI companies might genuinely be warning us that they are about to destroy 50% of American jobs, I think CNN may have the right take here: this is salesman talk. These AI CEOs are trying to scare corporate America into buying their product to avoid being left behind, and perhaps to attract investment.
Factorials were someone's attempt to make math LOOK exciting.