
OpenAI's Spectacular Video Tool Is Shrouded in Mystery
Every OpenAI release elicits awe and anxiety as capabilities advance, evident in Sora's strikingly realistic AI-generated video clips that went viral while unsettling industries reliant on original footage. But the company is again being secretive in all the wrong places about AI that can be used to spread misinformation. From a report: As usual, OpenAI won't talk about the all-important ingredients that went into this new tool, even as it releases it to an array of people to test before going public. Its approach should be the other way around. OpenAI needs to be more public about the data used to train Sora, and more secretive about the tool itself, given the capabilities it has to disrupt industries and potentially elections. OpenAI Chief Executive Officer Sam Altman said that red-teaming of Sora would start on Thursday, the day the tool was announced and shared with beta testers. Red-teaming is when specialists test an AI model's security by pretending to be bad actors who want to hack or misuse it. The goal is to make sure the same can't happen in the real world. When I asked OpenAI how long it would take to run these tests on Sora, a spokeswoman said there was no set length. "We will take our time to assess critical areas for harms or risks," she added.
The company spent about six months testing GPT-4, its most recent language model, before releasing it last year. If it takes the same amount of time to check Sora, that means it could become available to the public in August, a good three months before the US election. OpenAI should seriously consider waiting to release it until after voters go to the polls. [...] OpenAI is meanwhile being frustratingly secretive about the source of the information it used to create Sora. When I asked the company about what datasets were used to train the model, a spokeswoman said the training data came "from content we've licensed, and publicly available content." She didn't elaborate further.
Ulterior Motives (Score:1)
Just so everyone is on the same page - there is nothing AI can do that couldn't be done before. People can drum up fake images. People can fake audio clips. With more work, video can be faked as well. You could argue that AI makes it easier for people who don't know what they are doing to make faked media, but the tools to do this stuff manually used to cost thousands or tens of thousands of dollars, and now the same work can be done on a mid-range phone.
Re:Ulterior Motives (Score:5, Insightful)
But most importantly: People don't need any perfectly faked video or audio to believe the most outrageous bullshit. They believe it even against all real evidence.
Re: (Score:1)
You could argue that AI makes it easier for people who don't know what they are doing to make faked media, but the tools to do this stuff manually used to cost thousands or tens of thousands of dollars, and now the same work can be done on a mid-range phone.
Yes, thank you mister obvious. Any other deeply insightful commentary you would like to share?
Re: (Score:2)
You could argue that AI makes it easier for people who don't know what they are doing to make faked media, but the tools to do this stuff manually used to cost thousands or tens of thousands of dollars, and now the same work can be done on a mid-range phone.
Yes, thank you mister obvious. Any other deeply insightful commentary you would like to share?
He was obviously using AI to comment on his behalf.
Ok, whatever (Score:2)
I'll wait for the movie to come out; it doesn't sound THAT interesting or mysterious that I'd want to dig deeper into it.
Maybe have AI unearth it and write a story about it.
bouncy castle world (Score:2)
Maybe I'm naive, I'm not so concerned about AI outsmarting humans and doing nefarious things. Humans doing bad things with AI is a concern, in the same way humans doing bad things with guns and bombs is a concern. I don't know about you but I do not want to live in a bouncy castle world where someone has decided for me what is safe and it is impossible to get hurt or do anything remotely interesting.
I think a lot of fears around AI are mostly marketing. Nothing magical happens because a computer can write m
Re: (Score:2)
https://xkcd.com/2228/ [xkcd.com]
Re: (Score:2)
While this technology won't best people that are actually good at doing things, it will likely be good enough to replace many lower tier people. The technology is continuing to improve as well.
Likely these kinds of tools will be used by people to increase their efficiency and where you may employ 5 people, you may now get by with 2 or 3 people instead. That's still quite a huge savings. Sure, when perfect is required, this won't be as helpful but most things don't need to be perfect, just good enough.
It wou
Re: (Score:2)
You don't need to get by with 2-3 people instead of 5; you can keep all 5 and double the output.
Did we keep all the farmers after the introduction of the tractor?
The AI models are "information tractors", replacing "information farmers".
The tractor dramatically reduces the need for skilled farmers.
Except the "information tractor" continuously grows in scope and is not limited to information.
Re: (Score:3)
Yet. Right now AI can't best people who are good at doing things, but in the not-too-distant future that will change. We've already seen the effects AI is having on the programming industry. This latest story shows what AI can do now. Imagine what it can do in five years.
AI will have a detrimental effect on several industries, and people will lose their jobs over it. Where they will go from there when human-created wor
Re: (Score:2)
"While this technology won't best people that are actually good at doing things, it will likely be good enough to replace many lower tier people."
Replace lower tier? You have it reversed.
Watch the videos again, and this time imagine that the videos were created by expert humans. ...
Created by:
- actors
- animators
- programmers
- film directors
- set designers
- technicians
Then, poof, all those people are replaced with one person and a computer.
The person writes:
- lis
Oh my god who cares (Score:1)
Democracy is already dead or dying.
Just hook us up to the matrix already.
A quote comes to mind... (Score:1)
Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should.
Re: (Score:2)
Ideally this technology could help lead us to a utopia one day, but given human nature, our greedy overlords will just let us die instead. We're not generous to those we perceive as less than, and since everything is about money in our society, those who have no money are clearly less than and don't deserve anything.
I wonder if our social programs will catch up before enough of us decide that burning civilization down is preferable to the status quo. What good is all this advancement if the gains are only bei
Re: (Score:3)
Science is no longer driving AI development. The driving factor now is money. A company can do more with fewer people. It is the company's legal obligation to its stockholders to reduce costs and increase profit. I can see no reason why this would not continue even with every single employee being replaced.