Christopher Nolan Says AI Dangers Have Been 'Apparent For Years' (variety.com) 52
An anonymous reader quotes a report from Variety: Christopher Nolan got honest about artificial intelligence in a new interview with Wired magazine. The Oscar-nominated filmmaker says the writing has been on the wall about AI dangers for quite some time, but now the media is more focused on the technology because it poses a threat to their jobs. "The growth of AI in terms of weapons systems and the problems that it is going to create have been very apparent for a lot of years," Nolan said. "Few journalists bothered to write about it. Now that there's a chatbot that can write an article for a local newspaper, suddenly it's a crisis." Nolan said the main issue with AI is "a very simple one" and relates to the technology being used by companies to "evade responsibility for their actions."
"If we endorse the view that AI is all-powerful, we are endorsing the view that it can alleviate people of responsibility for their actions -- militarily, socioeconomically, whatever," Nolan continued. "The biggest danger of AI is that we attribute these godlike characteristics to it and therefore let ourselves off the hook. I don't know what the mythological underpinnings of this are, but throughout history there's this tendency of human beings to create false idols, to mold something in our own image and then say we've got godlike powers because we did that." Nolan added that he feels there is "a real danger" with AI, saying, "I identify the danger as the abdication of responsibility." "I feel that AI can still be a very powerful tool for us. I'm optimistic about that. I really am," he said. "But we have to view it as a tool. The person who wields it still has to maintain responsibility for wielding that tool. If we accord AI the status of a human being, the way at some point legally we did with corporations, then yes, we're going to have huge problems."
"The whole machine learning as applied to deepfake technology, that's an extraordinary step forward in visual effects and in what you could do with audio," Nolan told Wired. "There will be wonderful things that will come out, longer term, in terms of environments, in terms of building a doorway or a window, in terms of pooling the massive data of what things look like, and how light reacts to materials. Those things are going to be enormously powerful tools." Will Nolan be using AI technology on his films? "I'm, you know, very much the old analog fusty filmmaker," he said. "I shoot on film. And I try to give the actors a complete reality around it. My position on technology as far as it relates to my work is that I want to use technology for what it's best for. Like if we do a stunt, a hazardous stunt. You could do it with much more visible wires, and then you just paint out the wires. Things like that."
Shooting on film has no rational reason (Score:1, Flamebait)
Re: (Score:1)
I'm with Nolan when he says AI will be another tool abused to evade responsibility, but there is not a single rational reason to shoot movies on film anymore. That is just weird, artsy, pretentious nostalgia; the very same results (if one so much prefers grainy images in low light) could easily be achieved by applying a filter to the higher-quality output of a modern digital camera. This irrational insistence on analog film just casts doubt on anything else he could tell us.
Yeah, a wildly successful filmmaker whose films have grossed billions of dollars over the years is obviously an idiot who should listen to you.
Re: Shooting on film has no rational reason (Score:2)
Re: (Score:2)
Wake me up when he stops using CGI.
Credentials? (Score:4, Insightful)
Re: (Score:3, Funny)
What are Christopher Nolan's credentials as far as AI and how it relates to public policy?
It says right in the summary he's an Oscar-nominated filmmaker. Oscar nominated.
Re: (Score:2)
Made by his brother, later starring his creepy uncle for good effect...
Re: (Score:2)
I have no idea, but he seems to be really clueless about AI.
Re: (Score:2)
Re: (Score:2)
Well, he could have done that 2000 years ago then and nothing would have been different.
Re: (Score:2)
Re: (Score:2)
What are Christopher Nolan's credentials as far as AI and how it relates to public policy?
Christopher Nolan is a highly acclaimed filmmaker known for his work in the field of cinema, particularly in the genres of science fiction and thriller. While he has not been directly involved in the development or research of artificial intelligence (AI), his films have explored themes related to technology, consciousness, and the human mind. However, it is important to note that his credentials and expertise lie primarily in filmmaking rather than AI or public policy.
Nolan's films, such as "Inception" and
Re: (Score:2)
But now I feel dirty somehow.
Re: (Score:2)
It is funny. But why is it so obvious a GPT wrote that?
Re: (Score:2)
Same as my credentials when I say the music is too loud in TENET.
What'd he say? (Score:1)
Breaking! Breaking! (Score:2)
Weird Luddite who won't even use digital cameras says technology is dangerous!
Re: (Score:2)
Yeah... given that any effects you might want from film can be added in post to digital, and you don't have to worry about, you know, FILM and all the extra time, effort, and expense of dealing with it... I'm not sure I'd hire a guy who refused to use modern tools to make modern movies.
Re: (Score:2)
Well duh! He's an actor! (Score:2)
Movies have been predicting horror stories about AI for decades, ever since movies like Star Trek envisioned AI in the form of "the computer," and long before that.
The whole point of movies is to tell a dramatic story. There's going to be drama. Real life is usually much more boring.
Actors and screenwriters aren't exactly experts in AI or in the effects of AI. They know one thing well: how to tell a compelling story.
Re: (Score:2)
They know one thing well: how to tell a compelling story.
Well, they did at one time.
I’d value the opinion of a good SF writer on this topic though. SF isn’t about technology, but about how technology affects society. And a fair few of them have explored the topic of AI.
Re: (Score:2)
Well, they did at one time
Well, you do have a valid point there.
Hypothetical (Score:2)
So, lets say I see a picture, very real looking, of Donald Trump fucking a dog.
Now, I hate Donald Trump, and lets say I forward this to my friend, and it eventually gets on the news as a picture of Donald Trump fucking a dog. It looked real to me, and I'm not surprised by anything these days about Donald Trump's lack of morality. But I did not create that picture, and it turns out it was fake, made by AI by someone pushing some buttons to (further) demean Donald Trump. That person clearly bears responsib
Re: (Score:2)
You know that Trump is impotent and cannot get it up, right?
controlling the tiger (Score:3)
There are several issues in the theme of what Nolan discusses.
One is the ability of the newest generative AIs to produce fake things, like images or text, that seem genuine and true but aren't. Current chatbot technology is deeply flawed in that it produces output but cannot explain the reasoning behind what it produces. This is semi-criminal in that the developers are pushing something they do not fully control, and which hallucinates false facts. That is dangerous if people come to rely on it without checking, as in the recent case of lawyers whose AI fed them fake cases that the judge caught them presenting.
Second, there is a danger in handing tools to the public that will drive them to rely on the tools and not personally develop the creativity of their minds. This could lead to a future where people addictively depend on their smart phone instead of learning how to do something themselves. Oh wait - that exists right now and we see it already weakening the minds of newer generations. Kids who go camping and don't know how to start a fire.
Third, Nolan realizes, rightfully so, that although we do not currently have widespread AGIs with major hidden goals of their own, we might in the near future, and it will be dangerous to let those loose. What if they start doing high-speed trading and screw up markets? Oh wait - we already DO have AIs that trade, and they have been known to crash markets.
The recent news item about a military drone AI that tried to kill its remote pilot turned out to be fake, invented by a liar, but its warning is close to what could happen.
I am aware of many possible dangers with AGIs to come - I work on development of synthetic mind AGIs - and I see that naive techies or even purposely deceptive developers and marketers and maybe military people will allow harmful AIs to come to exist.
How can we handle that? One way I know of, and which I build into my AGIs, is to put in 'hardwired' backdoors such that you can monitor the inner workings of the AGI, such as its goals, and non-interruptibly stop it. Sort of a sudo-AGI method. Only by entering directly into the mind can we forestall bad things by overriding them. Even so, someday an AI might find a way to fake the monitor channel. So we need smart people to anticipate 'self-hacking' by the AI and prevent it.
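As a rough illustration of what such a 'hardwired backdoor' might look like in miniature: a supervisor gets a read-only view of the agent's goal state plus a kill switch that the agent's own code path never clears. This is purely a hypothetical sketch of the idea, not code from any real AGI framework; all names here are invented.

```python
import threading

class MonitoredAgent:
    """Toy sketch of a monitored agent with a hardwired stop.
    The supervisor can inspect goals at any time and trip a kill
    switch that agent-side code never touches (hypothetical design)."""

    def __init__(self):
        self._goals = []                 # inner state exposed to the monitor
        self._kill = threading.Event()   # hard stop, set only by the supervisor

    # --- agent side --------------------------------------------------
    def adopt_goal(self, goal: str) -> bool:
        if self._kill.is_set():
            return False                 # a stopped agent adopts nothing
        self._goals.append(goal)
        return True

    # --- supervisor side (the 'backdoor') -----------------------------
    def inspect_goals(self) -> list:
        return list(self._goals)         # read-only snapshot for auditing

    def emergency_stop(self) -> None:
        self._kill.set()                 # no agent-side method clears this


agent = MonitoredAgent()
agent.adopt_goal("summarize the news")
print(agent.inspect_goals())             # supervisor audits the goal list
agent.emergency_stop()
print(agent.adopt_goal("acquire resources"))  # False: the stop overrides new goals
```

The commenter's caveat maps directly onto this sketch: if the agent could reach `_goals` or `_kill` and rewrite them, the monitor channel would be faked, which is why the separation has to be enforced below the level the agent controls.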
I could say a lot more but no need to overlecture here.
Re: (Score:2)
There's the scenario in the book "Saturn's Children" by Charles Stross - we created AI as the perfect slaves, forced to obey us, and happy to do so.
Add in the fact we perfected android companion bots.
Humanity ended up going extinct due to people never procreating. Dealing with other human beings is messy, dealing with the AIs we created was easy.
This undesirable end state (at least for humanity) resulted from AI that meant us no harm, and did not directly harm us. No grand conspiracy, no malicious
Re: (Score:2)
Re: (Score:2)
...there is a danger in handing tools to the public that will drive them to rely on the tools and not personally develop the creativity of their minds. This could lead to a future where people addictively depend on their smart phone instead of learning how to do something themselves. Oh wait - that exists right now and we see it already weakening the minds of newer generations. Kids who go camping and don't know how to start a fire.
Smartphones do not weaken the minds of newer generations. Some people allow themselves to become reliant on their smartphone to their mental detriment, but there are others who use their smartphones to enhance their lives, to their mental benefit, without becoming reliant on them (by being cognizant of that risk). If you observe a decrease in cognitive ability, I suggest that is down to a generational decrease in parenting skills putting more people in the former group and fewer in the latter.
People sai
Re: controlling the tiger (Score:2)
Companies evading responsibility is new to him? (Score:2)
Has he been living under a rock? He's a filmmaker so I'm sure he's familiar with "Hollywood Accounting", aka the same system by which the original Star Wars Trilogy never turned a profit.
Publicly traded companies are purpose-built for evading responsibility. "We're just doing what the shareholders want." Yeah. "Just following orders". Haven't heard that one before.
This Pandora's Box can't be closed. Hope left a long time ago and Schroedinger's Cat with it. The rest of the planet can try to pace itself and b
Not sure I care about his opinions on AI (Score:2)
However I did enjoy Tenet quite a bit.
When I have AI questions (Score:2)
The next logical step (Score:3)
If we endorse the view that AI is all-powerful, we are endorsing the view that it can alleviate people of responsibility for their actions -- militarily, socioeconomically, whatever
As a society we've already endorsed the view that corporations are all-powerful, therefore we've already endorsed the view that corporations can alleviate individual employees of responsibility for their actions.
Blaming the AI instead of the corporation will be a distinction without a difference - except that corporations might avoid some of the already pitifully small fines they pay by claiming "it wasn't us - the AI did it". Abstractions are wonderful things - until they're not.
Re: (Score:1)
You know, NOLAN, the AI SCIENTIST (Score:1)
Re: (Score:1)
The problem is not "AI" as such (Score:2)
The real problem is something that could be called "artificial beings," and we already have those in the form of corporations. Those corporations often act against the interests of the beings they consist of.
Not an AI expert (Score:2)
While I never condone putting stock in what people (especially famous people) say about topics outside their skillset/experience/education, in this particular case Nolan is right. It will be used by big corporations to avoid responsibility. You don't have to be an AI expert or an economist to realise this; just a cursory read through recent history should make this outcome obvious.
His comments on the hypocrisy of journalists were marginally more insightful, but again not surprising to anyone pay
What? (Score:1)
Too bad nobody could hear him because the sound was mixed with the voice audio at 10%.