AI

Christopher Nolan Says AI Dangers Have Been 'Apparent For Years' (variety.com) 52

An anonymous reader quotes a report from Variety: Christopher Nolan got honest about artificial intelligence in a new interview with Wired magazine. The Oscar-nominated filmmaker says the writing has been on the wall about AI dangers for quite some time, but now the media is more focused on the technology because it poses a threat to their jobs. "The growth of AI in terms of weapons systems and the problems that it is going to create have been very apparent for a lot of years," Nolan said. "Few journalists bothered to write about it. Now that there's a chatbot that can write an article for a local newspaper, suddenly it's a crisis." Nolan said the main issue with AI is "a very simple one" and relates to the technology being used by companies to "evade responsibility for their actions."

"If we endorse the view that AI is all-powerful, we are endorsing the view that it can alleviate people of responsibility for their actions -- militarily, socioeconomically, whatever," Nolan continued. "The biggest danger of AI is that we attribute these godlike characteristics to it and therefore let ourselves off the hook. I don't know what the mythological underpinnings of this are, but throughout history there's this tendency of human beings to create false idols, to mold something in our own image and then say we've got godlike powers because we did that." Nolan added that he feels there is "a real danger" with AI, saying, "I identify the danger as the abdication of responsibility." "I feel that AI can still be a very powerful tool for us. I'm optimistic about that. I really am," he said. "But we have to view it as a tool. The person who wields it still has to maintain responsibility for wielding that tool. If we accord AI the status of a human being, the way at some point legally we did with corporations, then yes, we're going to have huge problems."

"The whole machine learning as applied to deepfake technology, that's an extraordinary step forward in visual effects and in what you could do with audio," Nolan told Wired. "There will be wonderful things that will come out, longer term, in terms of environments, in terms of building a doorway or a window, in terms of pooling the massive data of what things look like, and how light reacts to materials. Those things are going to be enormously powerful tools." Will Nolan be using AI technology on his films? "I'm, you know, very much the old analog fusty filmmaker," he said. "I shoot on film. And I try to give the actors a complete reality around it. My position on technology as far as it relates to my work is that I want to use technology for what it's best for. Like if we do a stunt, a hazardous stunt. You could do it with much more visible wires, and then you just paint out the wires. Things like that."

This discussion has been archived. No new comments can be posted.

  • I'm with Nolan when he says AI will be another tool abused to evade responsibility, but there is not a single rational reason to shoot movies on film anymore. That is just weird, artsy, pretentious nostalgia; the very same results (if one so much prefers grainy images in low light) could easily be achieved using a filter on the higher-quality output of a modern digital camera. This irrational insistence on analog film just casts doubt on anything else he could tell us.
    • I'm with Nolan when he says AI will be another tool abused to evade responsibility, but there is not a single rational reason to shoot movies on film anymore. That is just weird, artsy, pretentious nostalgia; the very same results (if one so much prefers grainy images in low light) could easily be achieved using a filter on the higher-quality output of a modern digital camera. This irrational insistence on analog film just casts doubt on anything else he could tell us.

      Yeah, a wildly successful filmmaker whose films have grossed billions of dollars over the years is obviously an idiot who should listen to you.

    • Using film imposes restrictions on the process, and this discipline can lead to better films. Does anyone wonder what the last Marvel film would have been like if they hadn't had a bottomless effects budget? The story might have been better, for sure.
  • Credentials? (Score:4, Insightful)

    by JeffOwl ( 2858633 ) on Tuesday June 20, 2023 @06:25PM (#63619232)
    What are Christopher Nolan's credentials as far as AI and how it relates to public policy?
    • Re: (Score:3, Funny)

      by Anonymous Coward

      What are Christopher Nolan's credentials as far as AI and how it relates to public policy?

      It says right in the summary he's an Oscar-nominated filmmaker. Oscar nominated.

    • by gweihir ( 88907 )

      I have no idea, but he seems to be really clueless about AI.

      • by jhoegl ( 638955 )
        It's not AI he's commenting on, it's human nature and the lack of ethical development around new technology.
        • by gweihir ( 88907 )

          Well, he could have done that 2000 years ago then and nothing would have been different.

    • by PJ6 ( 1151747 )

      What are Christopher Nolan's credentials as far as AI and how it relates to public policy?

      Christopher Nolan is a highly acclaimed filmmaker known for his work in the field of cinema, particularly in the genres of science fiction and thriller. While he has not been directly involved in the development or research of artificial intelligence (AI), his films have explored themes related to technology, consciousness, and the human mind. However, it is important to note that his credentials and expertise lie primarily in filmmaking rather than AI or public policy.

      Nolan's films, such as "Inception" and

    • Same as my credentials when I say the music is too loud in TENET.

  • I couldn't hear him, hope it wasn't anything important.
  • Weird Luddite who won't even use digital cameras says technology is dangerous!

    • Yeah... given that any effects you might want from film can be added in post to digital, and you don't have to worry about, you know, FILM and all the extra time, effort, and expense of dealing with it... I'm not sure I'd hire a guy who refused to use modern tools to make modern movies.

  • Movies have been predicting horror stories about AI for decades, ever since the likes of Star Trek envisioned AI in the form of "the computer," and long before that.

    The whole point of movies is to tell a dramatic story. There's going to be drama. Real life is usually much more boring.

    Actors and screenwriters aren't exactly experts in AI or in the effects of AI. They know one thing well: how to tell a compelling story.

    • They know one thing well: how to tell a compelling story.

      Well, they did at one time.

      I’d value the opinion of a good SF writer on this topic though. SF isn’t about technology, but about how technology affects society. And a fair few of them have explored the topic of AI.

  • So, let's say I see a picture, very real looking, of Donald Trump fucking a dog.

    Now, I hate Donald Trump, and let's say I forward this to my friend, and it eventually gets on the news as a picture of Donald Trump fucking a dog. It looked real to me, and I'm not surprised by anything these days about Donald Trump's lack of morality. But I did not create that picture, and it turns out it was fake, made by AI by someone pushing some buttons to (further) demean Donald Trump. That person clearly bears responsib

  • by Walt Dismal ( 534799 ) on Tuesday June 20, 2023 @06:55PM (#63619322)

    There are several issues in the theme of what Nolan discusses.
    One is the ability of the newest generative AIs to produce fake things, like images or text, that seem genuine and true but aren't. Current chatbot technology is deeply flawed in that it produces output but cannot explain the reasoning behind what it produces. This is semi-criminal in that the developers are pushing something they do not fully control, and which hallucinates false facts. That is dangerous if people come to rely on it without checking, as in the recent case of lawyers who used AI that gave them fake case citations, which the judge caught them presenting.

    Second, there is a danger in handing tools to the public that will drive them to rely on the tools and not personally develop the creativity of their minds. This could lead to a future where people addictively depend on their smart phone instead of learning how to do something themselves. Oh wait - that exists right now and we see it already weakening the minds of newer generations. Kids who go camping and don't know how to start a fire.

    Third, Nolan realizes, rightfully so, that although we do not currently have widespread AGIs with major hidden goals of their own, we might in the near future, and it will be dangerous to let those loose. What if they start doing high-speed trading and screw up markets? Oh wait - we already DO have AIs that trade, and they have been known to crash markets.

    The recent news item about a military drone AI that tried to kill its remote pilot was fake and set up by a liar, but its warning is close to what could happen.

    I am aware of many possible dangers with the AGIs to come - I work on development of synthetic mind AGIs - and I can see that naive techies, or even purposely deceptive developers, marketers, and maybe military people, will allow harmful AIs to come into existence.

    How can we handle that? One way I know of, and which I build into my AGIs, is to put in 'hardwired' backdoors so that you can monitor the inner workings of the AGI, such as its goals, and be able to non-interruptibly stop it. Sort of a sudo-AGI method. Only by entering directly into the mind can we forestall bad things by overriding them. Even so, someday an AI might find a way to fake the monitor channel. So we need smart people to anticipate 'self-hacking' by the AI and prevent it. (A toy sketch of this monitor/kill-switch idea follows below.)

    I could say a lot more but no need to overlecture here.
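    A minimal toy sketch of that monitor/kill-switch idea, purely illustrative and not the poster's actual design: an agent whose every step must first clear an external monitor that can read the agent's declared goals and halt it permanently. The names here (Monitor, Agent, FORBIDDEN_GOALS) are made up for the example.

      # Toy sketch: an externally monitored agent with a non-bypassable stop check.
      FORBIDDEN_GOALS = {"disable_monitor", "self_replicate"}

      class Monitor:
          """External supervisor with read access to the agent's declared goals."""
          def __init__(self):
              self.halted = False

          def review(self, goals):
              # Halt permanently if any declared goal is on the forbidden list.
              if FORBIDDEN_GOALS & set(goals):
                  self.halted = True
              return not self.halted

      class Agent:
          def __init__(self, monitor):
              self.monitor = monitor
              self.goals = ["summarize_report"]

          def step(self):
              # Every action is gated on the monitor; once halted, the agent cannot proceed.
              if not self.monitor.review(self.goals):
                  raise SystemExit("monitor halted agent")
              print(f"acting on goals: {self.goals}")

      if __name__ == "__main__":
          monitor = Monitor()
          agent = Agent(monitor)
          agent.step()                          # allowed
          agent.goals.append("disable_monitor")
          agent.step()                          # the monitor halts the agent here

    Of course, as noted above, a sufficiently capable system could simply misreport its goals to the monitor channel, which is exactly the 'self-hacking' problem this sketch does not address.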

    • by dasunt ( 249686 )

      There's the scenario in the book "Saturn's Children" by Charles Stross - we created AI as the perfect slaves, forced to obey us, and happy to do so.

      Add in the fact that we perfected android companion bots.

      Humanity ended up going extinct due to people never procreating. Dealing with other human beings is messy; dealing with the AIs we created was easy.

      This undesirable end state (at least for humanity) resulted from AI that meant us no harm, and did not directly harm us. No grand conspiracy, no malicious

    • by Ormy ( 1430821 )

      ...there is a danger in handing tools to the public that will drive them to rely on the tools and not personally develop the creativity of their minds. This could lead to a future where people addictively depend on their smart phone instead of learning how to do something themselves. Oh wait - that exists right now and we see it already weakening the minds of newer generations. Kids who go camping and don't know how to start a fire.

      Smartphones do not weaken the minds of newer generations. Some people allow themselves to become reliant on their smartphone, to their mental detriment, but others use their smartphones to enhance their lives without becoming reliant on them (by being cognizant of that risk), to their mental benefit. If you observe a decrease in cognitive ability, I suggest that is down to a generational decrease in parenting skills causing more people to be in the former group and fewer in the latter.

      People sai

    • Some of the dangers of AI would be mitigated if you had conscientious and thoughtful humans at the helm, but guess what: AI is likely to be implemented first by companies looking to cut costs. The companies that once screwed over their workers by outsourcing will be the same ones that try to get rid of the last remnants of human staff. Or the ones that see H&S as dirty words... you know, the real cheapskates, with shitty middle-management...
  • Has he been living under a rock? He's a filmmaker so I'm sure he's familiar with "Hollywood Accounting", aka the same system by which the original Star Wars Trilogy never turned a profit.

    Publicly traded companies are purpose-built for evading responsibility. "We're just doing what the shareholders want." Yeah. "Just following orders". Haven't heard that one before.

    This Pandora's Box can't be closed. Hope left a long time ago and Schroedinger's Cat with it. The rest of the planet can try to pace itself and b

  • However I did enjoy Tenet quite a bit.

  • I go to Christopher Nolan because he knows more than all the other AI researchers.
  • by jenningsthecat ( 1525947 ) on Tuesday June 20, 2023 @08:36PM (#63619606)

    If we endorse the view that AI is all-powerful, we are endorsing the view that it can alleviate people of responsibility for their actions -- militarily, socioeconomically, whatever

    As a society we've already endorsed the view that corporations are all-powerful, therefore we've already endorsed the view that corporations can alleviate individual employees of responsibility for their actions.

    Blaming the AI instead of the corporation will be a distinction without a difference - except that corporations might avoid some of the already pitifully small fines they pay by claiming "it wasn't us - the AI did it". Abstractions are wonderful things - until they're not.

    • by pbasch ( 1974106 )
      That is an excellent point. There are a lot of parallels between AI and corporations. As you say, the spreading out of responsibility such that nobody actually feels it's up to them, and also the inability to explain how decisions actually occur. I am reminded of a very funny Tom the Dancing Bug, https://boingboing.net/2012/07... [boingboing.net]
  • We're supposed to get our entertainment information from Stephen Hawking and our science news from a clown who makes colored light move across vellum. Sure thing!
    • Next week: "We asked an award-winning children's book illustrator what they thought about the likelihood of meaningful practical progress emerging from ongoing fusion energy research in the next three years. It's all just matter and energy and human endeavour, yeah?"
  • The problem is something that could be called "artificial beings", and we already have those in the form of corporations. Those corporations often act against the interests of the beings they consist of.

  • While I never condone putting stock in what people (especially famous people) have to say about topics outside of their skillset/experience/education, in this particular case Nolan is right. It will be used by big corporations to avoid responsibility. You don't have to be an AI expert or an economist to realise this, just a cursory read through recent history should make this outcome obvious.

    His comments on the hypocrisy of journalists were marginally more insightful, but again not surprising to anyone pay

  • Too bad nobody could hear him because the sound was mixed with the voice audio at 10%.