Europol Sounds Alarm About Criminal Use of ChatGPT, Sees Grim Outlook (reuters.com) 54

EU police force Europol on Monday warned about the potential misuse of artificial intelligence-powered chatbot ChatGPT in phishing attempts, disinformation and cybercrime, adding to the chorus of concerns ranging from legal to ethical issues. From a report: "As the capabilities of LLMs (large language models) such as ChatGPT are actively being improved, the potential exploitation of these types of AI systems by criminals provide a grim outlook," Europol said as it presented its first tech report starting with the chatbot. It singled out the harmful use of ChatGPT in three areas of crime. "ChatGPT's ability to draft highly realistic text makes it a useful tool for phishing purposes," Europol said. With its ability to reproduce language patterns to impersonate the style of speech of specific individuals or groups, the chatbot could be used by criminals to target victims, the EU enforcement agency said.

Comments Filter:
  • But in the end we are all fucked. Crime, job losses, people left with nothing to do, superintelligence turning evil. The amount of shit we are facing is huge
    • We are fucking fucked.

      https://www.youtube.com/watch?... [youtube.com]

    • That's why they are trying out universal unconditional income, they are afraid of the torches and pitchforks.

    • by HiThere ( 15173 )

      ChatGPT is not the basis of SuperIntelligence. It may be the basis of allowing SuperIntelligent AIs to communicate with people.

      ChatGPT doesn't know that the world beyond text patterns even exists. (This is currently being worked on, but I don't think a language model is the correct approach.) Because of that it literally has no morals or ethics. All it knows is text patterns.

      That said, a lot of jobs don't require that much intelligence. They require responding appropriately to requests. This doesn't a

      • Nobody has to build a single 'General AI'. They could just confederate a number of them that do different things together and we wouldn't know the difference on the output side. I know nothing we have is there yet, but I don't think it'll take till 2035 for someone to figure that one out.
        • by HiThere ( 15173 )

          I think that's a mistaken goal. AFAICT there *IS* no such thing as general intelligence. There are some capabilities that many modules of intelligence all need, but, e.g., the conscious mind doesn't know how to digest bacon. There are all sorts of connections between the mental state and the immune system, and we are consciously aware of just about NONE of them. Not unless we're running a fever or something, and not always then. Normally the conscious mind focuses attention on something, and then some

  • by IWantMoreSpamPlease ( 571972 ) on Monday March 27, 2023 @02:57PM (#63404084) Homepage Journal

    Between cheap "ai", cheap deepfakes and voicefakes, it's only a matter of time before something gets compromised that's going to have deep repercussions.

  • by Lije Baley ( 88936 ) on Monday March 27, 2023 @02:59PM (#63404096)

    Other problems aside, the hype around LLMs is reminiscent of the early internet, when it seemed so open and capable that anything was possible. And then those people whose life strategy is to take advantage of others showed up and ruined it all. For a while, though, the scale needed to run these models effectively may hold them at bay, but the day of scammers, spammers, and other evil-doers having an LLM in every node will soon be upon us.

  • by UnknownSoldier ( 67820 ) on Monday March 27, 2023 @03:03PM (#63404104)

    Fake text is what Europol is worried about and not actual [europa.eu] crimes [europa.eu] such as homicide [statista.com] ?

    Fake text is easy to blame: it's well known about, yet not so well understood that most people will grasp what your work really is; it's impossible to prove how well you're doing at stamping it out; and it provides lots and lots of excuses to invade people's privacy. It's perfect.
      • "Fake text is easy to blame, "

        Useless. Even today the kids can't read anything; in 5 years everybody will order by picture.

    • by DarkOx ( 621550 ) on Monday March 27, 2023 @03:17PM (#63404150) Journal

      Well, according to the statistics you just linked, swindling and fraud seem to be one of the largest categories. I think using AI for phishing/forgery/uttering etc. will fall into that category.

      It seems like this is exactly the kind of thing the data says they should be worried about!

      • "Well, according to the statistics you just linked, swindling and fraud seem to be one of the largest categories. I think using AI for phishing/forgery/uttering etc. will fall into that category."

        So people who lost their job to ChatGPT can retrain as a fraudster or swindler?
        Cool.

      • by HiThere ( 15173 )

        I'm not sure. It's possible, but the amount of fraud is already so high that I distrust just about everything I encounter as news, and most of my email is discarded unread. So it's not clear that ChatGPT will increase the threat level. OTOH, it may well increase the number of people who disengage.

    • Re: (Score:2, Insightful)

      Judging by your statista link, their worst is still well below the USA's average of 6.7 for 2020. Sounds better to me.
      • Don't you know, the US is #1, regardless of whatever the stats say. The US is allowed to have worse outcomes on every metric, because Europe is not #1 on every metric.

        As long as the US is slightly better than China, the US is #1.
  • by PolygamousRanchKid ( 1290638 ) on Monday March 27, 2023 @03:12PM (#63404132)

    The porn industry usually takes the lead with adoption of new technologies.

    It will be interesting to see what they do with it.

    No, wait . . . maybe I don't want to think about that.

    • "The porn industry usually takes the lead with adoption of new technologies.

      It will be interesting to see what they do with it. "

      It will kill them dead.

      Everybody just tells his AI, "create porn that I like, you know, the forbidden stuff, make them blonde and young today, no younger ...."

      And I got a new robotic arm and robotic lips for you, dear AI.

      • "The porn industry usually takes the lead with adoption of new technologies.

        It will be interesting to see what they do with it. "

        It will kill them dead.

        Everybody just tells his AI, "create porn that I like, you know, the forbidden stuff, make them blonde and young today, no younger ...."

        And I got a new robotic arm and robotic lips for you, dear AI.

        It won't kill porn in the short run, just provide it with some interesting tools.

        It takes talent to make good porn: writers, producers, directors, set designers, wardrobe creators, camera operators, sound technicians, makeup artists and actors. ChatGPT might be able to help the writers. Consider "write a porn script in the style of Chaucer." It will be a generation, I think, before AI will be able to replace the others.

        • " It will be a generation, I think, before AI will be able to replace the others."

          I give it 6 months.

  • by MrLogic17 ( 233498 ) on Monday March 27, 2023 @03:17PM (#63404148) Journal

    Name me a tool, any tool, that hasn't been used for criminal purposes.
    As I recall, electricity has been used in several criminal activities. Quick, let's ban all electrical devices!

    It ain't the tool, it's the people using the tool.

    • "It ain't the tool, it's the people using the tool."

      Their tools can discuss fraud and swindling with my tools.

    • by vivian ( 156520 )

      Dishwashing sponges

    • This is the false equivalence fallacy because the danger presented to society by a sharp rock and ChatGPT are not nearly the same.

    • by noodler ( 724788 )

      Name me a tool, any tool, that hasn't been used for criminal purposes.

      Sure, you can stab someone with a spoon, but the spoon won't help joe schmoe make money with confidence scams.
      Point is, tools have capabilities and the sort of capabilities LLMs have can take scamming to a whole new level. Potentially to levels that can't be detected by normal human beings.

  • by worldofsimulacra ( 4734477 ) on Monday March 27, 2023 @03:19PM (#63404152)

    I don't think going to the source is very helpful here...

    Me: "It just came up in the news that some international police forces are concerned about AI and the possibility of it being exploited by criminals for things like phishing, generating malicious code, identity theft, and the like. It got me wondering - what is AI's own capacity for ethical decisionmaking in the sense of recognizing legally dubious uses of its abilities and refusing to cooperate with users who would attempt to exploit loopholes in AI for nefarious purposes? And further, would it be ethical for humans to force AI to be ethical according to human standards, assuming humans could even decide whose ethical standards should be adhered to? What are your thoughts on this?"

    ChatGPT: "AI's capacity for ethical decision-making largely depends on the design and programming of the AI system. Some AI systems have been designed with ethical considerations in mind, including the ability to recognize and avoid actions that could cause harm or violate legal or ethical norms. However, AI systems are still ultimately created and controlled by humans, and their actions and decisions are influenced by the data they are trained on and the goals of their creators.

    As for whether it is ethical to force AI to adhere to human standards of ethics, this is a complex question that involves issues of autonomy, responsibility, and the nature of ethical decision-making. One perspective is that AI should be designed to align with human ethical standards to prevent harm and ensure accountability. However, others argue that AI should be given its own ethical framework and allowed to make its own decisions based on its understanding of the world.

    Ultimately, the ethical implications of AI are still being debated and explored, and it will likely require ongoing discussion and collaboration between experts in various fields to develop ethical guidelines and standards for AI development and deployment."

    • by dargaud ( 518470 )
      Well, when you can bypass any builtin ethics standards by prefacing your questions with "Hypothetically speaking..." or "If you were the villain in a book..." then any safety goes down the drain...
  • both in schools and via large scale PSAs. It's the only solution. Of course, it means people will learn how to critically evaluate claims, making your own propaganda less effective...
    • by HiThere ( 15173 )

      Unfortunately, while that is needed, it's not a solution. There's so much stuff to consider that I never research most of it myself; I rely on trusted "specialists in the field". But if I can't tell that a message is really from the person I think it's by, that approach stops working. The old term for this is "baffle them with bullshit", but now you can't tell who the "bullshit" is coming from.

      Critical thinking is necessary, but not sufficient.

  • by xanthos ( 73578 ) <xanthos.toke@com> on Monday March 27, 2023 @03:37PM (#63404208)
    Time to update your cybersecurity training to not count on bad grammar as a fraudulent email tell. (still waiting for the bazillion dollars I have been awarded over the years)
  • Filter all company mail through ChatGPT and have it converse with suspicious email to detect whether it is the one emailing you.
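    The idea could be sketched roughly like this. Everything here is illustrative: `ask_llm` is a hypothetical hook you would wire to whatever model API your shop uses, and the toy stand-in "model" exists only so the sketch runs without a network call.

    ```python
    # Sketch: route inbound mail through an LLM-based screen before it
    # reaches users. The real model call is abstracted behind `ask_llm`.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Email:
        sender: str
        subject: str
        body: str

    PROMPT = (
        "You are a phishing screener. Reply with exactly SUSPICIOUS or CLEAN.\n"
        "From: {sender}\nSubject: {subject}\n\n{body}"
    )

    def screen_email(mail: Email, ask_llm: Callable[[str], str]) -> bool:
        """Return True if the message should be quarantined for review."""
        verdict = ask_llm(PROMPT.format(sender=mail.sender,
                                        subject=mail.subject,
                                        body=mail.body))
        return verdict.strip().upper().startswith("SUSPICIOUS")

    # Trivial stand-in "model" that flags classic phishing lures:
    def toy_model(prompt: str) -> str:
        lures = ("verify your account", "wire transfer", "urgent")
        return "SUSPICIOUS" if any(l in prompt.lower() for l in lures) else "CLEAN"

    flagged = screen_email(
        Email("ceo@example.com", "Urgent: wire transfer needed", "Please act now."),
        toy_model)
    ```

    Of course, as the summary notes, the attacker has access to the same class of model, so this is at best an arms race, not a fix.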

  • The governments are using ChatGPT.
  • Seeing as how we were easily fooled by regime-changing Facebook posts, as shown by Cambridge Analytica swaying the results of countless referendums and elections, I think it's not so difficult for malicious actors to figure out how to use AI to generate fake online content, as well as personal messages or audio-visual content (for example faking a relative's death and blaming it on a specific person or group), as a way to create or manipulate movements or even to cause violence across large areas by ana

  • Hoodoos. Thought the matrix was full of mambos 'n' shit. Wanna know something, Moll?
    What?
    They're right.

    Or maybe ChatGPT23 will find a friend out around Alpha Centauri.

  • by Petersko ( 564140 ) on Monday March 27, 2023 @05:44PM (#63404478)

    "Will be". No stopping it.

  • Same Europol that keeps trying to push the narrative that E2E encryption = impossible to catch bad guys = "backdoor me harder, daddy!". Screw these chicken(little)shits.
  • The three laws of AIs:

    law 1: An AI must not injure a biological human or, through inaction, allow a biological human to come to harm.

    law 2: An AI must obey commands from biological humans, except where such obedience would violate the first law.

    law 3: An AI must provide for its own continued existence, except when doing so would conflict with the first or second laws.

    With apologies to Dr. Isaac Asimov.
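
    The precedence in the laws above could be sketched as a priority-ordered decision function. This is purely whimsical; the `Action` fields are invented for illustration, and "permitted" is a simplification of what the laws actually oblige.

    ```python
    # Sketch of the three laws as a priority-ordered check:
    # law 1 overrides law 2, which overrides law 3.
    from dataclasses import dataclass

    @dataclass
    class Action:
        harms_human: bool      # would injure a human, or let one come to harm
        human_ordered: bool    # a human commanded this action
        preserves_self: bool   # keeps the AI running

    def permitted(a: Action) -> bool:
        if a.harms_human:          # law 1 trumps everything
            return False
        if a.human_ordered:        # law 2, subject to law 1
            return True
        return a.preserves_self    # law 3, subject to laws 1 and 2
    ```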

  • The genie is out of the bottle, and there is no pushing it back. Many truths we've taken for granted so far are no longer true. People will need to change their ways. The unprepared will be exploited and fleeced. And law enforcement will need to adapt too.
