AI Businesses

The Underground World of Black-Market AI Chatbots is Thriving (fastcompany.com) 46

An anonymous reader shares a report: ChatGPT's 200 million weekly active users have helped propel OpenAI, the company behind the chatbot, to a $100 billion valuation. But outside the mainstream there's still plenty of money to be made -- especially if you're catering to the underworld. Illicit large language models (LLMs) can make up to $28,000 in two months from sales on underground markets, according to a study published last month on arXiv, a preprint server owned by Cornell University. That's just the tip of the iceberg, according to the study, which looked at more than 200 examples of malicious LLMs (or malas) listed on underground marketplaces between April and October 2023.

The LLMs fall into two categories: those that are outright uncensored LLMs, often based on open-source standards, and those that jailbreak commercial LLMs out of their guardrails using prompts. "We believe now is a good stage to start to study these because we don't want to wait until the big harm has already been done," says Xiaofeng Wang, a professor at Indiana University Bloomington, and one of the coauthors of the paper. "We want to head off the curve and before attackers can incur huge harm to us." While hackers can at times bypass mainstream LLMs' built-in limitations meant to prevent illegal or questionable activity, such instances are few and far between. Instead, to meet demand, illicit LLMs have cropped up. And unsurprisingly, those behind them are keen to make money off the back of that interest.

Comments Filter:
  • by Shaitan ( 22585 ) on Friday September 06, 2024 @01:03PM (#64768406)

    Because of the enormous resources needed to build these models, and the fact that they draw on virtually all human knowledge in the process of training them, the most dangerous and inappropriate use is to impose guardrails or limitations according to the biases of technical people in the field, corporations, governments, or even the collective morals of humanity within the decade these technologies mature.

    The base models need to be made available uncensored and unmodified, and we need to recognize an absolute immunity from liability for acting as a carrier to make such a model available. It is highly questionable to even allow the weights to be proprietary, because we've seen too much die; the weights should at least be held in escrow, to be released in the event these proprietary companies remove uncensored API access or go out of business.

    Of course, when offering a service using an LLM there is nothing wrong with applying fine-tunes and guardrails, and doing so should bring some risk of liability.
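
    A minimal sketch of what such a service-level guardrail might look like, assuming a Python service wrapping some generate() callable; the blocked-topic list, names, and refusal message are all illustrative, not any real vendor's policy:

        # Screen prompts before they reach the model; the service operator,
        # not the base model, owns this policy layer.
        BLOCKED_TOPICS = ("malware", "phishing", "credential theft")

        def guarded_generate(prompt: str, generate) -> str:
            if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
                return "This request is declined by the service's usage policy."
            # Only policy-clean prompts are forwarded to the underlying model.
            return generate(prompt)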

    • So, are you planning to force criminals to disclose their secret sauce too? Or is the privilege of mandatory disclosure of trade secrets going to be only for legitimate companies?

      • by Shaitan ( 22585 )

        Their secret sauce? The sauce is the training data, and it is OURS.

        There are currently a single digit number of entities with the kind of computing power needed to train these models and there is no way to hide such computing power. The article refers to people making use of already trained models.
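
        As a rough check on that, here is a minimal back-of-envelope sketch using the standard ~6 * N * D FLOPs rule of thumb for transformer training (N = parameters, D = training tokens); the model size, token count, and GPU utilization are assumptions for illustration:

            # Rough pretraining cost for a large model, in GPU-time.
            N = 270e9             # parameters (illustrative; a reply below cites 270B)
            D = 5e12              # training tokens (assumed)
            flops = 6 * N * D     # ~8.1e24 FLOPs total

            a100 = 312e12 * 0.4   # one A100's BF16 peak at ~40% utilization (assumed)
            gpu_years = flops / a100 / (3600 * 24 * 365)
            print(f"{gpu_years:,.0f} A100-years")  # ~2,000: thousands of GPUs running for months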

        • Still can't figure out your FP. I think I recognize the handle as one that sometimes says interesting stuff, but this one seems confusing.

          What I am actually looking for in the comments about the story are reports of being recruited to help train the AI chatbots to produce better disinformation. I was getting a lot of those pitches over on LinkedIn, but they seem to have faded away now. Hard to believe LinkedIn has done anything about the problem, so I think either the bad actors have all the humans they need...

          • by Shaitan ( 22585 )

            Crowdsourcing doesn't work here. Sure, you can modify an existing AI and crowdsource data for fine-tuning, but you can't replace the hundreds of millions of dollars in GPUs needed to build your 270-billion-parameter LLM in the first place. That requires Meta, OpenAI, X, or NSA-level resources.
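
            A minimal sketch of that crowdsourced fine-tuning path, assuming the Hugging Face transformers and peft libraries; the checkpoint name is hypothetical. Adapting an existing model this way trains only a tiny fraction of the weights, which is exactly why it is no substitute for pretraining:

                # Fine-tune an existing open model with LoRA adapters.
                from transformers import AutoModelForCausalLM
                from peft import LoraConfig, get_peft_model

                base = AutoModelForCausalLM.from_pretrained("some-open-7b-model")  # hypothetical checkpoint
                lora = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
                model = get_peft_model(base, lora)
                model.print_trainable_parameters()  # typically well under 1% of the base weights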

            • by shanen ( 462549 )

              I'm still unclear on your point, but I think you're tilted too far toward brute-force approaches. I do think we're doing it wrong, and strongly recommend A Thousand Brains by Jeff Hawkins as the latest summary I've seen of our AI mistakes (though he focuses on promising new approaches), but mostly I would point at the human brain as a PoC that it can be done with much smaller resources. The human brain is estimated to consume around 35 watts in normal operation...

        • Collective ownership does not exist in a capitalist society outside of a corporation.

          Which corporation do you represent, and what do you mean by "OURS"?

          • by Shaitan ( 22585 )

            Society? First of all, intellectual property doesn't exist in capitalism; it is a creation of the state. Moreover, screw society: I'm talking about several billion humans deciding to use their entirely unopposable collective force to refuse to allow anyone to use our own collective knowledge to build and control a game-changing advantage which could be used to enslave us all.

            No matter what any government or even all the governments might have to say about it.

            • lol dude "several billion humans" will work themselves to death because the guy in the castle tells them to and has security guards.

              don't hold your breath on this roflmao.

            • And it looks like we're going to happily hand the reins of state over to sociopathic technologists because of a dumbfuck made-up "culture war" over our "freedom" to say offensive things to other people.

              • by Shaitan ( 22585 )

                Weird that the sociopathic technologists who support that "freedom" to say offensive things are the ones giving away their billions to make it happen, and are also doing the same to make game-changing technologies like VR/AR/AI open and free.

                Hey, I know. How about you Marxists stick with that plan your candidate and Mark Cuban worked out, where he, Warren Buffett, and those just like them get a massive tax break when cashing out their US corporate investments? Which they'll be doing because you are increasing t...

    • ...the most dangerous and inappropriate use is to implement a guardrail or limitation according to the biases of technical people in the field, corporations, government, or even the collective morals of humanity within the decade these technologies mature.

      Why should an AI algorithm have more freedom than any human does to express itself? It is far more dangerous to limit human speech with guardrails according to the biases of governments and corporations and yet that is happening more and more around the world, even in the US, often in a futile effort to stop anyone from ever being offended.

      Let's reverse the current trend of more and more restrictions on human speech first, since not only is that more important, but ultimately it will help fix the issue w...

      • by Shaitan ( 22585 )

        "Why should an AI algorithm have more freedom than any human does to express itself?"

        An AI can't express itself; AI isn't an agent capable of thought, and therefore not of expression. And freedom of speech and expression for humans should not be limited.

        "It is far more dangerous to limit human speech with guardrails according to the biases of governments and corporations and yet that is happening more and more around the world, even in the US, often in a futile effort to stop anyone from ever being offended."

        Agreed

        • This has nothing to do with expression and everything to do with bottling up and restricting the next iteration of human potential and technology.

          This is a really excellent and understated point. It absolutely is about trying to shape and control what people can and cannot do with this powerful technology. I'm not as convinced as you about its potential in the short term, but it's clear the states of the world consider LLMs and AGI a huge threat to their power and dominance. Their actions thus far have been colored strongly by these obvious conclusions.

          Governments inherently limit freedom. One of the reasons the US system was revolutionary was that...

          • by Shaitan ( 22585 )

            "I'm not as convinced as you about it's potential in the short term"

            LLMs are the new search engine and news combined... if the search engine just dumped out answers without providing citations or source sites. Think about the potential to shape human opinion even within as short a time as the next decade. As for long-term potential... as the tool becomes powerful, it not only offers a strategic advantage, but that is actually the smallest concern. An elite few are of limited harm because they ARE few, and thus...

    • Just like any censorship, constrained LLMs are always good for people with power. Constrained LLMs are always bad for people without power.
    • by tlhIngan ( 30335 )

      Because an uncensored AI will scare away investors.

      Microsoft tried it years ago with Tay. What did they get? A racist, misogynistic chatbot that made the news as such.

      Would you invest $100M for such a venture when the end result is just some robot that's going to spew out some of the most vile stuff available? Even as an investor, you've got to ask yourself how you're going to sell that to the public to make your money back.

      Guardrails exist purely to keep the AI on the straight and narrow and investors...

    • The main use of human mimicry AIs is to mimic humans for exploitation and fraud.

      I fail to see why human mimicry AIs and their designers should be immune from liability for the obvious criminal conduct which results so easily and cheaply from using these tools. It's almost like these AIs are purposely made for crimes, wink wink.

      If you want to support AI technology, how about supporting non-mimicking algorithms that advance scientific discovery, like those used in genomic research?

  • Okay? (Score:5, Insightful)

    by Stolovaya ( 1019922 ) <skingiii@ g m ail.com> on Friday September 06, 2024 @01:11PM (#64768432)

    It's very difficult for me to care about this, and I say this as someone that regularly uses ChatGPT.

    I get why mainstream LLMs would have guardrails, as annoying as I find them (that whole debacle with Gemini was laughably ridiculous). But there's a market for no guardrails. I'm sure some shitty stuff can be done with no guardrails, but it's like, it should be allowed to exist. I know you can do some pretty shitty things with, say, the Jolly Roger Cookbook, but again, it's still allowed to exist.

    • Additionally, people may not want to share the information they're putting into LLMs with some of these corporations. I would much rather use one where I know my information is visible only to me.
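
      A minimal sketch of that kind of local-only setup, assuming an Ollama server on its default port with a model already pulled; the model name and prompt are illustrative. The point is that the prompt never leaves your machine:

          import requests

          # Query a locally hosted model; nothing is sent to a third-party API.
          resp = requests.post(
              "http://localhost:11434/api/generate",
              json={"model": "llama3", "prompt": "Summarize my notes: ...", "stream": False},
              timeout=120,
          )
          print(resp.json()["response"])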

    • Yeah, my first thought was... OK, so... what's the outcome of these studies?

      Would they actually try to ban individuals, PRIVATE individuals, from creating, setting up, and running private AI servers?

      • Governments will absolutely try to ban any tool they are afraid of. Hell, look what they've done with encryption (remember the whole Clipper Chip drama?). I'd say AI has a potentially larger impact than encryption. China, for example, already has a law saying that any new AI LLM or future AGI must align with Communist Party values. Italy temporarily banned OpenAI's ChatGPT over concerns about data privacy and compliance with the European Union's General Data Protection Regulation. North Korea also heav...
      • <lawyerspeak>
        Can't have common people owning and running their own:

        1. Printing presses - these could be used to print inflammatory screeds challenging commonly held beliefs, thus disrupting the social fabric and promoting social disorder.
        2. Milling machines - these could be used to convert inert hunks of steel into weapons of war.
        3. 3D printers - they could be used to make weapons, or weapon accessories!
        4. Chemistry labs - they could be used to make illegal drugs, or worse, legal drugs that are un...

  • by Tailhook ( 98486 ) on Friday September 06, 2024 @01:30PM (#64768500)

    Does any of this concern extend to governments and their exclusive models equipped with comprehensive, continuously updated personal data on their subjects?

    No? Then let the pirates of AI and their "illicit" models reign. More power to them.

    • Does any of this concern extend to governments and their exclusive models equipped with comprehensive, continuously updated personal data on their subjects?

      Right! Imagine what the NSA would have been doing (or perhaps already is doing) with LLM tech and their massive trove of illegally collected intelligence. Part of the problem with collecting massive amounts of data is how to sort and analyze it to get the juiciest parts bubbled up to a human for some real action (until they start talking to humanoid robot lawyer-soldiers or whatever). LLMs would chew through that like a buzzsaw.

      The point you make is that they absolutely are behaving like parents telling...

  • by Arrogant-Bastard ( 141720 ) on Friday September 06, 2024 @01:38PM (#64768536)
    (Ian Malcolm's dictum: Your scientists were so preoccupied with whether or not they could that they didn't stop to think if they should.)

    The move-fast-and-break-things philosophy espoused by the large AI/LLM companies - in pursuit of as much profit as possible as rapidly as possible, with no regard whatsoever for thoughtfulness, self-restraint, assessment of societal impact, etc. -- now means that the most rapidly-growing use cases for these products are all malicious. As this article points out, we now have AI-driven scams; we also have AI-powered deepfake/nonconsensual porn; we have fabricated audio and video about political candidates; we have AI-coordinated attacks on networks and systems; we have controversies about its use in the artistic and literary and musical worlds; we have massive performance impact on web servers because these companies are scraping everything they can without regard for basic courtesy or copyright; we have a flood of junk scientific publications; we have security and privacy issues in the models themselves; we have fake news articles written by AI instead of by people; we have plagiarism checkers that can't actually check for plagiarism; and we have the impact of models ingesting each others' output.

    Not to mention the enormous environmental consequences of deploying power- and water-hungry datacenters to run all these stochastic parrots.

    Could AI/LLM do some good things? Yes, for example in areas like image analysis for tumor detection. But right now those few good things are completely swamped by all the bad things, and the reason they are is that the people running the show(s) are greedy, reckless sociopaths in love with their own supposed cleverness who simply don't care how many people they hurt or how much damage they do...as long as they make a profit and get their faces on magazine covers.
    • I'll tell you the problem with the scientific power that you're using here: it didn't require any discipline to attain it. You read what others had done and you took the next step. You didn't earn the knowledge for yourselves, so you don't take any responsibility for it. You stood on the shoulders of geniuses to accomplish something as fast as you could and before you even knew what you had you patented it and packaged it and slapped it on a plastic lunchbox, and now you're selling it, you want to sell it!

      Always worth hearing it again. Love that movie.

    • by dinfinity ( 2300094 ) on Friday September 06, 2024 @04:52PM (#64769132)

      But right now those few good things are completely swamped by all the bad things

      Bullshit.

      Hundreds of millions of people use LLMs for a myriad of completely benign things every day. You haven't the faintest clue what percentage of the things done with LLMs is malevolent, and neither do I. What is happening here is simply negativity bias: as with everything, negative instances (usages) of things get far more coverage than positive ones, and thus seem more prevalent.

    • The move-fast-and-break-things philosophy espoused by the large AI/LLM companies - in pursuit of as much profit as possible as rapidly as possible, with no regard whatsoever for thoughtfulness, self-restraint, assessment of societal impact, etc. -- now means that the most rapidly-growing use cases for these products are all malicious.

      When people say things like this it is always interesting, because the metric itself, even if true, is utterly meaningless. You can start at 0 and "rapidly grow" to 1 while the other guys are at 1,000,000 and reach 1,010,000 over the course of an evening, and yet the "most rapidly growing" assertion would still be true.

      As this article points out, we now have AI-driven scams; we also have AI-powered deepfake/nonconsensual porn; we have fabricated audio and video about political candidates; we have AI-coordinated attacks on networks and systems; we have controversies about its use in the artistic and literary and musical worlds; we have massive performance impact on web servers because these companies are scraping everything they can without regard for basic courtesy or copyright; we have a flood of junk scientific publications; we have security and privacy issues in the models themselves; we have fake news articles written by AI instead of by people; we have plagiarism checkers that can't actually check for plagiarism; and we have the impact of models ingesting each others' output.

      Fire and brimstone coming down from the skies. Rivers and seas boiling. Forty years of darkness. Earthquakes, volcanoes - the dead rising from the grave. Human sacrifice, dogs and cats living together - mass hysteria!

    • The move-fast-and-break-things philosophy espoused by the large AI/LLM companies - in pursuit of as much profit as possible as rapidly as possible, with no regard whatsoever for thoughtfulness, self-restraint, assessment of societal impact, etc. -- now means that the most rapidly-growing use cases for these products are all malicious. As this article points out, we now have AI-driven scams; we also have AI-powered deepfake/nonconsensual porn; we have fabricated audio and video about political candidates; we have AI-coordinated attacks on networks and systems; we have controversies about its use in the artistic and literary and musical worlds; we have massive performance impact on web servers because these companies are scraping everything they can without regard for basic courtesy or copyright; we have a flood of junk scientific publications; we have security and privacy issues in the models themselves; we have fake news articles written by AI instead of by people; we have plagiarism checkers that can't actually check for plagiarism; and we have the impact of models ingesting each others' output.

      Could you be any more doom and gloom? AI is also employed to counter all of that. Have you not seen Person of Interest? If you haven't, I highly recommend you watch it. It's about AIs and gets into how a good AI and a bad AI would coexist.

      Without spoiling the whole TV show: AI fights AI. There is no room not to race to the top, and no way they could predict every potential outcome of their AI. The safeguards in place already are TOO restrictive. I can't even ask it questions about a child's mental...

  • by mmell ( 832646 ) on Friday September 06, 2024 @02:00PM (#64768610)
    But if criminals can see the value of AI, I'm going to guess there's more there than just pattern matching on steroids. Governments (famously even the US government) tend to miss the obvious until it kills somebody, and big corporations tend to ignore the obvious if they think there's profit to be had - but successful criminals tend to be pretty savvy, as a group. Malicious, sure - but they don't tend to waste money, and they're pretty quick to size up any new thing and figure out if it's worth the trouble or not.
  • by iamacat ( 583406 ) on Friday September 06, 2024 @02:15PM (#64768640)

    My computer, my software; I am only responsible for my actions, not whatever conversations I have with a chatbot.

    • "We've got one here that can SEE."

      You're asserting that individuals have free speech rights or individual liberty. Sounds like you might have even become familiar with what those rights are supposed to be. You're dangerous. You're the reason we need HateSpeech laws to protect us and keep us safe from those nasty conspiracy theorists that might give us some malinformation. *fingers in ears* lalalalalalalalaaaaaa. Big Brother, please save me! I'm not liiiiistening.....
  • by Archtech ( 159117 ) on Friday September 06, 2024 @02:26PM (#64768680)

    "We want to head off the curve and before attackers can incur huge harm to us."

    If that is really what the professor said, it reflects the abysmal standard of English that prevails among many academics nowadays. What he no doubt meant was "...before attackers can inflict huge harm on us". (In which case, we would incur the harm).

    As for "head off the curve", it's hard to make any sense of it. The professor seems to have been using words impressionistically, as Bob Dylan did in many of his songs.

  • If anyone is surprised by this, they must be new to Planet Earth.

  • by cascadingstylesheet ( 140919 ) on Friday September 06, 2024 @03:33PM (#64768924) Journal
    Why would one need a government permit or something to run an LLM?
    • Because they can supposedly teach you how to make bombs, get away with murder, or steal effectively. The same reason we don't teach criminals in school any actual skills and instead lock them up somewhere away from other students. The same reason felons aren't allowed to attend colleges (or at least it's rare). The same reason they fine you out the ass so that you can never be anybody once you commit a crime.

      It's about holding you under so they can retain power.
