Risk of 'Industrial Capture' Looms Over AI Revolution (ft.com) 60

An anonymous reader shares a report: There's a colossal shift going on in artificial intelligence -- but it's not the one some may think. While advanced language-generating systems and chatbots have dominated news headlines, private AI companies have quietly entrenched their power. Recent developments mean that a handful of individuals and corporations now control much of the resources and knowledge in the sector -- and will ultimately shape its impact on our collective future. The phenomenon, which AI experts refer to as "industrial capture," was quantified in a paper published by researchers from the Massachusetts Institute of Technology in the journal Science earlier this month, calling on policymakers to pay closer attention. Its data makes the scale of that shift increasingly clear.

[...] The MIT research found that almost 70 per cent of AI PhDs went to work for companies in 2020, compared to 21 per cent in 2004. Similarly, there was an eightfold increase in faculty being hired into AI companies since 2006, far faster than the overall increase in computer science research faculty. "Many of the researchers we spoke to had abandoned certain research trajectories because they feel they cannot compete with industry -- they simply don't have the compute or the engineering talent," said Nur Ahmed, author of the Science paper. In particular, he said that academics were unable to build large language models like GPT-4, a type of AI software that generates plausible and detailed text by predicting the next word in a sentence with high accuracy. The technique requires enormous amounts of data and computing power that primarily only large technology companies like Google, Microsoft and Amazon have access to. Ahmed found that companies' share of the biggest AI models has gone from 11 per cent in 2010 to 96 per cent in 2021. A lack of access means researchers cannot replicate the models built in corporate labs, and can therefore neither probe nor audit them for potential harms and biases very easily. The paper's data also showed a significant disparity between public and private investment into AI technology.
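The next-word-prediction technique the summary describes can be illustrated with a toy bigram model. This is a deliberately tiny sketch: real large language models use neural networks trained on vast corpora, and the corpus here is invented purely for illustration.

```python
from collections import Counter, defaultdict

# A toy corpus; real models train on hundreds of billions of words.
corpus = ("the model predicts the next word the model sees "
          "the next word follows the model").split()

# Bigram statistics: count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent successor of `word` in the corpus."""
    successors = follows[word]
    return successors.most_common(1)[0][0] if successors else None

# Greedy generation: start from a word and repeatedly append the likeliest next one.
text = ["the"]
for _ in range(5):
    text.append(predict_next(text[-1]))
print(" ".join(text))
```

The output is plausible-looking but content-free recombination of the training text, which is the point the article's critics make at scale: the quality of what comes out is bounded by the data and compute that went in.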

  • by groobly ( 6155920 ) on Thursday March 23, 2023 @12:14PM (#63393385)

    This was exactly what people said when the internet was created.

    • "This was exactly what people said when the internet was created."

      "Chat-GPT 5 isn't all that impressive."

No, there is a key cause here: how incredibly resource-intensive it is to train a state-of-the-art model. Universities truly have been sidelined. In the past, computer science was more theoretical, and even people from poorer but academically strong nations (USSR) could make real contributions.

      It's similar for physics and astronomy... pretty hard to be on the cutting edge without a particle accelerator or a space telescope these days.

      • strong nations (USSR) could make real contributions.

Like what? In the Stalin era, computers were looked down upon as just a useless toy for capitalists. They started going somewhere in the 50s but they were perpetually way behind the west. By the fall of communism, they still hadn't even developed the ability to produce computers at scale. What little they did produce were clones of western computers made from foreign parts. They were still using vacuum tubes well into the 80s for their military hardware.

  • by cats-paw ( 34890 ) on Thursday March 23, 2023 @12:17PM (#63393401) Homepage

    i know it's fashionable to be negative about AI, but i'm really negative about AI.

    The level of harm it can do is simply outstanding. even as a dumb robot, the fact that it can "figure things" out and then use that information to manipulate people has incredible potential for harm.

    propaganda which can be altered on-the-fly and disseminated all over the world in minutes.
    AI's being turned loose to figure out new exploits.
    i just don't even want to know how Google and Facebook are going to use these things for new and improved levels of invasion of privacy, whatever is left of it.
    all automated, running 24 hours a day, and completely driven by the greed and sociopathic tendencies of their owners.

    is our dystopian future here ?

    and it didn't even take a true AI, but what is basically a super duper pattern matcher.

    • i know it's fashionable to be negative about AI, but i'm really negative about AI.

      The level of harm it can do is simply outstanding. even as a dumb robot, the fact that it can "figure things" out and then use that information to manipulate people has incredible potential for harm.

      propaganda which can be altered on-the-fly and disseminated all over the world in minutes. AI's being turned loose to figure out new exploits. i just don't even want to know how Google and Facebook are going to use these things for new and improved levels of invasion of privacy, whatever is left of it. all automated, running 24 hours a day, and completely driven by the greed and sociopathic tendencies of their owners.

      is our dystopian future here ?

      and it didn't even take a true AI, but what is basically a super duper pattern matcher.

      I'm not sure these are new issues. Generative AI's ability to create convincing fake images and videos is a scary, brave new world. However, automated penetration testing is a good thing. Also, working in computer security, I'm not even sure AI can do anything we can't do already. What exploit can AI find? It's only as smart as the patterns. In order to find an exploit, you need an algorithm of attack.

As you said, "AI" is just super-duper pattern matching + pattern regurgitation for generative stuff

    • by Brain-Fu ( 1274756 ) on Thursday March 23, 2023 @01:21PM (#63393563) Homepage Journal

      Every powerful tool has equally harmful uses, from the knife to nuclear power. AI is no different. If all you look at is the potential abuse, and not the potential benefits, then that is more of a personal problem than a problem with the technology itself.

      You cannot stop technological advancement. All you can do is adapt to it. Cultivating a doom-and-gloom attitude is not a very helpful adaptation.

Previously the impact of all of those technologies has been buffered by large numbers of reasonable people using them for reasonable uses. Now we're talking about a technology with the potential to turn large numbers of people into unreasonable mobs. We could do some things to limit this, maybe start with not allowing applications with large user bases (i.e. new Bing / Google Bard) to use real-time data.

How exactly would someone put a limitation into effect that every company and developer around the world will adhere to?
    • by narcc ( 412956 )

i know it's fashionable to be negative about AI

      No, we're very much near the peak of the hype cycle. People still think AI is magical.

      but i'm really negative about AI.

      Sounds like you're a realist.

      The level of harm it can do is simply outstanding. even as a dumb robot, the fact that it can "figure things" out and then use that information to manipulate people has incredible potential for harm.

      Never mind, I take it back. You're just as confused as the singularity nuts. None of the things you're afraid of are things that an AI can do.

    • by CAIMLAS ( 41445 )

      What is a "true AI"?

      The model used for these large language models is, effectively, the same model people use when they learn things - albeit a very simplified version of it.

      I think LLM accurately captures the essence of what we've been seeking for AI for decades.

    • You think you're negative because what is essentially auto-complete will create propaganda and manipulate people?
      How about magically altering the structure of reality? Bet THAT shit never crossed your mind!

      Thankfully, we have FUCKING RETARDS like Henry Fucking Kissinger, Eric Goddamn Schmidt and Daniel Motherfuckin Huttenlocher for that. Also, every fucking degenerate at Wall Street Journal.
These fucking morons think that not only will this iteration of Clippy [spotify.com] completely alter our societies and consciousness

  • Duh (Score:4, Insightful)

    by Tulsa_Time ( 2430696 ) on Thursday March 23, 2023 @12:20PM (#63393413)

    You mean things like cars, phones, chip making, drilling tech, mining tech, advanced chemistry ... etc... are owned by for profit companies... ?

    I had no idea....

    • It's not just profit, it's resource-rich entities. It's easy to think of historical examples to replace "AI" in that headline: farming, pottery, weaving, radio, rocketry. Garage/academic tinkerers just can't play at the same level. Some subjects circle back around if and when costs fall and methods get easier.
The real issue is that we have zero guard rails in place for trusts & monopolies. So all this tech is going to be monopolized by a few companies, leading to higher consumer prices & inflation. This is happening in pretty much every sector of the economy too.
  • by oldgraybeard ( 2939809 ) on Thursday March 23, 2023 @12:29PM (#63393435)
    Credentials do not guarantee creativity, insight and vision.
    • by Anonymous Coward

      Credentials do not guarantee creativity, insight and vision.

      Well, you can guarantee insight by having the credentials to an alternate account with mod points.

  • by rsilvergun ( 571051 ) on Thursday March 23, 2023 @12:39PM (#63393457)
We stopped enforcing antitrust over 40 years ago with the Reagan presidency and we never started it up again. So of course yes AI is going to belong to a handful of large multinationals. Everything is going to belong to a handful of large multinationals. Bill Gates is the largest farmland owner in America for Christ's sake.

    We could stop this but we're too distracted by culture War issues.
    • by HiThere ( 15173 )

      This is all extremely speculative, but:

Well, not precisely. It's going to be the property of a mix of large multinationals and of foreign (largely unfriendly) governments. The governments are behind right now, but private investment in AI has a history of surges and fallow spells. Some governments put minor efforts into a project, but keep pushing for a long time. AI has now gotten good enough that several governments seem to think that it's already useful, and can be made a lot more useful. So unless

  • by Qbertino ( 265505 ) <moiraNO@SPAMmodparlor.com> on Thursday March 23, 2023 @12:45PM (#63393471)

OpenAI was founded so that everyone can wield that power and not just some corps. That OpenAI is now itself a closed-source, quasi-proprietary corp-shill is quite ironic. A fact that hasn't gone unnoticed by Musk himself:

    "OpenAI was created as an open source (which is why I named it “Open” AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft.

    Not what I intended at all."

    Elon Musk on Twitter, 17th of Feb., 2023

    • by narcc ( 412956 )

      A fact that hasn't gone unnoticed by Musk himself

      Who gives a shit what that moron "thinks"? Find better heroes.

      • by CAIMLAS ( 41445 )

        ... it is precisely relevant to the topic at hand, as he was the founder of OpenAI.

        So, if you don't care, you probably should. Moron or not.

        • by narcc ( 412956 )

          Two facts:
          - OpenAI failed miserably at being open
          - Elmo was not involved in any way with the development of the technology

          So, again, why the fuck should anyone care what that moron "thinks"? Find better heroes.

But it's not. They've already transitioned to a for-profit company. The non-profit status seemed to be a tax dodge while they were in start-up mode. The newer tools and models are already behind paywalls.

      He formed Open AI to make a ton of money. He's a billionaire, not an altruist. If he was an altruist he wouldn't be as rich as he is. And he wouldn't be pushing right wing authoritarian politics over on Twitter.

      But thou shalt no spaketh ill of our Lord & Savior, so here come the down mods despi
    • by HiThere ( 15173 )

      When did you last try to download the source code? I see no reason to believe that they EVER intended it to be open source. The "source code" that I found available was to enable a local module to connect to a remote server, for which the code wasn't available. Calling it "OpenAI" was just a lie.

As a test of that assertion, what was the license the code was under, and in what year? (I can't answer that, as I've never seen a way to access it. But I'm not talking about the code to connect to the remote server.)

    • Musk is known to be a lying sack of shit, isn't he?
  • Industrial capture warning behind paywall.
  • From client-side algorithms to server-side black boxes. Server-side means tight control, non-transparency and anything-as-a-service.
    "Train your own model" they'll say, omitting the part that you need a fortune to do so effectively.
If nobody regulates this and makes everything more open, we're collectively fucked.
  • If it takes massive computing power to spout out bullcrap, then maybe they shouldn't be wasting all that electricity?

  • by votsalo ( 5723036 ) on Thursday March 23, 2023 @01:42PM (#63393633)
    Luckily, time-travel research is still 50/50 between industry and the free world, so we can still beat SkyNet.
  • by oumuamua ( 6173784 ) on Thursday March 23, 2023 @01:46PM (#63393645)
Training these models costs in the millions -- what's the problem?
Sure, it is out of reach of small businesses and small universities, but the big universities can afford it.
    • by HiThere ( 15173 )

      No, it isn't out of the reach of small universities. I'm not certain it's out of the reach of individuals. It would be a lot SLOWER, but you can trade off speed against cost, and, to an extent, against required RAM.

      Also, you seem to be assuming that the current approach is optimal, and I see no reason to believe that. Gradient descent is going to be hard to beat, but that can be emulated with hill climbing starting from multiple points (to avoid entrapment in local minima). The trick would be to interfa
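The multi-start idea described above can be sketched in a few lines. This is a toy illustration only: the double-well objective, step size, and iteration counts are invented for the example, not anything from an actual training setup.

```python
import random

def f(x):
    # Double-well objective: a local minimum near x = +1, the global minimum near x = -1.
    return (x * x - 1.0) ** 2 + 0.3 * x

def hill_climb(x, step=0.1, iters=2000, rng=None):
    """Greedy local search: accept a random neighbour only when it lowers f."""
    rng = rng or random.Random(0)
    best_x, best_f = x, f(x)
    for _ in range(iters):
        cand = best_x + rng.uniform(-step, step)
        if f(cand) < best_f:
            best_x, best_f = cand, f(cand)
    return best_x, best_f

# A single run started in the wrong basin stays stuck at the local minimum near +1,
# because small greedy steps never accept the uphill move needed to escape it.
x_local, f_local = hill_climb(1.0)

# Restarting from several spread-out points lets at least one run descend into the
# global basin; keep the best result across all restarts.
starts = [-2.0 + 4.0 * i / 9 for i in range(10)]
results = [hill_climb(s) for s in starts]
x_best, f_best = min(results, key=lambda p: p[1])
print(round(x_local, 2), round(x_best, 2))
```

The trade-off is exactly the one mentioned: each restart multiplies the compute time, which is cheap for a one-dimensional toy but serious when the "point" being optimized is a model with billions of parameters.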

  • ...and not academia (at least in the US) because nobody can afford to work as a sub-minimum-wage adjunct, and there are almost no tenure-track positions anywhere.

  • Man, I cannot believe what an amazing series of commentaries AI has generated on this post. It has taken many real personalities, it would seem.
