Risk of 'Industrial Capture' Looms Over AI Revolution (ft.com) 60
An anonymous reader shares a report: There's a colossal shift going on in artificial intelligence -- but it's not the one some may think. While advanced language-generating systems and chatbots have dominated news headlines, private AI companies have quietly entrenched their power. Recent developments mean that a handful of individuals and corporations now control much of the resources and knowledge in the sector -- and will ultimately shape its impact on our collective future. The phenomenon, which AI experts refer to as "industrial capture," was quantified in a paper published earlier this month in the journal Science by researchers from the Massachusetts Institute of Technology, who call on policymakers to pay closer attention; its data is increasingly crucial.
[...] The MIT research found that almost 70 per cent of AI PhDs went to work for companies in 2020, compared to 21 per cent in 2004. Similarly, there was an eightfold increase in faculty being hired into AI companies since 2006, far faster than the overall increase in computer science research faculty. "Many of the researchers we spoke to had abandoned certain research trajectories because they feel they cannot compete with industry -- they simply don't have the compute or the engineering talent," said Nur Ahmed, author of the Science paper. In particular, he said that academics were unable to build large language models like GPT-4, a type of AI software that generates plausible and detailed text by predicting the next word in a sentence with high accuracy. The technique requires enormous amounts of data and computing power that primarily only large technology companies like Google, Microsoft and Amazon have access to. Ahmed found that companies' share of the biggest AI models has gone from 11 per cent in 2010 to 96 per cent in 2021. A lack of access means researchers cannot replicate the models built in corporate labs, and can therefore neither probe nor audit them for potential harms and biases very easily. The paper's data also showed a significant disparity between public and private investment into AI technology.
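The summary describes large language models as predicting the next word in a sentence. A toy sketch of that core idea is below: a bigram frequency counter that predicts the most common follower of a word. This is purely illustrative and nothing like the transformer architecture GPT-4 actually uses; the corpus and function names are made up for the example.

```python
# Toy next-word prediction: count which word follows which, then
# predict the most frequent follower. Real LLMs learn these
# statistics over billions of tokens with neural networks; this
# bigram counter only illustrates the principle.
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count how often each word is followed by each other word."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most likely next word, or None if the word is unseen."""
    followers = counts.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = [
    "the model predicts the next word",
    "the next word is chosen by the model",
]
model = train_bigrams(corpus)
print(predict_next(model, "next"))  # prints "word"
```

The gap the article describes is one of scale, not of concept: the counting trick above runs anywhere, but learning useful statistics over web-scale text is what requires the data and compute only large companies have.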
Re: (Score:2)
"Great ideas enter into reality with evil associates and with disgusting alliances. But the greatness remains, nerving the race in its slow ascent."
Alfred North Whitehead, Adventures of Ideas
Re: (Score:2)
What? Sounds to me like greed is holding it back.
Re: duh (Score:2)
How so? I don't see anybody volunteering to build, maintain, and power a data center full of gear to run all of this, or volunteering to pay somebody to build and maintain the computing clusters, data storage, or networking gear behind it. Nor do I see any AI PhDs willing to work for free. In fact, everybody involved in this is going to want to make bank, especially if they've got rare talents.
Anybody who has the resources or skills to do any of this work isn't going to contribute to it without seeking some kind of ROI.
Re: (Score:2)
This is an easy one. When researchers share knowledge, they make progress more rapidly.
Anybody who has the resources or skills to do any of this work isn't going to contribute to it without seeking some kind of ROI.
It's amazing you can still hang on to a belief that nonsensical and outdated. I haven't seen anyone under 70 push that myth in a very long time.
Re: duh (Score:2)
This is an easy one. When researchers share knowledge, they make progress more rapidly.
If it's so easy then why is it that you can't answer my questions? This isn't an answer to anything, it's just bullshit conjecture.
It's amazing you can still hang on to a belief that nonsensical and outdated. I haven't seen anyone under 70 push that myth in a very long time.
I have to ask, do you even work for a living?
Re: (Score:2)
This is an easy one. When researchers share knowledge, they make progress more rapidly.
If it's so easy then why is it that you can't answer my questions? This isn't an answer to anything, it's just bullshit conjecture.
Huge amounts of the early work in deep learning was done in open source and by people who shared. The people who are "making bank" now are not the inventors, they are "innovators" in the sense of entrepreneurs that take other people's ideas and build companies. The simple fact is that if, instead of using Apache, MIT or BSD style licensing, the original people who worked on this had all gone for strong copyleft licensing (at least AGPL, maybe more) they would now have an industry where they could be directly consulting and working on the code to provide benefits for themselves and many others.
Re: (Score:2)
Huge amounts of the early work in deep learning was done in open source and by people who shared. The people who are "making bank" now are not the inventors, they are "innovators" in the sense of entrepreneurs that take other people's ideas and build companies. The simple fact is that if, instead of using Apache, MIT or BSD style licensing, the original people who worked on this had all gone for strong copyleft licensing (at least AGPL, maybe more) they would now have an industry where they could be directly consulting and working on the code to provide benefits for themselves and many others.
That...says a great big giant amount of...nothing... When you compare the early days of basically anything, you're nearly universally going to have stuff that's much harder to do and requires a much greater scale to deal with. You're literally comparing reddit porn deepfakes, which can be done on a personal computer using just a few videos that you essentially want to merge together, to ChatGPT, which requires a vastly larger amount of data in addition to hardware resources that most people can't even fit in their homes.
Re: (Score:2)
If it's so easy then why is it that you can't answer my questions?
I did. Very clearly. It's not my fault that you don't understand the answers. Try harder.
Re: (Score:2)
Greed drives technological advance.
Masturbate to AI generated photos of Ayn Rand much?
Sure sounds like it...
Meanwhile, back in the real world, hoarding of wealth has been proven, over and over, to foster rent seeking at the expense of advancement.
Be it copyright/trademark/patent laws designed to protect established wealth OR such obvious down to earth issues like any future innovators being priced out of living in the supposed centers of innovation such as the Silicon Valley.
Re: (Score:2)
I agree.
In just about every area, monopoly capitalism has triumphed. Food, health care, housing, transportation... even the "open" Internet is dominated by just a few companies.
One saving grace is that there do seem to be good open source tools which can enable individuals to build AI relatively easily. These could keep monopoly actors in check but it will be difficult.
Re: duh (Score:2)
even the "open" Internet is dominated by just a few companies.
That's not capitalism, that's hacktivism. If you want to run your own server for whatever reason, all it takes is for some hacktivist to hate your content for whatever reason to DDoS it. So basically everybody has to use big CDNs like cloudflare. People like you encourage this behavior all the time, so the status quo is what you asked for. Disagree? Then explain why sites like 8kun or kiwifarms can't exist without being behind a CDN? Or is your motto "my content for me, but your content not for thee?"
same old same old (Score:3)
This was exactly what people said when the internet was created.
Famous last words (Score:2)
"This was exactly what people said when the internet was created."
"Chat-GPT 5 isn't all that impressive."
Re: (Score:2)
It's similar for physics and astronomy... pretty hard to be on the cutting edge without a particle accelerator or a space telescope these days.
Re: same old same old (Score:2)
strong nations (USSR) could make real contributions.
Like what? In the Stalin era, computers were looked down on as just a useless toy for capitalists. They started going somewhere in the 50s but they were perpetually way behind the west. By the fall of communism, they still hadn't even developed the ability to produce computers at scale. What little they did produce were clones of western computers made from foreign parts. They were still using vacuum tubes well into the 80s for their military hardware.
a foregone conclusion (Score:5, Insightful)
i know it's fashionable to be negative about AI, but i'm really negative about AI.
The level of harm it can do is simply outstanding. even as a dumb robot, the fact that it can "figure things" out and then use that information to manipulate people has incredible potential for harm.
propaganda which can be altered on-the-fly and disseminated all over the world in minutes.
AI's being turned loose to figure out new exploits.
i just don't even want to know how Google and Facebook are going to use these things for new and improved levels of invasion of privacy, whatever is left of it.
all automated, running 24 hours a day, and completely driven by the greed and sociopathic tendencies of their owners.
is our dystopian future here ?
and it didn't even take a true AI, but what is basically a super duper pattern matcher.
Does AI really add anything, attack-wise? (Score:3)
i know it's fashionable to be negative about AI, but i'm really negative about AI.
The level of harm it can do is simply outstanding. even as a dumb robot, the fact that it can "figure things" out and then use that information to manipulate people has incredible potential for harm.
propaganda which can be altered on-the-fly and disseminated all over the world in minutes. AI's being turned loose to figure out new exploits. i just don't even want to know how Google and Facebook are going to use these things for new and improved levels of invasion of privacy, whatever is left of it. all automated, running 24 hours a day, and completely driven by the greed and sociopathic tendencies of their owners.
is our dystopian future here ?
and it didn't even take a true AI, but what is basically a super duper pattern matcher.
I'm not sure these are new issues. Generative AI's ability to create convincing fake images and videos is a scary, brave new world. However, automated penetration testing is a good thing. Also, working in computer security, I'm not even sure AI can do anything we can't do already. What exploit can AI find? It's only as smart as the patterns. In order to find an exploit, you need an algorithm of attack.
As you said, "AI" is just super-duper pattern matching + pattern regurgitation for generative stuff.
Re:a foregone conclusion (Score:5, Insightful)
Every powerful tool has equally harmful uses, from the knife to nuclear power. AI is no different. If all you look at is the potential abuse, and not the potential benefits, then that is more of a personal problem than a problem with the technology itself.
You cannot stop technological advancement. All you can do is adapt to it. Cultivating a doom-and-gloom attitude is not a very helpful adaptation.
Re: (Score:2)
Previously the impact of all of those technologies has been buffered by large numbers of reasonable people using them for reasonable uses. Now we're talking about a technology with the potential to turn large numbers of people into unreasonable mobs. We could do some things to limit this, maybe starting with not allowing applications with large user bases (i.e. new Bing / Google Bard) to use real-time data.
Re: (Score:2)
know it's fashionable to be negative about AI
No, we're very much near the peak of the hype cycle. People still think AI is magical.
but i'm really negative about AI.
Sounds like you're a realist.
The level of harm it can do is simply outstanding. even as a dumb robot, the fact that it can "figure things" out and then use that information to manipulate people has incredible potential for harm.
Never mind, I take it back. You're just as confused as the singularity nuts. None of the things you're afraid of are things that an AI can do.
Re: (Score:1)
What is a "true AI"?
The model used for these large language models is, effectively, the same model people use when they learn things - albeit a very simplified version of it.
I think LLM accurately captures the essence of what we've been seeking for AI for decades.
Oh you sweet summer child of negativity... (Score:2, Funny)
You think you're negative because what is essentially auto-complete will create propaganda and manipulate people?
How about magically altering the structure of reality? Bet THAT shit never crossed your mind!
Thankfully, we have FUCKING RETARDS like Henry Fucking Kissinger, Eric Goddamn Schmidt and Daniel Motherfuckin Huttenlocher for that. Also, every fucking degenerate at Wall Street Journal.
These fucking morons think that not only will this iteration of Clippy [spotify.com] completely alter our societies and consciousness
Duh (Score:4, Insightful)
You mean things like cars, phones, chip making, drilling tech, mining tech, advanced chemistry ... etc... are owned by for profit companies... ?
I had no idea....
The problem isn't for profit companies (Score:2)
Is it really an issue? (Score:4, Insightful)
Re: (Score:1)
Credentials do not guarantee creativity, insight and vision.
Well, you can guarantee insight by having the credentials to an alternate account with mod points.
We stopped enforcing antitrust law (Score:4, Insightful)
We could stop this but we're too distracted by culture War issues.
Re: (Score:3)
This is all extremely speculative, but:
Well, not precisely. It's going to be the property of a mix of large multinationals and of foreign (largely unfriendly) governments. The governments are behind right now, but private investment in AI has a history of surges and fallow spells. Some governments put minor efforts into a project, but keep pushing for a long time. AI has now gotten good enough that several governments seem to think that it's already useful, and can be made a lot more useful. So unless
Ironically, this is why Musk founded OpenAI (Score:5, Interesting)
So that everyone can wield that power and not just some corps. That OpenAI is now itself a closed-source, quasi-proprietary corp-shill is quite ironic, a fact that hasn't gone unnoticed by Musk himself:
"OpenAI was created as an open source (which is why I named it “Open” AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft.
Not what I intended at all."
Elon Musk on Twitter, 17th of Feb., 2023
Re: (Score:1)
A fact that hasn't gone unnoticed by Musk himself
Who gives a shit what that moron "thinks"? Find better heroes.
Re: (Score:2)
It's directly relevant because he was one of the founders of OpenAI.
If it was supposed to be and stay Open, then its foundation was failed, bigly.
Therefore Musk is either lying, or an idiot, because you can't achieve Openness with simple hopes and prayers.
Notably, there should not have been a for-profit division if the goal was to avoid corporate control.
Re: (Score:2)
You're 2/3 right, and those first two points are the most relevant. :)
There's no inherent conflict in a company being both Open and profitable though, up to a point. That point is 51% ownership (see Facebook for a more advanced variation), which I'm sure a genius like Musk could have managed if he'd really wanted to.
Re: (Score:2)
What difference does that make? It's not like he's involved in the development of the technology any way.
Don't you ever get tired of licking boots?
Re: (Score:1)
... it is precisely relevant to the topic at hand, as he was the founder of OpenAI.
So, if you don't care, you probably should. Moron or not.
Re: (Score:2)
Two facts:
- OpenAI failed miserably at being open
- Elmo was not involved in any way with the development of the technology
So, again, why the fuck should anyone care what that moron "thinks"? Find better heroes.
I know it's called Open AI (Score:2)
He formed Open AI to make a ton of money. He's a billionaire, not an altruist. If he was an altruist he wouldn't be as rich as he is. And he wouldn't be pushing right wing authoritarian politics over on Twitter.
But thou shalt no spaketh ill of our Lord & Savior, so here come the down mods despi
Re: (Score:3)
When did you last try to download the source code? I see no reason to believe that they EVER intended it to be open source. The "source code" that I found available was to enable a local module to connect to a remote server, for which the code wasn't available. Calling it "OpenAI" was just a lie.
As a test for that assertion, what was the license the code was under, and in what year? (I can't answer that, as I've never seen a way to access it. But I'm not talking about the code to connect to the remote server.)
Irony (Score:2)
Re: (Score:2)
You left out the Chinese (Baidu). They have admitted that they are a couple of years behind, but that will not necessarily remain the case.
Actually, I suspect that several governments have similar projects. That we don't hear about them is exactly what we should expect if they were present, so that's not evidence one way or the other. But that does mean that IF they exist, we won't have any notice before they are extremely successful. (And just because Baidu had an embarrassing flop of a demonstration doesn't mean much.)
The focus has been shifting... (Score:2)
"Train your own model" they'll say, omitting the part that you need a fortune to do so effectively.
If nobody regulates this and makes everything more open, we're collectively fucked.
So AI is bad for the environment? (Score:2)
If it takes massive computing power to spout out bullcrap, then maybe they shouldn't be wasting all that electricity?
Time-Travel to the rescue (Score:3)
MIT endowment is $25 billion (Score:3)
Sure, it is out of reach of small businesses and small universities, but the big universities can afford it.
Re: (Score:3)
No, it isn't out of the reach of small universities. I'm not certain it's out of the reach of individuals. It would be a lot SLOWER, but you can trade off speed against cost, and, to an extent, against required RAM.
Also, you seem to be assuming that the current approach is optimal, and I see no reason to believe that. Gradient descent is going to be hard to beat, but that can be emulated with hill climbing starting from multiple points (to avoid entrapment in local minima). The trick would be to interfa
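The multi-start idea the commenter sketches can be shown in a few lines: greedy hill climbing gets stuck in whichever basin it starts in, but restarting from several random points lets some runs land in the global basin. The objective function, step size, and restart count below are illustrative assumptions, not anything from the MIT paper.

```python
# Random-restart hill climbing: a crude stand-in for gradient descent
# that escapes local minima by trying many starting points.
import math
import random

def hill_climb(f, start, step=0.1, iters=500):
    """Greedy descent on a 1-D function: step left or right while it helps."""
    x = start
    for _ in range(iters):
        best = min([x - step, x + step], key=f)
        if f(best) < f(x):
            x = best
        else:
            break  # no neighbor improves: a local minimum at this resolution
    return x

def multi_start(f, n_starts=20, lo=-5.0, hi=5.0, seed=0):
    """Run hill climbing from several random starts; keep the best result."""
    rng = random.Random(seed)
    results = [hill_climb(f, rng.uniform(lo, hi)) for _ in range(n_starts)]
    return min(results, key=f)

# A bumpy objective: global minimum near x = -1.3, a worse local
# minimum near x = 3.8 that traps any single run started nearby.
f = lambda x: x * x + 10 * math.sin(x)

stuck = hill_climb(f, 5.0)      # single run: trapped near x = 3.8
found = multi_start(f)          # restarts: one lands in the global basin
```

Whether this kind of trick actually closes the gap with industry-scale training is doubtful, as the comment itself concedes; it trades compute for wall-clock time, not for data.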
PhDs go to industry (Score:2)
...and not academia (at least in the US) because nobody can afford to work as a sub-minimum-wage adjunct, and there are almost no tenure-track positions anywhere.
Kudos to ChatGPT (Score:2)