Microsoft is Trying To Lessen Its Addiction To OpenAI as AI Costs Soar (theinformation.com)
Microsoft's push to put artificial intelligence into its software has hinged almost entirely on OpenAI, the startup Microsoft funded in exchange for the right to use its cutting-edge technology. But as the costs of running advanced AI models rise, Microsoft researchers and product teams are working on a plan B. The Information: In recent weeks, Peter Lee, who oversees Microsoft's 1,500 researchers, directed many of them to develop conversational AI that may not perform as well as OpenAI's but that is smaller in size and costs far less to operate, according to a current employee and another person who recently left the company. Microsoft's product teams are already working on incorporating some of that Microsoft-made AI software, powered by large language models, in existing products, such as a chatbot within Bing search that is similar to OpenAI's ChatGPT, these people said.
[...] Microsoft's research group doesn't have illusions about developing a large AI like GPT-4. The team doesn't have the same computing resources as OpenAI, nor does it have armies of human reviewers to give feedback about how well their LLMs answer questions so engineers can improve them. Undeniably, OpenAI and other developers -- including Google and Anthropic, which on Monday received $4 billion from Amazon Web Services -- are firmly ahead of Microsoft when it comes to developing advanced LLMs. But Microsoft may be able to compete in a race to build AI models that mimic the quality of OpenAI software at a fraction of the cost, as Microsoft showed in June with the release of one in-house model it calls Orca.
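The summary doesn't describe how Orca was built, but the usual way a small model "mimics the quality" of a much larger one is imitation-style distillation: collect responses from the big teacher model, then fine-tune a cheap student on those traces. A minimal, hypothetical sketch of that idea (the student model, data, and hyperparameters are placeholders, not Microsoft's actual recipe):

```python
# Hypothetical imitation-style distillation: fine-tune a small "student" LLM
# on responses produced by a larger "teacher" model. Everything below
# (student model, data, hyperparameters) is illustrative only.
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer

# In practice this would be millions of (instruction, teacher_response)
# pairs generated by querying the large model at scale.
teacher_traces = [
    {"prompt": "Explain why the sky is blue.",
     "response": "Sunlight scatters off air molecules, and shorter blue wavelengths scatter the most..."},
]

student_name = "gpt2"  # stand-in for a small in-house model
tok = AutoTokenizer.from_pretrained(student_name)
tok.pad_token = tok.eos_token
student = AutoModelForCausalLM.from_pretrained(student_name)
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-5)

def collate(batch):
    # The student learns to reproduce the teacher's answer given the prompt.
    texts = [b["prompt"] + "\n" + b["response"] + tok.eos_token for b in batch]
    enc = tok(texts, return_tensors="pt", padding=True, truncation=True, max_length=512)
    enc["labels"] = enc["input_ids"].clone()
    enc["labels"][enc["attention_mask"] == 0] = -100  # ignore padding in the loss
    return enc

loader = DataLoader(teacher_traces, batch_size=2, shuffle=True, collate_fn=collate)

student.train()
for epoch in range(3):
    for batch in loader:
        loss = student(**batch).loss  # standard causal-LM cross-entropy
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

The appeal is purely economic: the student rarely matches the teacher on hard reasoning, but it is small enough to serve every query without a GPT-4-sized inference bill.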
cost == tons of coal converted to answers (Score:5, Interesting)
Just a reminder, the costs here are ultimately measured in watt-hours. They're turning coal into plagiarism at an alarming rate.
Re:cost == tons of coal converted to answers (Score:4, Informative)
OpenAI's LLMs are running in data centers in Ohio and Iowa.
Ohio gets 35% of its energy from coal. Iowa gets 25%.
At least it's not in WV, where 90% comes from coal.
Re: (Score:3)
Almost no large-scale modern data centers are designed without massive amounts of renewable energy. Many are designed around solar and wind, with fuel cells as a middle ground. For cooling, they'll try anything, and they often run at 50°C... And as someone who spends a lot of time in high-temperature datacenters, they are super uncomfortable.
You should follow Mark Russinovich. He often posts really cool stuff about improving data center efficiency.
Re: (Score:2)
Co-locating renewables and data centers is stupid.
Renewables should be placed where they perform best. Ohio is often cloudy or snowy. It's not the best place for wind either.
No they can't (Score:3)
Microsoft may be able to compete in a race to build AI models that mimic the quality of OpenAI software at a fraction of the cost, as Microsoft showed in June with the release of one in-house model it calls Orca.
It is unlikely that Microsoft has in-house talent to build a decent AI model.
Re:No they can't (Score:4, Insightful)
It is unlikely that Microsoft has in-house talent to build a decent AI model.
Microsoft has long had an amazing research department.
However, they are almost as famous as Xerox for ignoring the ideas and squandering the opportunities those researchers produce.
Re: (Score:2)
In terms of AI research, their most notable development is Tay [wikipedia.org].
Re: (Score:2)
I'm ignorant of what they publish externally, but we have an internal team that is publishing some absolutely cracking kit. It's not as impactful as foundation models, but as far as "here's a better hammer" kit goes, I've been blown away.
e.g. GitHub Copilot is SSS+.
e.g. The age progression series here was very cool.
https://github.com/Azure/gen-c... [github.com]
And this not including legal costs (Score:2)
Backup or complementary? (Score:5, Insightful)
Microsoft earlier this year announced further investments in OpenAI, reportedly to the tune of $10 billion [nytimes.com] over several years. That's on top of the $3 billion already invested. So, Microsoft is definitely planning on continuing to work with OpenAI in the future. This idea of having your researchers look at ideas in a complementary area is a good thing. OpenAI is looking at literally bigger and better systems. Microsoft is probing smaller and still useful systems. Makes sense to look at both.
Re: (Score:3)
I think the dirty [not so] secret of LLMs and machine learning is finally surfacing at a rate where it can't be ignored. I will admit I only really started to understand the problem in the last month or two; I had previously thought it was just a training issue, which was essentially one-time. What I hear now is that the chips coming in 3 years will be able to solve today's scaling problems... of course, in 3 years the silicon will still be at least two years behind the problems they're facing then.
Clippy 2.0 (Score:1)
A simple step (Score:2)
They could try maybe, I don't know, not automatically printing an AI answer on every web search, when 99% of the time the person neither wants nor reads the AI answer (which most of the time is "I don't understand what you're saying, because you typed a search term, not a question")? That should save some costs. It'd also improve the user experience considerably, since Bing results pages now suddenly change size after you open them and the links you want move out of reach.
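One cheap way to capture most of that saving would be to gate the expensive model behind a quick check of whether the query even looks like a question. A purely hypothetical sketch (the heuristic, function names, and placeholder backends are made up for illustration and are not how Bing actually decides):

```python
# Hypothetical query gate: only call the costly chat model when the query
# plausibly wants a conversational answer. Heuristics and backends are
# placeholders, purely for illustration.
import re

QUESTION_WORDS = {"who", "what", "when", "where", "why", "how",
                  "can", "could", "should", "is", "are", "does", "do"}

def looks_like_question(query: str) -> bool:
    q = query.strip().lower()
    if q.endswith("?"):
        return True
    # Bare keyword searches ("best ssd 2023") rarely want a chat answer.
    words = re.findall(r"[a-z']+", q)
    return len(words) >= 3 and words[0] in QUESTION_WORDS

def call_llm(query: str) -> str:              # placeholder for the expensive model call
    return f"[chat answer for: {query}]"

def plain_search_results(query: str) -> str:  # placeholder for the ordinary index lookup
    return f"[ten blue links for: {query}]"

def answer(query: str) -> str:
    return call_llm(query) if looks_like_question(query) else plain_search_results(query)

print(answer("best ssd 2023"))           # cheap path
print(answer("why is my ssd so slow?"))  # expensive path
```

Even a crude gate like this shifts the common keyword-search case onto the cheap path; a real system would presumably use a learned classifier, but the cost logic is the same.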
Re: (Score:2)
And taking it out of the only goddamn Android keyboard that still supports swipe-typing would be nice, too.