Microsoft Reportedly Develops LLM Series That Can Rival OpenAI, Anthropic Models

Microsoft is reportedly developing its own large language model series capable of rivaling OpenAI and Anthropic's models. SiliconANGLE reports: Sources told Bloomberg that the LLM series is known as MAI. That's presumably an acronym for "Microsoft artificial intelligence." It might also be a reference to Maia 100, an internally-developed AI chip the company debuted last year. It's possible Microsoft is using the processor to power the new MAI models. The company recently tested the LLM series to gauge its performance. As part of the evaluation, Microsoft engineers checked whether MAI could power the company's Copilot family of AI assistants. Data from the tests reportedly indicates that the LLM series is competitive with models from OpenAI and Anthropic.

That Microsoft evaluated whether MAI could be integrated into Copilot hints the LLM series is geared towards general-purpose processing rather than reasoning. Many of the tasks supported by Copilot can be performed with a general-purpose model. According to Bloomberg, Microsoft is currently developing a second LLM series optimized for reasoning tasks. The report didn't specify details such as the number of models Microsoft is training or their parameter counts. It's also unclear whether they might provide multimodal features.
Comments Filter:
  • IQ test for AI's?
    • Various IQ tests for AIs already exist. There are league tables and contests attempting to measure the "intelligence" of these tools. They are fundamentally flawed, because they are also used for marketing purposes and as training targets for subsequent versions of the AIs.

      Consider the following scenario. You are playing a new game, and you are very bad at it. But sometimes, you succeed at beating the first level. Suppose you can save the game. Now you get to try the second level without repeating the first. You are bad at it, but sometimes you succeed, and save the second level. Suppose you get to the end like this, does this prove that you are capable of playing the game?

      • You've just described the problem with IQ tests and standardized tests in general.
        The various LLM benchmarks are not more impacted than those- except that you can't pay a smart person to do your LLM benchmark for you.
        • That's complete nonsense. The mistake you've made is that you have assumed that people get to retake tests over and over again until they get it right. In most cases they don't, they get one or at most two resits. And even then the fact that they required a resit is recorded. They don't get to keep trying until they pass no matter how many attempts that takes.

          'AI' however does, at least in the internal Company tests. This is why the results published by these Companies are vastly different to the

          • That's complete nonsense. The mistake you've made is that you have assumed that people take classes to learn to pass the tests, and that, in fact, you can take an IQ test as many times as you want.

            They don't get to keep trying until they pass no matter how many attempts that takes.

            You can literally take the SATs as many times as you want.

            Then the 'AI' simply does the calculations, which, as we all know, computers are very good at.

            This is more bullshit.
            LLMs are not computers. They are run by computers.
            Math is actually particularly difficult for them, and it took a lot of training to make them good at it.

            This is then reported as the 'AI' being able to win a Math Olympiad.

            No, it's not.

            You really have no fucking idea what you're talking about, do you?

            • Oh, I see. You've come up with one counter example and extrapolated that for all tests everywhere. Great. It's sunny today, so I guess it must be sunny here every single day, since one example covers every possibility. And even in your one example, you have made a fundamental error. When a student takes a second, third or even five hundredth SAT they are not retaking the exact same test. They are given different questions. When these LLMs are tested they are tested on the exact same questions that they failed last time, and have now been specifically trained to get right. This is an entirely different thing to retaking a type of exam that you failed the first time. If I were given the exact same test twice I would expect to get 100% the second time.

              • Oh, I see. You've come up with one counter example and extrapolated that for all tests everywhere.

                Multiple counter examples, actually.
                What you've failed to do is come up with a single example that backs up your blanket assertion- which I wouldn't bother with now, as any blanket assertion that has 2 examples going against it varies between stupid and not helpful.

                When a student takes a second, third or even five hundredth SAT they are not retaking the exact same test.

                That's not an error at all.
                This is basic math, here.
                You've got a corpus of things you must get good at.
                Every test you take will have a smattering of challenges. Statistics tells you exactly what weights to use in your training.

                And yes, computers are very good at doing calculations.

                LLMs are not computers.

                • Personally, I'd say he's done a good job at pointing out the obvious flaws in your objections....
                  • Personally, I'd say he's done a good job at pointing out the obvious flaws in your objections....

                    Personally, I'd say you're probably an idiot, then.

                    Dude asserts that since SATs change, something can't learn the corpus by re-taking.
                    "Good job", indeed. That's why people absolutely don't improve after they take the test a second time ;)

                    • He did no such thing, but keep putting words in his mouth ;)
                    • They did, indeed.

                      And even in your one example, you have made a fundamental error. When a student takes a second, third or even five hundredth SAT they are not retaking the exact same test. They are given different questions. When these LLMs are tested they are tested on the exact same questions that they failed last time, and have now been specifically trained to get right. This is an entirely different thing to retaking a type of exam that you failed the first time. If I were given the exact same test twice I would expect to get 100% the second time.

                      As if SAT questions had no overlap, lol.

                      You two idiots trying to band together against someone who actually has 6 brain cells to rub together is cute, though.

                    • You're trawling the bottom of the barrel, just give up. It's not like anybody is still reading :)

                      How you go from "exact same test" to "overlap" is your business, as is why you care so much about defending your original debunked misrepresentation. I won't ask.

        • Slippery topic, this measuring intelligence thing. I knew this would be fun.

          I was definitely implying all the hairy parts when I posted. You are immediately confronted with the flaws, biases, and inadequacies, of the IQ test, or any other test, when you attempt to quantify ... whatever LLMs do.

          People can't agree on what intelligence is, and reality isn't objective ... everyone interprets what they observe. Your interpretation of "reality" is always limited by your own capabilities. We are past objectivity,
      • Consider the following scenario. You are playing a new game, and you are very bad at it. But sometimes, you succeed at beating the first level. Suppose you can save the game. Now you get to try the second level without repeating the first. You are bad at it, but sometimes you succeed, and save the second level. Suppose you get to the end like this, does this prove that you are capable of playing the game?

        Er, maybe? Depends on the rules of the game.

        (I mean you're pretty much describing how I beat LoZ BOTW ... )

        • No, I'm describing the current development process of the AI industry at a high level. Each AI product (version, iteration, mashup, company product etc) is analogous to a saved state in the game, and the game consists of improving common benchmark scores incrementally. You can work out the rest:)
    • Benchmarks. [huggingface.co]
      They evaluate their success rate at a battery of tasks.
    • by allo ( 1728082 )

      Usually by a set of questions that should be answered in a zero-shot request.
      Some of the benchmark sets are public and may not be a good measure anymore (especially since Microsoft's Phi line of models is trained on synthetic data), but others contain private data that is in no training set and can better verify the "IQ" of the model. The tests also have different kinds of questions, like knowledge, reasoning, math, language understanding, etc.
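      A minimal sketch of how such a zero-shot harness might compute a success rate. `ask_model` is a hypothetical stand-in for a real inference call, and the three-question battery is a toy assumption, not any actual benchmark:

```python
# Minimal sketch of a zero-shot benchmark harness.
# `ask_model` is a hypothetical stand-in for a real model call.

def ask_model(question: str) -> str:
    # Placeholder: a real harness would query an LLM here.
    canned = {"2 + 2 = ?": "4", "Capital of France?": "Paris"}
    return canned.get(question, "")

def score(benchmark: list) -> float:
    """Fraction of questions answered correctly in a single zero-shot attempt."""
    correct = sum(1 for q, a in benchmark if ask_model(q).strip() == a)
    return correct / len(benchmark)

battery = [("2 + 2 = ?", "4"),
           ("Capital of France?", "Paris"),
           ("Largest planet?", "Jupiter")]
print(score(battery))  # 2 of 3 canned answers match -> 0.666...
```

      Real leaderboards do essentially this at scale, with task-specific answer normalization and many thousands of items per category.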

      • Note however that even keeping the test set private does not actually protect against leakage into the training phase from repeated use.

        (The ML industry has not learnt anything from the p-value hacking fiasco.)
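        A toy illustration of that leakage effect, assuming each "model version" is just a random guesser with zero real skill: selecting the best of many versions on the same fixed private test set makes the winner look far better than chance.

```python
import random

random.seed(0)
# A fixed "private" test set: 100 yes/no questions with random answers.
test = [random.random() < 0.5 for _ in range(100)]

def accuracy(preds):
    return sum(p == t for p, t in zip(preds, test)) / len(test)

# Each "model version" is a pure random guesser (true skill = 50%).
# Repeatedly evaluating versions against the same test set and keeping
# the best one inflates the apparent score -- leakage through reuse.
best = max(accuracy([random.random() < 0.5 for _ in test])
           for _ in range(1000))
print(best)  # well above 0.5, despite zero real skill
```

        This is the same mechanism as p-value hacking: enough selection against a fixed target overfits to that target, even when nothing about the target was ever shown to the "models."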

  • Tay has returned from her slumber.

  • Oh great. Clippy on crack cocaine. My life is complete.

  • When Microsoft *truly* integrates "AI" into Windows, its primary purposes will be advertising and surveillance, and preventing you from uninstalling their unrelated crapware, changing your configuration and other preferences, protecting your privacy, finding any setting without asking an LLM, using browsers other than Edge, or otherwise having any control over your machine.
    • I think you've just described the end game for every non-Linux OS there is.
      • There may be a Rust OS coming soon, which may run Linux binaries. I have concerns, but it may be an opportunity to discard legacy. I feel that the Rust community has advantages over the Linux community, but lacks the skills, breadth, penetration, experience, and so on. I have to say, I really like Rust, and I have some hope. Time will tell.
        • There may be a Rust OS coming soon

          I've seen a few OSes built around languages- they tend to suck ass. Those who are that evangelical about a language tend to miss the big picture when it comes to designing operating systems.

          Which may run Linux binaries.

          No way in the 9 hells, man.
          I've spent a few thousand hours of my life working in the kernel- "running Linux binaries" is a task I'm not sure anyone contemplating that really understands.
          Linking an ELF, emulating some syscalls, executing the .text section of an image? Sure- absolutely.
          But the entirety of the ioctls, biz

          • But won't an LLM write a new OS better than Linux, in a new language better than any human, next week? JK. Great response, I learned from and agree with all of this, thanks. I'm not a zealot, but I personally couldn't do what I'm doing now with C/++/#. But that's at a much higher level than the kernel.
          • Noting your handle, I'll be in Portland next week. I think I've read your posts or maybe we've even chatted here before. Just throwing out a random chance, which often has interesting results for me. I have no idea how to communicate privately here though.
          • by gweihir ( 88907 )

            There may be a Rust OS coming soon

            I've seen a few OS' built around languages- they tend to suck ass. Those who are that evangelical about a language tend to miss the big picture when it comes to designing operating systems.

            Indeed. These people tend to be one-trick-ponies and are clueless enough to not even begin to understand how much they do not see.

            Rust is a cool language from the perspective of what its goals are.
            I find the syntax absolutely atrocious, but I don't give it any negative marks for that, just means I'm not likely to develop a preference for it.

            A rather bad design problem with Rust is that it expects way too much from people using it. There are too many advanced concepts that have been integrated and, on the other hand, there is very rudimentary OO that requires a lot of skill, insight and knowledge from the ones using it, and experience with real OO languages does not really transfer over. This design basically pisses every

        • by gweihir ( 88907 )

          Hahaha, no. That is completely unrealistic. Maybe somebody with megalomania made such claims, but writing an OS kernel is a bit more involved than just using a cool language.

    • by gweihir ( 88907 )

      Indeed. While that malware installation by Microsoft may still be some time off, I will invest some time this summer to isolate Win11 with Teams in a VM and to check whether I can get Teams to run well in a browser under Linux with recording. If I get either to work well, that will be it for native installations (except for my gaming-only machine).

"355/113 -- Not the famous irrational number PI, but an incredible simulation!"

Working...