Comment Re:Ha! (Score 1) 59
The snag here isn't the chip, the snag is the immense amount of data required to operate. Terabytes' worth.
Not so. Training requires huge amounts of data to produce a model, but the resulting models can be tailored from large to small, with diminishing returns the larger you get. Some perfectly capable, if not state-of-the-art, LLMs (e.g. DLite) need only a few hundred MB to exhibit ChatGPT-like behavior that would be sufficient for narrowly focused tasks. I could easily imagine a lightweight AI model being used to make pretty much any of Apple’s existing AI tools (e.g. autocorrect, on-device object identification/indexing in the photo library, speech transcription) better.
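To put "a few hundred MB" in perspective, here's a quick back-of-the-envelope Python sketch. The 124M-parameter figure is roughly the scale of DLite's smallest variant (GPT-2-small scale); treat the number and the bytes-per-parameter assumptions as approximations, not specs:

```python
def model_size_mb(n_params: int, bytes_per_param: int = 2) -> float:
    """Rough model footprint in MB, assuming fp16 weights (2 bytes/param)."""
    return n_params * bytes_per_param / 1e6

# ~124M parameters, GPT-2-small scale (approximate figure, not from a spec sheet)
print(f"{model_size_mb(124_000_000):.0f} MB")     # fp16: ~248 MB
print(f"{model_size_mb(124_000_000, 1):.0f} MB")  # 8-bit quantized: ~124 MB
```

So even without aggressive compression, a model in that class fits comfortably in the storage and RAM budget of a modern phone.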
It’s also worth mentioning that Slashdot reported a few weeks ago that Apple researchers published a paper indicating they had a fairly large breakthrough that would enable significantly better performance from models that can run on-device. So, this isn’t something they might do one day: it’s something they’ve already confirmed they have working.