100%. The point of learning-based AI is that it's faster and cheaper to develop than conventionally engineered algorithms, and it often executes faster with fewer resources too. Apple, Nvidia and other companies already do this locally pretty extensively: DLSS, background segmentation and other processing in videoconferencing, audio processing, photo processing including object and person recognition, text-to-speech and speech recognition, information extraction from e-mails, etc.
You probably actually mean large language models. Those too. Language models are so compelling because they seem to have personalities and they can interact with us like people. People are going to want theirs personalized. The current approach is to shove context into a hidden prompt for every request, but that's expensive and very limited. In the future you'll have a local version that learns and adapts to you: what you like for breakfast, what time you get up, what kind of jokes you like, if you're a furry. These things are all over sci-fi, from Niven and Heinlein to Star Wars, Star Trek and Marvel.
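To see why that approach is expensive, here's a minimal sketch (all names are made up for illustration) of prompt-stuffed personalization: the whole user profile gets re-sent as hidden context on every single request, so you pay its token cost again and again.

```python
# Hypothetical sketch: personalization by stuffing a profile into every prompt.
USER_PROFILE = [
    "Likes oatmeal for breakfast.",
    "Wakes up at 6:30.",
    "Prefers dry, deadpan jokes.",
]

def build_prompt(question: str) -> str:
    """Prepend the full profile as hidden context to each prompt."""
    context = "\n".join(f"- {fact}" for fact in USER_PROFILE)
    return f"Known about the user:\n{context}\n\nUser: {question}\nAssistant:"

def token_count(text: str) -> int:
    # Crude stand-in for a real tokenizer: count whitespace-separated tokens.
    return len(text.split())

question = "What should I eat this morning?"
prompt = build_prompt(question)
# The profile's tokens are billed on every request; a model that had
# locally learned these facts would carry them in its weights for free.
print(token_count(prompt) - token_count(question))
```

The overhead shown is per-request and grows linearly with profile size, which is exactly why a locally adapting model is the more attractive endgame.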
No reason why it can't be open either. The ridiculous amount of power put into training language models today is because it's an arms race. Six months behind the behemoths, it's all enthusiasts reenacting the early days of PCs in their basements.