Newer Pixels ship with hardware dedicated to AI, which could be able to run these models locally. Apple is reportedly planning local LLMs too. There's been a lot of development on "small LLMs," which have real benefits: they're easier to study, they run on lower-spec hardware, and they use less power.
Smaller LLMs come with huge performance tradeoffs, most notably in how well they follow prompts. Bard has billions of parameters, so mobile chips wouldn't be able to run it.
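To make the "billions of parameters" point concrete, here's a back-of-the-envelope sketch of the RAM needed just to hold a model's weights. The parameter counts and precisions below are illustrative assumptions, not published figures for Bard or any specific model:

```python
# Rough memory estimate for holding an LLM's weights on-device.
# These numbers are illustrative assumptions, not real model specs.

def model_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate GB of RAM needed just for the weights."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# A hypothetical 7-billion-parameter model:
fp16_gb = model_memory_gb(7, 2.0)   # 16-bit floats -> 14.0 GB
int4_gb = model_memory_gb(7, 0.5)   # 4-bit quantized -> 3.5 GB
print(fp16_gb, int4_gb)
```

Even before counting activations and KV cache, full-precision weights alone exceed what most phones have, which is why quantization matters so much for on-device inference.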
That's true right now, but small LLMs have only become a focus of development very recently. Judging by how fast LLMs have been improving, I can see that changing very soon.