@scottjenson yeah, that's mostly true in the context of centrally run LLMs. The cost of running them will probably become more relevant with locally run models.
IMHO this is the future: focused small models that are cheaper to train and cheaper to run locally.