Toot

Written by Jiří Eischmann on 2024-12-19 at 17:55

@scottjenson yeah, it's mostly true in the context of LLMs run centrally. The cost of running them will probably become more relevant with locally run models.

IMHO this is the future. Focused small models that are cheaper to train and cheaper to run locally.

=> More information about this toot | View the thread | More toots from sesivany@vivaldi.net

Mentions

=> View scottjenson@social.coop profile
