Ancestors

Toot

Written by Amalgam on 2024-11-17 at 02:35

Any #llm or #ai experts out there? Is a local model more energy efficient than a model hosted somewhere else?

I’m starting to see many smaller models tuned for specific tasks and meant to run locally. I’m guessing that, since they’re smaller, they’ll require less energy than GPT or Sonnet or whatever. But the data-center hardware those big models run on is more optimized than my laptop.

If I’m worried about energy usage how should I think about this?
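One rough way to frame the question is energy per generated token: steady power draw divided by token throughput. The sketch below uses entirely made-up illustrative numbers (laptop wattage, batch size, tokens per second are assumptions, not measurements) just to show why a big hosted model can still come out ahead per token when its accelerator's power is shared across many batched users.

```python
# Back-of-envelope: energy per generated token, local vs hosted.
# Every number below is an illustrative assumption, not a measurement.

def joules_per_token(device_watts, tokens_per_second):
    """Energy (joules) to generate one token at a steady power draw."""
    return device_watts / tokens_per_second

# Assumed: small local model on a laptop drawing 45 W at 10 tokens/s.
local = joules_per_token(device_watts=45, tokens_per_second=10)

# Assumed: hosted model on a 700 W data-center accelerator whose power
# is amortized across ~50 batched requests, each seeing ~70 tokens/s.
hosted = joules_per_token(device_watts=700 / 50, tokens_per_second=70)

print(f"local:  {local:.2f} J/token")   # 4.50 J/token
print(f"hosted: {hosted:.2f} J/token")  # 0.20 J/token
```

Under these (very debatable) assumptions the hosted model uses less energy per token, because batching spreads the hardware's power over many users; with different assumptions the conclusion can flip.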

=> More information about this toot | More toots from amalgam_@mastodon.social

Descendants

Written by Craig on 2024-11-17 at 04:08

@amalgam_ @ai I thought most of the energy goes into training the models?
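Training is a large one-time cost, but it gets amortized over every token the model ever serves, so the per-token share can end up small. A minimal sketch, with both figures (training energy and lifetime traffic) as pure placeholder assumptions:

```python
# Sketch: amortizing a one-time training energy cost over lifetime
# inference traffic. Both inputs are placeholder assumptions.

TRAINING_KWH = 1_000_000      # assumed one-time training energy
LIFETIME_TOKENS = 1e13        # assumed tokens served over model lifetime
JOULES_PER_KWH = 3.6e6        # unit conversion (exact)

amortized = TRAINING_KWH * JOULES_PER_KWH / LIFETIME_TOKENS
print(f"{amortized:.2f} J/token from training")  # 0.36 J/token
```

The point of the arithmetic, not the specific numbers: whether training or inference dominates depends entirely on how much the model is used after training.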

=> More information about this toot | More toots from ccrraaiigg@mastodon.social

Proxy Information
Original URL
gemini://mastogem.picasoft.net/thread/113495961232572691