Toot

Written by Bas Schouten on 2024-12-12 at 20:00

@NatureMC @kate Now if you -do- want to count the training (https://www.heise.de/en/news/ChatGPT-s-power-consumption-ten-times-more-than-Google-s-9852327.html), and you amortize it over a year, you'd get roughly 350 GWh / 78 Gq ≈ 4.5 Wh/q. I was being pretty liberal with 2-10 Wh/q.

If we say we train a new model every year and assume 80B queries per year, then on a 20W device each query would have to save us about 15 minutes of work to break even. For some e-mails it very well might, so which is more energy efficient depends a lot on exactly what you are doing.
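A minimal sketch of the arithmetic behind these figures, using the numbers assumed in the toot (350 GWh of amortized training energy, ~78-80 billion queries per year, a 20 W device); none of these values are measured here, they simply reproduce the back-of-the-envelope estimate:

```python
# Back-of-the-envelope check of the toot's figures (assumed inputs, not measurements).
training_energy_wh = 350e9   # ~350 GWh of training energy, amortized over one year
queries_per_year = 78e9      # ~78 billion queries per year ("78 Gq")
device_power_w = 20          # assumed power draw of a laptop-class device, in watts

# Training energy attributed to each query.
energy_per_query_wh = training_energy_wh / queries_per_year
print(f"Training energy per query: {energy_per_query_wh:.1f} Wh")  # ~4.5 Wh/q

# How long the 20 W device would have to run to use that same energy.
break_even_minutes = energy_per_query_wh / device_power_w * 60
print(f"Break-even device time per query: {break_even_minutes:.1f} min")  # ~13.5 min, i.e. roughly 15 minutes
```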
