@NatureMC @kate Now if you -do- want to count the training (https://www.heise.de/en/news/ChatGPT-s-power-consumption-ten-times-more-than-Google-s-9852327.html) and amortize it over a year, you get roughly 350 GWh / 78 Gq ≈ 4.5 Wh/q. I was being pretty liberal with 2-10 Wh/q.
If we say we train a new model every year and assume ~80B queries, then on a 20 W device each query would have to save us about 15 minutes of runtime to break even. For some e-mails it very well might, so which is more energy-efficient depends a lot on exactly what you are doing.
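A quick back-of-envelope check of those figures (all numbers are the post's assumptions — 350 GWh/year of amortized training energy, ~78 billion queries/year, a 20 W user device — not measurements):

```python
# Sanity-check the post's energy arithmetic.
TRAINING_ENERGY_WH = 350e9   # ~350 GWh of training energy, amortized over a year
QUERIES_PER_YEAR = 78e9      # ~78 billion queries per year
DEVICE_POWER_W = 20          # assumed power draw of the user's device, in watts

# Energy per query, in Wh.
wh_per_query = TRAINING_ENERGY_WH / QUERIES_PER_YEAR   # ≈ 4.5 Wh/query

# Time the 20 W device would have to be spared to break even:
# energy (Wh) / power (W) = hours; × 60 = minutes.
break_even_minutes = wh_per_query / DEVICE_POWER_W * 60  # ≈ 13.5 minutes

print(f"{wh_per_query:.1f} Wh/query, break-even ≈ {break_even_minutes:.1f} min")
```

So each query "costs" about 4.5 Wh of amortized training energy, and saving roughly a quarter-hour of 20 W device time offsets it — consistent with the ~15 minutes above.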
=> More informations about this toot | View the thread | More toots from Schouten_B@mastodon.social