Ancestors

Written by Kate Morley on 2024-12-12 at 11:21

Updated my e-mail signature for the generative AI era:

=> View attached media

Written by Bas Schouten on 2024-12-12 at 13:20

@kate Actually, a ChatGPT query costs between 2 and 10 Wh. If someone spent 15 minutes responding to your mail on a PC and monitor drawing ~20 W, it may very well be better for the environment if they used an LLM :p. This stuff is pretty hard.
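
As a rough sketch of the arithmetic behind this comparison: the 20 W draw and 15-minute writing time are the toot's assumptions, and 2-10 Wh is the quoted per-query range.

```python
# Back-of-the-envelope comparison of a hand-written reply vs. one LLM query.
# The 20 W draw, 15 minutes, and 2-10 Wh range are the assumptions quoted above.

device_power_w = 20                  # PC + monitor draw while writing by hand
writing_minutes = 15                 # time spent drafting the reply by hand
query_wh_low, query_wh_high = 2, 10  # quoted energy cost of one ChatGPT query

human_wh = device_power_w * writing_minutes / 60  # energy used by the human, in Wh

print(f"hand-written reply: {human_wh:.1f} Wh")                  # -> 5.0 Wh
print(f"one ChatGPT query:  {query_wh_low}-{query_wh_high} Wh")
```

At roughly 5 Wh, the hand-written reply sits inside the quoted 2-10 Wh per-query range, which is why the comparison is hard to call either way.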

Written by Petra van Cronenburg on 2024-12-12 at 15:37

@Schouten_B You forgot to include the hidden energy and water costs for training? @kate

Written by Bas Schouten on 2024-12-12 at 17:08

@NatureMC @kate That cost is amortized as you use it, so it's not a good argument against using it. Unless you believe that not using it can make it go away, but that seems extremely unlikely given the scale at which people are already using it and the upward trend in that usage.

Written by Petra van Cronenburg on 2024-12-12 at 17:31

@Schouten_B What does "amortized" mean here? (Any facts/scientific sources?)

And ojojoj, what an argument! It's like: 'Do you think you're improving something by giving up fossil fuels? They're not going away, look how many people use them!'

Joining in just because masses of people are already doing it may be marketing-compliant, but it has rarely led to any good in history.

@kate

Written by Bas Schouten on 2024-12-12 at 19:55

@NatureMC @kate I'm not sure what you mean by that. By most estimates, training GPT-4 took ~50 GWh, but that cost is already 'done': running a query does not trigger more training of the model you're using. I.e. for every query you do -now- on an already-trained model, you only count the current marginal power use, which for GPT-4 is ~3 Wh (https://balkangreenenergynews.com/chatgpt-consumes-enough-power-in-one-year-to-charge-over-three-million-electric-cars/ i.e. 278 GWh / 78 Gq ≈ 3 Wh/q).
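
A quick check of that marginal figure, using only the two numbers cited from the linked article:

```python
# Marginal (inference-only) energy per ChatGPT query, from the figures cited above.
annual_inference_wh = 278e9   # 278 GWh of inference per year (linked article)
annual_queries = 78e9         # ~78 billion queries per year (linked article)

wh_per_query = annual_inference_wh / annual_queries
print(f"~{wh_per_query:.1f} Wh per query")  # -> ~3.6 Wh; the toot rounds this to '~3 Wh/q'
```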

Written by Bas Schouten on 2024-12-12 at 20:00

@NatureMC @kate Now if you -do- want to count the training (https://www.heise.de/en/news/ChatGPT-s-power-consumption-ten-times-more-than-Google-s-9852327.html) and amortize it over a year, you'd get, say, 350 GWh / 78 Gq ≈ 4.5 Wh/q. I was being pretty liberal with 2-10 Wh/q.

If we say we train a new model every year and assume 80 billion queries, then on a 20 W device each query would have to save us about 15 minutes of work to break even. For some e-mails it very well might, so what is more energy-efficient depends a lot on exactly what you are doing.
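
Putting the two toots together as a sketch: amortised training plus inference gives roughly 4.5 Wh per query, and dividing by the assumed 20 W device draw gives the break-even writing time. The one-model-per-year and 20 W figures are the toot's assumptions.

```python
# Break-even writing time on a 20 W device, with training amortised over a year.
total_wh_per_year = 350e9   # ~350 GWh/year: inference plus amortised training (toot's figure)
queries_per_year = 78e9     # ~78-80 billion queries per year
device_power_w = 20         # PC + monitor draw while writing by hand (assumption)

wh_per_query = total_wh_per_year / queries_per_year
breakeven_minutes = wh_per_query / device_power_w * 60

print(f"~{wh_per_query:.1f} Wh per query")              # -> ~4.5 Wh
print(f"break-even: ~{breakeven_minutes:.0f} minutes")  # -> ~13 min, i.e. 'about 15 minutes'
```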

Written by Bas Schouten on 2024-12-12 at 20:07

@NatureMC @kate (Note for fun that, by the estimates I've seen (feel free to provide better data), the energy footprint of a toot is about 100 J. This means you could do about 110 toots for the energy cost of a single GPT-4 query, or about 10 for a Google query! Needless to say, the phone or computer you're doing the tooting or searching on is often the most significant contributor to your carbon footprint. Although if you live here, it can save on heating the room :P)
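
The toots-per-query ratios roughly check out if a toot costs ~100 J and a GPT-4 query ~3 Wh; the ~0.3 Wh per Google search used below is a commonly cited estimate and an assumption here, not a figure from this thread.

```python
# Toots per query, using ~100 J per toot and the per-query energies discussed above.
toot_j = 100                  # energy per toot (estimate from the toot above)
gpt4_query_j = 3 * 3600       # ~3 Wh per GPT-4 query, converted to joules
google_query_j = 0.3 * 3600   # ~0.3 Wh per search (commonly cited figure; assumption)

print(f"toots per GPT-4 query:  ~{gpt4_query_j / toot_j:.0f}")    # -> ~108
print(f"toots per Google query: ~{google_query_j / toot_j:.0f}")  # -> ~11
```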

Written by Petra van Cronenburg on 2024-12-13 at 10:49

@Schouten_B Thank you for taking the time to answer my question! This is exactly what I was interested in and I learnt something new about the "small" steps like tooting.

I think such numbers are very important for discussions with non-techies like the women of our embroidery club, who send WhatsApps with AI-generated fake fun videos to people in the same room while talking (they could at least watch the video on only one phone). Most people have no feeling for the consumption, and believe

@kate

Written by Petra van Cronenburg on 2024-12-13 at 11:00

@Schouten_B advertising.

I completely understand your train of thought as to when a GPT-written mail is worthwhile. I just keep asking myself whether the mail is worth sending at all if I'm under that much time pressure. Then I would draw distinctions, e.g. for automated responses in customer service, but we already had those before AI.

The difference is: I think 'amortising' is the company's point of view. The €10 strawberries versus the €10 apples. 😉 For me, the articles you cited speak for themselves:

@kate

Written by Petra van Cronenburg on 2024-12-13 at 11:10

@Schouten_B Quote: "ChatGPT's energy consumption for processing requests exceeds the total electricity consumption of twelve small countries and territories, including Gibraltar, Grenada, Dominica, Samoa and the British Virgin Islands. It could even power the whole of Finland or Belgium for a whole day." And that is only energy; we haven't even talked about water (e.g. in Spain) or the rising energy costs for people living near data centres.

And here I am thinking of our climate situation in the

@kate

Written by Petra van Cronenburg on 2024-12-13 at 11:16

@Schouten_B warmest year ever: we no longer have time to consume more and more energy and trust that technology will sort it out. We have to cut back, because we are doing far too little. Here in France, an increasing number of people must decide: heat or eat? At the same time, we are wasting energy ... for what?

At the same time, I am pessimistic: the genie is out of the bottle; at best, legal regulation can still achieve something. You can do a lot with AI in science (e.g. improve

@kate

Toot

Written by Petra van Cronenburg on 2024-12-13 at 11:18

@Schouten_B climate models). But if I send AI joke videos, I'm polluting the environment.

Therefore, these comparisons and the view of the "hidden costs/consumption" are important for me - and your answers help me to better contextualise all this. Thanks again for your time (and energy).

@kate

Descendants

Written by Bas Schouten on 2024-12-13 at 13:07

@NatureMC @kate Always happy to engage in good-faith dialogue! Fwiw, like you said, the genie is out of the bottle. At the same time, from a climate-change perspective, ChatGPT won't matter much. Nor will flying. China is now emitting more than the entire Western world. Even per capita, its emissions are second only to the US and Canada.

India will join it soon in -by itself- exceeding the emissions of all of the Western world. This is 3 billion people getting reasonable standards of living.

Written by Bas Schouten on 2024-12-13 at 13:10

@NatureMC @kate And we can't deny them that. They deserve healthcare, a little vacation, heating, etc. and all those things we in the west take for granted.

But it does mean that all the frivolous emissions, from ChatGPT to beef to intercontinental flights, are now, while still worth limiting, practically not going to make much of a difference. If we stopped -all- air travel, went completely vegetarian and did no AI whatsoever, it would take Indian and Chinese growth 1-2 years to catch up.

Written by Bas Schouten on 2024-12-13 at 13:12

@NatureMC @kate So where does that leave us? Well, partially, we're just fucked. But we have to keep trying to reduce our emissions. And while we do so, we need to be looking at how to 'fix' the situation; those fixes will likely have to be some form of geoengineering. Carbon capture, space-based solar radiation management, those kinds of things. As far as I can tell based on the data, those are our only hope for significantly combating climate change.

Written by Kate Morley on 2024-12-13 at 13:47

@Schouten_B @NatureMC While certain groups in the West have spent years using "What about China?" arguments to promote fatalism about climate change and discourage taking action, China has been installing so much renewable energy capacity that its carbon emissions may have already peaked, with a 1% year-on-year drop in Q2 this year:

=> View attached media

Written by Bas Schouten on 2024-12-13 at 16:07

@kate @NatureMC China actually likely added slightly less solar in 2024 than in 2023. But they installed about 200 GW and now total about 770 GW. Consider that the Netherlands has 24 GW with only about 1.2% of China's population; per capita, that's not actually that impressive.

The main reason emissions fell in 2024 was a very bad year for hydro in 2023; hydro is very important for Chinese electricity generation.

Emissions in Europe have dropped a lot more. In the US the drop was similar.
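
A sketch of the per-capita comparison above; the capacity figures (770 GW and 24 GW) are from the toot, while the population numbers (~1.41 billion for China, ~18 million for the Netherlands) are rough assumptions.

```python
# Installed solar capacity per person, using the capacities quoted above.
china_solar_w = 770e9   # ~770 GW total installed solar in China (toot's figure)
china_pop = 1.41e9      # ~1.41 billion people (assumption)
nl_solar_w = 24e9       # ~24 GW installed solar in the Netherlands (toot's figure)
nl_pop = 18e6           # ~18 million people (assumption)

print(f"China:       ~{china_solar_w / china_pop:.0f} W per person")  # -> ~546 W
print(f"Netherlands: ~{nl_solar_w / nl_pop:.0f} W per person")        # -> ~1333 W
```

On those numbers, the Netherlands has roughly two to three times China's installed solar per person, which is the point being made.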

Written by Bas Schouten on 2024-12-13 at 16:12

@kate @NatureMC (And for the record, the Netherlands isn't that impressive in terms of renewables; solar only does so much at higher latitudes.) Having said that, emissions in India are projected to rise by 4-5%. The point is, CO2 emissions are -not- going to be at 25% of 2010 levels in 2030.

Will global emissions start dropping in the next 5 years? Yes, almost certainly, but nowhere near where they need to be.

Should we all install more renewables? Also yes.

=> View attached media
