In response to a thread by @flancian, I just wrote a long rant about the value of technoskepticism as applied to generative AI:
https://hub.netzgemeinde.eu/item/e3e54544-956e-4ca8-99a3-21fa999a0331
I think it might be more digestible in thread form, so I'm also transcribing it as a thread here.
(1/22)
=> More informations about this toot | More toots from dynamic@social.coop
Rather than looking at a new technology and asking "how can this be used?" I think a better viewpoint is "is this entirely necessary?" When evaluating the ethics of a new technology, the burden of justification should always fall on adoption rather than non-adoption. Those who seek to profit from new technology want us instead to ask what the cost of non-adoption is, but this is the wrong way to think about it.
(2/?)
With very few exceptions, new technologies come with a resource cost, so the question should not be "what do I lose by not adopting this?" but "do the benefits of this technology justify the costs of its adoption?" It is also worth asking "is there any other way I can accomplish the same thing, and what are the costs and benefits of that alternative?"
(3/?)
If I look at an automobile and think "what can I use this for?" an answer might be "I can use this to get to the grocery store 1 mile from my house." That might be nice. Getting to the store is important, and it's something I need to do. Maybe I should buy a car so that I can just step out my door, get in and go.
(4/?)
But if you ask "is there any other way I could get to the store?" then I remember that in fact I currently walk (or bike, or take the bus, or whatever), and if I ask myself the follow-up question "is that working for me?", maybe I notice that in fact it works quite well. Indeed, I never would have thought I needed a new technology for that until the technology was presented to me. That would be a poor scenario for adoption.
(5/?)
In fact, if I think about my current habit of walking to the store and realize that that's when I get a chance to say hello to my neighbors and get some exercise, maybe I begin to notice what I would be losing if I switched to automobile use.
(6/?)
The situation is very different if walking to the store is already not working for me. Maybe I have an injury that has been making it hard, or maybe I'm having trouble carrying the groceries that I bring home. Starting with a problem to solve, and then evaluating the best technological solution for the problem, is a very different proposition from starting with a technology and thinking "how can I use this?".
(7/?)
If walking to the store has become unworkable, maybe the solution is a personal automobile, but maybe there are other, less technologically intensive solutions as well.
(8/?)
In the best-case scenario, when considering technologies as solutions to problems, I would look at all the candidate solutions and their full impacts (environmental, social, financial, geographic [e.g. space required for parking], and temporal), ideally including at least a brief look down the road to anticipate the cascading consequences of each solution (e.g. what widespread adoption of automobiles does to congestion and urban planning).
(9/?)
People who look at a technology and think "what can this be used for?" see myriad applications: how could a technology with so many wonderful possible uses not be worth adopting? But if you instead start from individual problems (and focus on actual problems, not hypothetical uses that no one really needs), then technologies get adopted in a much narrower set of cases.
(10/?)
Some technologies might never be adopted at all. I have the feeling that there are a lot of people who see technological progress as a positive good, and might even be horrified at the idea that some technologies might be left by the wayside, abandoned, and forgotten. That's not the way that I look at it at all. To me, avoiding the adoption of unnecessary technology is a good outcome.
(11/?)
I think that in the space of contemporary use cases for generative AI, very few of them are solving problems that don't already have existing solutions, and many of those existing solutions come with positive benefits of their own.
(12/?)
Want to learn about a new topic? There are books and online explainers written for various levels of competency. There are reference books and encyclopedias. Wikipedia is a marvel, not just because it provides information but because it is built upon and builds up a culture of attribution of sources. It creates a space where people must hash out what information is important enough to include, how it should be organized, what it means to talk about it in neutral terms.
(13/?)
Want to find a specific resource? Search engines and content indexes were great for that.
Want help troubleshooting a problem? Have you tried talking to a friend? Asking around? There are online support forums on almost every topic, and using them provides the benefit of finding people with similar interests. There are even carefully curated repositories of answers to questions, such as StackExchange.
(14/?)
Want to generate a large volume of text? First of all, have you stopped to ask yourself why you need to generate a large volume of text? What problem in the world is the existence of that text going to solve? Is it a problem for which solutions already exist? Maybe someone has already created equivalent text. If so, what value is added by rewriting what others have written?
(15/?)
Is anyone even going to read the text you are about to generate? (Or are they just going to ask an AI to summarize it for them? Or not even bother with that?) What is new in what is being generated? What is the purpose of language at all? Isn't it to communicate ideas and information from one person's mind into other people's minds?
(16/?)
All of the above is leaving aside the obvious downsides of generative AI: inaccuracies, omissions, injection of extraneous content, incorrect attribution, the massive and growing energy footprint (even "green energy" comes with a cost).
(17/?)
There probably are some problems where generative AI is actually the correct solution, but I am deeply skeptical that there are many problems for which the correct solution is an LLM trained on the entire corpus of human knowledge. And yet, that is exactly what ChatGPT, Claude, and friends aim to be.
(18/?)
There's vastly more data in a model like ChatGPT than is needed for any particular application, but instead of stopping to ask "is this overkill?" and "might it be more resource-efficient to use a smaller model tailored to the specific use case?", people instead ask "what can a really big LLM do for me?" And there's almost always something they can find that it could conceivably be good for, because it's trained on literally everything.
(19/?)
Generative AIs are a lot like automobiles. Humans have gotten along fine without them for essentially all of human history. They are resource intensive. They depersonalize day-to-day living. The more that we arrange our lives around making use of them, the more dependent we are on big businesses and extractive industries.
These are not small costs.
(20/?)
There may be very specific situations in which (when all costs and benefits are considered) automobiles provide the single best solution, but that doesn't mean that everyone should have one, and it doesn't mean that we should restructure our cities on the assumption that everyone will.
(21/?)
As with cars, there may be very specific situations in which generative AI provides the single best solution. That doesn't make it desirable for people to casually reach for something equivalent to ChatGPT every time they need a piece of information or a brainstorming partner.
(22/?)