In response to a thread by @flancian, I just wrote a long rant about the value of technoskepticism as applied to generative AI:
https://hub.netzgemeinde.eu/item/e3e54544-956e-4ca8-99a3-21fa999a0331
I think it might be more digestible in thread form, so I'm also transcribing it as a thread here.
(1/22)
=> More information about this toot | More toots from dynamic@social.coop
Rather than looking at a new technology and asking "how can this be used?" I think a better viewpoint is "is this entirely necessary?" In evaluating the ethics of a new technology, there should always be a weighing of the ethical costs of adoption vs. non-adoption. Those who seek to profit from new technology want us to instead ask what the cost of non-adoption is, but this is the wrong way to think about it.
(2/?)
With very few exceptions, new technologies come with a resource cost, so the question should not be "what do I lose by not adopting this?" but "do the benefits of this technology justify the costs of its adoption?" Another question to ask is "is there any other way I can accomplish the same thing?" And what are the costs and benefits of the alternative way of doing it?
(3/?)
If I look at an automobile and think "what can I use this for?" an answer might be "I can use this to get to the grocery store 1 mile from my house." That might be nice. Getting to the store is important, and it's something I need to do. Maybe I should buy a car so that I can just step out my door, get in and go.
(4/?)
But if you ask "is there any other way I could get to the store?" then I remember that in fact I currently walk (or bike, or take the bus, or whatever), and if I ask myself the follow-up question "is that working for me?", maybe I notice that it works quite well. I never would have thought that I needed a new technology for that until the technology was presented to me. That would be a poor scenario for adoption.
(5/?)
In fact, if I think about my current habit of walking to the store and realize that that's when I get a chance to say hello to my neighbors and get some exercise, maybe I begin to notice what I would be losing if I switched to automobile use.
(6/?)
The situation is very different if walking to the store is already not working for me. Maybe I have an injury that has been making it hard, or maybe I'm having trouble carrying the groceries that I bring home. Starting with a problem to solve, and then evaluating the best technological solution for the problem, is a very different proposition from starting with a technology and thinking "how can I use this?".
(7/?)
If walking to the store has become unworkable, maybe the solution is a personal automobile, but maybe there are other, less technologically intensive solutions as well.
(8/?)
In the best-case scenario, when considering technologies as solutions to problems, I would look at all the options and their full impacts (environmental, social, financial, geographic [e.g. space required for parking], and temporal), ideally including at least a brief look down the road to try to anticipate the cascading consequences of each solution (e.g. thinking about what widespread adoption of automobiles does to congestion and urban planning).
(9/?)
People who look at a technology and think "what can this be used for?" see myriad applications: how could a technology with so many wonderful possible uses not be worth adopting? But if you instead look at the individual problems (and focus on actual problems, not just hypothetical uses that no one really needs), then technologies are adopted in a much narrower set of cases.
(10/?)
Some technologies might never be adopted at all. I have the feeling that there are a lot of people who see technological progress as a positive good, and might even be horrified at the idea that some technologies might be left by the wayside, abandoned, and forgotten. That's not the way that I look at it at all. To me, avoiding the adoption of unnecessary technology is a good outcome.
(11/?)
I think that in the space of contemporary use cases for generative AI, very few of them are solving problems that don't already have existing solutions, and many of those existing solutions come with positive benefits of their own.
(12/?)
Want to learn about a new topic? There are books and online explainers written for various levels of competency. There are reference books and encyclopedias. Wikipedia is a marvel, not just because it provides information but because it is built upon and builds up a culture of attribution of sources. It creates a space where people must hash out what information is important enough to include, how it should be organized, what it means to talk about it in neutral terms.
(13/?)
Want to find a specific resource? Search engines and content indexes were great for that.
Want help troubleshooting a problem? Have you tried talking to a friend? Asking around? There are online support forums on almost every topic, and using them provides the benefit of finding people with similar interests. There are even carefully curated repositories of answers to questions, such as StackExchange.
(14/?)
Want to generate a large volume of text? First of all, have you stopped to ask yourself why you need to generate a large volume of text? What problem in the world is the existence of that text going to solve? Is it a problem for which solutions already exist? Maybe someone has already created equivalent text. If so, what value is added by rewriting what others have written?
(15/?)
Is anyone even going to read the text you are about to generate? (Or are they just going to ask an AI to summarize it for them? Or not even bother with that?) What is new in what is being generated? What is the purpose of language at all? Isn't the purpose of language to communicate ideas and information from one person's mind into other people's minds?
(16/?)
All of the above is leaving aside the obvious downsides of generative AI: inaccuracies, omissions, injection of extraneous content, incorrect attribution, the massive and growing energy footprint (even "green energy" comes with a cost).
(17/?)
There probably are some problems where generative AI is actually the correct solution, but I am deeply skeptical that there are many problems for which the correct solution is an LLM trained on the entire corpus of human knowledge. And yet, that is exactly what ChatGPT, Claude, and friends aim to be.
(18/?)
There's vastly more data in a model like ChatGPT than is needed for any particular application, but instead of stopping to ask "is this overkill?" and "might it be more resource-efficient to use a smaller model tailored to the specific use case?", people instead ask "what can a really big LLM do for me?" and there's almost always something they can find that it could conceivably be good for, because it's trained on literally everything.
(19/?)
Generative AIs are a lot like automobiles. Humans have gotten along fine without them for essentially all of human history. They are resource intensive. They depersonalize day-to-day living. The more that we arrange our lives around making use of them, the more dependent we are on big businesses and extractive industries.
These are not small costs.
(20/?)
There may be very specific situations in which (when all costs and benefits are considered) automobiles provide the single best solution, but that doesn't mean that everyone should have one, and it doesn't mean that we should restructure our cities on the assumption that everyone will.
(21/?)
As with cars, there may be very specific situations in which generative AI provides the single best solution. That doesn't make it desirable for people to casually reach for something equivalent to ChatGPT every time they need a piece of information or a brainstorming partner.
(22/?)
@dynamic thank you for sharing! I agree on most points, in particular if framed as a hedge; as in, all of these are points worth considering, and I agree that asking questions like 'can an existing technology or procedure give the same benefits?' is a great idea.
=> More information about this toot | More toots from flancian@social.coop
@dynamic Having said that, I see a lot of beneficial outputs from large language models already; several of the example activities you mention can be much improved by the addition of AI in critical steps. Several steps become either faster or more accessible (to a wider variety of backgrounds).
Within the transport metaphor, some applications of AI feel like bicycles to me. Some have less clear benefits though, or higher costs.
@dynamic on a higher level, I think with new technologies it's probably a good thing if different groups/subgroups choose to follow different policies; some erring on the side of caution, some on the side of early adoption. This works in particular in situations which look like non-zero-sum games, in the sense that distinct groups can follow different policies without interfering much with each other.
@In #Flancia we'll meet
Seeing "beneficial outputs" is explicitly not sufficient to meet the criteria I've tried to lay out.
=> More information about this toot | More toots from dynamic@hub.netzgemeinde.eu
@dynamic it's not sufficient a priori; it depends case by case on the output and the costs. I'm just saying that IMHO for some cases the equation seems positive given my domain-specific evaluation.
=> More information about this toot | More toots from flancian@social.coop
@In #Flancia we'll meet
When you began to explore this question, did you start with "I have a problem to solve, what is the best way to solve it?" (followed by an exploration of all possible approaches) or from "hey, here's some new technology, let me play with it and see what it can do"?
Given that this thread began with you wishing that anti-AI people would "have a conversation" with Claude, as if "having a conversation" with one of these chatbots were something easy and low cost to do, I have a hard time believing that you are thinking seriously about the problematic aspects of this technology.
=> More information about this toot | More toots from dynamic@hub.netzgemeinde.eu
@In #Flancia we'll meet
I think this thread is relevant, in which @Tanguy Fardet and @Wim🧮 discuss the energy footprint of GPT.
=> https://scicomm.xyz/@tfardet/113634475874220646
Wim estimates that GPT-4 is ten times as energy intensive as GPT-3, which throws off a lot of the early calculations of the energy footprint of generative AI use. As more people get excited about these big models and they continue to be "improved", that situation is presumably going to keep getting worse.
Really obvious low-hanging fruit for avoiding going down that road is, when generative AI is actually the correct solution to a problem, to use smaller domain-specific models. These would almost certainly not give the illusion of human-like intelligence, but giving up on that illusion should be a no-brainer when considering the massive cost of the larger models.
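To make the scaling concern concrete, here is a back-of-envelope sketch in Python. The per-query energy figures are purely hypothetical placeholders, not measurements of any real model; the only thing taken from the discussion above is the idea of a roughly tenfold cost multiplier between a small domain-specific model and a frontier-scale one.

```python
# Back-of-envelope energy comparison under hypothetical per-query costs.
# The numbers below are illustrative placeholders, not measured values.

def total_energy_wh(queries: int, wh_per_query: float) -> float:
    """Total energy in watt-hours for a given number of queries."""
    return queries * wh_per_query

# Hypothetical: a small domain-specific model at 0.3 Wh/query versus a
# frontier-scale model at 3.0 Wh/query (a tenfold per-query multiplier).
small_total = total_energy_wh(1_000_000, 0.3)
large_total = total_energy_wh(1_000_000, 3.0)

print(small_total)                # 300000.0
print(large_total)                # 3000000.0
print(large_total / small_total)  # 10.0
```

Whatever the true per-query figures turn out to be, the point of the sketch is that a fixed per-query multiplier compounds linearly with usage volume: at a million queries, a tenfold per-query cost is a tenfold total cost.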
@In #Flancia we'll meet
Certainly, for someone who accepts the premises of the thread I laid out, it is not in the interest of humanity to try to convince people who don't feel a need for the technology that they should try it out.
@dynamic your premises are interesting but opinionated; I think it's very fine for you and other people to abide by them, but it doesn't follow that everyone should accept them as given. I for example tend to push back against blanket guidelines that seem like they could have been used to push back against arbitrary technological development previously, e.g. against bikes ("what is wrong with walking?").
=> More information about this toot | More toots from flancian@social.coop
@dynamic
Fair enough, though, that people who think even a single exchange with an LLM is unacceptable because of the costs involved should not be forced to try one; it's just that, for the majority of people who haven't interacted with an LLM, having a conversation with one just once could be very informative and give them a better idea of why so many others think the costs involved are acceptable given the benefits, or that working to reduce the costs is a worthy pursuit.
@dynamic and thanks for the profile pointers, I've followed both!
@In #Flancia we'll meet
Disagreement is not only acceptable but inevitable. I still think you're wrong, and I'm entitled to that opinion too.
All I ask for the time being is that you please not pretend that my thread about why people should resist the allure of generative AI is an endorsement of your view that generative AI has many useful applications and that we'd miss out if we didn't make use of those.
=> More information about this toot | More toots from dynamic@hub.netzgemeinde.eu
@dynamic full disclosure: this exchange felt a bit confrontational suddenly, unsure if that was the intention.
I don't understand why you think I'm pretending something about your thread. I'm just trying to explore this space together, acknowledging we might have different positions on the value propositions and externalities here.
=> More information about this toot | More toots from flancian@social.coop