Posted on 2024-12-26
I recently got laid off from my job as a technical writer, along with my entire team. It was a sad affair, and one that was almost certainly driven by the greed of the American parent company (though they'll probably insist it was about efficiency... or something).
Now, I can't be sure as I no longer work there, but I suspect the higher-ups that are left were seduced by the idea of replacing a lot of people with generative AI. It's the new hotness, after all. It's quite an impressive technology at first glance! You put in a prompt and the machine spits out something that sounds pretty close to human. Indeed, it was human once: these responses are an approximation of an answer a human might have given to the question asked. Kind of. Maybe. The reality is that it gives you a block of text that sounds pretty convincing until you poke at it a bit too much.
As a tool, generative AI is not without its uses. It's pretty good at producing boilerplate text and even code. Where it falls short is knowledge. Generative AI, by definition, knows nothing. It can't produce results that are backed by understanding or research because that's not what it's designed to produce. It can only produce an amalgam of existing responses from its training data. Proponents of the technology will swear that it's more complex and smarter than this, and in truth it is, but this is the essence of it.
Having subjected myself recently to the pigsty that is LinkedIn, a site already drowning in AI-generated posts[1], I've seen a lot of sycophants shouting about how it's foolish to not blindly embrace AI and fire as many staff as possible to take full advantage. I've even seen some people who lost their jobs to AI trying to claim that it was an inevitability and somehow a good thing.
=> 1 - Yes, That Viral LinkedIn Post You Read Was Probably AI-Generated
As a software developer, writer, and someone with a background in the arts, I am struck by the fact that the only people nobody is talking about replacing with generative AI are executives. It seems like the executive class are determined to weed out all of the needy, whiny, irritating underlings they have to put up with in favour of a technology that will never talk back to them or question them. Since they don't have any eye for quality in any of the jobs they're replacing, they don't realize just how bad the idea is. AI is cheap for now, but it's going to balloon in cost very soon given that models are getting more expensive to train and query[2] and companies like OpenAI are getting desperate for money.
=> 2 - OpenAI’s New o3 Model Won’t Come Cheap. Why Microsoft Is Paying Close Attention.
The bursting of the bubble is going to be painful for everyone. Some of us have already lost jobs to short-sighted, unquestioning adoption of an unproven technology that hasn't yet revealed its true cost. Those are individual jobs, though. If companies like Anthropic and OpenAI can't find a way to make the technology cheaper to improve and operate, they will pass the cost on to buyers. Suddenly, that incredibly cheap human replacement whose shortcomings are acceptable only because of how cheap it is won't seem so appealing. Not only will generative AI continue to do a worse job than the equivalent expert who has years of training, experience, and actual knowledge, it will keep increasing in price until it becomes untenable[3]. Given how generative AI has wormed its way into so many companies, displacing so many people and convincing shareholders that infinite growth is right there, the fallout could kill a good number of companies. This is going to hurt.
=> 3 - OpenAI Is A Bad Business
This is to say nothing, of course, about the environmental impact of this nonsense. Generative AI at scale is incredibly thirsty[4] and has an enormous power draw[5]. At a time when our usage MUST go DOWN, these irresponsible, greedy, and catastrophically stupid executives are determined to destroy everything for a few dollars more. That the governments of the world haven't shut the whole thing down and sent all of these people to prison is a sad indictment of just how fucked we really are. And for what? A technology that produces books that lie about how to create art[6] and fails to summarize news headlines[7]?
=> 4 - No, OpenAI's ChatGPT doesn't consume 2 liters of water per 50 queries — a new study says it'll take four times more water than previously thought to quench the chatbot's thirst
=> 5 - ChatGPT energy emergency — here's how much electricity OpenAI and others are sucking up per week
=> 6 - AI-Generated Book Grifters Threaten The Future of Lace-Making
=> 7 - Apple called on to ditch AI headline summaries after BBC debacle
I, for one, will fight this every step of the way in whatever way I can. Our focus should be on improving software efficiency, reducing power usage, finding solutions to human problems, and creating art. We should not be dicking around with chatbots that destroy the earth just to avoid learning how to do things. This is pathetic.
> I suppose that, in the rearview, it should have been obvious that the first generation of convincing genAI technology would be (a) deeply flawed and (b) instantly seized upon by fraudsters and scammers, to horrible effect. - Tim Bray
=> ✉️ Tell me what you think