Among other things, I think generative AI and the culture around it are establishing a permission structure that forgives sloppiness, especially in software development. It's not the only instance of that, but it's becoming widespread.
Ordinarily, we believe that at least some of the problems we solve with software are difficult, requiring thought, study, research, and care. Not all problems are this way, but some are, and people who write software encounter them. Generative AI provides the illusion that these problems no longer require care. We sloppily describe the problem to ChatGPT, Copilot, or some similar tool; it spits out sloppy code; and we sloppily verify that the code does more or less what we want. Depending on the context we're in, that might be perceived as adequate, even if the code doesn't work. The permission structure allows us to be forgiven: it's not our fault that our generated code didn't work, that's just part of the job. Under a different permission structure, we would be criticized for choosing a subpar method to solve the problem.
#AI #GenAI #GenerativeAI #tech #dev #SoftwareDevelopment #software
=> More information about this toot | View the thread | More toots from abucci@buc.ci