I am a programmer in the year 2024 who doesn't use AI.
I'm not even curious about it.
It's not just the mistakes. Hallucinations. Artificial confidence.
It's not just the unconscionable energy use. Laundering and reinforcement of historical biases. Ripoff of creative works. Exploited workers. Scams. Bots. Political propaganda. Mass surveillance to train the beast. And this is just off the top of my head here.
1/N
=> More informations about this toot | More toots from sunfish@hachyderm.io
It's not just the story of how AI will make us all so much more "productive" that we'll all have much more free time, a story which has been told many times in modern history and has never been broadly true.
It is all that.
And.
2/N
=> More informations about this toot | More toots from sunfish@hachyderm.io
On a personal level.
I got curious about computers because they are things I'm able to be curious about. That's it. That's the spark. I can explore them and learn how they work.
Computers are hard sometimes. Sometimes we can figure out ways to make them easier. But if we instead automate doing hard things, using AI to make doing hard things less effort, it doesn't lead to a place I'm excited about.
3/N
=> More informations about this toot | More toots from sunfish@hachyderm.io
@sunfish wait, is this essentially "I like doing hard puzzles. Of course I don’t want to make them simple"?!
=> More informations about this toot | More toots from freddy@security.plumbing
@freddy It's not about the hardness of the puzzle. I don't think I'm even that skilled at solving hard puzzles.
I like things like "if-else" constructs. I like how when I see one, I know what it does.
=> More informations about this toot | More toots from sunfish@hachyderm.io
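A trivial sketch (in Python, with a hypothetical function name) of the kind of construct meant above: an if-else whose entire behavior is visible on the page, so reading it once tells you what it does for every input.

```python
def shipping_cost(weight_kg: float) -> float:
    # Everything this function can do is right here in its text:
    # flat rate up to 1 kg, then a per-kilogram surcharge.
    if weight_kg <= 1.0:
        return 5.0
    else:
        return 5.0 + (weight_kg - 1.0) * 2.0

assert shipping_cost(0.5) == 5.0
assert shipping_cost(3.0) == 9.0
```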
@sunfish @freddy I wonder if it's just a matter of taste and personal interest, or if part of it is you understanding that good software requires people who understand how the software is written and how it works. Generating or modifying the software with AI creates an understanding debt that could be fatal to the software long-term (or medium-term, really).
So if your personal satisfaction relies on doing a good job, avoiding AI might be a good instinct.
=> More informations about this toot | More toots from fvsch@hachyderm.io
@sunfish @freddy See @baldur on how building and maintaining software requires building a theory of how the software works: https://www.baldurbjarnason.com/2022/theory-building/
Could be that AI code generators are not a big threat to this theory-building, but on the surface it does sound like shipping code that no-one has more than a fuzzy understanding of. I’ll wait for research showing that this downside is manageable and not too costly before jumping on that bandwagon.
=> More informations about this toot | More toots from fvsch@hachyderm.io
@fvsch @sunfish @freddy @baldur A friend of mine has tried out AI in Visual Studio, and it's only marginally better than classic code completion. Any actually hard problem he has to solve himself anyway.
AI can only suggest things that have frequently been solved before. It cannot solve new problems.
=> More informations about this toot | More toots from gunchleoc@mastodon.scot
@gunchleoc @fvsch @sunfish "AI can only suggest things that have frequently been solved before. It cannot solve new problems."
Exactly. Because it's not actually "I" (as in intelligent).
It's "C." As in AutoComplete.
=> More informations about this toot | More toots from quixote@mastodon.nz
@quixote so far, it's a pretty good snippet provider.
=> More informations about this toot | More toots from groundie@mastodon.social
@groundie @quixote but who will provide new snippets for problems that do not even exist today?
=> More informations about this toot | More toots from ptesarik@fosstodon.org
@ptesarik Well, just imagining things, maybe a place that might have real programmers left (TransDniester?). They invent an AutoComplete with weirdness built in. The other ACs swallow its output whole, since not-seen-before = real. ACs then follow an exponential curve of hallucinations. The people dependent on them go insane with no answers and starve to death trying to make pizzas out of cloud-based kale.
Before you can say World Domination! there's nobody left to grow food for the coders....
=> More informations about this toot | More toots from quixote@mastodon.nz
@quixote @gunchleoc @fvsch @sunfish
And that's why I don't use stuff like Copilot: I already get frustrated by autocomplete half the time as it is, when it does something I don't want it to.
Autocomplete is OK when writing new code, but as soon as you're making modifications to existing code it ends up inserting extra characters where they don't belong and just being a nuisance.
=> More informations about this toot | More toots from dominikg@mastodon.gamedev.place
@gunchleoc @fvsch @sunfish @freddy @baldur yep, the "smart code completion" JetBrains supply is right about half the time and that can save a bit of typing. But it doesn't replace the thinking.
Bigger snippets of AI code end up creating work because first you have to do a very detailed review, then you can run a static analysis tool over it, then you start writing tests, and those tests have to be unusually thorough because AI makes mistakes that people don't.
=> More informations about this toot | More toots from moz@fosstodon.org
[#]quote | «What keeps the software alive are the programmers who have an accurate mental model (theory) of how it is built and works».
https://www.baldurbjarnason.com/2022/theory-building/
=> More informations about this toot | More toots from tivasyk@mastodon.social
@tivasyk @dalias effect of AI coding assistance on building theories of programs TBD…
=> More informations about this toot | More toots from regehr@mastodon.social
@regehr @tivasyk @dalias As the effective utility of "AI" tools improves, the leverage of smaller teams will increase.
=> More informations about this toot | More toots from johnm@federate.social
@johnm @regehr @tivasyk 🙄
=> More informations about this toot | More toots from dalias@hachyderm.io
@tivasyk One old quote I also like in this context is this one:
Because if AI just wrote a whole bunch of code for you… are you sure you're going to be able not just to somewhat understand it, but also to debug and thus maintain that code?
=> More informations about this toot | More toots from lanodan@queer.hacktivis.me
@tivasyk Another article I like with similar thoughts: https://knowledge.csc.gov.sg/ethos-issue-21/how-to-build-good-software/ — “The main value in software is not the code produced, but the knowledge accumulated by the people who produced it […] Software Is about Developing Knowledge More than Writing Code”
=> More informations about this toot | More toots from svat@mathstodon.xyz
@sunfish that may be the exact point why AI is so popular. People can't really know how it works, how much it "knows" and "understands". It leaves so much room for imagination. Good old expert systems were quite reliable and debuggable, but much less magic. It's much less impressive if you know how it works @freddy
=> More informations about this toot | More toots from wiesodennblos@hessen.social
@sunfish This is the important part, AI models are not accountable or debuggable in a way any programming language is. There's nothing to reproduce with certainty, no post mortem.
=> More informations about this toot | More toots from cohentheblue@ohai.social
@sunfish I think this partially reflects the sentiment in the Go experiments with AI as a tool for software development: "… differs from many development-focused uses of LLMs by not trying to augment or displace the code writing process at all. After all, writing code is the fun part of writing software. Instead, the idea is to focus on the not-fun parts, like processing incoming issues, matching questions to existing documentation, and so on."
=> More informations about this toot | More toots from kortschak@infosec.exchange
@freddy @sunfish
My son is a CS student, and he said that most of his classmates use AI to finish assignments. He doesn't, because he wants to understand what's happening.
I wouldn't be surprised if he's lying to me, not because he has a history of lying but because, frankly, if ChatGPT existed when I took programming courses 30 years ago I would have used it.
But if he's telling the truth, I expect him to run circles around most other engineers with similar experience within years.
=> More informations about this toot | More toots from firebreathingduck@vivaldi.net
@firebreathingduck
I've run into this with my student interns.
I do not think he is lying to you.
@freddy @sunfish
=> More informations about this toot | More toots from mav@hackers.town
@freddy @sunfish No, I think that is missing the point completely.
LLMs (no, it's not AI) aren't making the puzzle simple. They don't help you understand the problem, they don't further understanding of similar problems, and that way, their use prevents simplification.
=> More informations about this toot | More toots from Ardubal@mastodon.xyz
@Ardubal And to further your point, if we don't understand the problem we can't evaluate whether LLMs are giving us a proper solution or one of their frequent plausible-sounding-but-wrong ones.
@freddy @sunfish
=> More informations about this toot | More toots from david_megginson@mstdn.ca
@freddy @sunfish you need to reread the first post in the thread, fedifreddy
=> More informations about this toot | More toots from cultdev@mastodon.social
@sunfish I personally am a big fan of making hard things easier, but I believe it's important to be able to understand how the automation came to the conclusions that it did. You don't have to understand how a calculator works to get good use out of it, but you can understand it if you want to, and so there's room for curiosity all the way down. And that's fundamentally impossible with the current hyped AI techniques. (There have been techniques that were labeled "AI" at one time or another which could actually tell you what they tried and why, so I'm not going to say "all AI is bad", just… most of it.)
=> More informations about this toot | More toots from jamey@toot.cat
@jamey @sunfish Not only do I want to know how it came to its conclusions, I also want these to always be predictable. A computer is a tool. If I have learned to use a tool correctly, and I execute that correct usage, I want the result to be predictably correct (if the tool is not broken). Easy as that. If that's not the case, how could I even learn to use it correctly? Then it would just be unsatisfactory, wastefully inefficient trial and error.
=> More informations about this toot | More toots from words_number@mastodon.social
@words_number @sunfish Most of the time I agree with you: I usually want deterministic and reproducible results. But not always. Sometimes I only care about having a "satisficing" answer, meaning I want to get any one of many possible correct answers that is good enough for my current purposes. This comes up a lot in NP-Complete problems. In that case, randomization is a very useful tool for finding an answer more quickly. The result can still be explainable without being deterministic.
=> More informations about this toot | More toots from jamey@toot.cat
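To make the "satisficing" idea above concrete, here is a minimal sketch (all names hypothetical) of random-restart hill climbing on a tiny MAX-SAT instance: the search is randomized, yet any answer it returns can be verified deterministically afterwards.

```python
import random

def score(assignment, clauses):
    # Count satisfied clauses; a clause is a list of ints where
    # +i means "variable i is True" and -i means "variable i is False".
    return sum(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

def satisfice(clauses, n_vars, target, restarts=50, steps=200, seed=0):
    # Random-restart hill climbing: return ANY assignment satisfying
    # at least `target` clauses, or None if none is found.
    # Nondeterministic search, deterministically checkable result.
    rng = random.Random(seed)
    variables = list(range(1, n_vars + 1))
    for _ in range(restarts):
        a = {v: rng.random() < 0.5 for v in variables}
        for _ in range(steps):
            current = score(a, clauses)
            if current >= target:
                return a
            v = rng.choice(variables)  # tentatively flip one variable
            a[v] = not a[v]
            if score(a, clauses) < current:  # revert strictly worse flips
                a[v] = not a[v]
    return None

# Toy instance: (x1 or x2), (not x1 or x3), (not x2 or not x3), (x1 or x3)
clauses = [[1, 2], [-1, 3], [-2, -3], [1, 3]]
result = satisfice(clauses, n_vars=3, target=4)
assert result is not None and score(result, clauses) == 4
```

The point is the split of roles: finding the answer may use randomness, but checking it is a plain deterministic call to `score`.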
@sunfish despite IKEA, people are still handmaking wood furniture. Often exceptionally special, useful or beautiful too.
=> More informations about this toot | More toots from photovince@mastodon.social
@sunfish I don't use it either. But the thing is, AI isn't really useful for hard tasks. Maybe for boilerplate and easy tasks. But I don't think you can get a decent success rate when asking an AI about hard problems.
=> More informations about this toot | More toots from doragasu@mastodon.sdf.org
@doragasu
I try it from time to time, when I stumble upon a hard task. LLMs regularly fail at solving those tasks. I sometimes solve the task first and then try to see how the LLM would solve it, only to notice that it fails.
Boilerplate code can be solved with better tooling and snippets, which require much less computational power and produce more consistent results.
@sunfish
=> More informations about this toot | More toots from django@kowelenz.social
@django @sunfish @doragasu Exactly. People say AI is great for boilerplate, but my IDE already has templates for boilerplate, I have a code snippet library, and there are sites like Stack Overflow with examples. All of those have the added advantages over AI that I know where the code came from and I can have some confidence that it's actually correct.
=> More informations about this toot | More toots from mathew@universeodon.com
@sunfish
For me it's also the complete loss of control: systems can be insanely complex, but it would still not be impossible to figure out how they work, given the necessary time. A neural network, however, is something infinitely complex, not created consciously but by feeding it amounts of data that are hard to even measure, without making conscious choices about what that data is and without understanding how every piece of it changes the network.
=> More informations about this toot | More toots from m0xee@librem.one
@sunfish As an educator I'm concerned about AI for the exact same ecosystem of reasons you listed, and echo your feelings about struggle. The purpose of education is to go through the uncomfortable bits so that you can come out the other side as a critical thinking individual. AI removes or reduces that necessary struggle. Nobody wants to go back to paper and pencil in classrooms, but that is about the only toolset left that has not had some form of AI added to it.
=> More informations about this toot | More toots from cpultz@lincolnite.net
@sunfish
As someone who isn't techy I am grateful to you for being that guy. We have to do the hard things by hand or we will end up being useless. The whole package of a.i. is obviously and seriously dysfunctional, and the people who are selling it are all obviously shady. I'm just an artist, but I am pretty sure that if a.i. can't produce a painting, then it has no business sequencing genes or generating code.
I'm deeply grateful to professionals who don't use it.
Blessings
🤘❤️🤘
=> More informations about this toot | More toots from Langhamian@mastodon.social
@sunfish This argument I find compelling.
=> More informations about this toot | More toots from FinalOverdrive@kolektiva.social
@sunfish I was attracted to computers in the 80s and 90s because they were deterministic, precise, predictable, do only what I tell them no more no less, and never make mistakes [caveats omitted for brevity]. It was a toolkit to make whole universes within if you were clever and patient.
Since then they've gotten fuzzier and more opinionated and unreliable. AI is the epitome of that, and coding is the last place I want it. Coding already feels like building on quicksand these days.
=> More informations about this toot | More toots from rocketsoup@mastodon.social
@sunfish though it might be very useful for boilerplate parts, where you're still the one who decides on the logic.
See it as smart autocomplete, nothing more than that.
=> More informations about this toot | More toots from 0xZogG@hachyderm.io
@sunfish Thanks for sharing your view. Yesterday I read the last issue from the newsletter of Thorsten Ball (https://registerspill.thorstenball.com/p/they-all-use-it) and it really depressed me how he failed to acknowledge the points you mentioned.
=> More informations about this toot | More toots from giulianopz@hachyderm.io
@sunfish I largely share these feelings and the concerns you outlined. But I've also accepted that they're not going away and not having familiarity with them is going to quickly become a limiting factor in decision making, at the least. So I've started using them on a limited basis, but only for things where they can genuinely provide value through augmentation. E.g., summarizing text to help me find perspectives I might have missed: https://fosstodon.org/@tlockney/113517610598397126
=> More informations about this toot | More toots from tlockney@fosstodon.org
@sunfish I don't think I've seen anyone make this claim re: AI, tbh.
=> More informations about this toot | More toots from SteveBennett@mastodon.social
@sunfish I actually think SotA machine learning is a good thing. Computer vision applications are starting to help us make sense of things that have confused us for a long time (and by 'computer vision' I mean that to extend to pattern recognition in multi-dimensional datasets, not just images).
And the current trendy face of ML, the LLM, is undoubtedly fascinating. Especially if you see them as search engines.
But not everyone does.
I think that's the problem.
=> More informations about this toot | More toots from dynamite_ready@mastodon.gamedev.place
@sunfish I have a photographic memory, that's why I went for IT, because almost everything else was boring (and I'm not very good at math).
IT makes me think, actually think.
To quote Hercules Poirot: "It is the brain, the little gray cells on which one must rely. One must seek the truth within--not without"
=> More informations about this toot | More toots from Johns_priv@mastodon.social
@sunfish same, dan!
=> More informations about this toot | More toots from regehr@mastodon.social
@sunfish Same ... for all of the above reasons. But mostly because I started programming at 19 and I knew then and there that it was what I was put on Earth to do, and will stop when they pry my cold dead fingers off the keyboard!
=> More informations about this toot | More toots from AlgoCompSynth@mastodon.social
@sunfish thank you. Not in anything remotely tech and i have no desire to fiddle with it for the same reasons. Not for everyone.
=> More informations about this toot | More toots from skoombidoombis@masto.ai
@sunfish Even if the AI thing was ethical and/or sustainable (which obviously, nah), it's trained on a shed-load of output from children on Reddit or wherever.
I've cut code for nearly half a century now, and that AI nonsense will never cut it.
=> More informations about this toot | More toots from bytebro@mastodonapp.uk
@bytebro @sunfish The ultimate Garbage In Garbage Out pipeline.
You wouldn't run a sewage line to your faucet.
=> More informations about this toot | More toots from earthshine@hackers.town
@sunfish AI is a tool - just like an IDE, or anything else in the toolkit. I think the misguided-ness comes in when people start to believe that the tool owns responsibility for knowledge vs. being an implement to use that knowledge effectively. AI is great at writing boilerplate, and I’m happy to let it do that part - but that doesn’t remove the transfer of understanding and ownership of the code from me to a GPU in the sky somewhere. I think people who understand that will be ok.
=> More informations about this toot | More toots from alatartheblue@hostux.social
@alatartheblue there is a large cohort of users who actively and aggressively do not want to understand that, the laundering of all responsibility is what they want
@sunfish
=> More informations about this toot | More toots from erisceleste@tech.lgbt
@erisceleste @sunfish sure, but that’s always been the case. It isn’t new. “The compiler didn’t catch that memory leak” - see the rise and importance of Rust and other memory-safe languages. From the human perspective, response is the same - those who want to dig in and care will rise to the top, those who just want the computer to catch it all will be in the “good enough” category.
=> More informations about this toot | More toots from alatartheblue@hostux.social
@sunfish 30+ year net/sysadmin/Cisco eng here: I've supported and worked with many developers, supported git branch management, etc.
Full stop on "AI" products. The abuse of our trust and our most private data, the massive energy over-consumption, and the political and social ramifications of being unable to trust online comms are nearly apocalyptic, and we have to push it all back.
No more smartphones here, for one.
=> More informations about this toot | More toots from Milkman76@mastodon.social
@sunfish @dasdom Everyone, especially developers, uses AI today.
If proper AI worked, we would have only bug-free applications.
Draw your own conclusion.
=> More informations about this toot | More toots from thuro@iosdev.space
@thuro @sunfish Even if true, what value do bug free apps have on a planet where nothing grows anymore?
=> More informations about this toot | More toots from dasdom@chaos.social
@sunfish same goes. My colleagues use it. Can't say I've noticed a gap in productivity, either.
=> More informations about this toot | More toots from neil_h@mastodon.social
@sunfish It’s like crypto for me, nah not interested.
=> More informations about this toot | More toots from Ralimba@veganism.social
@sunfish AI doesn't use any more energy than the average computer/server, even then it's kind of a nonissue considering other forms of energy use e.g. gasoline, which are far more productive to oppose
=> More informations about this toot | More toots from nano@vixen.zone
@sunfish same tbh. I think something a lot of people miss when they talk about this is that I like writing code. I like having an idea in my head, going click-clack on my keyboard, and testing whether my solution is correct.
=> More informations about this toot | More toots from davesh@hachyderm.io