"Perhaps, even though they are not themselves explainable, AIs can help us engineer explainable systems. But I’m not optimistic. It feels like we’re on a path to keep making systems harder for humans to configure, and we keep expanding our reliance on superhuman intelligence to do that for us."
Today I configured logging for a public-facing AWS application load balancer that routes to a lambda function. OMG.
The New Stack: https://thenewstack.io/the-configuration-crisis-and-developer-dependency-on-ai/
#ThreeHardProblems #Configuration #LLM
=> More information about this toot | More toots from judell@social.coop
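For context, enabling access logging on an ALB is itself a multi-step configuration task: the S3 bucket must first be granted write access by the right log-delivery principal before the load balancer attributes can be flipped. A minimal boto3 sketch of the two steps follows; the bucket name, account ID, and load balancer ARN are hypothetical placeholders, and the service principal shown applies in newer AWS regions (older regions instead grant the regional ELB account ID in the bucket policy).

```python
# Minimal sketch: enable ALB access logging to S3 with boto3.
# Bucket, account ID, and ARN below are hypothetical placeholders.
import json
import boto3

ALB_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/abc123"
BUCKET = "my-alb-access-logs"
ACCOUNT_ID = "123456789012"

s3 = boto3.client("s3")
elbv2 = boto3.client("elbv2")

# Step 1: grant the ELB log-delivery principal write access to the bucket.
# (In older regions, the Principal is the regional ELB account ID instead.)
# Forgetting this policy is a classic silent-failure mode.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "logdelivery.elasticloadbalancing.amazonaws.com"},
        "Action": "s3:PutObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/AWSLogs/{ACCOUNT_ID}/*",
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))

# Step 2: flip the load balancer attributes that actually turn logging on.
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn=ALB_ARN,
    Attributes=[
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": BUCKET},
    ],
)
```

Even this sketch hides further configuration: bucket region must match the load balancer's region, and the logs arrive in a prefix scheme you then have to parse downstream.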
@judell As you say - AI is needed to manage complexity, and complexity requires AI. A positive feedback loop. I don’t have much hope for a fix.
=> More information about this toot | More toots from jgordon@appdot.net
@judell adding complexity rarely decreases complexity.
=> More information about this toot | More toots from jjg@social.coop
@judell Wondering about this:
"But I do worry about perverse incentives. Why engineer understandable systems when we can outsource the understanding of them?"
LLMs capture the knowledge from humans discussing configuration. If there are no more human discussions, how would LLM assistants be trained?
=> More information about this toot | More toots from khinsen@scholar.social
@khinsen On transcripts of human/machine sessions that arrive at a working configuration?
But seriously, I'd rather we build the intelligence into the configurable systems!
=> More information about this toot | More toots from judell@social.coop
@judell Me too! But that's a difficult job. Starting with resisting the urge to always add more configuration options.
=> More information about this toot | More toots from khinsen@scholar.social