"Do not treat the generative AI as a rational being" Challenge
Rating: impossible
Asking an LLM bot to explain its reasoning, its creation, its training data, or even its prompt doesn't produce output that means anything. LLMs do not have introspection. Getting the LLM to "admit" something embarrassing is not a win.
=> More information about this toot | More toots from markgritter@mathstodon.xyz
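To make the "no introspection" point concrete: a model's answer to "explain your reasoning" comes out of the same next-token loop as every other reply, with no side channel into its weights, training data, or prompt history. Below is a minimal sketch, assuming nothing about any real system; the tiny corpus and function names are invented for illustration. It emits reasoning-shaped text purely because such text is statistically likely.

```python
import random
from collections import defaultdict

# Invented toy "training set": the model has seen humans talk about
# reasoning, so reasoning-talk is simply a likely continuation.
training_text = (
    "my reasoning is that the data suggests a pattern "
    "my reasoning is that the evidence points this way"
).split()

# Bigram table: which words have followed which in the training text.
table = defaultdict(list)
for prev, nxt in zip(training_text, training_text[1:]):
    table[prev].append(nxt)

def reply(prompt, max_words=7):
    # The "explanation" is generated by the same loop as any other
    # text; nothing here reads the model's own state or provenance.
    words = prompt.split()[-1:]
    for _ in range(max_words):
        candidates = table.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(reply("explain your reasoning"))
# e.g. "reasoning is that the evidence points this way":
# admission-shaped output, zero introspection.
```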
@markgritter
No, really, look, I did it
=> More information about this toot | More toots from inthehands@hachyderm.io
@inthehands OK, you are a person on the Internet (and also one I've met in real life) and I kind of want to argue with you about whether or not you accomplished the challenge. :)
But the tragedy here is that I have felt the same way about LLMs even though I know it is futile. Once you are chatting in a textbox, some sort of magic takes over and we ascribe intentionality.
=> More information about this toot | More toots from markgritter@mathstodon.xyz
@markgritter
Yeah, it’s a sad fact of life that we don’t get to opt out of being human just because we know better.
=> More information about this toot | More toots from inthehands@hachyderm.io
@markgritter @inthehands It's like what the "uncanny valley" says about our vision: it is deeply compromised.
We are conditioned to view intelligible text (in linguistics every utterance is text, even if spoken) as a sign of intelligence.
Likewise, even an obviously sentient person will be downgraded by our subconscious if we can't make sense of what they say (edited to add: e.g. that wino shouting at the bus stop, that disabled person, that foreigner on the TV). We can't not do it, and it takes a lot of conscious self-conditioning to push it aside.
Consciousness is self-deceit 90% of the time.
Self-deceit with utility.
=> More information about this toot | More toots from androcat@toot.cat
@markgritter Word.
https://oliphant.social/@oliphant/113778171747995123
=> More information about this toot | More toots from oliphant@oliphant.social
@markgritter You just wrote a prompt where the follow-up autocorrect text is admission-shaped.
=> More information about this toot | More toots from dalias@hachyderm.io
@markgritter The description that does fit AI of the common variety is "autocomplete on steroids."
You wouldn't expect autocomplete to know what you mean. You expect it to complete the word. Likewise with AI. It's just bigger, so it tries to complete the sentence.
=> More information about this toot | More toots from quixote@mastodon.nz
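The "autocomplete on steroids" framing can be shown at toy scale. This is a minimal sketch with an invented corpus, not any real system: it builds the conditional table P(next word | current word) that a bigram autocompleter consults. An LLM swaps the lookup table for a transformer over billions of parameters, but "predict the next token, append, repeat" is the same shape of computation.

```python
from collections import Counter, defaultdict

# Invented toy corpus; real models train on trillions of tokens.
corpus = "the cat sat on the mat and the cat ate the food".split()

# Count how often each word follows each other word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word_probs(word):
    """P(next | current): the whole "prediction" is this conditional
    distribution; no meaning or intent is represented anywhere."""
    c = counts[word]
    total = sum(c.values())
    return {w: n / total for w, n in c.items()}

print(next_word_probs("the"))
# {'cat': 0.5, 'mat': 0.25, 'food': 0.25}
```

Scaling the table up and replacing it with a learned function buys fluency, not comprehension, which is the thrust of the analogy.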
@markgritter More and more I think of our current AIs as the computers from the 1950s: huge, clunky, slow, and for the most part useless.
Maybe in 10 or 20 years...
=> More information about this toot | More toots from maartenpelgrim@mastodon.nl
@maartenpelgrim @markgritter It won't get any better, though.
They are using a crude math hack to mimic a brain.
But they can't get it to work at scale.
All they have is 5000 neurons' worth, a pitifully small brain even among insects.
And there are severe bottlenecks against scaling it.
=> More information about this toot | More toots from androcat@toot.cat
@androcat @markgritter Yes, I'm beginning to suspect that later generations will look at all of this as silly business.
=> More information about this toot | More toots from maartenpelgrim@mastodon.nl
@markgritter the problem goes deeper than this. See Dennett's "The Intentional Stance," which is fundamentally about how we lean strongly toward assigning intentionality to systems that we have no reason to believe actually have it. Dennett's claim is that we do this because it works as a way of predicting behavior. See it working on your partner, friend, or dog.
We're stuck with this, I think, which is very, very scary to me in the context of LLMs.
=> More information about this toot | More toots from PaulDavisTheFirst@fosstodon.org