Ancestors

Toot

Written by Mark Gritter on 2025-01-07 at 01:49

"Do not treat the generative AI as a rational being" Challenge

Rating: impossible

Asking an LLM bot to explain its reasoning, its creation, its training data, or even its prompt doesn't result in an output that means anything. LLMs do not have introspection. Getting the LLM to "admit" something embarrassing is not a win.

=> More information about this toot | More toots from markgritter@mathstodon.xyz

Descendants

Written by Paul Cantrell on 2025-01-07 at 01:55

@markgritter

No, really, look, I did it

=> View attached media

=> More information about this toot | More toots from inthehands@hachyderm.io

Written by Mark Gritter on 2025-01-07 at 04:59

@inthehands OK, you are a person on the Internet (and also one I've met in real life), and I kind of want to argue with you about whether or not you accomplished the challenge. :)

But the tragedy here is that I have felt the same way about LLMs even though I know that it is futile. Once you are chatting in a textbox, some sort of magic takes over and we ascribe intentionality.

=> More information about this toot | More toots from markgritter@mathstodon.xyz

Written by Paul Cantrell on 2025-01-07 at 05:05

@markgritter

Yeah, it’s a sad fact of life that we don’t get to opt out of being human just because we know better.

=> More information about this toot | More toots from inthehands@hachyderm.io

Written by Androcat on 2025-01-07 at 10:07

@markgritter @inthehands It's like what the "uncanny valley" says about our vision: it is deeply compromised.

We are conditioned to view intelligible text (in linguistics, every utterance is text, even if spoken) as a sign of intelligence.

Likewise, even an obviously sentient person will be downgraded by our subconscious if we can't make sense of what they say (edited to add: e.g. that wino shouting at the bus stop, that disabled person, that foreigner on the TV). We can't not do it, and it takes a lot of conscious self-conditioning to push it aside.

Consciousness is self-deceit 90% of the time.

Self-deceit with utility.

=> More information about this toot | More toots from androcat@toot.cat

Written by Oliphantom Menace on 2025-01-07 at 01:59

@markgritter Word.

https://oliphant.social/@oliphant/113778171747995123

=> More information about this toot | More toots from oliphant@oliphant.social

Written by Cassandrich on 2025-01-07 at 02:29

@markgritter You just wrote a prompt where the follow-up autocorrect text is admission-shaped.

=> More information about this toot | More toots from dalias@hachyderm.io

Written by quixote on 2025-01-07 at 02:45

@markgritter The description that does fit AI of the common variety is autocomplete on steroids.

You wouldn't expect autocomplete to know what you mean. You expect it to complete the word. Likewise with AI. It's just bigger, so it tries to complete the sentence.
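
[Editor's note: as a toy illustration of that "autocomplete on steroids" framing — my own sketch, not anything from this thread or from any real model — here is a bigram completer that picks each next word purely from counts of what followed it before. An LLM does the same kind of next-token prediction, just with vastly more context and parameters, and with no lookup of "what you meant".]

```python
# A minimal sketch: bigram "autocomplete" that predicts the next word
# purely from counts of which word followed which in a tiny corpus.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def complete(word, steps=4):
    """Extend `word` by sampling each next word from the observed counts."""
    out = [word]
    for _ in range(steps):
        options = following.get(out[-1])
        if not options:
            break
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(complete("the"))  # e.g. "the cat sat on the"
```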

=> More information about this toot | More toots from quixote@mastodon.nz

Written by Maarten Pelgrim on 2025-01-07 at 07:28

@markgritter More and more I think of our current AIs as the computers from the 1950s: huge, clunky, slow, and for the most part useless.

Maybe in 10 or 20 years…

=> More information about this toot | More toots from maartenpelgrim@mastodon.nl

Written by Androcat on 2025-01-07 at 10:19

@maartenpelgrim @markgritter It won't get any better, though.

They are using a crude math hack to mimic a brain.

But they can't get it to work at scale.

All they have is 5000 neurons' worth, a pitifully small brain even in the insect kingdom.

And there are severe bottlenecks against scaling it.

=> More information about this toot | More toots from androcat@toot.cat

Written by Maarten Pelgrim on 2025-01-07 at 18:50

@androcat @markgritter Yes, I'm beginning to suspect that later generations will look at all of this as silly business.

=> More information about this toot | More toots from maartenpelgrim@mastodon.nl

Written by PaulDavisTheFirst on 2025-01-07 at 14:58

@markgritter the problem goes deeper than this. See Dennett's "The Intentional Stance" which is fundamentally about how we lean strongly toward assigning intentionality to systems that we have no reason to believe actually have it. Dennett's claim is that we do this because it works as a way of predicting behavior. See it working on your partner, friend or dog.

We're stuck with this, I think, which is very, very scary to me in the context of LLMs.

=> More information about this toot | More toots from PaulDavisTheFirst@fosstodon.org

Proxy Information
Original URL
gemini://mastogem.picasoft.net/thread/113784558928920875