"Do not treat the generative AI as a rational being" Challenge
Rating: impossible
Asking an LLM bot to explain its reasoning, its creation, its training data, or even its prompt doesn't produce output that means anything. LLMs have no introspection. Getting the LLM to "admit" something embarrassing is not a win.
=> More information about this toot | More toots from markgritter@mathstodon.xyz
@markgritter
No, really, look, I did it
=> More information about this toot | More toots from inthehands@hachyderm.io
@inthehands OK, you are a person on the Internet (and also one I've met in real life) and I kind of want to argue with you about whether or not you accomplished the challenge. :)
But the tragedy here is that I have felt the same way about LLMs even though I know it is futile. Once you are chatting in a text box, some sort of magic takes over and we ascribe intentionality.
=> More information about this toot | More toots from markgritter@mathstodon.xyz
@markgritter
Yeah, it’s a sad fact of life that we don’t get to opt out of being human just because we know better.
=> More information about this toot | More toots from inthehands@hachyderm.io
@markgritter @inthehands It's like what the "uncanny valley" says about our vision: it is deeply compromised.
We are conditioned to view intelligible text (in linguistics every utterance is text, even if spoken) as a sign of intelligence.
Likewise, even an obviously sentient person will be downgraded by our subconscious if we can't make sense of what they say (edited to add: e.g. that wino shouting at the bus stop, that disabled person, that foreigner on the TV). We can't not do it, and it takes a lot of conscious self-conditioning to push it aside.
Consciousness is self-deceit 90% of the time.
Self-deceit with utility.
=> More information about this toot | More toots from androcat@toot.cat