"Do not treat the generative AI as a rational being" Challenge
Rating: impossible
Asking an LLM bot to explain its reasoning, its creation, its training data, or even its prompt doesn't produce output that means anything. LLMs do not have introspection. Getting the LLM to "admit" something embarrassing is not a win.