I have a strong suspicion that #LLM-style #ai systems don't know when to shut up.
More technically, they don't know what they don't know, so their string-termination conditions aren't strongly derived from semantics.
Empirically this looks like: "non-interactive Turing tests can distinguish LLM from human at better than chance by picking the shorter / more factually dense response as the human."
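A minimal sketch of that length heuristic in Python, under stated assumptions: the paired responses are toy placeholders (not real test data), and guess_human is a hypothetical helper name. In practice each pair would hold a human and an LLM answer to the same prompt, with the human's identity known for scoring.

```python
# Sketch of the length-heuristic "non-interactive Turing test":
# guess that the shorter (denser) of two paired responses is the human.

pairs = [
    # (human_response, llm_response) -- hypothetical examples
    ("Paris.", "Great question! The capital of France is Paris, a city renowned for..."),
    ("About 3.7 billion years ago.", "Life on Earth is believed to have first emerged..."),
    ("No.", "That's an interesting point! While there are many perspectives..."),
]

def guess_human(a: str, b: str) -> str:
    """Heuristic: guess the shorter response is the human one."""
    return a if len(a) <= len(b) else b

correct = sum(guess_human(h, m) == h for h, m in pairs)
accuracy = correct / len(pairs)
# Accuracy above 0.5 means the heuristic beats random guessing.
print(f"length-heuristic accuracy: {accuracy:.2f}")
```

If this simple rule scores above chance on real paired responses, that would support the claim that verbosity alone leaks "LLM-ness".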