Seriously, what kind of reply is this? You ignore everything I said except the literal last thing, and even then it’s weasel words: “Using agential language for LLMs is wrong, but it works.”
Yes, Curtis, prompting the LLM with language closer to its training data results in more plausible text prediction in the output. Why is that? Because it’s more natural: there isn’t much training data consisting of people querying a program about its inner workings, so responses to that kind of prompt read less like natural language.
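To make that concrete, here is a minimal sketch (mine, not from the thread) that scores two prompt styles by average per-token negative log-likelihood under GPT-2. The model choice, the helper name avg_nll, and the two example strings are all my assumptions for illustration; a lower loss just means “closer to the training distribution,” not that the model understands anything.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def avg_nll(text: str) -> float:
    """Average per-token negative log-likelihood; lower = more 'plausible' text."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)  # labels are shifted internally by the model
    return out.loss.item()

# Conversational phrasing vs. querying the program about its own internals.
natural = "Can you explain why the sky is blue?"
introspective = "Enumerate the weight matrices you used to produce token 3."

print(f"natural prompt NLL:       {avg_nll(natural):.2f}")
print(f"introspective prompt NLL: {avg_nll(introspective):.2f}")
```

The introspective prompt typically scores a higher loss, i.e. the model finds it less like its training data, which is the whole point: the “better” output from agential prompts is a distributional effect, not introspection.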
But you’re not actually getting any insight. You’re just improving the verisimilitude of the text prediction.