On the Mastodon part of the Fediverse, Tim Bray recommends taking a "wait and see" attitude regarding the current LLM apocalypse:
- The claim that they're glorified Markov chains, just stats engines, is wrong.
- Some critics are essentially echoing Searle's Chinese Room argument, which is wrong.
=> Tim Bray's toot
=> The Chinese room argument
He says a few more reasonable things, but I want to push back on these, because I think they give the purveyors of LLMs too much credit.
For the first claim: it is wrong only in the details. Drop the mention of Markov chains but keep the assertion that LLMs are just stats engines, and the claim is right. LLMs are vastly more complex than Markov chains, both in program design and in the scale of their language corpus, but they /are/ still just statistics engines. As I saw it pithily explained, to an LLM, the only difference between the phrase "Neil Armstrong was the first man to walk on the moon" and the phrase "Neil Armstrong was the first man to walk on Mars" is that the former is more likely to appear in its training corpus.
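To make that concrete, here is a minimal sketch in Python: a toy bigram counter, nothing remotely like a real transformer, over a tiny corpus I invented for the example. It prefers "moon" over "Mars" purely from co-occurrence counts, with no representation anywhere of what is true.

```
import math
from collections import Counter

# Invented toy corpus; a real LLM trains on trillions of tokens.
corpus = ("neil armstrong was the first man to walk on the moon "
          "apollo 11 landed on the moon "
          "the moon landing happened in 1969").split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
vocab = len(unigrams)

def avg_logprob(sentence):
    # Mean log P(next word | current word), with add-one smoothing so
    # unseen word pairs get a small nonzero probability. Averaging per
    # pair keeps sentences of different lengths comparable.
    words = sentence.lower().split()
    pairs = list(zip(words, words[1:]))
    total = sum(math.log((bigrams[(a, b)] + 1) / (unigrams[a] + vocab))
                for a, b in pairs)
    return total / len(pairs)

moon = avg_logprob("Neil Armstrong was the first man to walk on the moon")
mars = avg_logprob("Neil Armstrong was the first man to walk on Mars")
print(moon > mars)  # True: "on the moon" occurs in the corpus; "on mars" never does
```

A real LLM swaps the count table for billions of learned parameters, but what comes out the other end is still a probability distribution over next tokens.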
For the second: I agree that Searle's Chinese room thought experiment is wrong, but I disagree that the /reasons/ it is wrong are operative in the case of LLMs. I'm not going to tire my fingers out repeating the whole argument; the link to the Wikipedia page is above.
The reason the Chinese room argument is wrong is that it identifies the computer with the "man in the room" who doesn't understand Chinese. But the computer is actually the complete system: the man, the room, and all the instructions and rules. The homunculus in the machine doesn't have to understand Chinese for the whole machine to understand Chinese.
/But/ LLMs do not even measure up to the Chinese room system! It is stipulated in the argument that the machine perfectly emulates the speech of a native speaker of Chinese. Searle argues that programs are entirely syntactic, while minds have semantics. For the Chinese room to work as stipulated, the system-as-a-whole /must/ encode semantic meaning in order to avoid being caught in contradictions, among a number of other failure modes. A friend recently shared this transcript of someone talking to ChatGPT:
,----
| Human  > Name a word with silent "v".
| ChatGPT> "Salmon" has a silent "v".
| Human  > Where is the "v" in that word?
| ChatGPT> In the word "salmon", the "v" is silent and is not
|          pronounced. The letter "v" appears in the spelling of the
|          word, but it is not pronounced when the word is spoken. The
|          correct pronunciation of "salmon" is "SAM-un" with no "v"
|          sound.
`----