@xgebi me, explaining to AIBros that that's not how it works: an LLM can never have sufficient context to replicate human understanding. There is no context you can pass an LLM that will make it capable of thought. Context, which is required for an LLM to generate meaningful output, can never be comprehensive enough for the LLM to think.
=> More information about this toot | View the thread | More toots from b4ux1t3@hachyderm.io
=> View xgebi@hachyderm.io profile