@dalonso It is fake. They are supposed to react if a someone else or another AI say it is wrong. They have no capabilities though to know it. Example: A LLM says something correct. You tell the LLM it is wrong. Then the LLM will correct itself and say sth. incorrect. So it is trained to react, it is not trained to know what is correct.
=> More informations about this toot | View the thread | More toots from jornfranke@mastodon.online
=> View dalonso@mas.to profile
text/gemini
This content has been proxied by September (3851b).