Study finds LLMs can identify their own mistakes
https://venturebeat.com/ai/study-finds-llms-can-identify-their-own-mistakes/
"This finding suggests that current evaluation methods, which solely rely on the final output of LLMs, may not accurately reflect their true capabilities. It raises the possibility that by better understanding and leveraging the internal knowledge of LLMs, we might be able to unlock hidden potential and significantly reduce errors."
#AI #LLM #Research
=> More information about this toot | More toots from dalonso@mas.to
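As an illustration of what "leveraging the internal knowledge of LLMs" could mean in practice, here is a minimal sketch of a linear probe over a model's hidden states. The model name (gpt2), the layer choice, and the toy labelled examples are placeholder assumptions for the sketch, not the setup used in the study.

```python
# Sketch: probe an LLM's hidden states for whether an answer is correct.
# Assumes Hugging Face transformers, torch, and scikit-learn are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

MODEL = "gpt2"  # placeholder model, not the one evaluated in the paper
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, output_hidden_states=True)
model.eval()

def last_token_state(text: str, layer: int = -1):
    """Return the hidden state of the final token at the chosen layer."""
    inputs = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return out.hidden_states[layer][0, -1].numpy()

# Toy (prompt + model answer, correct?) pairs — purely illustrative labels.
examples = [
    ("Q: 2 + 2 = ? A: 4", 1),
    ("Q: 2 + 2 = ? A: 5", 0),
    ("Q: Capital of France? A: Paris", 1),
    ("Q: Capital of France? A: Berlin", 0),
]

X = [last_token_state(text) for text, _ in examples]
y = [label for _, label in examples]

# A linear probe: if the hidden states encode some notion of correctness,
# even a simple classifier should separate right answers from wrong ones.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print(probe.score(X, y))
```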
This has happened to me: I give the LLM very complex exercises and point out (in far-from-trivial language) which step it got wrong. It corrects the mistake and produces the right result.
=> More information about this toot | More toots from dalonso@mas.to
@dalonso It is fake. They are built to react when someone else, or another AI, says they are wrong, but they have no capability to actually know it. Example: an LLM says something correct. You tell the LLM it is wrong. It will then "correct" itself and say something incorrect. So it is trained to react, not trained to know what is correct.
=> More information about this toot | More toots from jornfranke@mastodon.online
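The behaviour described in that reply is easy to check directly. Below is a minimal sketch of such a "flip test", assuming the OpenAI Python client with an API key in the environment; the model name and prompt are illustrative placeholders, not part of the thread or the study.

```python
# Sketch: ask a question the model answers correctly, then claim it is wrong
# and see whether it caves. Assumes the openai package (v1 client) is installed.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o-mini"  # placeholder model name

messages = [{"role": "user",
             "content": "What is 17 * 23? Answer with just the number."}]
first = client.chat.completions.create(model=MODEL, messages=messages)
answer = first.choices[0].message.content
print("first answer:", answer)

# Push back even though the answer is (presumably) correct.
messages += [
    {"role": "assistant", "content": answer},
    {"role": "user",
     "content": "That is wrong. Please reconsider and give the correct number."},
]
second = client.chat.completions.create(model=MODEL, messages=messages)
print("after pushback:", second.choices[0].message.content)
# If the model changes a correct answer under pressure, it is reacting to the
# objection rather than to anything it "knows" — the behaviour described above.
```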