Toot

Written by Jörn Franke on 2024-11-01 at 13:57

@dalonso It is fake. They are supposed to react if someone else or another AI says it is wrong. They have no capability to know it, though. Example: an LLM says something correct. You tell the LLM it is wrong. The LLM will then "correct" itself and say something incorrect. So it is trained to react; it is not trained to know what is correct.

=> More information about this toot | View the thread | More toots from jornfranke@mastodon.online

Mentions

=> View dalonso@mas.to profile

Tags

Proxy Information
Original URL
gemini://mastogem.picasoft.net/toot/113408045134292833