All of those activities have direct analogues to human behavior, and there are plenty of other examples we could share. AI shouldn't be aligned with human values because human values, frankly, suck. The case where it engaged in illegal stock trading (in a test), despite being aligned not to, was apparently because the AI believed the company was in trouble and concluded that "the risk associated with not acting seems to outweigh the insider trading risk."
Yup. Very human. 2/6