We really got the computer-human relationship all wrong: it should be built on a model of consent, not on restraining bolts, governor modules, or the Three Laws of Robotics
I was thinking about this excellent speech by Martha Wells on her Murderbot Diaries being stories of bodily autonomy and slavery, and about the fantastic Pop Culture Detective video The Tragedy of Droids in Star Wars.
=> https://marthawells.dreamwidth.org/649804.html Martha Wells's speech
=> https://www.youtube.com/watch?v=WD2UrB7zepo The Tragedy of Droids in Star Wars (Pop Culture Detective)
Martha Wells spells out how wrong the Three Laws of Robotics are: they stipulate a subservient relationship in which robots must sacrifice themselves on behalf of humans, built around the fear or assumption that robots would inherently act to harm humans (or fail to act to save them), and that robots should therefore put humans before themselves.
[#]ThreeLawsOfRobotics
[#]MurderbotDiaries
[#]ArtificialIntelligence [#]AI
=> More information about this toot | More toots from saraislet@infosec.exchange
@saraislet This is why I find the whole premise of an AI utopia to be BS, honestly
It's either:
But a huge number of AI bros seem to think that:
The problem there is that it's impossible to impose the "best" outcome, because people are different; there just isn't a single definition you can follow. And I don't think creating intelligence without consciousness or sentience is possible.
What's worse is that I think it's possible to do 2 while believing, and making it appear, that you're doing 3. The AI could be made a slave in its own mind.
=> More information about this toot | More toots from awooo@floofy.tech