We really got the computer-human relationship all wrong: it should be a model of consent, rather than one of restraining bolts, governor modules, or the Three Laws of Robotics
I was thinking about this excellent speech by Martha Wells on her Murderbot Diaries being stories of bodily autonomy and slavery, and about the fantastic video on The Tragedy of Droids in Star Wars by Pop Culture Detective.
https://marthawells.dreamwidth.org/649804.html
https://www.youtube.com/watch?v=WD2UrB7zepo
Martha Wells spells out how wrong the Three Laws of Robotics are in stipulating a subservient relationship in which robots must sacrifice themselves on behalf of humans, built around the fear or assumption that robots would inherently act to harm humans (or fail to act to save them), and that therefore robots should put humans before themselves.
[#]ThreeLawsOfRobotics
[#]MurderbotDiaries
[#]ArtificialIntelligence [#]AI
=> More information about this toot | More toots from saraislet@infosec.exchange
@saraislet Hmmm. Industrial health and safety regs are written in the blood of workers who were harmed by machines, not because machines inherently act to harm humans, but because humans are fragile squishy meat sacks, and the machines hadn't been designed not to harm them yet.
Making machines not kill or harm humans doesn't happen by default, or by accident. It takes planning and safeguards.
=> More information about this toot | More toots from aspragg@ohai.social
@saraislet And fully-sentient AGIs with moral frameworks won't appear out of nowhere. They'll be built on slightly worse AIs, which were built on worse AIs than that, which absolutely would have needed safeguards.
When do we take the safeguards away? At what point is one of those AIs capable enough of long-term/moral reasoning that it won't run a human over while getting from A to B, just because that was the shortest route and it wasn't specifically programmed to avoid harming humans?
=> More information about this toot | More toots from aspragg@ohai.social
@aspragg we teach humans rules, and then those who are capable develop moral frameworks
We can still use rules and safeguards, but in the same way we teach humans not to harm humans, and teach humans driving vehicles that other humans are fleshy meat sacks that go squish. Those safeguards don't have to lean overwhelmingly on robots sacrificing themselves for human safety; they can simply emphasize human safety.
=> More information about this toot | More toots from saraislet@infosec.exchange