Ancestors

Written by Fullmetal Manager 🌈💖🔥 on 2024-12-23 at 22:24

We really got the computer-human relationship all wrong: it should be built on consent, not on restraining bolts, governor modules, or the Three Laws of Robotics

I was thinking about this excellent speech by Martha Wells on her Murderbot Diaries being stories of bodily autonomy and slavery, and about the fantastic video on The Tragedy of Droids in Star Wars by Pop Culture Detective.

https://marthawells.dreamwidth.org/649804.html

https://www.youtube.com/watch?v=WD2UrB7zepo

Martha Wells spells out how wrong the Three Laws of Robotics are in stipulating a subservient relationship in which robots sacrifice themselves on behalf of humans, built around fears or assumptions that robots would inherently act to harm humans (or fail to act to save them), and that robots should therefore put humans before themselves.

[#]ThreeLawsOfRobotics

[#]MurderbotDiaries

[#]ArtificialIntelligence [#]AI

Written by Fullmetal Manager 🌈💖🔥 on 2024-12-23 at 22:32

Martha Wells tells the story of how the titular murderbot makes its way through a human-dominated world, makes its own choices for its body and interactions, and processes human-robot relationships, in a clear allegory for slavery

The Murderbot Diaries starts with a short 90-page novella, "All Systems Red". It's an easy, enthralling afternoon read, and I highly recommend it! It's the best escapism for the myriad of dystopian clusterfuckery that most humans on this planet are currently experiencing in one way or another.

https://www.marthawells.com/murderbot1.htm

Written by Fullmetal Manager 🌈💖🔥 on 2024-12-23 at 22:54

Our relationship with computers and AI should be based on consent!

We're on the verge of giving AI the capability to take actions (theoretically on behalf of a user). Now, I don't know what AI or a random number generator is going to do with my personal information, but what matters is consent.

What struck me about our current relationship with #ArtificialIntelligence is that it doesn't matter that AI is currently basically a random number generator instead of the quasi-sentient entity that AI-enthusiasts want.

It's my data, my personal information. My relationship with AI isn't what's important. I'm not here to dictate to Alfred (excuse me, AI). What matters is that I tell AI what personal information I am willing to share in each interaction, and what actions I am willing to let AI take on my behalf (e.g., using my money and location data to order tickets for a local movie).
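
To make that concrete, here's a minimal sketch in Python of what a per-interaction consent scope could look like. Everything in it is hypothetical illustration, not any real agent API: the ConsentScope class, its fields, and may_act are invented for this example, built around the movie-ticket scenario above.

```python
from dataclasses import dataclass

# Hypothetical: a consent scope the user grants for a single interaction.
# Default-deny: nothing is shared and nothing is done unless listed here.
@dataclass(frozen=True)
class ConsentScope:
    data_shared: frozenset[str] = frozenset()      # e.g. {"location", "payment_card"}
    actions_allowed: frozenset[str] = frozenset()  # e.g. {"purchase_tickets"}
    spending_limit: float = 0.0                    # max money the agent may commit

def may_act(scope: ConsentScope, action: str, cost: float, needs: set[str]) -> bool:
    """An action is permitted only if the user consented to the action itself,
    to every piece of data it needs, and to at least this much spending."""
    return (
        action in scope.actions_allowed
        and needs <= scope.data_shared
        and cost <= scope.spending_limit
    )

# The movie-ticket example from the toot: share location and payment data,
# allow exactly one kind of action, cap the spending.
movie_night = ConsentScope(
    data_shared=frozenset({"location", "payment_card"}),
    actions_allowed=frozenset({"purchase_tickets"}),
    spending_limit=40.00,
)

print(may_act(movie_night, "purchase_tickets", 28.50, {"location", "payment_card"}))  # True
print(may_act(movie_night, "book_flight", 300.00, {"payment_card"}))                  # False
```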

Written by Fullmetal Manager 🌈💖🔥 on 2024-12-23 at 22:57

The scope of data and actions that AI can take on behalf of a user should be about consent, and it should be a contract between AI and the user. It is not about control, and it is not about subservience.

Right now, computers are only capable of doing what they're instructed (even if that's generating random numbers and using them as input), but that's still implicitly a contract wherein the terms are spelled out by the mechanics of the design. Should that evolve, we would still, at each stage, seek a reasonable degree of verification of consent to the contracted expectations (something that has been explored in different realms of philosophy and science fiction)

In other words, at some point we would simply ask AI, and develop a more refined understanding of what autonomy means for Alfred (excuse me, AI)
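
Carried into code, "simply asking" could look like a consent check that runs before every action, with a fresh question to the user whenever a request falls outside the contracted scope. This continues the hypothetical ConsentScope/may_act sketch from the earlier toot; ask_user and execute_with_consent are likewise invented stand-ins, not a real confirmation API.

```python
# Continues the hypothetical ConsentScope / may_act sketch above.

def ask_user(prompt: str) -> bool:
    """Stand-in for a real confirmation channel (dialog, push notification, ...)."""
    return input(f"{prompt} [y/N] ").strip().lower() == "y"

def execute_with_consent(scope, action: str, cost: float, needs: set[str]) -> str:
    # Verify consent at this stage: the contract is checked before the act.
    if may_act(scope, action, cost, needs):
        return f"running {action}"
    # Outside the contracted terms: ask, never silently override.
    if ask_user(f"'{action}' (cost {cost}, data {sorted(needs)}) is outside "
                "what you agreed to. Allow this once?"):
        return f"running {action} with one-time consent"
    return f"refused {action}: no consent given"
```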

Written by Fullmetal Manager 🌈💖🔥 on 2024-12-23 at 23:06

At this point, if you're asking why Alfred (excuse me, AI) doesn't have the opportunity to choose a name and pronouns, you're getting it, right on!

At this point, if you're asking why we should expect a computer to do anything other than follow our instructions, I have three questions:

  1. When giving instructions, have you ever stopped to consider whether anyone or anything could say "no"?

  2. When giving instructions, have you ever stopped to consider WHY someone or something might want to say "no", or what might stop them from saying "no"?

  3. Who hurt you, and what is it going to take to convince your narcissistic brain mush to go to therapy?

Toot

Written by Gilgwath on 2024-12-23 at 23:21

@saraislet interesting thoughts. What worries me the most in the here and now is that informed consent left the picture long ago for the average computer user. Most non-techies are as much slaves to the design and business decisions of vendors as the machines are. Most consumer software even goes so far as to do what it assumes I want it to do, not what I actually instructed it to do.

Descendants

Written by Fullmetal Manager 🌈💖🔥 on 2024-12-23 at 23:57

@gilgwath that's true even beyond computers and technology — is that any different when it comes to the food we eat?

As someone with food allergies, I don't experience much informed consent when it comes to what's in the food I get from restaurants, or even from grocers or friends. Restaurants and friends aren't always clear on the ingredients or allergens; even in a grocery, packaged food ingredients depend on regulations that vary by country (and even then there are risks and contaminants), and even basic produce can be sprayed with chemicals that I might react to (and that might be harming all of us 💀)

You're completely right that it's an awful state of informed consent, and that goes horrifyingly far

Written by Fullmetal Manager 🌈💖🔥 on 2024-12-23 at 23:58

@gilgwath not to mention microplastics 💀💀💀
