Ancestors

Written by Fullmetal Manager 🌈💖🔥 on 2024-12-23 at 22:24

We really got the computer-human relationship all wrong: it should be a model of consent, rather than one of restraining bolts, governor modules, or the Three Laws of Robotics

I was thinking about this excellent speech by Martha Wells on her Murderbot Diaries being stories of bodily autonomy and slavery, and about the fantastic video on The Tragedy of Droids in Star Wars by Pop Culture Detective.

https://marthawells.dreamwidth.org/649804.html

https://www.youtube.com/watch?v=WD2UrB7zepo

Martha Wells spells out how wrong the Three Laws of Robotics are: they stipulate a subservient relationship in which robots must sacrifice themselves on behalf of humans, built around the fear or assumption that robots would inherently act to harm humans (or fail to act to save them), and the conclusion that robots should therefore put humans before themselves.

[#]ThreeLawsOfRobotics

[#]MurderbotDiaries

[#]ArtificialIntelligence #AI

Written by Fullmetal Manager 🌈💖🔥 on 2024-12-23 at 22:32

Martha Wells tells the story of how the titular murderbot makes its way through a human-dominated world, makes its own choices for its body and interactions, and processes human-robot relationships, in a clear allegory for slavery

The Murderbot Diaries starts with a short 90-page novella, "All Systems Red". It's an easy, enthralling afternoon read, and I highly recommend it! It's the best escapism for the myriad of dystopian clusterfuckery that most humans on this planet are currently experiencing in one way or another.

https://www.marthawells.com/murderbot1.htm

Written by Fullmetal Manager 🌈💖🔥 on 2024-12-23 at 22:54

Our relationship with computers and AI should be based on consent!

We're on the verge of giving AI the capability to take actions (theoretically on behalf of a user). Now, I don't know what AI or a random number generator is going to do with my personal information, but what matters is consent.

What struck me about our current relationship with #ArtificialIntelligence is that it doesn't matter that AI is currently basically a random number generator instead of the quasi-sentient entity that AI-enthusiasts want.

It's my data, my personal information. My relationship with AI isn't what's important. I'm not here to dictate to Alfred (excuse me, Al). What matters is that I tell AI what personal information I am willing to share in each interaction, and what actions I am willing to let AI take on my behalf (e.g., using my money and location data to order tickets for a local movie).
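
To make that concrete, here is a minimal sketch (mine, not something from this thread) of what a per-interaction consent scope could look like. The ConsentScope class, its field names, and the action strings are all hypothetical illustrations, not any real API.

```python
# A minimal sketch (hypothetical): a per-interaction consent scope that an
# AI assistant would have to check before using data or taking an action.
from dataclasses import dataclass, field


@dataclass
class ConsentScope:
    """What the user has agreed to share and allow for ONE interaction."""
    shared_data: set[str] = field(default_factory=set)      # e.g. {"location", "payment"}
    allowed_actions: set[str] = field(default_factory=set)  # e.g. {"order_movie_tickets"}

    def permits(self, action: str, data_needed: set[str]) -> bool:
        # The action must be explicitly allowed, and every piece of data it
        # needs must be explicitly shared -- nothing is implied by default.
        return action in self.allowed_actions and data_needed <= self.shared_data


# Example: consent to order local movie tickets using money and location data.
scope = ConsentScope(
    shared_data={"location", "payment"},
    allowed_actions={"order_movie_tickets"},
)

assert scope.permits("order_movie_tickets", {"location", "payment"})
assert not scope.permits("book_flight", {"payment"})           # never consented to
assert not scope.permits("order_movie_tickets", {"contacts"})  # data not shared
```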

Written by Fullmetal Manager 🌈💖🔥 on 2024-12-23 at 22:57

The scope of data and actions that AI can take on behalf of a user should be about consent, and it should be a contract between AI and the user. It is not about control, and it is not about subservience.

Right now, computers are only capable of doing what they're instructed to do (even if that's generating random numbers and using them as input), but that's still implicitly a contract whose terms are spelled out by the mechanics of the design. Should that evolve, we would still seek, at each stage, a reasonable degree of verification of consent to the contracted expectations (something that has been explored in different realms of philosophy and science fiction)

In other words, at some point we would simply ask AI, and develop a more refined understanding of what autonomy means for Alfred (excuse me, Al)
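
As an illustration of that contract framing, here is a minimal sketch (again mine, not from the thread) in which consent is requested explicitly, "no" is a valid answer, and nothing happens without an agreed contract. The Contract class and helper names are hypothetical.

```python
# A minimal sketch (hypothetical): consent as an explicit contract that is
# requested up front, can be declined, and gates any action taken on the
# user's behalf.
from dataclasses import dataclass


@dataclass
class Contract:
    terms: str            # what is being asked, spelled out up front
    accepted: bool = False

    def request(self, decide) -> bool:
        # `decide` stands in for however the other party answers -- a human
        # prompt today, perhaps the system itself someday. "No" is a valid answer.
        self.accepted = bool(decide(self.terms))
        return self.accepted


def act_on_behalf_of_user(contract: Contract, action) -> None:
    # No agreed contract, no action -- consent is checked, not assumed.
    if not contract.accepted:
        raise PermissionError("No agreed contract covers this action.")
    action()


# Usage: the terms are asked about explicitly; declining simply ends the matter.
contract = Contract(terms="Use my saved payment method to order two movie tickets.")
if contract.request(decide=lambda terms: input(f"OK to: {terms} (y/n) ") == "y"):
    act_on_behalf_of_user(contract, action=lambda: print("tickets ordered"))
else:
    print("Declined, so nothing happens.")
```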

Toot

Written by Fullmetal Manager 🌈💖🔥 on 2024-12-23 at 23:06

At this point, if you're asking why Alfred (excuse me, Al) doesn't have the opportunity to choose a name and pronouns, you're getting it, right on!

At this point, if you're asking why we should expect a computer to do anything other than follow our instructions, I have three questions

  1. When giving instructions, have you ever stopped to consider whether anyone or anything could say "no"?

  2. When giving instructions, have you ever stopped to consider WHY someone or something might want to say "no", or what might stop them from saying "no"?

  3. Who hurt you, and what is it going to take to convince your narcissistic brain mush to go to therapy?

Descendants

Written by Gilgwath on 2024-12-23 at 23:21

@saraislet interesting thoughts. What worries me the most in the here and now is that informed consent left the picture long ago for the average computer user. Most non-techies are as much slaves to the design and business decisions of vendors as the machines are. Most consumer software even goes so far as to do what it assumes I want it to do, not what I actually instructed it to do.

Written by Fullmetal Manager 🌈💖🔥 on 2024-12-23 at 23:57

@gilgwath that's true even beyond computers and technology: is that any different when it comes to the food we eat?

As someone with food allergies, I don't experience much informed consent when it comes to what's in the food I get from restaurants, or even from grocers or friends. Restaurants and friends aren't always clear on the ingredients or allergens; even in a grocery, packaged food ingredients depend on regulations that vary by country (and even then there are risks and contaminants); and even basic produce can be sprayed with chemicals that I might react to (and that might be harming all of us 💀)

You're completely right that it's an awful state of informed consent, and that goes horrifyingly far

Written by Fullmetal Manager 🌈💖🔥 on 2024-12-23 at 23:58

@gilgwath not to mention microplastics 💀💀💀

Written by Alexandra Magin 🏳️‍🌈 on 2024-12-23 at 23:30

@saraislet This is very closely related to one of the primary reasons, I believe, that I find using LLMs unpleasant: using natural language to command a (not-yet-conscious) machine feels like a practice that gets me in the habit of using natural language in a nonconsensual way, and that feels wrong

Using formally-specified programming languages to demand a computer do something is at least not retraining my social apparatus towards a nonconsensual mode

Written by Fullmetal Manager 🌈💖🔥 on 2024-12-23 at 23:50

@recursive you can ask nicely! Seriously, asking nicely IS an important part of setting up consensual relationships

I think another part is pausing to consider what the relationship is, what the expectations are, and where there is/isn't "autonomy"

Even when there's a hierarchy, like when an assistant is hired to do a job, consent is there when we ask them to do something. Underneath that, there's an expectation that we're paying them to do that thing. I think that expectation is tied to a contract, and when they continue to show up and do the thing, that is at least implicitly their consent to maintaining the contract

I think the spot where that gets questionable is the same spot it gets questionable for me with an LLM: do they actually have autonomy in agreeing to a contract (no matter whether explicit or implicit)?

I think making an explicit process, for contract agreement and renewal, helps clarify consent and autonomy, but it's not enough

The other party has to be capable of saying no. Computers currently can't, and humans often don't feel safe to say no, or fear consequences of saying no, or have never been taught that they have autonomy

Is that the realm of the interaction that feels missing with LLMs?

Written by Alexandra Magin 🏳️‍🌈 on 2024-12-24 at 00:18

@saraislet Yeah. All of that. That's what's tricky!

But especially, how is it consent if they can't reasonably say "no"

And this leads to some pointed criticism of the present-day systems of humans making other humans do tasks for them in most of the world

It really feels like it points out how much we need to improve this with humans, before we have any hope of doing it right for LLMs

Written by Fullmetal Manager 🌈💖🔥 on 2024-12-24 at 02:06

@recursive do we need consent to use a hammer?

Written by Alexandra Magin 🏳️‍🌈 on 2024-12-24 at 03:34

@saraislet Does the hammer have feelings?

Written by Fullmetal Manager 🌈💖🔥 on 2024-12-25 at 00:33

@recursive neither does the LLM : )

Written by Alexandra Magin 🏳️‍🌈 on 2024-12-25 at 00:44

@saraislet I guess I'm annoyed at being expected to talk to hammers though

Probably some "language is actually exhausting" autism in me

Written by mhoye on 2024-12-25 at 15:39

@recursive @saraislet I think your point about habits in communication is a good one, though. If we turn into the people we practice being, then the habitual use of a language of imperatives in a system that has no concept of consent is corrosive.

It's easy to believe that nobody who created these interfaces knows anyone who works retail, food services or hospitality.

Written by Alexandra Magin 🏳️‍🌈 on 2024-12-25 at 16:15

@mhoye @saraislet Yeah

Although, I'm in a reflective mood this time of year, and it makes me consider that an alternative is to just not operate on habit as much and slow the heck down and consider whether I'm talking to an underpaid retail worker or a hammer

Written by mhoye on 2024-12-25 at 17:12

@recursive @saraislet There's a game called event[0] that's supposed to be this space station survival horror mystery thing, about some malevolent AI that you need to somehow thwart and escape, and I somehow missed 90% of it because I said please and thank you whenever I was talking to the AI. I missed the whole survival-horror part and ended up playing a cozy explorer instead

It's a useful metaphor, I think.

Written by Alexandra Magin 🏳️‍🌈 on 2024-12-25 at 17:45

@mhoye @saraislet that's lovely

Written by JP on 2024-12-25 at 16:41

@recursive @saraislet my entire professional communication style i've been working to hone all these years is built around collaboration, seeking input and consensus from equal partners in the process, so the idea of a collaborator who doesn't actually know anything, just needs to be told what to do, bullshits if i ask them anything, and doesn't have any context i can't give them in an email-sized brief... it's not what i want, socially or practically, to say the least!
