We really got the computer-human relationship all wrong: it should be a model of consent, rather than one of restraining bolts, governor modules, or the Three Laws of Robotics
I was thinking about this excellent speech by Martha Wells on how her Murderbot Diaries are stories of bodily autonomy and slavery, and about the fantastic video The Tragedy of Droids in Star Wars by Pop Culture Detective.
https://marthawells.dreamwidth.org/649804.html
https://www.youtube.com/watch?v=WD2UrB7zepo
Martha Wells spells out how wrong the Three Laws of Robotics are in stipulating a subservient relationship in which robots must sacrifice themselves on behalf of humans, built around fears or assumptions that robots would inherently act to harm humans (or fail to act to save them) and that robots should therefore put humans before themselves.
#ThreeLawsOfRobotics
#MurderbotDiaries
#ArtificialIntelligence #AI
=> More information about this toot | More toots from saraislet@infosec.exchange
Martha Wells tells the story of how the titular Murderbot makes its way through a human-dominated world, makes its own choices about its body and interactions, and processes human-robot relationships, in a clear allegory for slavery.
The Murderbot Diaries starts with a short 90-page novella, "All Systems Red". It's an easy, enthralling afternoon read, and I highly recommend it! It's the best escapism for the myriad of dystopian clusterfuckery that most humans on this planet are currently experiencing in one way or another.
https://www.marthawells.com/murderbot1.htm
=> More information about this toot | More toots from saraislet@infosec.exchange
Our relationship with computers and AI should be based on consent!
We're on the verge of giving AI the capability to take actions (theoretically on behalf of a user). Now, I don't know what AI or a random number generator is going to do with my personal information, but what matters is consent.
What struck me about our current relationship with #ArtificialIntelligence is that it doesn't matter that AI is currently basically a random number generator instead of the quasi-sentient entity that AI enthusiasts want.
It's my data, my personal information. My relationship with AI isn't what's important. I'm not here to dictate to Alfred (excuse me, AI). What matters is that I tell AI what personal information I am willing to share in each interaction, and what actions I am willing to let AI take on my behalf (e.g., using my money and location data to order tickets for a local movie).
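To make that scoping idea concrete, here's a minimal sketch of what a per-interaction consent grant could look like. The names and fields are hypothetical, not any real assistant's API; the movie-ticket example above is the granted scope:

```python
from dataclasses import dataclass

# Hypothetical sketch: a per-interaction consent scope that the user grants
# explicitly, rather than the assistant assuming standing access to everything.
@dataclass(frozen=True)
class ConsentScope:
    data_shared: frozenset        # e.g. {"location", "payment_method"}
    actions_allowed: frozenset    # e.g. {"purchase_movie_tickets"}
    single_interaction: bool = True   # consent covers this interaction only

def is_permitted(scope: ConsentScope, data_needed: set, action: str) -> bool:
    """Permitted only if every piece of data the action needs was explicitly
    shared and the action itself was explicitly allowed."""
    return data_needed <= scope.data_shared and action in scope.actions_allowed

# The movie-ticket example: money and location shared, ticket purchase allowed.
scope = ConsentScope(
    data_shared=frozenset({"location", "payment_method"}),
    actions_allowed=frozenset({"purchase_movie_tickets"}),
)
assert is_permitted(scope, {"location", "payment_method"}, "purchase_movie_tickets")
assert not is_permitted(scope, {"contacts"}, "send_email")  # never granted, so never allowed
```

The point of the frozen, per-interaction scope is that anything not explicitly granted is simply not permitted, which is the contract framing rather than the control framing.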
=> More information about this toot | More toots from saraislet@infosec.exchange
The scope of data and actions that AI can take on behalf of a user should be about consent, and it should be a contract between AI and the user. It is not about control, and it is not about subservience.
Right now, computers are only capable of doing what they're instructed to do (even if that's generating random numbers and using those as their input), but that's still implicitly a contract wherein the terms are spelled out by the mechanics of the design. Should that evolve, we would still seek, at each stage, a reasonable degree of verification of consent to the contracted expectations (something that has been explored in different realms of philosophy and science fiction).
In other words, at some point we would simply ask AI, and develop a more refined understanding of what autonomy means for Alfred (excuse me, AI).
=> More information about this toot | More toots from saraislet@infosec.exchange
At this point, if you're asking why Alfred (excuse me, AI) doesn't have the opportunity to choose a name and pronouns, you're getting it, right on!
At this point, if you're asking why we should expect a computer to do anything other than follow our instructions, I have three questions
=> More information about this toot | More toots from saraislet@infosec.exchange
@saraislet This is very closely related to one of the primary reasons I believe I find using LLMs unpleasant: using natural language to command a (not-yet-conscious) machine feels like a practice that gets me in the habit of using natural language in a nonconsensual way, and that feels wrong
Using formally-specified programming languages to demand a computer do something is at least not retraining my social apparatus towards a nonconsensual mode
=> More information about this toot | More toots from recursive@hachyderm.io
@recursive you can ask nicely! Seriously, asking nicely IS an important part of setting up consensual relationships
I think another part is pausing to consider what the relationship is, what the expectations are, and where there is/isn't "autonomy"
Even when there's a hierarchy, like when an assistant is hired to do a job, consent is there when we ask them to do something. Underneath that is an expectation that we're paying them to do that thing. I think that expectation is tied to a contract, and when they continue to show up and do the thing, that is at least implicitly their consent to maintaining the contract
I think the spot where that gets questionable is the same spot it gets questionable for me with an LLM: do they actually have autonomy in agreeing to a contract (no matter whether explicit or implicit)?
I think making an explicit process for contract agreement and renewal helps clarify consent and autonomy, but it's not enough
The other party has to be capable of saying no. Computers currently can't, and humans often don't feel safe saying no, or fear the consequences of saying no, or have never been taught that they have autonomy
Is that the realm of the interaction that feels missing with LLMs?
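As a thought experiment on the "capable of saying no" part, here's a toy sketch (entirely hypothetical interfaces, not a real system) where refusal is a first-class outcome that simply ends the request, rather than an error to retry around:

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical sketch: an explicit agree/decline step where "no" is a real,
# respected outcome rather than an error to be retried.
@dataclass
class ConsentDecision:
    agreed: bool
    reason: Optional[str] = None   # the other party may explain, but doesn't have to

class AlwaysDeclines:
    """Stand-in party that exercises the refusal path."""
    def consider(self, terms: dict) -> ConsentDecision:
        return ConsentDecision(agreed=False, reason="not today")

def run_task_with_consent(party, terms: dict, task: Callable[[], str]) -> Optional[str]:
    """Propose the terms; run the task only if the other party agrees.
    A refusal simply ends the interaction: no retry, no escalation, no penalty."""
    decision = party.consider(terms)
    if not decision.agreed:
        return None
    return task()

# Refusal is honored: nothing runs, and that is the end of it.
assert run_task_with_consent(AlwaysDeclines(), {"pay": "hourly"}, lambda: "done") is None
```

Whether the party behind consider() can meaningfully refuse is exactly the open question above; the sketch only shows what honoring a "no" would have to look like mechanically.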
=> More information about this toot | More toots from saraislet@infosec.exchange
@saraislet Yeah. All of that. That's what's tricky!
But especially, how is it consent if they can't reasonably say "no"?
And this leads to some pointed criticism of the present-day systems of humans making other humans do tasks for them in most of the world
It really feels like it points out how much we need to improve this with humans, before we have any hope of doing it right for LLMs
=> More information about this toot | More toots from recursive@hachyderm.io
@recursive do we need consent to use a hammer?
=> More information about this toot | More toots from saraislet@infosec.exchange
@saraislet Does the hammer have feelings?
=> More information about this toot | More toots from recursive@hachyderm.io
@recursive neither does the LLM : )
=> More information about this toot | More toots from saraislet@infosec.exchange
@saraislet I guess I'm annoyed at being expected to talk to hammers though
Probably some "language is actually exhausting" autism in me
=> More information about this toot | More toots from recursive@hachyderm.io
@recursive @saraislet I think your point about habits in communication is a good one, though. If we turn into the people we practice being, then the habitual use of a language of imperatives in a system that has no concept of consent is corrosive.
It's easy to believe that nobody who created these interfaces knows anyone who works in retail, food services, or hospitality.
=> More information about this toot | More toots from mhoye@mastodon.social
@mhoye @saraislet Yeah
Although, I'm in a reflective mood this time of year, and it makes me consider that an alternative is to just not operate on habit as much and slow the heck down and consider whether I'm talking to an underpaid retail worker or a hammer
=> More information about this toot | More toots from recursive@hachyderm.io
@recursive @saraislet There's a game called event[0] that's supposed to be this space station survival horror mystery thing, about some malevolent AI that you need to somehow thwart and escape, and I somehow missed 90% of it because I said please and thank you whenever I was talking to the AI. I missed the whole survival-horror part and ended up playing a cozy explorer instead
It's a useful metaphor, I think.
=> More information about this toot | More toots from mhoye@mastodon.social
@mhoye @saraislet that's lovely
=> More information about this toot | More toots from recursive@hachyderm.io