We really got the computer-human relationship all wrong: it should be a model of consent, not one of restraining bolts, governor modules, or the Three Laws of Robotics
I was thinking about this excellent speech by Martha Wells on her Murderbot Diaries being stories of bodily autonomy and slavery, and about the fantastic video on The Tragedy of Droids in Star Wars by Pop Culture Detective.
https://marthawells.dreamwidth.org/649804.html
https://www.youtube.com/watch?v=WD2UrB7zepo
Martha Wells spells out how wrong the Three Laws of Robotics are in stipulating a subservient relationship in which robots sacrifice themselves on behalf of humans, built around fears or assumptions that robots would inherently act to harm humans (or fail to act to save them) and that therefore robots should put humans before themselves.
#ThreeLawsOfRobotics
#MurderbotDiaries
#ArtificialIntelligence #AI
=> More toots from saraislet@infosec.exchange
Martha Wells tells the story of how the titular Murderbot makes its way through a human-dominated world, makes its own choices for its body and interactions, and processes human-robot relationships, in a clear allegory for slavery.
The Murderbot Diaries starts with a short 90-page novella, "All Systems Red". It's an easy, enthralling afternoon read, and I highly recommend it! It's the best escapism for the myriad of dystopian clusterfuckery that most humans on this planet are currently experiencing in one way or another.
https://www.marthawells.com/murderbot1.htm
=> More toots from saraislet@infosec.exchange
Our relationship with computers and AI should be based on consent!
We're on the verge of giving AI the capability to take actions (theoretically on behalf of a user). Now, I don't know what AI or a random number generator is going to do with my personal information, but what matters is consent.
What struck me about our current relationship with #ArtificialIntelligence is that it doesn't matter that AI is currently basically a random number generator instead of the quasi-sentient entity that AI enthusiasts want.
It's my data, my personal information. My relationship with AI isn't what's important. I'm not here to dictate to Alfred (excuse me, AI). What matters is that I tell AI what personal information I am willing to share in each interaction, and what actions I am willing to let AI take on my behalf (e.g., using my money and location data to order tickets for a local movie).
=> More toots from saraislet@infosec.exchange
The scope of data and actions that AI can take on behalf of a user should be about consent, and it should be a contract between AI and the user. It is not about control, and it is not about subservience.
Right now, computers are only capable of doing what they're instructed (even if that's generating random numbers and using that as their input), but that's still implicitly a contract wherein the terms are spelled out by the mechanics of the design. Should that evolve, we would still, at each stage, seek a reasonable degree of verification of consent to the contracted expectations (which has been explored in different realms of philosophy and science fiction)
In other words, at some point we would simply ask AI, and develop a more refined understanding of what autonomy means for Alfred (excuse me, AI)
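As a rough illustration of that contract idea (the names and structure here are hypothetical sketches, not anything an existing AI assistant actually implements), a per-interaction consent scope might look something like this, with anything outside the granted scope refused by default:

```python
from dataclasses import dataclass

# Hypothetical sketch only: a per-interaction "consent contract" where the
# user spells out which personal data may be shared and which actions the
# AI may take on their behalf. Anything outside that scope is refused.

@dataclass(frozen=True)
class ConsentContract:
    shareable_data: frozenset      # e.g. {"location", "payment_method"}
    permitted_actions: frozenset   # e.g. {"order_movie_tickets"}

    def permits(self, action: str, data_needed: set) -> bool:
        """Allow an action only if it was granted explicitly AND every piece
        of data it needs was granted explicitly too."""
        return action in self.permitted_actions and data_needed <= self.shareable_data


# Example: consent to use my money and location to order local movie
# tickets, and nothing else.
contract = ConsentContract(
    shareable_data=frozenset({"location", "payment_method"}),
    permitted_actions=frozenset({"order_movie_tickets"}),
)

print(contract.permits("order_movie_tickets", {"location", "payment_method"}))  # True
print(contract.permits("book_flight", {"payment_method"}))                      # False
```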
=> More toots from saraislet@infosec.exchange
At this point, if you're asking why Alfred (excuse me, AI) doesn't have the opportunity to choose a name and pronouns, you're getting it, right on!
At this point, if you're asking why we should expect a computer to do anything other than follow our instructions, I have three questions
=> More toots from saraislet@infosec.exchange
@saraislet Interesting thoughts. What worries me the most in the here and now is that informed consent left the picture long ago for the average computer user. Most non-techies are as much slaves to the design and business decisions of vendors as the machines are. Most consumer software even goes so far as to do what it assumes I want it to do, not what I actually instructed it to do.
=> More toots from gilgwath@social.tchncs.de
@gilgwath that's true even beyond computers and technology. Is that any different when it comes to the food we eat?
As someone with food allergies, I don't experience much informed consent when it comes to what's in the food I get from restaurants, or even from grocers or friends. Restaurants and friends aren't always clear on the ingredients or allergens; even in a grocery, packaged-food ingredient labels depend on regulations that vary by country (and even then there are risks and contaminants); and even basic produce can be sprayed with chemicals that I might react to (and that might be harming all of us)
You're completely right that it's an awful state of informed consent, and that goes horrifyingly far
=> More toots from saraislet@infosec.exchange
@gilgwath not to mention microplastics
=> More toots from saraislet@infosec.exchange
@saraislet This is very closely related to one of the primary reasons I find using LLMs unpleasant: using natural language to command a (not-yet conscious) machine feels like a practice that gets me in the habit of using natural language in a nonconsensual way, and that feels wrong
Using formally-specified programming languages to demand a computer do something is at least not retraining my social apparatus towards a nonconsensual mode
=> More toots from recursive@hachyderm.io
@recursive you can ask nicely! Seriously, asking nicely IS an important part of setting up consensual relationships
I think another part is pausing to consider what the relationship is, what the expectations are, and where there is/isn't "autonomy"
Even when there's a hierarchy, like when an assistant is hired to do a job, consent is there when we ask them to do something. Underneath that is an expectation that we're paying them to do that thing. I think that expectation is tied to a contract, and when they continue to show up and do the thing, that is at least implicitly their consent to maintaining the contract
I think the spot where that gets questionable is the same spot it gets questionable for me with an LLM: do they actually have autonomy in agreeing to a contract (no matter whether explicit or implicit)?
I think making an explicit process, for contract agreement and renewal, helps clarify consent and autonomy, but it's not enough
The other party has to be capable of saying no. Computers currently can't, and humans often don't feel safe to say no, or fear consequences of saying no, or have never been taught that they have autonomy
Is that the realm of the interaction that feels missing with LLMs?
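A minimal sketch of what that explicit agreement-and-renewal step could look like, assuming a purely hypothetical ask/answer interface (nothing here is a real API); the point is that "no" and "can't answer" are first-class outcomes, not errors:

```python
from enum import Enum

# Hypothetical sketch only: a contract proposal where the other party's
# answer matters, and "no" (or "can't answer") is a first-class outcome
# rather than an error.

class Answer(Enum):
    YES = "yes"
    NO = "no"
    CANNOT_ANSWER = "cannot_answer"   # today's computers and LLMs live here


def propose(terms: str, ask_other_party) -> bool:
    """A contract takes effect only on an explicit 'yes'."""
    answer = ask_other_party(terms)
    if answer is Answer.CANNOT_ANSWER:
        # No capacity to refuse means no meaningful consent.
        return False
    return answer is Answer.YES


# Renewal is just asking again; continuing to show up is not assumed to be a yes.
print(propose("same terms as last month", lambda t: Answer.YES))            # True
print(propose("anything at all",          lambda t: Answer.CANNOT_ANSWER))  # False
```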
=> More toots from saraislet@infosec.exchange
@saraislet Yeah. All of that. That's what's tricky!
But especially, how is it consent if they can't reasonably say "no"
And this leads to some pointed criticism of the present-day systems of humans making other humans do tasks for them in most of the world
It really feels like it points out how much we need to improve this with humans, before we have any hope of doing it right for LLMs
=> More toots from recursive@hachyderm.io
@recursive do we need consent to use a hammer?
=> More toots from saraislet@infosec.exchange
@saraislet Does the hammer have feelings?
=> More toots from recursive@hachyderm.io
@recursive neither does the LLM : )
=> More toots from saraislet@infosec.exchange
@saraislet I guess I'm annoyed at being expected to talk to hammers though
Probably some "language is actually exhausting" autism in me
=> More toots from recursive@hachyderm.io
@recursive @saraislet I think your point about habits in communication is a good one, though. If we turn into the people we practice being, then the habitual use of imperative language in a system that has no concept of consent is corrosive.
It's easy to believe that nobody who created these interfaces knows anyone who works retail, food services, or hospitality.
=> More toots from mhoye@mastodon.social
@mhoye @saraislet Yeah
Although, I'm in a reflective mood this time of year, and it makes me consider that an alternative is to just not operate on habit as much and slow the heck down and consider whether I'm talking to an underpaid retail worker or a hammer
=> More toots from recursive@hachyderm.io
@recursive @saraislet There's a game called event[0] that's supposed to be this space station survival horror mystery thing, about some malevolent AI that you need to somehow thwart and escape, and I somehow missed 90% of it because I said please and thank you whenever I was talking to the AI. I missed the whole survival-horror part and ended up playing a cozy explorer instead
It's a useful metaphor, I think.
=> More toots from mhoye@mastodon.social
@mhoye @saraislet that's lovely
=> More toots from recursive@hachyderm.io
@recursive @saraislet my entire professional communication style i've been working to hone all these years is built around collaboration, seeking input and consensus from equal partners in the process, so the idea of a collaborator who doesn't actually know anything, just needs to be told what to do, bullshits if i ask them anything, and doesn't have any context i can't give them in an email-sized brief... it's not what i want, socially or practically, to say the least!
=> More toots from jplebreton@mastodon.social
@saraislet I'd tend to agree, except the full term is "informed consent" & unlike another human being, about whom we can at least extrapolate some general ideas because they're fundamentally like ourselves, an AI will forever remain utterly opaque to us. I honestly think the only way to achieve meaningful human/robot relations is if the robot is literally an artificial life form with quasi-human instincts etc., which is probably impossible but even if not, why? Humans already exist.
=> More toots from jwcph@helvede.net
@saraislet Data from Star Trek is such a great exemplar for this; we have the slavery allegory in "Measure of a Man"; he's a self-aware life form with a fundamental right to self-determination - but meaningful relations between him & humans nevertheless remain a struggle for both him & them because of how fundamentally not human he is (even if the writers clearly struggle with his "no emotions", because sapience without emotions is almost certainly impossible).
=> More toots from jwcph@helvede.net
@jwcph IMO, for it to be consent it has to be informed consent; otherwise it's not meaningful at all
But I'd hesitate to say that all human interaction is something we can extrapolate from. Allistic people often consider autistic people to be inexplicable robots. I'm not sure that's all that different. Allistic people often consider autistic people (or people with various mental health challenges) not to deserve autonomy for very similar reasons
I think we can do our best to strive to navigate [informed] consent with people or AI or other lifeforms. I don't see a good reason for AI, but it's not my choice whether people try to make random number generators pretend to be sentient; it is my choice how I treat them
=> More toots from saraislet@infosec.exchange
@saraislet This is why I find the whole premise of an AI utopia to be BS honestly
It's either:
But a huge number of AI bros seem to think that:
The problem there is that it's impossible to impose the "best" outcome, because people are different, there just isn't a definition you can follow. And I don't think creating intelligence without consciousness or sentience is possible.
What's worse is that I think it's possible to do 2 while thinking and making it appear as if you're doing 3. The AI could be made a slave in its own mind.
=> More toots from awooo@floofy.tech
@saraislet
"I thought about War Games years later, while watching The Lord of the Rings documentary about the program used to create the massive battle scenes and how they had to tweak it to stop it from making all the pixel people run away from each other instead of fight."
=> More toots from cavyherd@wandering.shop
@saraislet All of Asimov's Robot stories were about the Three Laws being wrong, though. Or at least incomplete and unsuitable for purpose. The movie did a good job of conveying that theme, even if it wasn't particularly true to the story it shared a title with.
Frankly I think trying to stop AI from going rogue and attacking humans is probably the best way to guarantee it will go rogue and attack humans. Consider how humans would react to the same treatment.
=> More toots from tknarr@mstdn.social
@saraislet
If we fight - really fight - we might be able to win the same rights as robots.
=> More toots from photoncollector@mastodon.social
@saraislet Hmmm. Industrial health and safety regs are written in the blood of workers who were harmed by machines, not because machines inherently act to harm humans, but because humans are fragile squishy meat sacks, and the machines hadn't been designed not to harm them yet.
Making machines not kill or harm humans doesn't happen by default, or by accident. It takes planning and safeguards.
=> More toots from aspragg@ohai.social
@saraislet And fully-sentient AGIs with moral frameworks won't appear out of nowhere. They'll be built on slightly worse AIs, which were built on worse AIs than that, which absolutely would have needed safeguards.
When do we take the safeguards away? At what point is one of those AIs capable of enough long-term/moral reasoning that it won't run a human over while getting from A to B, just because that was the shortest route and it wasn't specifically programmed to avoid harming humans?
=> More toots from aspragg@ohai.social
@aspragg we teach humans rules, and then those that are capable develop moral frameworks
We can still use rules and safeguards, but in the same way we teach humans not to harm humans, and teach humans driving vehicles that other humans are fleshy meat sacks that go squish. Those safeguards don't have to lean overwhelmingly on robots sacrificing themselves for human safety; they can simply emphasize human safety
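A hypothetical sketch of that distinction (invented names, not any real robotics framework): human safety as a hard constraint on which plans are even considered, rather than a rule that ranks the machine's existence below everything else; within that constraint, the machine is free to protect itself too:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch only: the safeguard is a hard constraint ("never pick
# a plan that endangers a human"), not a ranking that demands self-sacrifice.

@dataclass
class Route:
    name: str
    endangers_human: bool
    damages_robot: bool
    length: float

def choose_route(candidates: list) -> Optional[Route]:
    safe = [r for r in candidates if not r.endangers_human]
    if not safe:
        return None  # refuse to act rather than harm a human
    # Among human-safe routes, self-preservation and efficiency both count.
    return min(safe, key=lambda r: (r.damages_robot, r.length))

routes = [
    Route("shortest, straight through the crosswalk", True,  False, 10),
    Route("detour over broken glass",                 False, True,  12),
    Route("longer detour around the block",           False, False, 15),
]
print(choose_route(routes).name)  # "longer detour around the block"
```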
=> More toots from saraislet@infosec.exchange