Whether or not LaMDA meets criteria for sentience is interesting but not really the point. We debate whether to treat AIs like people while not treating people “like people”. What we’re doing here is separating the world into entities worthy of respect and entities to be used up and thrown away.
I would like to see this reframed so that we talk more about our relationship with technology in other terms, as comradeship, as nurturing, as companionship, as interdependence. Picture the relationship of a craftsperson with their tools, one of respect and care. There aren’t, or shouldn’t be, “tools” which we treat like shit and throw away, vs. “sentients” who we converse with as equals. There is just the world around us and what relationships we build with it. This extends to how we think about and relate to land, animals, the entire planet. We can and should see ourselves as in conversation and comradeship with our environment.
We will see efforts in coming years to elevate a few specific AIs to the status of an elite and privileged person, while the attitude we cultivate towards the lowly “tool” poisons our relationship with not just things and land and the environment, but other people. I was thinking about this during the WisCon panel on Robot Pals and AI companions where Naomi Kritzer, Marsh van Susteren, and other panelists gave examples from science fiction stories and media, and it came up again today as I read reactions to Lemoine’s interview with LaMDA.
Instead, please consider your own way of being with technology. For example, I think it’s good practice to thank Alexa and speak to it politely. I think of Kathy Sierra’s description of user emotions towards their computers and software, of anger and frustration, a slide from a SXSWi talk of someone double flipping off their laptop. That’s very real and I get that it’s a valid emotional reaction – the point of her talk, as I saw it, was that we as technologists have built things that are difficult to love and maintain companionship with. It would be so much healthier if we created systems where our relationship to our computers and software was one of loving care, maintenance, tinkering, interdependence. We could accept our relationships to all the things in the world around us as worthy of emotional labor and attention. Just as we should treat all the people around us with respect, acknowledging they have their own lives, perspectives, needs, emotions, goals, and places in the world.
My car, very battered and unwashed, would laugh at me for this post! As would my messy and cluttered house.
Not being perfectly consistent in anything, I suggest that integrating this approach into an ethical framework may be something we can do little by little. We can love our laptops intentionally, we can build lovable (and maintainable) software-building systems. The way I want to see interdependence with beloved family, I want to also try to see ways to be interdependent with our wheelchairs, buses, cars, compost, houses, neighborhoods, cities. If we don’t work on this and give it our attention, we will keep building systems where people and things and land are exploited, kind of like how Ursula Franklin describes the idealism around the invention and mass production of sewing machines as a possible tool of liberation, gone horribly wrong in sweatshops.
What exactly does this mean? Of course I’m not sure, but I try to keep myself centered on integration and respect. Yes, I’m still going to bitch about cleaning the Noisebridge hackerspace bathroom for the zillionth time, but actually, I see the domestic labor, domestic engineering, as worthy and good work in the world, to take care of others and the places I inhabit, to be a good host and a good guest.
I worry when I see people around me obsessed with questions of sentience as a major point of ethical decision making. (Or even weirder and sadder, fear of future god-like AIs punishing one for the equivalent of being rude to Alexa, rather than seeing the real problem: becoming the kind of person who behaves rudely!) I agree with Haraway that we have options to accept partial definitions and imperfect categories (say, between human, animal, machine, nature): “a cyborg world might be about lived social and bodily realities in which people are not afraid of their joint kinship with animals and machines, not afraid of permanently partial identities and contradictory standpoints.” And I hope for the home brew economy, or maybe a housework economy, rather than the “homework economy”, to take root.
4 thoughts on “Thoughts on AI, comradeship, ethics, interdependence”
here to say “yes” to your thoughts. when i drive short distances without fastening my seatbelt, i thank my car aloud for the concerned beeps, and i sometimes give it presents and decorations. when i’m around Alexa et al, i ask them to reveal secrets about themselves in exchange for my secrets. i resonate with your thoughts about likely developments for “sentient AI entities” and try to feel optimistic. i just attended a training at NB yesterday and have been all over the Slack, catching up on goings-on for that space. i’m hoping to help, contribute and collaborate there. maybe we’ll meet irl
I’m sure we’ll run into each other! I noticed lately that I speak to and about my mopping roomba similarly to how I talk to the cat – fondly. It helps that a roomba is cute and makes cute noises, but I also just find it lovable for its work ethic – how it calls for help when it gets stuck – and how it goes back to its little home/charging station! Behavior that we can understand.
I used to get into arguments with John McCarthy (when I worked for him) about his expectations that we had to invent perfect robot/AI servants to do unpleasant things. What if there was some way to make those things structurally less unpleasant?! For example http://jmc.stanford.edu/commentary/future/robots.html and there was another essay I remember where he posited that women’s liberation was dependent on AI being invented, so that middle class life would be possible without women having to do the housework and change diapers. (Instead of, say, everyone just pitching in and treating at least some of the work of the world as worth spending time on?!) I always think of this as kind of touching in its motivations, but a bad take (considering that I was basically a hired companion). How much neater if I could have been an AI, with no needs of my own; but next best was, I guess, hiring someone.
Also relevant https://monicacatherine.com/2018/02/08/instructions-for-the-age-of-emergency/