Lab Notebook 4: Complacence

Why are we so invested in divisions between inorganic and organic, animal and human, machine and mind? Should we be? If not, how do we challenge these boundaries?

Complacence

It is becoming increasingly easy to be complacent about everything happening around us. For those untouched by the problems the majority of the world faces, the easy option is not just to look away, but to never recognize those problems in the first place. When we allow artificial intelligence to take the lead in art, writing, or any other form in which humans express themselves, we become less sentient than the machines we use. We are obsessed with what qualifies as sentience, or more accurately, with which kinds of sentience should not be treated like cattle (I say “we,” but I am talking about an undefinable populace of Americans who subconsciously follow a set of rules). So should artificial intelligence be treated as sentient? Right now the answer is a pretty definite “no,” but science fiction lets us discuss where we would draw the line for artificial intelligence to be declared living.

Scripter

“What do you think? What am I right now? Alive? Dead? Or is it all just one story someone made up?”

“I don’t want to answer that”

(Kim Bo Young – Scripter)

Scripter follows three characters and the legitimacy of their sentience. The question is whether any of the three is a “player-controlled” character or a “non-playable character,” an NPC. The story never answers the question explicitly, instead leaving the reader with multiple open-ended outcomes of what could have happened. The Angel character acts as the voice of questioning, essentially asking whether the people he is talking to are working from a set of manufactured replies or experiencing free thought. The question of sentience is much harder to answer when the only knowledge we have comes from the confines of a short story, so I think the value is in offering different outlooks on the topic. An idea that stuck with me after reading was that one cannot tell whether something is sentient based only on its actions and dialogue. With animals, humans, and robots alike, we decide whether they can experience existential ideas based on their ability or inability to discuss them with us, but this could be the wrong way of looking at it. Now, I still will not get a screen protector for my phone, and I still eat meat, but the idea of basing such an important facet of existence on arbitrary boundaries is not very attractive to me.

SZA – Ghost in the Machine

In “Ghost in the Machine”, SZA describes feelings of complacence and aimlessness in her romantic and professional life.

“Can you distract me from all the disaster?”

“Robot got future, I don’t”

“I’m wide open, I’m awake, I’m on autopilot”

“Standin’ on my own in an airport bar or hotel lobby, waiting to feel clean”

Genius Lyrics Source

How does the rise of AI technology relate to losing autonomy and allowing oneself to become complacent as a “ghost in the machine”?

Looking at conventional AI creation software that is easily accessible to the public, there is an argument that “AI-assisted” creations do not count as cheating. Whether it is a student trying to get out of reading a book or a conglomerate shaving off expenses by having AI make its marketing advertisements, I am mostly under the impression that using AI tools is a very easy way to make a once-good idea illegitimate. This idea of hybrid authorship, where the human still does the thinking and the AI only assists, is where I think we could make AI a tool instead of a cheat.

We argue that AI is not sentient because it does not fit the rules we place around what defines sentience. Yet each day we try to make our problems other people’s problems, our tasks automated, our art stolen. If we decide that sentience is defined by thought, then we can no longer let AI do the thinking for us.