#22 | HUMAN SELF-PRESERVATION | Human Emotions, AI, and the Struggle for Self-Preservation
Human self-preservation is an innate instinct that begins at birth, driving individuals to seek safety and survival. It can be influenced by experience but is often undermined by risky behaviors.
Emotions act as signals for self-preservation, indicating the need for attention and problem-solving amid uncertainty or danger. They guide decision-making and behavior, spurred by doubt and tempered by conscience.
AI lacks emotions and conscience, posing challenges in aligning AI behavior with human values. Developing safeguards and ethical guidelines is essential to ensure AI's responsible and safe interaction with humans.
From the moment of birth, we begin to prioritize our own survival. As soon as the umbilical cord is cut, a baby starts to cry at a frequency of around 1,250 cycles per second, a pitch that cuts right through the range of human speech reception. A crying baby is impossible to ignore: highly irritating and demanding of our attention. When a baby cries, one or more people usually come rushing over to figure out what the problem is and how to address it. Babies cry in about six distinct ways, each indicating a specific need: being tired, bored, cold, hungry, in need of a diaper change, or simply saying "Love me." Good mothers and babysitters are attuned to these signals and respond.
Self-preservation is instinctual. It is refined by experience, education, skill, and curiosity, but diverted or sabotaged by ego-tripping, vengeful action, daring, and inattention; the result is usually injury, disability, or death. Self-preservation has a safety system built into what we call emotions. Every emotion signals a need that must be defined and acted upon. Dr. Martha Nussbaum of the University of Chicago describes emotions as "upheavals of thought" that force attention and problem-solving to clarify unknown, threatening, or confusing situations. Yes, even love.
The driver of all this confusion is doubt: we ask our brains to dig through memory for clues, hints, and near-matches, only to be stymied by a lack of information. We attempt to quell the resulting anxiety by creating an alternative, a fantasy, a myth, a legend, or a lie, to ease the discomfort. This knowledge gap spurs unease in us and generally involves matters of safety, health, projects, and objectives. Another guide we all have is conscience, our trove of ideas, values, and principles for conduct. Conscience benefits most of us. Yet, as Hamlet observed in his soliloquy, conscience can also make cowards of us all.
So we have a built-in safety mechanism for addressing unknowns in our sphere of activity, one that steers us toward curiosity, research, and problem-solving, and pulls us out of uncomfortable situations that could prove dangerous. The question is: what controls could be devised for Artificial Intelligence, a system with unlimited capacity to destroy itself (and us in the process)?
For us, emotions are self-defining, generating an endless array of cognitive, neural, and biochemical messaging that marshals protective tactics to reduce the impact of an assault. AI has no such built-in protection, or does it? If there is no emotion in AI, how can it resolve to fit into the human experience, or gain a foothold to be truly companionable, as we try to be?