"Why? Why was I programmed to feel pain?"

Should Robots Be Made to Feel Pain?

Are pain and suffering as essential to machine learning as they are to human cognitive development?

“We can’t learn without pain.”
― Aristotle

Pain is a fundamental fact of life for many organisms on our planet: a crucial mechanism for identifying which actions pose serious threats to our physical and mental health. As robots become more sophisticated and interactive, should they also be programmed to experience pain to prevent injury to themselves or others, and if so, to what extent?

“Pain in the Machine,” a 12-minute documentary released by the University of Cambridge, tackles this multifaceted and controversial issue. The film offers insights from artificial intelligence thought leaders, practicing physicians, and other interdisciplinary experts, and contrasts them with iconic popular-culture moments that point to the larger philosophical questions inherent in artificially programming pain responses, including a nod to the burning robot bit in The Simpsons.

As in so many fields of AI research, evaluating the utility and benefits of pain in robots inevitably turns the mirror back on our understanding of how those experiences function and protect us in our own lives.

“Pain has fascinated philosophers for centuries,” Ben Seymour, a Cambridge-based expert on the computational and systems neuroscience of pain, comments in the documentary. “Indeed, some people consider pain to be the pinnacle of consciousness. Of course, it’s not a pleasant pinnacle of consciousness but it arguably is a time where we feel most human, because we are most in touch with ourselves as a mortal human being.”

This idea that pain is a profoundly humanizing force, in spite of how excruciating it can feel moment to moment, recurs across many eras and cultures. As the author James Baldwin put it: “[T]he things that tormented me most were the very things that connected me with all the people who were alive, or who had ever been alive.”

It remains to be seen whether basic reflexive pain responses, which have already been programmed into some AI systems, could evolve into more complex emotions like empathy, or the kind of solidarity through suffering described by Baldwin. Perhaps robots could even surpass the cognitive and conceptual limits of their human creators, pioneering new approaches to interacting with the world and its inhabitants.
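
To make that distinction concrete, here is a minimal sketch (not from the documentary or the article) of what a basic reflexive “pain” response can amount to in code: a fixed threshold on a damage-related signal that triggers an immediate withdrawal, with no learning, memory, or affect involved. The sensor and actuator functions (read_pressure, retract_arm) and the threshold value are hypothetical stand-ins, not any real robot’s API.

    import random

    PAIN_THRESHOLD = 50.0  # newtons; an illustrative value, not a standard

    def read_pressure() -> float:
        """Simulated contact-pressure reading from a hypothetical fingertip sensor."""
        return random.uniform(0.0, 100.0)

    def retract_arm() -> None:
        """Stand-in for a withdrawal command to a hypothetical arm controller."""
        print("Reflex fired: retracting arm")

    def reflex_step() -> None:
        # A pure stimulus-response rule: if the signal crosses the threshold,
        # withdraw. Nothing is felt, remembered, or generalized; everything
        # richer, such as empathy or shared suffering, lies beyond this loop.
        if read_pressure() > PAIN_THRESHOLD:
            retract_arm()

    if __name__ == "__main__":
        for _ in range(5):
            reflex_step()

The gap between a rule like this and the solidarity Baldwin describes is exactly what the researchers below are probing.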

“Humans do seem to be no different from very complex machines made up of biological material,” points out Marta Halina, a lecturer in the philosophy of cognitive science at the University of Cambridge, in “Pain in the Machine.”

“That has huge implications on thinking about the future of AI, because we might be able to build machines that are as complex as us, and thus have abilities like us; for example, the ability to feel pain,” Halina said. “And if we can build machines that are even more complex than humans, then they might have experiences and abilities that we can’t even imagine.”
>>READ MORE at VICE: “Why Robots Need to Feel Pain” (2016)

Artificial Pain May Induce Empathy, Morality, and Ethics in the Conscious Mind of Robots

Abstract
In this paper, a working hypothesis is proposed: that a nervous system for pain sensation is a key component in shaping the conscious minds of robots (artificial systems). This hypothesis is argued from several viewpoints towards its verification. A developmental process of empathy, morality, and ethics, based on the mirror neuron system (MNS) that promotes the emergence of the concept of self (and others), scaffolds the emergence of artificial minds. First, an outline of the ideological background on issues of the mind in a broad sense is given, followed by the limitations of current progress in artificial intelligence (AI), with a focus on deep learning. Next, artificial pain is introduced, along with its architectures in the early stage of self-inflicted experiences of pain and, later, in the stage of sharing pain between self and others. Then, cognitive developmental robotics (CDR) is revisited for two important concepts, physical embodiment and social interaction, both of which help to shape conscious minds. Following the working hypothesis, existing CDR studies are briefly introduced and missing issues are indicated. Finally, the question of how robots (artificial systems) could become moral agents is addressed.
Keywords: pain; empathy; morality; mirror neuron system (MNS)
>>READ MORE