Policy and Ethics

Is “Personhood” Attainable for AI?

Figure 1: There is an ongoing question as to what makes AI different from human beings

As technology progresses, the world around us grows increasingly dependent on it, and on AI in particular. We are surrounded by the likes of Siri and spell-check algorithms, and there is little doubt that as AI advances, humans will find themselves replaced by more efficient machines in many roles. To clarify, a distinction must be made between weak and strong AI. While strong AI can complete tasks independently and “think” in a way comparable to a human brain, weak AI depends on human intervention and coding.[1] This article will focus on strong AI and its more controversial implications. This raises the question of whether strong AI could ever be considered a “person”, a particularly prevalent question in modern society given the emphasis humans place on their unique identity as a species. We humans pride ourselves on our ability to reason and empathise, setting ourselves apart as superior to machines and animals. If machines also developed human-like consciousness, ethical questions would arise: how should we treat robots, and should it resemble how we treat our human peers? To answer this question effectively, it is important to define the term “person”, which here describes a being with attributes generally associated with humans, such as reason, intelligence, and consciousness. The question “can machines be persons?” will be explored through the contrasting implications of Searle’s Chinese Room argument and the Turing Test, to ultimately determine whether personhood is an experience unique to being human.

Searle and the Chinese Room 

John Searle concluded that machines cannot be persons through the analogy of the Chinese Room. Searle imagines himself alone in a room, following a procedure to respond to Chinese characters slipped underneath the door. He looks at each character and finds the corresponding response, which consists of a different set of Chinese characters that he copies out and slips back under the door. Searle emulates a computer in this respect, using a set procedure (like computer code) to produce the right Chinese characters while ultimately lacking any understanding of Chinese.[2] Yet to anyone on the other side of the door, it would seem as though the room’s inhabitant understood Chinese. This implies that the mind is the result of biological processes only present in human beings, and that personhood cannot be measured by the resulting action but rather by an understanding of what drives the action. The implication of this argument is that cognition is the necessary criterion for personhood, and that part of this sense of human awareness results from contextual knowledge. A knowledge of culture, for example, helps to develop an unconscious intuition that machines lack; machines cannot reasonably be expected to reflect on their actions given their lack of background knowledge. Humans, and “persons”, can hence be distinguished from machines by their “knowing how” to do something, implying expertise, rather than simply “knowing” to function based on syntactic coding.
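The rule-following procedure Searle describes can be sketched as a simple lookup table. This is a purely illustrative sketch, not from Searle himself; the characters and scripted replies below are invented, and the point is only that syntactically correct output requires no understanding of what any symbol means:

```python
# Illustrative sketch of Searle's Chinese Room: the "room" maps incoming
# symbols to outgoing symbols by rule alone. The rule book pairs each input
# with a scripted reply; these example phrases are invented for illustration.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "I am fine, thank you."
    "你是谁？": "我是一个朋友。",   # "Who are you?" -> "I am a friend."
}

def chinese_room(slip: str) -> str:
    """Return the scripted reply for a slip passed under the door.

    The function matches symbols to symbols; at no point does it represent
    what any character means -- syntax without semantics.
    """
    # Fallback reply ("Sorry, I don't understand.") for slips not in the book.
    return RULE_BOOK.get(slip, "对不起，我不明白。")

print(chinese_room("你好吗？"))  # a fluent-looking reply, with zero understanding
```

To an observer outside the door the replies look fluent, yet the program, like Searle in the room, manipulates symbols it does not comprehend.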

There is overall strength to this argument, partly because it acknowledges that the mind does not operate on formal rules alone. Robotic coding (understood and created by humans) is incomparable with the mysteries of the human brain and its chemical coordination, which neuroscientists have yet to fully unravel. Moreover, the argument acknowledges the role of “qualia”, the subjective qualities of experience that link sense perception to human knowledge: some knowledge can only be gained through sense experience. For example, while a physicist can know every physical aspect of the colour red, such as its wavelength and frequency, they will never truly understand what red is like until they are able to see it.[3] This explains the inability of machines to register subjective experience and the semantics that follow from it. Nonetheless, Searle’s argument has major shortcomings. For instance, if “experience” is part of being a “person” and being conscious, then one could potentially program AI to interpret experiences and apply what it has learned to developing language. This would weaken Searle’s assertion that machines are not akin to “persons” because they lack a developed sense of language. From a contrasting perspective, however, this is far easier said than done, and perhaps the innate ability to reflect on experience is uniquely human and cannot be replicated in programming. A further objection to Searle is that his Chinese characters gave the impression of being written by a native speaker, so it is not unreasonable to suggest that output behaviour matters more than the process, biological or coded, from which that behaviour was derived. After all, we do not judge other human beings by the biological processes behind an action, but by the resulting action itself.
It should also be considered that Searle, who does not understand the characters, is part of an overarching system (including the instructions and database) which may be said to contain understanding as a whole.[4] This is similar to how no single region of the human brain understands English on its own, yet the brain understands as an overall system. Perhaps this holistic view usefully suggests that it is spurious to immediately assume that AI lacks understanding. Nonetheless, this rebuttal is quite stretched, and this article therefore ultimately agrees with Searle’s view that machines cannot be “persons”.

The Turing Test 

Alan Turing argues otherwise through his example of the Turing Test. Having formulated an “imitation game” of sorts, in which human judges try to determine whether an interlocutor is AI or human, Turing concluded that if an interlocutor’s communication is interpreted as that of a human by an “evaluator”, AI can reach personhood. Eugene Goostman, a chatbot simulating a 13-year-old Ukrainian boy, reportedly passed the Turing Test in 2014 by convincing a third of the judges that it was a real human being.[5] This suggests that surely some AI can be “persons” if they can communicate in a way similar to human beings; after all, some even go as far as to assert that language forms the fundamental basis for thought. Turing therefore argues that a “person” does not necessarily have to be human.[6]

Figure 2: The biological brain processes leading to thought are crucial to personhood

It should be noted that the Turing Test has arguably never truly been passed: Eugene’s odd and disjointed answers were excused because the character was presented as a non-native English speaker. As a result, the Turing Test has yet to prove any AI to be a “person”, and at the current level of technology it is unclear whether further advances will allow AI to communicate as a person would. Technology continues to surprise us, however, so it is hard to say how much more may become possible. More fundamentally, the Turing Test fails because human beings (such as children) are not judged on their ability to communicate alone. Thus the Turing Test cannot identify a machine as a person, given that there are other essential criteria for personhood, such as creativity, and given that the process used to reach the output also matters: in the case of a “person”, that process must be biological rather than merely syntactic. Moreover, having considered Turing’s view, the argument is most debilitated by the fact that the machine does not understand the meaning behind what it is saying. Consciousness, a more valid criterion for personhood than communication, entails a semantic understanding of what is being said, and AI is unable to achieve this, instead mechanically following a set of coded rules.[7]

The Verdict 

Ultimately, machines cannot be “persons”. Consciousness is no doubt a crucial aspect of being human, and while it is difficult to tell whether anybody truly has understanding, consciousness is far more evident in the human race than in coded computers, as discussed through Searle’s and Turing’s contrasting views. Arguments such as Turing’s are ultimately not cogent because they fail to acknowledge that emotions and creativity are criteria for personhood that no computer has yet been able to emulate. Therefore, given the inability of AI to fulfil most of the necessary benchmarks for personhood, no current AI will suffice as a “person”, and it seems unlikely that this will change in the near future.

AI will no doubt advance further as human beings continue to find ways to incorporate more complex mechanical systems into their daily lives. Nonetheless, it seems unlikely that AI will ever develop the imagination, empathy, morality, and intuition that seem to make human beings human – however anthropocentric that view may be.

References

  1. Kerns, Jeff. “What’s the Difference between Weak and Strong AI?” Machine Design, 2019. https://www.machinedesign.com/markets/robotics/article/21835139/whats-the-difference-between-weak-and-strong-ai.
  2. Halpern, Mark. “The Trouble with the Turing Test.” The New Atlantis, 2020. https://www.thenewatlantis.com/publications/the-trouble-with-the-turing-test.
  3. Tye, Michael. “Qualia.” Stanford Encyclopedia of Philosophy, 2017. https://plato.stanford.edu/entries/qualia/.
  4. Searle, John. “Chinese Room Argument.” Scholarpedia 4, no. 8 (2009): 3100. https://doi.org/10.4249/scholarpedia.3100.
  5. Aamoth, Doug. “Interview with Eugene Goostman, the Fake Kid Who Passed the Turing Test.” Time, June 9, 2014. https://time.com/2847900/eugene-goostman-turing-test/.
  6. Oppy, Graham, and David Dowe. “The Turing Test.” Stanford Encyclopedia of Philosophy, 2003. https://plato.stanford.edu/entries/turing-test/.
  7. Le Nezet, Nancy, Chris White, Daniel Lee, and Guy Williams. Philosophy: Being Human. Course Book. Oxford: Oxford University Press, 2014.

Images

[Figure 1] Kelly, Andy. “Photo of Girl Laying Left Hand on White Digital Robot.” Unsplash, October 4, 2017. https://unsplash.com/photos/0E_vhMVqL9g.

[Figure 2] Weermeijer, Robina. “Brown Brain Decor in Selective-Focus Photography.” Unsplash, June 5, 2019. https://unsplash.com/photos/3KGF9R_0oHs.

About the Author


Jessica has a particular interest in biology and chemistry, but is also fascinated by the philosophical and ethical aspects of science. Hoping to pursue medicine in the future, Jessica is passionate about using science to improve the lives of others.
