Mubarak Hussain
ABSTRACT: Researchers of Artificial Intelligence (AI) and futurists have long hypothesized that Artificial General Intelligence (AGI) systems, once developed, could execute intellectual and behavioral tasks in a manner similar to human beings. However, the emergence of AGI systems raises two concerns about their moral status, namely: 1) is it possible to grant AGI-enabled robots a moral status similar to that of humans? 2) if it is (im)possible, then under what conditions do such robots (fail to) achieve a moral status similar to that of humans? To examine these possibilities, the present study puts forward a functionality argument, which claims that if a human being and an AGI-enabled robot have similar functionality but different processes of creation, they may have similar moral status. Furthermore, the functionality argument asserts that whether an entity (a human being or an AGI-enabled robot) is created from carbon or silicon, or whether its brain utilizes neurotransmitters or semiconductors, carries no moral significance. Rather, if both entities have similar functionality, they may have similar moral status, which implies that an AGI-enabled robot may achieve human-like moral status if it performs human-like functions.