AI, Robots and Humans - a Cyberpsychological Perspective
August 04, 2012, 14 min read
Constant advances in computing power, machine learning algorithms and related technologies are setting human-computer interaction on a path where, sometime in the near future, advanced Artificial Intelligences (A.I.) will engage with people in many meaningful ways.
The possibility of a machine with consciousness raises many philosophical, psychological and sociological questions about the nature of consciousness itself and what it really means to be intelligent. The computational modelling of human cognitive abilities can play a significant role in the advancement of cognitive psychology, giving people a better understanding of their own intelligence. On the road from natural to Artificial Intelligence there are many challenges and risks to be met, but also great opportunities.
This essay will explore these issues, starting with an overview of intelligence and the evolutionary path of theories of mind. The second part will assess the role of cognitive psychology and how it is affected by Functionalism and Computationalism. The third part aims to define Artificial Intelligence, provide insight into some of its core processes and present a sample of the current state of research. Finally, the fourth part will explore the relationship between humans and machines, the ethical considerations and the challenges that society as a whole will have to face.
Intelligence and theories of mind
The mind has been a mystery for centuries. Many scholars have attempted to decode its functionality in their effort to identify and describe intelligence. The practice of theorising on the mind’s inner workings led to the development of theories of the mind and respective schools of thought.
One of the first significant theories of the mind was Dualism, and particularly Cartesian Dualism. It is a view about the relation between the mind and the body (Carter, 2007, p. 5). Cartesian Dualism holds that although the body is a material object, the mind is composed entirely of immaterial stuff. Furthermore, the two are engaged in a causal relation with each other, with the immaterial mind causing things to happen in the material body and vice versa. By and large, the way Cartesian Dualism describes the relationship between body and mind is still popular (Carter, 2007, p. 4), mainly due to its similarity to the way many religions describe humans as a combination of material and immaterial stuff.
Psychological Behaviourism was another theory of the mind, focused on the observable behavioural aspects of humans, which aimed to distance itself from metaphysical and theological connotations (such as those of Dualism). Ivan Pavlov (1849-1936) developed the theory of reflex arcs, which described the connection between environmental stimuli and behavioural responses, while B. F. Skinner (1904-1990) investigated the most effective ways of conditioning reflex arcs. A significant drawback of Psychological Behaviourism is that it fails to describe all aspects of mentality, as not all of them are connected to observable behaviour (Carter, 2007, p. 22).
Moving towards a more detailed and practical description of the mind and its processes, the causal theory of mind was developed in the second half of the twentieth century, describing the physical stimuli that lead to specific mental states. Building on that theory, Australian Materialism (and its variations) aimed to explore mental states and their connection to neural states in the brain, not metaphysically as Dualism did but more pragmatically. Australian Materialism holds that “to be in a type of mental state is to be in a type of neural state which is apt to be caused by certain stimuli and apt to cause certain behaviour” (Carter, 2007, p. 38). Despite its advantages, an argument against Australian Materialism is that it fails to describe in detail the exact neural states that correspond to particular mental states, which makes introspection and research significantly challenging.
Functionalism aimed to fill that gap by preserving the connection between stimulus, mentality and behaviour, while maintaining that mental states are functional states. Specifically, according to Functionalists, mental states are functions that mediate relations between inputs, outputs and other mental states (Carter, 2007, p. 45).
Functionalism was a stepping stone to Computationalism, which aimed to create formal computational definitions for mental states. Computationalism helped cognitive scientists to replicate complex human processes in machines.
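To make this picture concrete, a functional state can be written down as nothing more than a transition table over inputs, outputs and other states. The following is a minimal sketch of that idea; the agent, its states and its stimuli are invented for illustration and are not drawn from the cited literature.

```python
# A minimal sketch of the functionalist picture: a mental state is defined
# purely by how it mediates inputs, outputs and other mental states, i.e.
# by a transition table. All state and stimulus names here are illustrative
# assumptions, not examples from the cited works.

# (current_state, stimulus) -> (next_state, behaviour)
TRANSITIONS = {
    ("calm", "sharp_pain"): ("distress", "withdraw_hand"),
    ("calm", "kind_word"): ("content", "smile"),
    ("distress", "relief"): ("calm", "relax"),
    ("content", "insult"): ("distress", "frown"),
}

def step(state, stimulus):
    """Return (next_state, behaviour), defaulting to no change."""
    return TRANSITIONS.get((state, stimulus), (state, "no_visible_behaviour"))

state = "calm"
for stimulus in ["sharp_pain", "relief", "kind_word"]:
    state, behaviour = step(state, stimulus)
    print(f"stimulus={stimulus!r} -> state={state!r}, behaviour={behaviour!r}")
```

Notice that nothing in the table says what the states are made of; “calm” and “distress” are defined entirely by their causal roles, which is precisely the functionalist claim.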
Cognitive psychology and formal modelling of human intelligence
One of the main branches of psychology that has provided great insight into Artificial Intelligence, and at the same time benefited from developments in A.I., is cognitive psychology, which examines human cognition, mental processes and complex behaviour. Specifically, it explores processes such as perception, attention, learning, memory, vision, language, creativity, reasoning and emotion. All these processes are challenging topics that A.I. researchers try to understand, model and reproduce artificially.
Various psychological theories of cognition can be used to create respective computational models (D’Mello & Franklin, 2009). Such theories include situated or embodied cognition (Clark, 1997; Glenberg, 1997; Varela, Thompson & Rosch, 1991), the theory of perceptual symbol systems (Barsalou, 1999; Harnad, 1990), working memory theory (Baddeley & Hitch, 1974; Baddeley, 1992; Baddeley, 2000; Baars & Franklin, 2003), Glenberg’s (1997) theory of the importance of affordances to understanding, the long-term working memory theory of Ericsson and Kintsch (1995), and Baars’ global workspace theory (1988; 1997).
D’Mello & Franklin (2009) highlight the important role that cognitive robotics can play on the way to a better understanding of human cognition. They believe that psychologists and cognitive scientists can benefit from “large scale working models of cognition that are mechanistic embodiments of functional psychological theories” in doing their research, by “generating testable hypotheses about human cognition and by providing the means of testing such hypotheses empirically”.
Computationalism enabled this type of research for psychologists and scientists alike. By developing formal flowcharts that represent cognitive processes, scientists are able to convert such flowcharts into computer programs and test their assumptions and models about human cognition. “Computational modelling in cognitive science and artificial intelligence has profoundly affected how human cognition is viewed and studied” (Sternberg, 1999, p. 245).
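As a toy illustration of this workflow, the sketch below turns a deliberately simplified model of a short-term memory store into a runnable program. The fixed four-item capacity and the model itself are simplifying assumptions made for illustration, not claims from the works cited above.

```python
# A minimal sketch of converting a cognitive model into a program that
# yields testable predictions. The fixed-capacity short-term store and the
# capacity of four items are assumptions made purely for illustration.

CAPACITY = 4  # assumed number of items the store can hold at once

def predicted_recall(list_length):
    """Predicted fraction of a studied list that can be recalled."""
    return min(CAPACITY, list_length) / list_length

# The model makes a concrete, falsifiable prediction: recall accuracy
# should fall off once list length exceeds the store's capacity.
for n in [2, 4, 6, 8, 10]:
    print(f"list length {n:2d}: predicted recall {predicted_recall(n):.2f}")
```

Even a model this crude produces a falsifiable prediction (recall should degrade beyond the assumed capacity) that an experimental psychologist could take to the lab, which is exactly the benefit D’Mello & Franklin describe.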
In essence, the mind is now treated as a computing machine, and scientists strive to map out all its processes and develop efficient formal definitions of them.
From Intelligence to Artificial Intelligence
Since scientists are researching cognitive processes and reducing them to computable models, it is only logical for the reverse, synthesising process to take place: developing an artificial agent that implements human cognition and intelligence. The fundamental goal of such agents is to act and think humanly and rationally (Russell & Norvig, 2010, pp. 2-4). Critics of A.I. argue that even if such an accomplishment is possible, an A.I. agent will perform a task in an entirely different way from the human brain (Carlson, Martin & Buskist, 2004). One can argue against this claim by comparing intelligence to flying.
By observing nature, one could define flexible, feathery wings as a necessary property of flying. Humans have developed artificial flying machines that have no feathers but achieve the exact same goal: flying. In this case, flying is defined by its outcome (the act of flying effectively) and not by its underlying properties (metal instead of feathered wings). Thus, in the case of A.I., an artificial agent can be characterised as intelligent simply by acting as such. Mackworth & Poole (2010, p. 3) define Artificial Intelligence as “the field that studies the synthesis and analysis of computational agents that act intelligently”. Specifically, A.I. researchers investigate the computational implementation of features such as reasoning, language, vision, learning, creativity, planning, perception and motion (for robots). In essence, the goal is to create autonomous agents, in software or robot form, that exhibit some or all characteristics from the spectrum of human cognitive capacity.
Existing psychological theories of human cognition can form the basic computational models of an intelligent agent. Such an approach can be beneficial both to psychologists and to engineers. By consulting on the implementation of theories of human cognition in A.I. agents, psychologists can test and improve the literature, while engineers benefit from building on the best available knowledge of intelligence.
In order to function in its world, an autonomous intelligent agent should exhibit some basic cognitive processes such as perception, episodic memory, working memory, selective attention and action selection (D’Mello & Franklin, 2009). A perceptual system enables the agent to recognise, categorise and understand its world. Episodic memory allows the agent to recall past events and use that information to plan its current and future course. Working memory facilitates the decision process by integrating information from the perceptual system and the episodic memory. A selective attention system enables the agent to focus only on useful and relevant information and stimuli. Finally, an action selection system provides the “what to do next” functionality (D’Mello & Franklin, 2009).
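These five processes can be wired together into a single sense-think-act cycle. The sketch below does exactly that; the module names follow the list above (D’Mello & Franklin, 2009), while the toy sensor stream, the class layout and the decision rule are illustrative assumptions rather than any published architecture.

```python
from collections import deque

# A minimal sketch of the cognitive cycle described above. Only the five
# module names come from the text; the toy world and rules are invented.

class Agent:
    def __init__(self):
        self.episodic_memory = deque(maxlen=100)  # record of past events
        self.working_memory = {}                  # current integrated picture

    def perceive(self, raw_input):
        """Perceptual system: categorise raw input into a structured percept."""
        return {"kind": "obstacle" if raw_input < 0 else "open", "value": raw_input}

    def attend(self, percept):
        """Selective attention: keep only task-relevant percepts."""
        return percept if percept["kind"] == "obstacle" else None

    def select_action(self):
        """Action selection: decide 'what to do next' from working memory."""
        return "avoid" if self.working_memory.get("salient") else "advance"

    def step(self, raw_input):
        percept = self.perceive(raw_input)
        self.episodic_memory.append(percept)            # store the event
        self.working_memory["salient"] = self.attend(percept)
        return self.select_action()

agent = Agent()
for reading in [0.8, -0.3, 0.5]:  # toy sensor stream
    print(agent.step(reading))
```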
So far, no complete Artificial Intelligence has been built, as certain hurdles remain for scientists to overcome. There is still no comprehensive formal model of cognition, while areas such as computer vision and social interaction have not reached the required level of maturity. Computing power also remains a bottleneck, but this is bound to stop being a problem as computers steadily become more powerful; Ray Kurzweil argues that within a few decades computing power will make it possible to create software that is genuinely smarter than humans (Kurzweil, 2005). Nevertheless, there have been significant advancements in specific areas of A.I. and robotics. Private companies like Google, Facebook and IBM lead the development in machine learning as they strive to find better ways to utilise their massive data sets or provide new tools and applications to the private and public sector.
A recent example in the machine learning field is IBM’s DeepQA project. DeepQA is “a computer system that can directly and precisely answer natural language questions over an open and broad range of knowledge” (IBM, 2010), in a manner similar to the Computer in the TV series Star Trek: The Next Generation. DeepQA’s goal is to deliver precise, meaningful responses and to synthesise, integrate and rapidly reason over the breadth of human knowledge. The techniques used to achieve this goal include Natural Language Processing, Information Retrieval, Machine Learning, Knowledge Representation and Reasoning, and massively parallel computation. Watson is the embodiment of DeepQA, and it made its first public appearance in the TV game show Jeopardy!
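At a very high level, published descriptions of DeepQA outline a generate-and-score pipeline: produce many candidate answers, gather evidence for each, and rank the candidates by confidence. The sketch below shows only that overall shape; the two-entry corpus, the word-overlap scorer and every name in it are toy assumptions with no relation to IBM’s actual implementation.

```python
# A toy generate-and-score question answering pipeline. The corpus and the
# single overlap scorer are stand-ins; DeepQA is reported to combine many
# evidence scorers with machine-learned weights over a vast corpus.

CORPUS = {
    "ottawa": "Ottawa is the capital city of Canada.",
    "toronto": "Toronto is the largest city in Canada.",
}

def tokens(text):
    """Lower-case words with trailing punctuation stripped."""
    return {w.strip("?.,") for w in text.lower().split()}

def generate_candidates(question):
    """Hypothesis generation: here, every corpus entry is a candidate."""
    return list(CORPUS)

def evidence_score(candidate, question):
    """Score a candidate by word overlap between question and its evidence."""
    q = tokens(question)
    return len(q & tokens(CORPUS[candidate])) / len(q)

def answer(question):
    """Rank all candidate answers by evidence-based confidence."""
    ranked = sorted(generate_candidates(question),
                    key=lambda c: evidence_score(c, question), reverse=True)
    return [(c, round(evidence_score(c, question), 2)) for c in ranked]

print(answer("What is the capital city of Canada?"))
```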
With regard to robotics, the number of private companies and public institutions developing more advanced, human-like robots is increasing at a slow but steady pace. For example, one such advanced robot, named HRP-4C, was presented by Japanese scientists in Tokyo (BBC, 2009). HRP-4C has 30 motors in its body for motion and eight motors on its face to create emotional expressions.
The interesting thing about robots is how easily humans project life onto such inanimate objects. As Behr (2011) argues, “It only takes a bit of interactivity before our minds go a step further and start projecting consciousness”, and Van der Loos (2007) explains: “It is not necessary that a robot be fully human-like in physical capability, but for the actions that it is capable of exhibiting, it must be capable of communicating the intention of doing them through, for example, gestures, voice and context”.
Human, machine and ethical considerations
As A.I. becomes more dynamic and evolutionary, how can people be sure that it will behave the way they want it to? What kind and degree of behavioural adaptability will people accept in robots, and how will they receive robots with consciousness? Is a super-intelligent A.I. a threat to humanity? Questions like these represent some of the ethical and practical considerations surrounding future advanced Artificial Intelligences and robots. The implementation of ethics in robots is still in its infancy (McLaren, 2006).
When software “breeds” or evolves today, it does so in order to meet goals that humans specify. In the future, we will want to set it up so that it improves its own goals. As machines race into unknown territory, the question is: Can we control them? Are they bound, ultimately, to get out of control? (Martin, 2000, p. 10)
Science-fiction literature is filled with stories about Artificial Intelligences, robots and their interactions with humans and society as a whole. Not all such endeavours tend to go well (for the humans), and to that end Isaac Asimov developed the three laws of robotics, which all robots should adhere to in order to keep humans safe:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Hard-wiring rules into an A.I.’s or robot’s programming is not a trivial matter, since rules can be inherently imperfect and have loopholes (Lang, 2002). Even Isaac Asimov had to include a zeroth law in his later work to compensate for exceptions:
- A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
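To see why such hard-wiring is non-trivial, consider the most naive encoding imaginable: the four laws as a priority-ordered checklist over a handful of boolean flags. The sketch below is exactly that naive encoding, with invented predicates; it flattens away the laws’ conditional clauses and illustrates the kind of loophole Lang (2002) warns about.

```python
from dataclasses import dataclass

# A deliberately naive encoding of Asimov's laws. Every predicate on the
# toy Action record is an invented assumption for illustration.

@dataclass
class Action:
    harms_human: bool = False
    lets_human_come_to_harm: bool = False
    disobeys_order: bool = False
    endangers_self: bool = False
    harms_humanity: bool = False

LAWS = [  # checked in priority order: zeroth, then first, second, third
    ("zeroth", lambda a: not a.harms_humanity),
    ("first", lambda a: not (a.harms_human or a.lets_human_come_to_harm)),
    ("second", lambda a: not a.disobeys_order),
    ("third", lambda a: not a.endangers_self),
]

def permitted(action):
    """Return (allowed, violated_law) for a proposed action."""
    for name, law_holds in LAWS:
        if not law_holds(action):
            return False, name
    return True, None

# The flat checklist already misbehaves: under Asimov's laws the third law
# yields to the first, yet this encoding vetoes a self-endangering rescue
# outright. The loophole lives in the encoding, not in the laws.
print(permitted(Action()))                     # (True, None)
print(permitted(Action(endangers_self=True)))  # (False, 'third')
```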
Why be afraid of what an A.I. might do, when every single newborn human has a similar potential? Do all humans really act ethically, and by whose standards? Society overcomes these ethical problems by teaching the young and guiding them through its moral landscape. Instead of creating rules and laws, people could raise any A.I. or robot to act as a social being, as humans do. A.I.s could be instilled with principles and ethical guidelines, and base their learning and actions upon them. Should robots then be part of the legal system and learn how to behave within the moral and ethical framework set by society and state law?
Galvan (2003) argues that a robot will never develop free will, since it will always be a technological product. Gips (1991) believes that implementing any theoretical philosophical framework in an A.I. is very challenging, as the theory has to be translated into a formal computational definition. Furthermore, there is the argument that cultural differences should be accommodated in every A.I. (Wagner, Cannon & Van der Loos, 2005).
Conclusion
The actual implementation of a proper Artificial Intelligence seems almost inevitable at some point in the not too distant future. Accelerating computing power and advancing software development techniques have set A.I. researchers on a path of discovery that will forever change the way humans interact with computers and machines in general. The nature of intelligence itself, and the way people perceive it, has been a strongly contested philosophical and psychological issue. Computationalism has provided scientists with a clearer and more formal way of defining the mental and cognitive processes of humans, in a way that can be translated into computer code.
Psychology, and especially cognitive psychology, plays an important role in the development of intelligent machines, and also reaps the benefits of such advancements by testing and improving theories of cognition. Artificial Intelligence has matured to a satisfying theoretical level, providing the basis for real implementations of intelligent agents once certain limitations are addressed. The most challenging issues remain people’s perception of robots and A.I. in general, and the ethical concerns that are raised. These issues are likely to be debated more extensively as intelligent agents enter people’s lives and start interacting with them.
A.I. can be intimidating to people who perceive it as a threat and think that humanity might become obsolete when robots populate the world, but it can also be an amazing achievement that helps the human race progress even further and solve many difficult problems (food supply, poverty, disaster aid, and more).
References
- Baars, B. J. (1988). A cognitive theory of consciousness. Cambridge: Cambridge University Press.
- Baars, B. J. (1997). In the theater of consciousness. Oxford: Oxford University Press.
- Baars, B. J., & Franklin, S. (2003). How conscious experience and working memory interact. Trends in Cognitive Sciences, 7, 166-172.
- Baddeley, A. D. (1992). Consciousness and working memory. Consciousness and Cognition, 1, 3-6.
- Baddeley, A. D. (2000). The episodic buffer: A new component of working memory? Trends in Cognitive Sciences, 4, 417-423.
- Baddeley, A. D., & Hitch G. J. (1974). Working memory. In G.A. Bower (Ed.), The psychology of learning and motivation. New York: Academic Press.
- Barsalou, L. W. (1999). Perceptual symbol systems. Behavioral and Brain Sciences, 22, 577-609.
- BBC. (2009). Life-like walking female robot. BBC. Retrieved from: http://news.bbc.co.uk/2/hi/7946780.stm
- Behr, R. (2011). Alone Together: Why We Expect More from Technology and Less from Each Other by Sherry Turkle – review. Guardian. Retrieved from: http://www.guardian.co.uk/books/2011/jan/30/alone-together-sherry-turkle-review
- Carlson, N., Martin, G. N., & Buskist, W. (2004). Psychology. (2nd ed.). Great Britain: Pearson Education Limited.
- Carter, M. (2007). Minds and Computers. Edinburgh, Scotland: Edinburgh University Press.
- Clark, A. (1997). Being there: Putting brain, body, and world together again. Cambridge, MA: MIT Press.
- D’Mello, S., & Franklin, S. (2009). Computational modeling/cognitive robotics complements functional modeling/experimental psychology. New Ideas in Psychology. Elsevier.
- Ericsson, K. A., & Kintsch, W. (1995). Long-term working memory. Psychological Review, 102, 211-245.
- Galvan, J. M. (2003). On technoethics. IEEE Robotics & Automation Magazine, 10, 58.
- Gips, J. (1991). Towards the Ethical Robot. The Second International Workshop on Human and Machine Cognition: Android Epistemology.
- Glenberg, A. M. (1997). What memory is for. Behavioral and Brain Sciences, 20, 1-19.
- Harnad, S. (1990). The symbol grounding problem. Physica D, 42, 335-346.
- IBM. (2010). The DeepQA Project. IBM. Retrieved from: http://www.research.ibm.com/deepqa/deepqa.shtml
- Kurzweil, R. (2005). The Singularity Is Near: When Humans Transcend Biology. New York: Penguin Group.
- Lang, C. (2002). Ethics for artificial intelligence. State-Wide Technology Symposium, Promise or peril, 1-18.
- Mackworth, A., & Poole, D. (2010). Artificial Intelligence: Foundations of Computational Agents. USA: Cambridge University Press.
- Martin, J. (2000) After the Internet: Alien Intelligence. Washington, D.C.: Capital Press.
- McLaren, B. M. (2006). Computational models of ethical reasoning: Challenges, initial steps, and future directions. IEEE Intelligent Systems, 21(4), 29-37.
- Russell, S., & Norvig, P. (2010). Artificial Intelligence: A modern approach. (3rd ed.). USA: Prentice Hall.
- Sternberg, R. (1999). The Nature of Cognition. USA: MIT Press.
- Varela, F. J., Thompson, E., & Rosch, E. (1991). The embodied mind. Cambridge, MA: MIT Press.
- Van der Loos, H. F. M. (2007). Ethics by design: A conceptual approach to personal and service robot systems. ICRA Roboethics Workshop, Rome, Italy: IEEE.
- Wagner, J. J., Cannon, D. M., & Van der Loos, H. F. M. (2005). Cross-cultural considerations in establishing roboethics for neuro-robot applications. 9th International Conference on Rehabilitation Robotics (ICORR 2005), 1-6. IEEE.