
25 September 2015

The Complex Phenomenon of Religion.

It's 25 years today since my father died. His death was one of the events that got me thinking about life, death, and all that. I dedicate this essay to:

Peter Harry Attwood (1935-1990).

Religion is sometimes portrayed as a simple phenomenon: a crutch for the weak, a "violent" control mechanism, and so on. Although these kinds of criticisms sometimes contain a grain of truth, religion is in fact a complex phenomenon that emerges from the interaction of a number of qualities, characteristics, and abilities that humans possess. In this essay I will try to outline a set of minimal common features of all religions and link them to an evolutionary account of humans.

The diagram below attempts to summarise some of the key factors involved and to show how these factors interact to produce the basic phenomena of religion. However, any given religion may include many more elements and be considerably more complex than this summary suggests. At the end of the essay I will add a few comments about Buddhism as a religion and about what makes Buddhism distinctive (or not).

Religion seems to minimally involve supernatural agents, morality, and an afterlife. I have argued that belief in all these is "natural", by which I mean they are emergent properties of the way our brains work. I do not mean that these are necessarily accurate intuitions in the sense of being true. However, as ideas which have guided human behaviour they have been very successful in helping us go from being just another species of primate, to the highly sophisticated cultures we live in today (and I include all present day human cultures in this). What follows is not a critique, but a description. There are possible critiques of every point, both in the conclusions of religieux and of the reasons for things that I am proposing here. But I want to outline a story about religion without getting bogged down in the critique of it. In most cases I've made the critique previously. 

Supernatural agents emerge from a combination of properties of the brain such as pareidolia (the propensity to see faces everywhere), agent detection, and theory of mind (Barrett; see also Why Are Karma and Rebirth Still Plausible?). Fundamental to the supernatural is ontological dualism and the matter/spirit dichotomy.

Theory of mind is tuned to make living in social groups feasible and means we tend to see other agents in human terms (anthropomorphism). Supernatural agents are human-like in their desires and goals, and counter-intuitive only in that they lack a physical body. Because this is minimally counter-intuitive it makes supernatural agents more interesting and memorable. Thus, human communities tend to be surrounded by a halo of supernatural agents. Lacking bodies, supernatural agents may possess associated abilities, such as the ability to move unhindered by physical obstructions, but they are often located in some physical object, such as a tree, rock, or home. Those who can bridge the two worlds of matter and spirit we call shamans. Spirits, of course, also operate in both worlds; if they remained wholly in their spirit world they would be a lot less interesting. For some reason the spirit world seems inherently leaky. Shamans interpret and use knowledge gained from spirits to guide decision making in the material realm. Supernatural agents can become gods, and when they do, shamans become priests.

Fundamental to this account of religion is the social nature of human beings. Any account of religion which rejects the social nature of humanity or demonizes the basic structures and functions of human groups is simply uninteresting (so that is almost all psychology and most of social theory inspired by French philosophers). Unfortunately, in this libertarian age there is a tendency to take a dismissive or critical stance on human groups. Social living undoubtedly involves compromises for the individual. But the evolutionary benefits massively outweigh any perceived loss of autonomy. What's more, human social groups look and work very much like other primate social groups. This has been apparent since Louis Leakey sent three young women to study chimpanzees, gorillas, and orangutans, beginning in the 1960s. The most revealing of these studies was Jane Goodall's work on chimpanzees at Gombe Stream, which showed chimp groups to share many traits with human groups. As social animals our behaviour is tuned towards being a member of a group, as it is in all other social primates.

Robin Dunbar showed that the average size of group in which a social animal lives is correlated with the ratio of the volume of the neo-cortex to the rest of the brain. For humans this predicts an average group size of ca. 150, a figure for which there is now considerable empirical support. The Dunbar Number represents a cognitive limit, beyond which we cannot maintain knowledge of each member of a group: their roles in hierarchies, mating preferences, past interactions; that is, the information we need to be well-informed group members. In practice humans typically organise themselves into units of about 15, 50, 150, 500, 1500 and so on, with groups of different sizes serving different functions and operating with differing levels of intimacy and knowledge. As well as collecting information through observation, we use theory of mind to infer the disposition of other group members. The smallest viable unit of humanity is probably the 150-sized group.

Social living depends for its success on the active participation of all group members and on social norms. Norms exist primarily to help the group function effectively, but they may also work indirectly, for example by strengthening group identity ("We are the people who...."). If social animals were, as economists claim, fundamentally selfish, then groups could not function. We are adapted to being cooperative. But there are temptations to freeload or break other group norms. Up to around the 150 number, groups maintain norms by simple observation: everyone knows everyone else's business.

Anthropomorphism allows us to relate to non-human beings as part of our group. We also have the ability to empathise with strangers, though empathy evolved to help us understand the internal disposition of other individuals or small groups. Empathy is personal, which is why we humans still have trouble comprehending large-scale disasters without some personal point of reference. Jared Diamond has noted that in places like the highlands of New Guinea, where the population is almost at maximum density for a hunter-gatherer lifestyle and competition for resources is thus intense, tolerance of strangers is low (as it is in other primate species). In many instances, strangers are killed on sight. However, surpluses and trade between groups make tolerance of strangers more feasible. Thus the factors which lead to civilisations (i.e. much larger groupings) also facilitated tolerance of strangers. Ara Norenzayan has argued that religion with "Big Gods" was a major factor in enabling the large-scale cooperation implied by civilisation. Large groups mean that keeping track of each group member becomes more difficult. Monitoring compliance with behavioural norms starts to break down.

Social groups which perceive an active halo of supernatural beings incorporated into their daily lives may rely on these supernatural agents as monitors of group norms (Norenzayan). In which case the role of the shaman is also expanded. The beings involved in monitoring are likely to become more active and present. They may begin to play an active role, for example punishing transgressive behaviour. Because supernatural agents are already counter-intuitive in lacking physical bodies, they can easily evolve in this direction. Those involved in monitoring the social sphere have a tendency to become omnipresent (the better to see you) and, as a result, omniscient. Once they start dishing out punishments they can become omnipotent as well. Thus ordinary supernatural agents can become gods.

Once gods emerge they typically require more elaborate acknowledgement, rather as a dominant member of the tribe gets first preference in food and mates. A group may enact elaborate and costly rituals aimed at securing the cooperation of spirits and gods. Making sacrifices (in the sense of giving scarce resources) helps to encourage participation in group norms (see also Martyrs Maketh the Religion). Costly sacrifices bolster the faith of followers. Those who officiate at such ceremonies are likely shamans initially, but they become focussed on interpreting and enacting the will of the gods rather than of spirits in general. In other words, they become priests. The prestige of priests rises with the prestige of the gods they serve. Along with sacrifice, priests may introduce arbitrary taboos that help define group identity. As Foucault noted, the power of the group or its leaders to shape the subject is matched by the desire of the subject to be shaped. As members of a social species we make ourselves into subjects of power; or even into the kind of subjects (selves) that accept the compromises of social lifestyles. As social primates we evolved to participate in social groups with hierarchies. On the other hand, evolution no longer entirely defines us - we did not evolve to use written communication, for example (which is why writing is so much more difficult than talking).

We have a tendency to think in terms of reasons and purposes - teleology. In teleological thinking, things happen for a reason. We exist for a reason. The world exists for a reason. In modern life we often seek reasons in individual psychology. In the past, other types of reasons included supernatural interference and magic. The stories we tell about these reasons for events become our mythology. Even so we are left with questions. If we are here for a reason, we want to know what it is (because it is far from obvious to most people). If following the group norms or the prescriptions of gods is supposed to make everything run smoothly, then why does it not? If gods are members of our tribe and can intervene to help us, why do they not?

Despite the emphasis on keeping group norms and associating this with the success of the group, life is patently unfair. We can be the very best group member, keep all the rules, and yet we still suffer misfortune, illness, and death. The world is unjust. But we tend to believe the opposite, i.e. that the world is just, that reasons make it so. If everything happens for a reason, then bad things also happen for a reason. But what could that reason possibly be? The meeting of injustice and teleology is extremely fruitful for religion, but before getting further into it we need to consider the afterlife.

The matter/spirit dichotomy seems to emerge naturally from generalising about human experience. Some people have vivid experiences of leaving their body for example which, on face value, would only be possible if the locus of experiencing is separate from the physical body. The very metaphors that we use to talk about aspects of lived experience tend to frame the matter/spirit dichotomy in a particular way. Matter is dull, lifeless, rigid. Spirit is light, lively, and infinitely flexible. Matter is low, spirit high. And so on (see Metaphors and Materialism). We understand life through Vitalism: living beings are matter made flexible by an inspiration of spirit. Spirit in many languages is closely associated with the breath—spiritus, qi, prāṇa, ātman, pneuma—perhaps the most important characteristic of living beings in the pre-scientific world.

The greatest injustice seems to be that our breath leaves us, i.e. we die. All living beings act to sustain and maintain their own existence, their own life. Self-consciousness gives us the knowledge of the certainty of our own death. In a dualistic worldview, death occurs when the spirit leaves the body. The body returns to being inanimate matter (dust to dust). In this worldview, spirit is not affected by death in the same way as matter. Indeed spirit is not affected by death at all. Once the spirit leaves the body a number of post-mortem possibilities exist: hanging around as a supernatural agent; travelling to another world (to the realm of the ancestors for example, or to paradise); or taking another human form. The precise workings are specific to cultures, but all cultures seem to have an afterlife and the variations are limited to one or other of these possibilities.

Something interesting happens when we combine normative morality, teleological thinking, and the afterlife. If things happen for a reason, and one of the main reasons is our own behaviour, and there is injustice, then it stands to reason that our own behaviour is (potentially) a cause of injustice. We link behaviour to outcomes. And if everything happens for a reason, it's hard to imagine the morally good not being rewarded and the morally wicked not being punished. And if something bad happens, then maybe we have transgressed in some way. In which case a shaman or priest must consult the unseen but all-seeing supernatural monitors (this is incidentally why the Buddha had to have access to this knowledge). This world, the material world composed primarily of matter, is manifestly unjust. By contrast, an afterlife is very much a world of spirit, and as the basic metaphors show, the world of spirit is the polar opposite of the world of matter. If the world of matter is unjust (and it is) then the world of spirit is by necessity just. The rules of the afterlife must be very different. Gods hold sway there, for example; gods whose reason for being is to supervise the behaviour of humans. So it is entirely unsurprising that the function of an afterlife, in those communities which practice morality, is judgement of the dead. This happens in all the major religions, and dates back at least to the ancient Egyptian Book of the Dead.

Here we have, I think, all the major components of religion. And they emerge from lower-level, relatively simple properties of the (social) human mind at work. Thus religion is a natural phenomenon. It is not, as opponents of religion like to assert, something artificial that is superimposed on societies, but something that naturally emerges when anatomically modern humans with a pre-scientific worldview live together. If chimps were only a little more like us, they too would develop like this. Neanderthals almost certainly had religion of a sort. The naturalness of religion predicts that every society of humans ought to have religion or something like it. And they do, except where people are WEIRD: Western, Educated, Industrialised, Rich, and Democratic. WEIRD people are psychological outliers from the rest of humanity. But WEIRD culture is built upon layers of religious culture, with Christianity superimposed on earlier forms of religion (and perhaps several layers of this). Again, for emphasis, the naturalness of religion does not mean that a religious account of the world is either accurate or precise. It is certainly successful, depending on how one measures success, but as a description of the world the religious view tends to be flawed, making it both inaccurate and imprecise.

Religious communities have some distinct advantages over non-religious communities in terms of sustaining group identity and encouraging cooperation. The Abrahamic religions certainly have many millions of followers, and the followers of these religions have established a vast hegemony over most of the planet. On the other hand, Christianity seems to be waning. Religious ideologies are giving way to political ideologies. Communism was one such ideology, and it too is on the wane. Neoliberalism seems to have survived the near collapse of the world's economies and continues to dominate public discourse on politics and economics. Liberal Humanism still seems to be a potent force for good, though as we have seen it cannot be successfully linked to Neoliberal economics.


There are those who argue that Buddhism is not a religion. This is naive at best, and probably disingenuous. Buddhism has all the same kinds of concerns as other religions, all of the main components outlined above—supernatural agents, morality, and an afterlife—and many of the secondary components as well. In many ways, Buddhism is simply another manifestation of the same dynamic that produces religious ideas and practices in other groups. Sure we have an abstract supernatural monitor, but karma does exactly the same job as Anubis, Varuṇa, Mazda, or Jehovah in monitoring behaviour. It's merely a quantitative difference, not a qualitative one. WEIRD Buddhists play down the halo of supernatural beings, but traditional Buddhist societies in Asia all have folk beliefs which involve spirits (e.g. Burmese nat) and many similar animistic beliefs, such as tree spirits (rukkhadevatā) are Canonical. 

David Chapman (@meaningness) and I had a very interesting exchange on Twitter a few days ago (storified). DC noted that some of those who are opposed to secularisation of mindfulness training, are concerned about disconnecting mindfulness from "Buddhist ethics". They seem to argue that the problem is that mindfulness without ethics is either meaningless or dangerous, or both. DC's point was that there was nothing distinctive about Buddhist ethics and that, in the USA at least, what masquerades as "Buddhist" ethics is simply the prevailing ethics of WEIRD North America. So to argue against mindfulness being taught separately from Buddhist ethics is meaningless. For example Tricycle Magazine has run positive stories on Buddhists in the US military. If soldiers can be Buddhists, then Buddhist ethics really do have no meaning. Indeed there is nothing very distinctive about Buddhist ethics more generally, nothing that distinguishes Buddhist ethics from, say, Christian ethics. Sure, the stated rationale for being ethical is different, but the outcome is the same: love thy neighbour. (David has started his blog series on this: “Buddhist ethics” is a fraud).

Certainly Buddhism is not the only religion to use a variety of religious techniques for working with the mind, including concentration and reflection exercises. Meditation was a word in English long before Buddhism came on the scene (noted ca. 1200 CE). Arguably, all the practices that we associate with Buddhism were in fact borrowed from other religions anyway (particularly Brahmanism and Jainism). According to Buddhism's own mythology, meditation was already being practised to a very high degree before Buddhism came into being. The Buddha simply adapted procedures he had already learned.

So is there anything about Buddhism as a religion that is distinctive? Some would argue that pratītya-samutpāda is distinctively Buddhist. However too many of us portray conditioned arising as a theory of cause and effect, or worse, a Theory of Everything. It is certainly a failure as the latter, and far from being very useful in the former role (the words involved don't even mean caused). Since almost everyone seems confused about the domain of application of this idea, one wonders whether Buddhists can lay claim to the theory at all. If Buddhists make pratītyasamutpāda into an ontology then pratītyasamutpāda would hardly seem to be Buddhist any longer. Nowadays, Buddhists all seem to think that having read about nirvāṇa or śūnyatā in a book makes one an expert on "reality".

DC and I tentatively agreed that any distinction that Buddhism might have is probably in the area of cultivating states in which sense-experience and ordinary mental-experience cease, what I would call nirodha-samāpatti or śūnyatā-vimokṣa etc. It is these states in particular that seem to promote the transformation of the mind that makes Buddhism distinctive. It's just unfortunate that we have so many books about these states, and so many people talking about them from having read the books (and writing books on the basis of having read the books), and so few people who experience such states. The thing that distinguishes Buddhism is something that only a tiny minority are realistically ever going to seriously cultivate, and probably a minority of them are going to succeed in experiencing. So Buddhism in practice, for the vast majority consists in beliefs and activities that are not distinctively Buddhist at all - loving your neighbours, communal singing, relaxation techniques, philosophical speculation, propitiation of supernatural agents, and so on.

And while some people are having awakenings, the level of noise through which they have to communicate is overwhelming. Buddhists have adopted so much psychological and psycho-analytic jargon that Buddhism as presented can seem indistinguishable from either at times. One gets the sense that today's "lay" Buddhism is closely aligned with the goals of psychologists. Not only this but we also get a lot of interference from pseudo-science, Advaita Vedanta, and home grown philosophies.

So, to sum up, religion is a natural phenomenon. It emerges from, is an emergent property of, a brain evolved for living in large social groups. A religious worldview makes sense to so many people, even WEIRD people, because it fits with our non-reflective beliefs about the world. Buddhism sits squarely in the middle of this as another religious worldview. But this does not mean that a religious worldview is accurate or precise, or that a secularised version of religion is an improvement on religion per se. Secularised versions of Buddhism are simply religion tailored for WEIRD people. They are more appealing to secularists who none the less feel that something is missing from their lives (because they are evolved to be religious). If Buddhism is distinctive, it is distinctive in ways that the vast majority of people will never have access to.

The main point I take from this is that religion is comprehensible. People who hold to religious views are comprehensible. While I think religious views are erroneous, I can see why so many people disagree, and why religion remains so compelling for so many people. I can sympathise with them. And while I'm not an evangelist, it does make it easier for me to stay in dialogue with, for example, members of my family who are committed Christians. As with the problem of communicating evolution, part of the problem with religion remaining plausible is the sheer ineptitude of scientists as communicators - their remarkable ability to understand string theory, or whatever, seems to be matched by an astounding lack of insight into their own species. And philosophers, whose job it is to make the world comprehensible, have also largely failed. They fail both at making new discoveries comprehensible and at communicating why new discoveries are important. And when they fail, priests and other charlatans step into the gap, and that too is understandable.


References to particular works or thinkers that are not linked to directly can be checked in the bibliography tab of the blog. 

08 May 2015

What can the Turing Test Tell Us?

Alan Turing's contribution to mathematics, cryptography and computer science was inestimable. Not only did he shorten World War Two, saving thousands of lives, he advanced us onto the path of digital computers. His suicide after being coerced into hormone treatment is a massive blot on the intellectual landscape in Britain. It is an enduring source of shame. Turing's work remained classified for decades because of the fear that war might break out again and knowing how to break the complex codes used by the Germans was too valuable an advantage to throw away. Nowadays, cryptography has advanced to the point where keeping Turing's work a secret no longer confers much advantage.

Turing was prescient in many ways. Not only did he set the paradigm for how digital computers work, but he understood that one day such machines might become so sophisticated that they were indistinguishable from intelligent beings. He was the first person to consider artificial intelligence (AI). Thinking about AI led him to construct one of the most famous thought experiments ever proposed. The Turing Test is not only a way to distinguish intelligence, it is actually a way of thinking about intelligence without getting bogged down in the details of how intelligence works. For Turing and many of us, the argument is that if a machine can communicate in a way that is indistinguishable from a human being, then we must assume that it is intelligent, however it achieves this. It's a pragmatic definition of intelligence and one that leads to a practical threshold, beyond which all AI researchers wish to pass.

However, underpinning the test are some assumptions about communication, language, and intelligence that I wish to examine. The first is that all human beings seem to be considered good judges for the Turing Test. I think a good case can be made for considering this a false assumption. The second is the assumption that mere word use is how we define not only intelligence, but language. Both of these are demonstrably false. If the assumptions the test is built on are false, then we need to rethink what the test is measuring, and whether we still feel this is a sufficient measure of intelligence.

Turing Judges.

The idea of the Turing Test is that a person sits at a teletype machine that prints texts and allows the operator to type text. The human and the test subject sit in different rooms and use the teletype machines to communicate. A machine can be said to pass the Turing Test if a human operator of the teletype cannot tell that the subject is not human. This puts word use at the forefront of Turing's definition of what it means to be intelligent. 
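The teletype protocol described above can be sketched as a toy program. Everything here is an illustrative stand-in, not a real chatbot or Turing's own formulation: the function names and the canned replies are invented for the sketch, and a real test would of course put a human judge at the keyboard.

```python
def machine_reply(prompt: str) -> str:
    """A stand-in 'machine' subject: canned responses keyed on the prompt.

    A real candidate program would generate replies; this lookup table
    merely makes the protocol concrete.
    """
    canned = {
        "hello": "Hello. How are you today?",
        "are you human?": "What an odd question. Of course I am.",
    }
    return canned.get(prompt.strip().lower(), "Could you rephrase that?")


def run_session(judge_questions, reply_fn):
    """One teletype session: the judge sends questions and collects replies.

    The judge never sees the subject, only the text that comes back.
    """
    transcript = []
    for question in judge_questions:
        transcript.append((question, reply_fn(question)))
    return transcript


if __name__ == "__main__":
    transcript = run_session(["hello", "are you human?"], machine_reply)
    for question, answer in transcript:
        print(f"JUDGE: {question}\nSUBJECT: {answer}")
    # The machine "passes" only if a judge reading this transcript
    # cannot tell the subject from a human interlocutor.
```

Even this trivial sketch makes the essay's point visible: the entire judgement rests on typed words, and nothing else about the subject is available to the judge.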

Human beings' use of language is indeed one of our defining features. Some animals use faculties that hint at a proto-language facility, but no animal uses language in the sense that we do. At best animals show one or two of the target properties that define language. They might, for example, have several grunts that indicate objects (often types of predator), but no syntax or grammar. There has been significant interest in programs that sought to teach apes to use language, either as symbols or gestures, but most of this research has been discredited. Koko the gorilla was supposedly one of the most sophisticated language users, but her "language" in fact consisted of rapidly cycling through her repertoire of signs, with the handler picking the signs that made most sense to them. In other experiments, subtle cues from handlers told the animals which signs to use. More rigorous experiments show that chimps can understand some language, particularly nouns, but then so can grey parrots, some dogs, and other animals. Crucially, they don't use language to communicate. In fact, a far more impressive demonstration of intelligence is the ability of crows to improvise tools to retrieve food, or the coordinated pack hunting of aquatic mammals like orcas and dolphins. So animals do not use language, but are none the less intelligent.

Humans are all at different levels when it comes to language use. Some of us are extraordinarily gifted with language and others struggle with the basics. The distinctions are magnified when we restrict language to just written words. This restriction alone is doubtful: written language, even if used for a dialogue, is only a small part of what language use consists of. A great deal of what we communicate is conveyed by tone of voice, facial expression, hand gesture, or body posture. People who can use written language well are rare. So a Turing judge is not simply distinguishing a machine from a human, but is placing a machine on a scale that includes novelists and football hooligans. What happens when the subject responds to every question by chanting "Oi, oi, oi, come on you reds!"? Intelligence, particularly as measured by word use, is not a simple proposition.

The Turing Test using text alone would be more interesting if we could define in advance what elements would convince us that the generator of the text was human. To the best of my knowledge this has never been achieved. We don't know what criteria constitute a valid or successful test. We just assume that any generic human being is a good judge, and there's no reason to believe that this is true. As I've mentioned many times now, individuals are actually quite poor at solo reasoning tasks (see An Argumentative Theory of Reason). Reason does not work the way we thought it did. Mercier & Sperber have argued that at least one of the many fallacies that we almost inevitably fall prey to—confirmation bias—is a feature of reason, rather than a bug. M&S argue that this is because reason evolved to help small groups make decisions, and those who make proposals think and argue differently to those who critique them. On this account, any given individual would most likely be a poor Turing judge.

Human beings evolved to use language. Almost without exception, we all use it without giving it much thought. Certain disorders or diseases may prevent language use, but these stand out against the background of general language use: from the Amazon jungles to the African veldt, humans speak. The likelihood is that we've been using language for tens of thousands of years (see When Did Language Evolve?). But writing is another story. Writing is unusual, in that only a minority of living languages are written, or were before contact with Europe. Writing was absent from the Americas, from the Pacific, and from Australia and New Guinea; the last two have hundreds of languages each. Unlike speaking, writing is something that we learn with difficulty. No child spontaneously begins to communicate in writing. Writing co-opts skills evolved for other purposes, and as a consequence our ability to use writing to express ourselves is extremely variable. Most people are not very good at it. Those who are, are usually celebrated as extraordinary individuals. Writers and their oeuvres are very important in literary cultures.

So to choose writing as the medium of a test for intelligence is an extremely doubtful choice. We don't expect intelligent human beings to be good at writing. Many highly intelligent people are lousy writers. We don't even expect gifted speakers to be good at writing, which is why politicians do not write their own speeches! Writing is not a representative skill. Indeed, it masks our inherent verbal skill.

In fact it might be better to use another skill altogether, i.e. tool making. A crow can modify found objects (specifically, bending wire into a hook) to retrieve food items. Another important manifestation of intelligence is the ability to work in groups. Some orcas, for example, coordinate their movements to create a bow-wave that can knock a seal off an ice floe. This is a feat that involves considerable ability at abstract thought, and they pass this acquired knowledge on to their offspring. The ability to fashion a tool, or to coordinate actions to achieve a goal, is at least as interesting a manifestation of intelligence as language.

Language and Recognition.

My landlady talks to her cats as though they understand her. She has one-sided conversations with them, explaining to them narratively when their behaviour causes her discomfort, as though they might understand and desist (they never do). She's not peculiar in this. Many people feel their pets are intelligent and can understand them even if they cannot speak. Why is this? Well, at least in part, it's because we recognise certain elements of posture in animals corresponding to emotions. The basic emotions are not so different in our pets that we cannot accurately understand their disposition: happy, content, excited, tired, frightened, angry, desirous. With a little study we can even pick up nuances. A dog that barks with ears pinned back is saying something different to one that has its ears forward. A wagging tail or a purr can be a different signal depending on circumstances. A lot of it has to do with displays and reception of affection.

Intelligence is not simply about words or language. Depending on our expectations, the ability to follow instructions (dogs) or the ability to ignore instructions (cats) can be judged intelligent. The phrase "emotional intelligence" is now something of a cliché, but it tells us something very important about what intelligence is. A dog that responds to facial expressions, to posture, and to tone of voice is displaying intelligence of a kind that has a great deal of value to us. Some people value relationships with animals precisely because the communication stays at this level. A dog does not try to deceive, or communicate in confusingly abstract terms. An animal broadcasts its own disposition ("emotions") without filtering, and it responds directly to human dispositions. Many people would say that this type of relationship is more honest.

There's a terrible, but morbidly fascinating, neurological condition called Capgras Syndrome. In this condition a person can recognise the physical features of humans, but their ability to connect those features with emotions is compromised. Usually when we see a familiar face there is an accompanying emotion that tells us what our relationship with the person is. If we feel disgust or anger on recognition, then we know them to be enemies, perhaps dangerous, and we act to avoid or confront them. If the emotion is joy or love, then we know it's a friend or loved one. In Capgras the emotional resonance is absent. With loved ones the absence of that emotion is so strange that the most plausible explanation often seems to be that these are mere replicas of loved ones, or lookalikes. The lack of emotion in response to a known face can be incapacitating, in the sense of disrupting every existing relationship. In the classic novel The Echo Maker, by Richard Powers, the man with Capgras is able to recognise and respond to his sister's voice on the telephone, but does not feel anything when he sees her. The same is true for his home and even his dog. The only way he can explain it is that they are all substitutes cleverly recreated to fool him. Only he isn't "fooled", which creates a nightmarish situation for him.

The problem, then, with the Turing Test is that it is rooted in the old Victorian conceit about reason being our highest faculty. Reason was, until quite recently, considered to float above the mere bodily processes of emotion. In other words it was very much caught up in Cartesian mind/body dualism and the metaphors associated with matter and spirit (See Metaphors and Materialism). Reason is associated, by default, with spirit, since it seems to be distinct from emotion. We now know that nothing could be further from the truth. Cut off from emotions our minds cannot function properly. We cannot make decisions, cannot assess information, and cannot take responsibility for our actions. The Turing test assumes that intelligence is an abstract quality, separable from the body. But these assumptions are demonstrably false.

What Kind of Intelligence?

I've already pointed out that language is more than words. I've expanded the idea of language to include the prosody, gesture, and posture associated with the words (which, as we know, shape the meaning of the words). An ironic eyebrow lift can make words mean something quite different from their face value. The ability to use and detect irony depends on non-verbal cues. This is why, for example, irony seldom works on Twitter. Text tends to be taken at face value, and attempts at irony simply cause misunderstanding. This is true in all text-based media. In the absence of emotional cues we are forced to try to interpolate the disposition of the interlocutor. Getting a computer to work with irony would be an interesting test of intelligence!

Indeed, trying to assess the internal disposition of the hidden interlocutor is a key aspect of the Turing Test. Faced with a Turing Test subject, I suspect that most of us would ask questions designed to evoke emotional responses. This is because we intuit that what makes us human is not the words we use, but the feelings we communicate. Someone who acts without remorse is routinely referred to as "inhuman". In most cases humans are not good at making empathetic connections using text - which is why text-based online forums seem to be populated with borderline, if not outright, sociopaths. It's the medium, not the message. Personally, I find that doing a lot of online communication produces a profound sense of alienation and brings out my underlying psychopathology. Writing an essay, however, is a far more productive exercise than trying to dialogue in text. Even the telephone, with its limited frequency range, is better for communicating, because tone of voice and inflection communicate enough to establish an empathetic connection.

So if a computer can play chess better than a human being (albeit with considerable help from a team of programmers) then that is impressive, but not intelligent. The computer plays well because it does not feel anything, does not have to respond to its environment (internal or external), and does not have any sense of having won or lost. It has nothing for us to relate to. Similarly, even if a computer ever managed to use language with any kind of facility, i.e. if it could form grammatically and idiomatically correct sentences, it would probably still seem inhuman because it would not share our concerns and values. It would not empathise with us, nor us with it. 

I suppose that in the long run a computer might be able to simulate both language and an interest in our values so that in text form it might fool a human being. But would this constitute intelligence? I think not. A friendly dog would be more intelligent by far. Which is not to say that such a computer would not be a powerful tool. But we'd be better off using it to predict the weather or model a genome than trying to simulate what any of us, or any dog, can do effortlessly.

An argument against this point of view is that our minds are tuned to over-estimate intelligence or emotions in the objects we see. We see faces in clouds and agency in inanimate objects. So an approximation of intelligence would not have to be all that sophisticated to stimulate the emotions in us that would make us judge it intelligent. For example, in movies robots are often given a minimal ability to emote in order to make them sympathetic characters. The robot, Number 5, in the film Short Circuit has "eyebrows" and an emotionally expressive voice, and this is enough for us to empathise with it. So perhaps we will be easily fooled into believing in machine intelligence. But this only shows that simulating intelligence is insufficiently impressive, precisely because people are so easily fooled.

This point is brilliantly made in the movie Blade Runner. The Voight-Kampff test is designed to distinguish "replicants" from humans based on subtle differences in emotional responses. The replicants are otherwise indistinguishable from humans. The test of Rachael is particularly difficult because she has been raised to believe she is human (the logic of the movie breaks down to some extent, because we never learn why Deckard persists in asking 100 questions if Rachael is answering satisfactorily). Ridley Scott has muddied the waters further by suggesting that the blade runner, Deckard, is himself a replicant, though based on the original story and the context of the film this seems an unlikely twist.

So there are two major problems here: what makes a good Turing test; and who makes a good Turing judge. The whole set up seems under-defined and poorly thought out at present. My impression is that passing the Turing test as it is usually specified is a trivial matter that would tell us nothing about artificial intelligence or humanity that we do not already know. 


It seems to me that we have many reasons to rethink the Turing Test. It is rooted in a series of assumptions that are untenable in light of contemporary knowledge, and as a test for intelligence it no longer seems reasonable. For one thing, the way it defines intelligence is far too limited. The definition of intelligence it uses is rooted in Cartesian Dualism, which sees intelligence as an abstract quality, not rooted in physicality, not embodied. And this is simply false. Emotions, as felt in the body, play a key role in how we process information and make decisions.

As much as anything, our decision on whether an entity is intelligent will be based on how we feel about it, how interacting with it feels to us. We will compare the feeling of interacting with the unknown entity to how it feels to interact with an intelligent being. Until it feels right, we will not judge that entity intelligent.

In Turing's day we simply did not understand how decision making worked. We still thought of abstract reasoning as a detachable mental function unrelated to being embodied. We still saw reason as the antithesis of emotion. Now we know that emotion is an indivisible part of the process. We must now consider that reason itself may not have evolved for seeking truth, but merely for optimising decision making in small groups. At the very least, the lone teletype operator needs to be replaced with a group of people, and mere words must be replaced by tasks that involve creativity and cooperation. A machine ought to show the ability to cooperate with a human being to achieve a shared goal before being judged "intelligent". The idea that we can judge intelligence at arm's length, rationally, dispassionately, has little interest or value any more. We judge intelligence through interaction, physical interaction as much as anything.

As George Lakoff and his colleagues have shown, abstract thought is rooted in metaphors deriving from how we physically interact with the world. Our intelligence is embodied and the idea of disembodied intelligence is no longer tenable. As interesting as the idea may appear, there is no ghost in the machine that can be extracted or instantiated and maintained apart from the body. Any attempts to create disembodied intelligence will only result in a simulacrum, not in intelligence that we can recognise as such.

Buddhists will often smugly claim this as their own insight, though most Buddhists I know are crypto-dualists (most believe in life after death and in karma, for example). I've argued at length that the Buddha's insight was into the nature of experience and that he avoided drawing ontological conclusions. Thus, although we read the texts as a critique of doctrines involving souls, the methods of Buddhism were always different from the methods of Brahmanism. The Brahmins sought to experience the ātman as a reality, and from the Upaniṣadic description ātman could be experienced as a sense of oneness or connection with everything in the world (oceanic boundary loss). Buddhists deconstructed experience itself to show that nothing in experience persisted, and that therefore, even if there were a soul, we must either always experience it or never experience it; and since we start off not experiencing it, no permanent soul can ever be experienced (which is not a comment on whether or not such a soul exists!). Therefore the experiences of the Brahmins were of something other than ātman. Only after Buddhists had started down the road of misguided ontological speculation did this become an opinion about the existence of a soul. So the superficial similarities between ancient Buddhist and modern scientific views are an accident of a philosophical wrong turn on the part of Buddhists. They got it partly right by accident, which is not really worth being smug about.

History shows that we must proceed with real caution here. Our Western views on intelligence have been subject to extreme bias in the past and this has led to some horrific consequences for those people who failed our tests for completely bogus reasons. We must constantly subject our views on intelligence to the most rigorous criticism and scepticism we are capable of. Our mistakes in this field ought to haunt us and make us extremely uncomfortable. This is yet another reason why tests for intelligence ought to require more interactivity. If we do create intelligence we need to know we can get along with it, and it with us. And we know that we have a poor record on this score.

The Turing Test seems not to have been updated to take account of what we now know about ourselves. The test itself is anachronistic. The method is faulty, because it is based on a faulty understanding of intelligence and decision making. We are not even asking the correct question about intelligence. With all due respect to Alan Turing, he was a man of his time, a glorious pioneer, but we've moved on since he came up with this idea, and it has had its day.


See also: Why Artificial Intelligences Will Never Be Like Us and Aliens Will Be Just Like Us. (27 June 2014)

27 June 2014

Why Artificial Intelligences Will Never Be Like Us and Aliens Will Be Just Like Us.

"Yet across the gulf of space, minds that are to our minds as ours are to those of the beasts that perish, intellects vast and cool and unsympathetic, regarded this earth with envious eyes, and slowly and surely drew their plans against us." (H.G. Wells, The War of the Worlds)
Artificial Intelligence (AI) is one of the great memes of science fiction, and as our lives come to resemble scifi stories ever more, we can't help but speculate about what an AI will be like. Hollywood aside, we seem to imagine that AIs will be more or less like us, because we aim to make them like us. And as part of that, we will make them with affection for, or at least obedience to, us. Asimov's Laws of Robotics are the best-known expression of this. And even if AIs end up turning against us, it will be for understandable reasons.

Extra-terrestrial aliens, on the other hand, will be incomprehensible. "It's life, Jim, but not as we know it." We're not even sure that we'll recognise alien life when we see it; not even sure that we have a definition of life that will cover aliens. It goes without saying that aliens will behave in unpredictable ways and will almost certainly be hostile to humanity. We won't understand their minds or bodies, and we will survive only by accident (War of the Worlds, Alien) or through Promethean cunning (Footfall, Independence Day). Aliens will surprise us, baffle us, and confuse us (though hidden in this narrative is a projection of fears, both rational and irrational).

In this essay I will argue that we have this backwards: in fact AI will be incomprehensible to us, while aliens will be hauntingly familiar. This essay started off as a thought experiment I was conducting about aliens and a comment on a newspaper story on AI. Since then it's become a bit more topical as a computer program known as a chatbot was trumpeted as having "passed the Turing Test for the first time". This turned out to be a rather inflated version of events. In reality a chatbot largely failed to convince the majority of people that it was a person despite a minor cheat that lowered the bar. The chatbot was presented as a foreigner with poor English and was still mostly unconvincing. 

But here's the thing. Why do we expect AI to be able to imitate a human being? What points of reference would a computer program ever have to enable it to do so?

Robots Will Never Be Like Us.

There are some fundamental errors in the way that AI people think about intelligence that will begin to put limits on their progress, if they haven't already. The main one is that they don't see that human consciousness is embodied. Current AI models tacitly subscribe to a strong form of Cartesian mind/body dualism: they believe that they can create a mind without a body. There's now a good deal of research to show that our minds are not separable from our bodies. I've probably cited four names more than any others when considering consciousness: George Lakoff, Mark Johnson, Antonio Damasio, and Thomas Metzinger. What these thinkers collectively show is that our minds are very much tied to our bodies. Our abstract thoughts are voiced using metaphors drawn from how we physically interact with the world. Their way of understanding consciousness posits the modelling of our physical states as the basis for simple consciousness. How does a disembodied mind do that? We can only suppose that it cannot.

One may argue that a robot body is like a human body, and that an embodied robot might be able to build a mind that is like ours through its robot body. But the robot is not using its brain primarily to sustain homoeostasis, mainly because it does not rely on homoeostasis for continued existence. And even other mammals don't have minds like ours. Because of shared evolutionary history we might share some basic physiological responses to gross stimuli that are good adaptations for survival, but their thoughts are very different because their bodies, and particularly their sensory apparatus, are different. An arboreal creature is just not going to structure its world the way a plains dweller or an aquatic animal does. Is there any reason to suppose that a dolphin constructs the same kind of world as we do? And if not, then what about a mind with no body at all? Maybe we could communicate with a dolphin, with difficulty and a great deal of imagination on our part. But with a machine? It will be "Shaka, when the walls fell." For the uninitiated, this is a reference to a classic first-contact scifi story: the Star Trek: The Next Generation episode "Darmok". The aliens in question communicate in metaphors drawn exclusively from their own mythology, making them incomprehensible to outsiders, except Picard and his crew of course (there is a long, very nerdy article about this on The Atlantic website). Compare Dan Everett's story of learning to communicate with the Pirahã people of Amazonia in his book Don't Sleep, There Are Snakes.

Although Alan Turing was a mathematical genius, he was not a genius of psychology, and in my opinion he made a fundamental error in his Turing Test. Our Theory of Mind is tuned to assume that other minds are like ours. If we can conceive of any kind of mind independent of us, then we assume that it is like us. This has survival value, but it also means we invent anthropomorphic gods, for example. A machine mind is not going to be at all like us, but that won't stop us unconsciously projecting human qualities onto it. Hypersensitive Agency Detection (as described by Justin L. Barrett) is likely to mean that even if a machine does pass the Turing Test, we will have overestimated the extent to which it is an agent.

The Turing Test is thus a flawed model for evaluating another mind, because of limitations in our equipment for assessing other minds. The Turing Test assumes that all humans are good judges of intelligence, but we aren't. We are the beings who see faces everywhere, who get caught up in the lives of soap opera characters and treat rain clouds as intentional agents. We are the people who already suspect that our computers have minds of their own, because they break down in incomprehensible ways at inconvenient times, and that looks like agency to us! (Is there ever a good time for a computer to break?) The fact that any inanimate object can seem like an intentional agent to us disqualifies us as judges of the Turing Test.

AIs, even those with robot bodies, will sense themselves and the world in ways that will always be fundamentally different to ours. We learn about cause and effect from the experience of bringing our limbs under conscious control, by grabbing and pushing objects. We learn about the physical parameters of our universe the same way. Will a robot really understand in the same way? Even if we set them up to learn heuristically through electronic senses and a computer simulation of a brain, they will learn about the world in a way that is entirely different from the way we learned about it. They will never experience the world as we do. AIs will always be alien to us.

All life on the planet is the product of 3.5 billion years of evolution. Good luck simulating that in a way that is not detectable as a simulation. At present we can't even convincingly simulate a single-celled organism. Life is incredibly complex, as a 1:1 million scale model of a synapse demonstrates.

Aliens Will Be Just Like Us.

Scifi stories like to make aliens as alien as possible, usually by making them irrational and unpredictable (though this is usually underlain by a more comprehensible premise - see below).

In fact we live in a universe with limitations: 96 naturally occurring elements with predictable chemistry; four fundamental forces; and so on. Yes, there might be weird quantum stuff going on, but in bodies made of septillions (10²⁴) of atoms we'd never know about it without incredibly sophisticated technology. On the human scale we live in a more or less Newtonian universe.

Life as we know it involves exploiting energy gradients and using chemical reactions to move stuff where it wouldn't go on its own. While the gaps in our knowledge still technically allow for vitalistic readings of nature, they do not remove the limitations imposed on life by chemistry: elements have strictly limited behaviour, the basics of which can be studied and understood in a few years. It takes a few more years to understand all the ways that chemistry can be exploited, and we'll never exhaust all of the possibilities of combining atoms in novel ways. But the possibilities are comprehensible, and new combinations have predictable behaviour. Many new drugs are now modelled on computers as a first step.

So the materials and tools available to solve problems, and in fact most of the problems themselves, are the same everywhere in the universe. A spaceship is likely to be made of metals. Ceramics are another option, but they require even higher temperatures to produce and tend to be brittle. Ceramics sophisticated enough to do the job suggest a sophisticated metal-working culture in the background; metal technology is so much easier to develop. Iron is one of the most versatile and abundant metals: other mid-periodic-table metallic elements (aluminium, titanium, vanadium, chromium, cobalt, nickel, copper, zinc, etc.) make a huge variety of chemical combinations, but for pure metal and useful alloys, iron is king. Iron alloys give the combination of chemical stability, strength-to-weight ratio, ductility, and melting point needed to make a spaceship. So our aliens are most likely going to come from a planet with abundant metals, probably iron, and their spaceship is going to make extensive use of metals. The metals aliens use will be completely pervious to our analytical techniques.

Now, in the early stages of working iron one needs a fairly robust body: one has to work a bellows, wield tongs and hammer, and generally be pretty strong. That puts a lower limit on the kind of body an alien will have, though the strength of gravity on the alien planet will vary this parameter. Very gracile or very small aliens probably wouldn't make it into space, because they could not have got through the blacksmithing phase to more sophisticated metal-working techniques. A metal-working culture also implies an ability to work together over long periods of time for quite abstract goals, like the creation of alloys composed of metals extracted from ores buried in the ground. Thus our aliens will be social animals by necessity. But simple herd animals lack the kind of initiative it takes to develop tools, so our aliens won't be social in the way that cows or horses are. And too little social organisation would make the complex tasks of mining and smelting enough metal impossible, so no solitary predators in space either.

The big problem with any budding space program is getting off the ground. Gravity and the possibilities of converting energy put further practical limitations on the possibilities. Since chemical reactions are going to be the main source of energy, and these are fixed, gravity will be the limiting factor. The mass of the payload has to be small enough not to be too costly or simply too heavy, and large enough to fit a being in (a being at least the size of a blacksmith). If the gravity of an alien planet were much higher than ours it would make getting into space impractical - advanced technology might theoretically overcome this, but with technology one usually works through stages: no early stage means no later stages. If the gravity of a planet were much lower than ours, then its density would make large concentrations of metals unlikely. It would be easier to get into space, but without the materials available to make it possible and sustainable. Also, the planet would struggle to hold enough atmosphere to make it liveable long-term (like Mars). So alien visitors are going to come from a planet similar to ours and will have solved similar engineering problems with similar materials.
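The tyranny of gravity can be made concrete with the Tsiolkovsky rocket equation, which fixes how much velocity change a chemical rocket can achieve. The sketch below is purely illustrative: the specific impulse, mass fractions, and delta-v figures are rough assumed values, not engineering data.

```python
import math

def delta_v(isp_s, wet_mass, dry_mass, g0=9.81):
    """Tsiolkovsky rocket equation: ideal achievable delta-v in m/s."""
    return isp_s * g0 * math.log(wet_mass / dry_mass)

# Rough delta-v needed to reach low Earth orbit, including losses (assumed figure).
LEO_DELTA_V = 9400.0  # m/s

# A single-stage chemical rocket: Isp ~350 s, 90% of launch mass is propellant.
dv_single_stage = delta_v(isp_s=350.0, wet_mass=100.0, dry_mass=10.0)
print(f"Single stage: {dv_single_stage:.0f} m/s vs ~{LEO_DELTA_V:.0f} m/s needed")

# Orbital velocity scales roughly as sqrt(g * R), so a planet with twice the
# surface gravity and radius demands roughly twice the delta-v, while chemical
# propellants stay fixed. Staging rescues us on Earth; on a heavy super-Earth
# even staging runs out of road.
super_earth_requirement = 2 * LEO_DELTA_V
print(f"Same rocket vs ~{super_earth_requirement:.0f} m/s on a 2x super-Earth")
```

Even on Earth, a single chemical stage falls short of orbit, which is why real rockets stage; the point is that the chemistry, and hence the achievable delta-v, is the same everywhere in the universe, while the gravity well varies.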

Scifi writers and enthusiasts have imagined all kinds of other possibilities. Silicon creatures were a favourite for a while. Silicon (Si) sits immediately below carbon in the periodic table and has similar chemistry: it forms molecules with a similar fourfold symmetry. I've made the silicon analogue (SiH4) of methane (CH4) in a lab: it's highly unstable and burns quickly in the presence of oxygen or any other moderately strong oxidising agent (and such agents are pretty common). The potential for life using chemical reactions in a silicon substrate is many orders of magnitude less flexible than that based on carbon, and would of necessity require the absolute elimination of oxygen and other oxidising agents from the chemical environment. Silicon tends to oxidise to silicon dioxide (SiO2) and then become extremely inert. Breaking down silicon dioxide requires heating it to around its melting point (roughly 1,700°C) in the presence of a powerful reducing agent, like pure carbon. In fact silicon dioxide, or silica, is one of the most common substances on earth, partly because silicon and oxygen themselves are so common. The ratio of these two elements is related to the fusion processes that precede a supernova, and again is dictated by physics. Where there is silicon, there will be oxygen in large amounts, and they will form sand, not bugs. CO2 is also quite inert, but does undergo chemical reactions, which is lucky for us, as plants rely on this to create sugars and oxygen.

One of the other main memes is beings of "pure energy", which are of course beings of pure fantasy. Again we have the Cartesian idea of disembodied consciousness at play. Just because we can imagine it, does not make it possible. But even if we accept that the term "pure energy" is meaningful, the problem is entropy. It is the large scale chemical structures of living organisms that prevent the energy held in the system from dissipating out into the universe. The structures of living things, particularly cells, hold matter and energy together against the demands of the laws of thermodynamics. That's partly what makes life interesting. "Pure energy" is free to dissipate and thus could not form the structures that make life interesting.

When NASA scientists were trying to design experiments to detect life on Mars for the Viking mission, they invited James Lovelock to advise them. He realised that one didn't even need to leave home. All one needed to do was measure the composition of gases in a planet's atmosphere, which one could do with a telescope and a spectrometer. If life is going to be recognisable, then it will do what it does here on earth: shift the composition of gases away from thermodynamic and chemical equilibrium. In our case, the levels of atmospheric oxygen require constant replenishment to stay so high. It's a dead giveaway! And the atmosphere of Mars is at thermal and chemical equilibrium; nothing is perturbing it from below. Of course NASA went to Mars anyway, and went back, hoping to find vestigial life or fossilised signs of life that had died out. But the atmosphere tells us everything we need to know.
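Lovelock's insight can be sketched as a toy heuristic. The composition figures below are round numbers from standard references, and the 1% free-oxygen threshold is an arbitrary assumption chosen purely for illustration, not Lovelock's actual criterion.

```python
# Approximate volume fractions of major atmospheric gases (round numbers).
atmospheres = {
    "Earth": {"N2": 0.78, "O2": 0.21, "CO2": 0.0004},
    "Mars":  {"CO2": 0.95, "N2": 0.027, "O2": 0.0013},
    "Venus": {"CO2": 0.965, "N2": 0.035},
}

def looks_alive(gases):
    """Lovelock-style heuristic: abundant free O2 (here, an arbitrary 1%
    threshold) is a reactive gas that should have been consumed long ago,
    so its persistence signals something actively replenishing it."""
    return gases.get("O2", 0.0) > 0.01

for planet, gases in atmospheres.items():
    if looks_alive(gases):
        print(f"{planet}: far from equilibrium - something replenishes the O2")
    else:
        print(f"{planet}: near equilibrium - no sign of life")
```

Earth's 21% oxygen flags it immediately; Mars and Venus, sitting at chemical equilibrium, do not.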

So where are all the alien visitors? (This question is known as the Fermi Paradox, after Enrico Fermi, who first asked it.) Recall that, as far as we know, the speed of light is an invariable limit for macro objects like spacecraft - yes, theoretically, tachyons are possible, but you can't build a spacecraft out of them! Recently some physicists have been exploring an idea that would allow us to warp space and travel faster than light, but it involves "exotic" matter that no one has ever seen and that is unlikely to exist. Aliens are going to have to travel at sub-light speeds, and this would take subjective decades. And because of Relativity, time passes more slowly on a fast-moving object, so centuries would pass on their home planet. Physics is a harsh mistress.
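The arithmetic here is straightforward special relativity. In the sketch below, the star distance and cruise speed are arbitrary assumed values for illustration (a constant-velocity trip, ignoring acceleration and deceleration phases).

```python
import math

def travel_times(distance_ly, speed_frac_c):
    """Years elapsed on the home planet and aboard the ship for a
    constant-velocity interstellar trip."""
    home_years = distance_ly / speed_frac_c
    gamma = 1.0 / math.sqrt(1.0 - speed_frac_c ** 2)  # Lorentz factor
    ship_years = home_years / gamma                    # time dilation
    return home_years, ship_years

# A hypothetical star 150 light years away, visited at 99% of light speed.
home_y, ship_y = travel_times(150.0, 0.99)
print(f"Crew ages {ship_y:.0f} years; {home_y:.0f} years pass back home")
```

At 99% of light speed the crew experiences a couple of subjective decades, while a century and a half passes on their home planet: exactly the "harsh mistress" trade-off described above.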

These are some of the limitations that have occurred to me; there are others. What this points to is a very limited set of circumstances in which an alien species could take to space and come to visit us. The more likely an alien is to get into space, the more like us they are likely to be. The universality of physics and the similarity of the problems that need solving would inevitably lead to parallelism in evolution, just as it has done on earth.

Who is More Like Us?

Unlike scifi, the technology that allows us to meet aliens will be strictly limited by physics. There will be no magic action at a distance on the macro scale (though, yes, individual subatomic particles can subvert this); no time travel; no faster-than-light travel; no materials impervious to analysis; no cloaking devices, no matter transporters, and no handheld disintegrators. Getting into space involves a set of problems that are common to any being on any planet that will support life, and there is a limited set of solutions to those problems. Any being that evolves to be capable of solving those problems will be somewhat familiar to us. Aliens will mostly be comprehensible and recognisable, and will do things on more or less the same scale that we do. As boring as that sounds - or perhaps as frightening, depending on your view of humanity.

And AI will forever be a simulation that might seem like us superficially, but won't be anything like us fundamentally. When we imagine that machine intelligences will be like us, we are telling the Pinocchio story (and believing it). This tells us more about our own minds than it does about the minds of our creations. If only we would realise that we're looking in a mirror and not through a window. All these budding creators of disembodied consciousness ought to read Frankenstein; or, The Modern Prometheus by Mary Shelley. Of course many other dystopic or even apocalyptic stories have been created around this theme; some of my favourite science fiction movies revolve around what goes wrong when machines become sentient. But Shelley set the standard before computers were even conceived of, even before Charles Babbage invented his Difference Engine. She grasped many of the essential problems involved in creating life and in dealing with otherness (she was arguably a lot more insightful than her ne'er-do-well husband).

Lurking in the background of the story of AI is always some version of Vitalism: the idea that matter is animated by some élan vital which exists apart from it; mind apart from body; spirit as opposed to matter. This is the dualism that haunts virtually everyone I know. And we seem to believe that if we manage to inject this vital spirit into a machine, then the substrate will be inconsequential, that matter itself is of no consequence (which is why silicon might look viable despite its extremely limited chemistry; or a computer might seem a viable place for consciousness to exist). It is the spirit that makes all the difference. AI researchers are effectively saying that they can simulate the presence of spirit in matter with no reference to the body's role in our living being. And this is bunk. It's not simply a matter of animating dead matter, because matter is not dead in the way that Vitalists think it is; and nor is life consistent with spirit in the way they think it is.

The fact that such Vitalist myths and Cartesian Duality still haunt modern attempts at knowledge gathering (and AI is nothing if not modern), let alone modern religions, suggests the need for an ongoing critique. And it means there is still a role for philosophers in society, despite what Stephen Hawking and some scientists say (see also Sean Carroll's essay "Physicists Should Stop Saying Silly Things about Philosophy"). If we can fall into such elementary fallacies at the high end of science, then scientists ought to be employing philosophers on their teams to dig out their unspoken assumptions and expose their fallacious thinking.