
08 May 2015

What can the Turing Test Tell Us?

Alan Turing's contribution to mathematics, cryptography and computer science was inestimable. Not only did he help shorten World War Two, saving thousands of lives, he also set us on the path to digital computers. His suicide after being coerced into hormone treatment is a massive blot on the intellectual landscape in Britain. It is an enduring source of shame. Turing's work remained classified for decades because of the fear that war might break out again and knowing how to break the complex codes used by the Germans was too valuable an advantage to throw away. Nowadays, cryptography has advanced to the point where keeping Turing's work a secret no longer confers much advantage.

Turing was prescient in many ways. Not only did he set the paradigm for how digital computers work, but he understood that one day such machines might become so sophisticated that they were indistinguishable from intelligent beings. He was among the first to seriously consider artificial intelligence (AI). Thinking about AI led him to construct one of the most famous thought experiments ever proposed. The Turing Test is not only a way to distinguish intelligence, it is actually a way of thinking about intelligence without getting bogged down in the details of how intelligence works. For Turing and many of us, the argument is that if a machine can communicate in a way that is indistinguishable from a human being, then we must assume that it is intelligent, however it achieves this. It's a pragmatic definition of intelligence, and one that leads to a practical threshold that all AI researchers wish to cross.

However, underpinning the test are some assumptions about communication, language, and intelligence that I wish to examine. The first is that all human beings seem to be considered good judges for the Turing Test. I think a good case can be made for considering this a false assumption. The second is the assumption that mere word use is how we define not only intelligence, but language. Both of these are demonstrably false. If the assumptions the test is built on are false, then we need to rethink what the test is measuring, and whether we still feel this is a sufficient measure of intelligence.


Turing Judges.

The idea of the Turing Test is that a person sits at a teletype machine that prints text and allows the operator to type. The human judge and the test subject sit in different rooms and use the teletype machines to communicate. A machine can be said to pass the Turing Test if the human operator cannot tell that the subject is not human. This puts word use at the forefront of Turing's definition of what it means to be intelligent.
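To make the setup concrete, here is a minimal sketch of the protocol in Python. Everything in it is a placeholder of my own devising: the judge types questions at a console, and machine_respond is a canned stand-in for a real chatbot. It illustrates the shape of the exchange, not any actual implementation.

```python
import random

def human_respond(question: str) -> str:
    # A hidden human types their answer at another terminal (placeholder).
    return input(f"[hidden human] {question}\n> ")

def machine_respond(question: str) -> str:
    # A stand-in chatbot; any real candidate program would go here.
    return "That is an interesting question. What do you think?"

def run_session(rounds: int = 3) -> bool:
    """The judge exchanges text with one hidden subject, then guesses."""
    label, respond = random.choice(
        [("human", human_respond), ("machine", machine_respond)]
    )
    for _ in range(rounds):
        question = input("Judge's question: ")
        print("Subject:", respond(question))
    verdict = input("Is the subject human or machine? ").strip().lower()
    # The machine passes if the judge mistakes it for the human.
    return label == "machine" and verdict == "human"

if __name__ == "__main__":
    print("Machine passed:", run_session())
```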

Human beings' use of language is indeed one of our defining features. Some animals have faculties that hint at a proto-language facility, but no animal uses language in the sense that we do. At best animals show one or two of the target properties that define language. They might, for example, have several grunts that indicate objects (often types of predator), but no syntax or grammar. There has been significant interest in programs that sought to teach apes to use language either as symbols or gestures, but most of this research has been discredited. Koko the gorilla was supposedly one of the most sophisticated language users, but her "language" in fact consisted of rapidly cycling through her repertoire of signs, with the handler picking the signs that made most sense to them. In other experiments subtle cues from handlers told the animals what signs to use. More rigorous experiments show that chimps can understand some language, particularly nouns, but then so can grey parrots, some dogs, and other animals. Crucially, they don't use language to communicate. In fact a far more impressive demonstration of intelligence is the ability of crows to improvise tools to retrieve food, or the coordinated pack hunting of aquatic mammals like orcas and dolphins. So animals do not use language, but are none the less intelligent.

Humans are all at different levels when it comes to language use. Some of us are extraordinarily gifted with language and others struggle with the basics. The distinctions are magnified when we restrict language to just written words, and this restriction alone is doubtful. Written language, even when used for dialogue, is only a small part of what language use consists of. A great deal of what we communicate in language is conveyed by tone of voice, facial expression, hand gestures, or body posture. Those people who can use written language well are rare. So a Turing judge is not simply distinguishing a machine from a human, but is placing a machine on a scale that includes both novelists and football hooligans. What happens when the subject responds to every question by chanting "Oi, oi, oi, come on you reds!"? Intelligence, particularly as measured by word use, is not a simple proposition.

The Turing Test using text alone would be more interesting if we could define in advance what elements would convince us that the generator of the text was human. To the best of my knowledge this has never been done. We don't know what criteria constitute a valid or successful test; we just assume that any generic human being is a good judge. There's no reason to believe that this is true. As I've mentioned many times now, individuals are actually quite poor at solo reasoning tasks (see An Argumentative Theory of Reason). Reason does not work the way we thought it did. Mercier & Sperber have argued that at least one of the many fallacies we almost inevitably fall prey to, confirmation bias, is a feature of reason rather than a bug. They argue that this is because reason evolved to help small groups make decisions, and those who make proposals think and argue differently from those who critique them. On this account, any given individual would most likely be a poor Turing judge.

Human beings evolved to use language. Almost without exception, we all use it without giving it much thought. Certain disorders or diseases may prevent language use, but these stand out against the background of general language use: from the Amazon jungles to the African veldt, humans speak. The likelihood is that we've been using language for tens of thousands of years (see When Did Language Evolve?). But writing is another story. Writing is unusual, in that only a minority of living languages are written, or were before contact with Europe. Writing was absent from most of the Americas, from the Pacific, from Australia and New Guinea; the last two have hundreds of languages each. Unlike speaking, writing is something that we learn with difficulty. No child spontaneously begins to communicate in writing. Writing co-opts skills evolved for other purposes, and as a consequence our ability to use writing to express ourselves is extremely variable. Most people are not very good at it. Those who are, are usually celebrated as extraordinary individuals. Writers and their oeuvres are very important in literary cultures.

So to choose writing as the medium of a test for intelligence is extremely doubtful. We don't expect intelligent human beings to be good at writing; many highly intelligent people are lousy writers. We don't even expect gifted speakers to be good at writing, which is why politicians do not write their own speeches! Writing is not a representative skill. Indeed, it masks our inherent verbal skill.

In fact it might be better to use another skill altogether, such as tool making. A crow can modify found objects (specifically, bending wire into a hook) to retrieve food items. Another important manifestation of intelligence is the ability to work in groups. Some orcas, for example, coordinate their movements to create a bow-wave that can knock a seal off an ice floe. This is a feat that involves considerable ability at abstract thought, and they pass this acquired knowledge on to their offspring. The ability to fashion a tool, or to coordinate actions to achieve a goal, is at least as interesting a manifestation of intelligence as language is.


Language and Recognition.

My landlady talks to her cats as though they understand her. She has one-sided conversations with them, explaining at length when their behaviour causes her discomfort, as though they might understand and desist (they never do). She's not peculiar in this. Many people feel their pets are intelligent and can understand them even though they cannot speak. Why is this? Well, at least in part, it's because we recognise certain elements of posture in animals corresponding to emotions. The basic emotions are not so different in our pets that we cannot accurately understand their dispositions: happy, content, excited, tired, frightened, angry, desirous. With a little study we can even pick up nuances. A dog that barks with ears pinned back is saying something different to one that has its ears forward. A wagging tail or a purr can be a different signal depending on circumstances. A lot of it has to do with displays and reception of affection.

Intelligence is not simply about words or language. Depending on our expectations the ability to follow instructions (dogs) or the ability to ignore instructions (cats) can be judged intelligent. The phrase emotional intelligence is now something of a cliché, but it tells us something very important about what intelligence is. A dog that responds to facial expressions, to posture and tone of voice is displaying intelligence of the kind that has a great deal of value to us. Some people value relationships with animals precisely because the communication is stuck at this level. A dog does not try to deceive or communicate in confusingly abstract terms. An animal broadcasts its own disposition ("emotions") without filtering and it responds directly to human dispositions. Many people would say that this type of relationship is more honest.

There's a terrible, but morbidly fascinating, neurological condition called Capgras Syndrome. In this condition a person can recognise the physical features of humans, but their ability to connect those features with emotions is compromised. Usually when one sees a familiar face there is an accompanying emotion that tells us what our relationship with the person is. If we feel disgust or anger on recognition, then we know them to be enemies, perhaps dangerous, and we act to avoid or perhaps confront them. If the emotion is joy or love, then we know it's a friend or loved one. In Capgras the emotional resonance is absent. With loved ones the absence of that emotion is so strange that the most plausible explanation often seems to be that these are mere replicas of loved ones, or lookalikes. The lack of emotion in response to a known face can be incapacitating, in the sense of disrupting every existing relationship. In Richard Powers's novel The Echo Maker, the man with Capgras is able to recognise and respond to his sister's voice on the telephone, but does not feel anything when he sees her. The same is true for his home and even his dog. The only way he can explain it is that they are all substitutes cleverly recreated to fool him. Only he isn't "fooled", which creates a nightmarish situation for him.

The problem, then, with the Turing Test is that it is rooted in the old Victorian conceit about reason being our highest faculty. Reason was, until quite recently, considered to float above the mere bodily processes of emotion. In other words it was very much caught up in Cartesian mind/body dualism and the metaphors associated with matter and spirit (See Metaphors and Materialism). Reason is associated, by default, with spirit, since it seems to be distinct from emotion. We now know that nothing could be further from the truth. Cut off from emotions our minds cannot function properly. We cannot make decisions, cannot assess information, and cannot take responsibility for our actions. The Turing test assumes that intelligence is an abstract quality, separable from the body. But these assumptions are demonstrably false.


What Kind of Intelligence?

I've already pointed out that language is more than words. I've expanded the idea of language to include the prosody, gesture, and posture associated with the words (which, as we know, shape the meaning of the words). An ironic eyebrow lift can make words mean something quite different from their face value. The ability to use and detect irony depends on non-verbal cues. This is why, for example, irony seldom works on Twitter. Text tends to be taken at face value, and attempts at irony simply cause misunderstanding. This is true in all text-based media. In the absence of emotional cues we are forced to try to interpolate the disposition of our interlocutor. Getting a computer to work with irony would be an interesting test of intelligence!

Indeed, trying to assess the internal disposition of the hidden interlocutor is a key aspect of the Turing Test. Faced with a Turing Test subject, I suspect that most of us would ask questions designed to evoke emotional responses. This is because we intuit that what makes us human is not the words we use, but the feelings we communicate. Someone who acts without remorse is routinely referred to as "inhuman". In most cases humans are not good at making empathetic connections using text, which is why text-based online forums seem to be populated with borderline, if not outright, sociopaths. It's the medium, not the message. Personally I find that doing a lot of online communication produces a profound sense of alienation and brings out my underlying psychopathology. Writing an essay, however, is a far more productive exercise than trying to hold a dialogue in text. Even the telephone, with its limited frequency range, is better for communicating, because tone of voice and inflection communicate enough to establish an empathetic connection.

So if a computer can play chess better than a human being (albeit with considerable help from a team of programmers) then that is impressive, but not intelligent. The computer plays well because it does not feel anything, does not have to respond to its environment (internal or external), and does not have any sense of having won or lost. It has nothing for us to relate to. Similarly, even if a computer ever managed to use language with any kind of facility, i.e. if it could form grammatically and idiomatically correct sentences, it would probably still seem inhuman because it would not share our concerns and values. It would not empathise with us, nor us with it. 

I suppose that in the long run a computer might be able to simulate both language and an interest in our values so that in text form it might fool a human being. But would this constitute intelligence? I think not. A friendly dog would be more intelligent by far. Which is not to say that such a computer would not be a powerful tool. But we'd be better off using it to predict the weather or model a genome than trying to simulate what any of us, or any dog, can do effortlessly.

An argument against this point of view is that our minds are tuned to over-estimate intelligence or emotions in the objects we see. We see faces in clouds and agency in inanimate objects. So an approximation of intelligence would not have to be all that sophisticated to stimulate the emotions in us that would make us judge it intelligent. For example, in movies robots are often given a minimal ability to emote in order to make them sympathetic characters. The robot, Number 5, in the film Short Circuit has "eyebrows" and an emotionally expressive voice, and this is enough for us to empathise with it. So perhaps we will be easily fooled into believing in machine intelligence. But this also means that simulating intelligence is not very impressive, because people are so easily fooled.

This point is brilliantly made in the movie Blade Runner. The Voight-Kampff test is designed to distinguish "replicants" from humans based on subtle differences in emotional responses; the replicants are otherwise indistinguishable from humans. Testing Rachael is particularly difficult because she has been raised to believe she is human (the logic of the movie breaks down to some extent because we never learn why Deckard persists in asking a hundred questions if Rachael is answering satisfactorily). Ridley Scott has muddied the waters further by suggesting that the blade runner, Deckard, is himself a replicant, though based on the original story and the context of the film this seems an unlikely twist.

So there are two major problems here: what makes a good Turing test, and who makes a good Turing judge. The whole setup seems under-defined and poorly thought out at present. My impression is that passing the Turing Test as it is usually specified is a trivial matter that would tell us nothing about artificial intelligence or humanity that we do not already know.


Conclusion

It seems to me that we have many reasons to rethink the Turing Test. It is rooted in a series of assumptions that are untenable in light of contemporary knowledge. As a test for intelligence the Turing Test no longer seems reasonable. For one thing, the way that it defines intelligence is far too limited. The definition of intelligence it uses is rooted in Cartesian Dualism, which sees intelligence as an abstract quality, not rooted in physicality, not embodied. And this is simply false. Emotions, as felt in the body, play a key role in how we process information and make decisions.

As much as anything, our decision on whether or not an entity is intelligent will be based on how we feel about it, on how interacting with it feels to us. We will compare the feeling of interacting with the unknown entity to how it feels to interact with an intelligent being. And until it feels right, we will not judge that entity intelligent.

In Turing's day we simply did not understand how decision making worked. We still thought of abstract reasoning as a detachable mental function unrelated to being embodied. We still saw reason as the antithesis of emotion. Now we know that emotion is an indivisible part of the process. We must now consider that reason itself may not have evolved for seeking truth, but merely for optimising decision making in small groups. At the very least, the lone teletype operator needs to be replaced with a group of people, and mere words must be replaced by tasks that involve creativity and cooperation. A machine ought to show the ability to cooperate with a human being to achieve a shared goal before being judged "intelligent". The idea that we can judge intelligence at arm's length, rationally, dispassionately, has little interest or value any more. We judge intelligence through interaction, physical interaction as much as anything.

As George Lakoff and his colleagues have shown, abstract thought is rooted in metaphors deriving from how we physically interact with the world. Our intelligence is embodied and the idea of disembodied intelligence is no longer tenable. As interesting as the idea may appear, there is no ghost in the machine that can be extracted or instantiated and maintained apart from the body. Any attempts to create disembodied intelligence will only result in a simulacrum, not in intelligence that we can recognise as such.

Buddhists will often smugly claim this as their own insight, though most Buddhists I know are crypto-dualists (most believe in life after death and karma, for example). I've argued at length that the Buddha's insight was into the nature of experience and that he avoided drawing ontological conclusions. Thus, although we read the texts as being a critique of doctrines involving souls, the methods of Buddhism were always different from the methods of Brahmanism. The Brahmins sought to experience the ātman as a reality, and from the Upaniṣadic description ātman could be experienced as a sense of oneness or connection with everything in the world (oceanic boundary loss). Buddhists deconstructed experience itself to show that nothing in experience persisted, and that therefore, even if there were a soul, we must either always experience it or it could never be experienced; and since we start off not experiencing it, no permanent soul can ever be experienced (which is not a comment on whether or not such a soul exists!). Therefore the experiences of the Brahmins are of something other than ātman. Only after Buddhists had started down the road of misguided ontological speculation did this become an opinion about the existence of a soul. So the superficial similarities between ancient Buddhist and modern scientific views are an accident of a philosophical wrong turn on the part of Buddhists. They got it partly right by accident, which is not really worth being smug about.

History shows that we must proceed with real caution here. Our Western views on intelligence have been subject to extreme bias in the past and this has led to some horrific consequences for those people who failed our tests for completely bogus reasons. We must constantly subject our views on intelligence to the most rigorous criticism and scepticism we are capable of. Our mistakes in this field ought to haunt us and make us extremely uncomfortable. This is yet another reason why tests for intelligence ought to require more interactivity. If we do create intelligence we need to know we can get along with it, and it with us. And we know that we have a poor record on this score.

The Turing Test has not been updated to take account of what we now know about ourselves. The test itself is anachronistic. The method is faulty, because it is based on a faulty understanding of intelligence and decision making. We are not even asking the correct question about intelligence. With all due respect to Alan Turing, he was a man of his time, a glorious pioneer, but we've moved on since he came up with this idea, and it has had its day.


~~oOo~~

See also: Why Artificial Intelligences Will Never Be Like Us and Aliens Will Be Just Like Us. (27 June 2014)

27 June 2014

Why Artificial Intelligences Will Never Be Like Us and Aliens Will Be Just Like Us.

"Yet across the gulf of space, minds that are to our minds as ours are to those of the beasts that perish, intellects vast and cool and unsym-pathetic, regarded this earth with envious eyes, and slowly and surely drew their plans against us."

Artificial Intelligence (AI) is one of the great memes of science fiction, and as our lives come to resemble scifi stories ever more, we can't help but speculate about what an AI will be like. Hollywood aside, we seem to imagine that AIs will be more or less like us, because we aim to make them like us. And as part of that, we will make them with affection for, or at least obedience to, us. Asimov's Laws of Robotics are the most well-known expression of this. And even if they end up turning against us, it will be for understandable reasons.

Extra-terrestrial aliens, on the other hand, will be incomprehensible. "It's life, Jim, but not as we know it." We're not even sure that we'll recognise alien life when we see it, nor that we have a definition of life that would cover aliens. It goes without saying that aliens will behave in unpredictable ways and will almost certainly be hostile to humanity. We won't understand their minds or bodies, and we will survive only by accident (War of the Worlds, Alien) or through Promethean cunning (Footfall, Independence Day). Aliens will surprise us, baffle us, and confuse us (though hidden in this narrative is a projection of fears both rational and irrational).

In this essay I will argue that we have this backwards: in fact AI will be incomprehensible to us, while aliens will be hauntingly familiar. This essay started off as a thought experiment I was conducting about aliens and a comment on a newspaper story on AI. Since then it's become a bit more topical as a computer program known as a chatbot was trumpeted as having "passed the Turing Test for the first time". This turned out to be a rather inflated version of events. In reality a chatbot largely failed to convince the majority of people that it was a person despite a minor cheat that lowered the bar. The chatbot was presented as a foreigner with poor English and was still mostly unconvincing. 

But here's the thing. Why do we expect AI to be able to imitate a human being? What points of reference would a computer program ever have to enable it to do so?


Robots Will Never Be Like Us.

There are some fundamental errors in the way that AI people think about intelligence that will begin to put limits on their progress, if they haven't already. The main one is that they don't see that human consciousness is embodied. Current AI models tacitly subscribe to a strong form of Cartesian mind/body dualism: they believe that they can create a mind without a body. There's now a good deal of research to show that our minds are not separable from our bodies. I've probably cited four names more than any others when considering consciousness: George Lakoff, Mark Johnson, Antonio Damasio, and Thomas Metzinger. What these thinkers collectively show is that our minds are very much tied to our bodies. Our abstract thoughts are voiced using metaphors drawn from how we physically interact with the world. Their way of understanding consciousness posits the modelling of our own physical states as the basis for simple consciousness. How does a disembodied mind do that? We can only suppose that it cannot.

One may argue that a robot body is like a human body, and that an embodied robot might be able to build a mind like ours through its robot body. But the robot is not using its brain primarily to sustain homoeostasis, mainly because it does not rely on homoeostasis for continued existence. But even other mammals don't have minds like ours. Because of shared evolutionary history we might share some basic physiological responses to gross stimuli that are good adaptations for survival, but their thoughts are very different because their bodies, and particularly their sensory apparatus, are different. An arboreal creature is just not going to structure its world the way a plains dweller or an aquatic animal does. Is there any reason to suppose that a dolphin constructs the same kind of world as we do? And if not, then what about a mind with no body at all? Maybe we could communicate with a dolphin, with difficulty and a great deal of imagination on our part. But with a machine? It will be "Shaka, when the walls fell." For the uninitiated, this is a reference to a classic first-contact scifi story, the Star Trek: The Next Generation episode "Darmok". The aliens in question communicate in metaphors drawn exclusively from their own mythology, making them incomprehensible to outsiders, except Picard and his crew of course (there is a long, very nerdy article about this on The Atlantic website). Compare Dan Everett's story of learning to communicate with the Pirahã people of Amazonia in his book Don't Sleep, There Are Snakes.

Although Alan Turing was a mathematical genius, he was not a genius of psychology, and in my opinion he made a fundamental error in his Turing Test. Our Theory of Mind is tuned to assume that other minds are like ours. If we can conceive of any kind of mind independent of us, then we assume that it is like us. This has survival value, but it also means we invent anthropomorphic gods, for example. A machine mind is not going to be at all like us, but that doesn't stop us unconsciously projecting human qualities onto it. Hypersensitive agency detection (as described by Justin L. Barrett) is likely to mean that even if a machine does pass the Turing Test, we will have overestimated the extent to which it is an agent.

The Turing Test is thus a flawed model for evaluating another mind, because of limitations in our equipment for assessing other minds. The Turing Test assumes that all humans are good judges of intelligence, but we aren't. We are the beings who see faces everywhere, who get caught up in the lives of soap opera characters, and who treat rain clouds as intentional agents. We are the people who already suspect that our garbage-in, garbage-out (GIGO) computers have minds of their own, because they break down in incomprehensible ways at inconvenient times, and that looks like agency to us! (Is there a good time for a computer to break?) The fact that almost any inanimate object can seem like an intentional agent to us disqualifies us as judges of the Turing Test.

AIs, even those with robot bodies, will sense themselves and the world in ways that will always be fundamentally different from ours. We learn about cause and effect from the experience of bringing our limbs under conscious control, by grabbing and pushing objects. We learn about the physical parameters of our universe the same way. Will a robot really understand in the same way? Even if we set them up to learn heuristically through electronic senses and a computer simulation of a brain, they will learn about the world in a way that is entirely different from the way we learned about it. They will never experience the world as we do. AIs will always be alien to us.

All life on the planet is the product of 3.5 billion years of evolution. Good luck simulating that in a way that is not detectable as a simulation. At present we can't even convincingly simulate a single-celled organism. Life is incredibly complex, as any 1:1,000,000 scale model of a synapse demonstrates.


Aliens Will Be Just Like Us.

Scifi stories like to make aliens as alien as possible, typically by making them irrational and unpredictable (though this is usually underlain by a more comprehensible premise; see below).

In fact we live in a universe with limitations: 96 naturally occurring elements with predictable chemistry, four fundamental forces, and so on. Yes, there might be weird quantum stuff going on, but in bodies made of septillions (10²⁴) of atoms we'd never know about it without incredibly sophisticated technology. On the human scale we live in a more or less Newtonian universe.

Life as we know it involves exploiting energy gradients and using chemical reactions to move stuff where it wouldn't go on its own. While the gaps in our knowledge still technically allow for vitalistic readings of nature, this does not remove the limitations imposed on life by chemistry: elements have strictly limited behaviour, the basics of which can be studied and understood in a few years. It takes a few more years to understand all the ways that chemistry can be exploited, and we'll never exhaust all the possibilities of combining atoms in novel ways. But the possibilities are comprehensible and new combinations have predictable behaviour. Many new drugs are now modelled on computers as a first step.

So the materials and tools available to solve problems, and in fact most of the problems themselves, are the same everywhere in the universe. A spaceship is likely to be made of metals. Ceramics are another option, but they require even higher temperatures to produce and tend to be brittle; ceramics sophisticated enough to do the job suggest a sophisticated metal-working culture in the background. Metal technology is much easier to develop. Iron is one of the most versatile and abundant metals: other mid-periodic-table metallic elements (aluminium, titanium, vanadium, chromium, cobalt, nickel, copper, zinc, etc.) make a huge variety of chemical combinations, but for pure metal and useful alloys, iron is king. Iron alloys give the combination of chemical stability, strength-to-weight ratio, ductility, and melting point needed to make a spaceship. So our aliens are most likely going to come from a planet with abundant metals, probably iron, and their spaceship is going to make extensive use of metals. The metals aliens use will be completely pervious to our analytical techniques.

Now, in the early stages of working iron one needs a fairly robust body: one has to work a bellows, wield tongs and hammer, and generally be pretty strong. That puts a lower limit on the kind of body that an alien will have, though the strength of gravity on the alien planet will vary this parameter. Very gracile or very small aliens probably wouldn't make it into space, because they could not have got through the blacksmithing phase to more sophisticated metal-working techniques. A metal-working culture also implies an ability to work together over long periods of time for quite abstract goals, like the creation of alloys composed of metals extracted from ores buried in the ground. Thus our aliens will be social animals by necessity. But simple herd animals lack the kind of initiative that it takes to develop tools, so our aliens won't be social in the way cows or horses are. Too little social organisation, and the complex tasks of mining and smelting enough metal would be impossible; so no solitary predators in space either.

The big problem with any budding space program is getting off the ground. Gravity and the possibilities for converting energy put further practical limitations on the possibilities. Since chemical reactions are going to be the main source of energy, and these are fixed, gravity will be the limiting factor. The payload must not be so large as to be too costly or simply too heavy, yet it must be large enough to fit a being in (a being at least the size of a blacksmith). If the gravity of an alien planet were much higher than ours, it would make getting into space impractical; advanced technology might theoretically overcome this, but with technology one usually works through stages, and no early stage means no later stages. If the gravity of a planet were much lower than ours, then its density would make large concentrations of metals unlikely. It would be easier to get into space, but without the materials available to make it possible and sustainable. Also, the planet would struggle to hold enough atmosphere to make it liveable long-term (like Mars). So alien visitors are going to come from a planet similar to ours and will have solved similar engineering problems with similar materials.
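One way to see why gravity is the limiting factor is Tsiolkovsky's rocket equation: the propellant mass ratio grows exponentially with the delta-v a planet demands, while chemical exhaust velocity is fixed. A toy calculation in Python, where the exhaust velocity and the heavier world's delta-v are my own illustrative assumptions:

```python
import math

# Tsiolkovsky's rocket equation: delta_v = v_e * ln(m0 / m1), so the
# required mass ratio is m0/m1 = exp(delta_v / v_e). Chemical exhaust
# velocity caps out around 4.5 km/s; the heavier world's delta-v is
# an assumed illustrative figure.

def mass_ratio(delta_v_kms: float, exhaust_kms: float = 4.5) -> float:
    """Propellant mass ratio needed to achieve a given delta-v."""
    return math.exp(delta_v_kms / exhaust_kms)

for world, dv in [("Earth, ~9.4 km/s to orbit", 9.4),
                  ("heavier world, ~15 km/s (assumed)", 15.0)]:
    print(f"{world}: mass ratio ~{mass_ratio(dv):.0f}")
# Earth: roughly 8 tonnes at lift-off per tonne delivered to orbit;
# the heavier world: roughly 28, and it worsens exponentially from there.
```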

Scifi writers and enthusiasts have imagined all kinds of other possibilities. Silicon creatures were a favourite for a while. Silicon (Si) sits immediately below carbon in the periodic table and has similar chemistry: it forms molecules with a similar fourfold symmetry. I've made the silicon analogue (SiH₄) of methane (CH₄) in a lab: it's highly unstable and burns quickly in the presence of oxygen or any other moderately strong oxidising agent (and such agents are pretty common). The potential for life based on chemical reactions in a silicon substrate is many orders of magnitude less flexible than that based on carbon, and would of necessity require the absolute elimination of oxygen and other oxidising agents from the chemical environment. Silicon tends to oxidise to silicon dioxide (SiO₂) and then become extremely inert. Breaking down silicon dioxide requires heating it to melting point (around 1,700°C) in the presence of a powerful reducing agent, like pure carbon. In fact silicon dioxide, or silica, is one of the most common substances on earth, partly because silicon and oxygen themselves are so common. The ratio of these two elements is related to the fusion processes that precede a supernova, and again is dictated by physics. Where there is silicon, there will be oxygen in large amounts, and they will form sand, not bugs. CO₂ is also quite inert, but it does undergo chemical reactions, which is lucky for us, as plants rely on this to create sugars and oxygen.

One of the other main memes is beings of "pure energy", which are of course beings of pure fantasy. Again we have the Cartesian idea of disembodied consciousness at play: just because we can imagine it does not make it possible. But even if we accept that the term "pure energy" is meaningful, the problem is entropy. It is the large-scale chemical structures of living organisms that prevent the energy held in the system from dissipating out into the universe. The structures of living things, particularly cells, hold matter and energy together against the demands of the laws of thermodynamics. That's partly what makes life interesting. "Pure energy" is free to dissipate, and thus could not form the structures that make life interesting.

When NASA scientists were trying to design experiments to detect life on Mars for the Viking missions, they invited James Lovelock to advise them. He realised that one didn't even need to leave home. All one needed to do was measure the composition of gases in a planet's atmosphere, which one could do with a telescope and a spectrometer. If life is going to be recognisable, then it will do what it does here on earth: shift the composition of gases away from thermodynamic and chemical equilibrium. In our case the level of atmospheric oxygen requires constant replenishment to stay so high. It's a dead giveaway! And the atmosphere of Mars is at thermal and chemical equilibrium: nothing is perturbing it from below. Of course NASA went to Mars anyway, and went back, hoping to find vestigial life or fossilised signs of life that had died out. But the atmosphere tells us everything we need to know.
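Lovelock's criterion can be caricatured in a few lines of code: a strong oxidant (oxygen) persisting alongside a reduced gas (methane) means something must be replenishing both. The compositions below are approximate published figures for Earth and Mars; the thresholds are arbitrary assumptions of mine, for illustration only.

```python
# A caricature of Lovelock's atmospheric test for life: oxygen and
# methane react away quickly, so their sustained coexistence signals
# chemical disequilibrium. Compositions are approximate mole fractions;
# the thresholds are arbitrary illustrative assumptions.

ATMOSPHERES = {
    "Earth": {"O2": 0.21, "CH4": 1.8e-6},  # oxidant AND reduced gas
    "Mars":  {"O2": 0.0013, "CH4": 0.0},   # near chemical equilibrium
}

def looks_alive(gases: dict, o2_min: float = 0.01,
                ch4_min: float = 1e-7) -> bool:
    """Flag the coexistence of an oxidant and a reduced gas."""
    return gases.get("O2", 0.0) > o2_min and gases.get("CH4", 0.0) > ch4_min

for planet, gases in ATMOSPHERES.items():
    status = ("disequilibrium: possible life" if looks_alive(gases)
              else "equilibrium: no sign of life")
    print(f"{planet}: {status}")
```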

So where are all the alien visitors? (This question is known as the Fermi Paradox, after Enrico Fermi, who first asked it.) Recall that, as far as we know, the limit of the speed of light invariably applies to macro objects like spacecraft; yes, theoretically, tachyons are possible, but you can't build a spacecraft out of them! Recently some physicists have been exploring an idea that would allow us to warp space and travel faster than light, but it involves "exotic" matter that no one has ever seen and that is unlikely to exist. Aliens are going to have to travel at sub-light speeds, and this would take subjective decades. And because of Relativity, time passes more slowly on a fast-moving object, so centuries would pass on their home planet. Physics is a harsh mistress.
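The arithmetic here is ordinary special relativity. As a sketch, with the distance and speed as assumed illustrative figures, a 200 light-year trip at 0.95c costs the travellers subjective decades while centuries pass at home:

```python
import math

# Time dilation for a constant-speed interstellar trip. The distance
# and speed are assumed figures, chosen only to illustrate the scale.

def trip_times(distance_ly: float, speed_c: float) -> tuple[float, float]:
    """Return (years elapsed at home, subjective years aboard)."""
    home_years = distance_ly / speed_c           # coordinate time
    gamma = 1.0 / math.sqrt(1.0 - speed_c ** 2)  # Lorentz factor
    ship_years = home_years / gamma              # proper time
    return home_years, ship_years

home, ship = trip_times(distance_ly=200.0, speed_c=0.95)
print(f"Home planet: {home:.0f} years; travellers: {ship:.0f} years")
# -> roughly 211 years at home for about 66 subjective years aboard
```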

These are some of the limitations that have occurred to me; there are others. What this points to is a very limited set of circumstances in which an alien species could take to space and come to visit us. The more likely an alien is to get into space, the more like us they are likely to be. The universality of physics and the similarity of the problems that need solving would inevitably lead to parallelism in evolution, just as it has done on earth.


Who is More Like Us?

Unlike in scifi, the technology that allows us to meet aliens will be strictly limited by physics. There will be no magic action at a distance on the macro scale (though, yes, individual subatomic particles can subvert this); there will be no time travel, no faster-than-light travel, no materials impervious to analysis, no cloaking devices, no matter transporters, and no handheld disintegrators. Getting into space involves a set of problems that are common to any being on any planet that will support life, and there is a limited set of solutions to those problems. Any being that evolves to be capable of solving those problems will be somewhat familiar to us. Aliens will mostly be comprehensible and recognisable, and will do things on more or less the same scale that we do. As boring as that sounds, or perhaps as frightening, depending on your view of humanity.

And AI will forever be a simulation that might seem like us superficially, but won't be anything like us fundamentally. When we imagine that machine intelligences will be like us, we are telling the Pinocchio story (and believing it). This tells us more about our own minds than it does about the minds of our creations. If only we would realise that we're looking in a mirror and not through a window. All these budding creators of disembodied consciousness ought to read Frankenstein; or, The Modern Prometheus by Mary Shelley. Of course many other dystopic or even apocalyptic stories have been created around this theme; some of my favourite science fiction movies revolve around what goes wrong when machines become sentient. But Shelley set the standard before computers were even conceived of, even before Charles Babbage invented his Difference Engine. She grasped many of the essential problems involved in creating life and in dealing with otherness (she was arguably a lot more insightful than her ne'er-do-well husband).

Lurking in the background of the story of AI is always some version of Vitalism: the idea that matter is animated by some élan vital which exists apart from it; mind apart from body; spirit as opposed to matter. This is the dualism that haunts virtually everyone I know. And we seem to believe that if we manage to inject this vital spirit into a machine, then the substrate will be inconsequential, that matter itself is of no consequence (which is why silicon might look viable despite its extremely limited chemistry, or a computer might seem a viable place for consciousness to exist). It is the spirit that makes all the difference. AI researchers are effectively saying that they can simulate the presence of spirit in matter with no reference to the body's role in our living being. And this is bunk. It's not simply a matter of animating dead matter, because matter is not dead in the way that Vitalists think it is; nor is life consistent with spirit in the way they think it is.

The fact that such Vitalist myths and Cartesian Duality still haunt modern attempts at knowledge gathering (and AI is nothing if not modern), let alone modern religions, suggests the need for an ongoing critique. It also means there is still a role for philosophers in society, despite what Stephen Hawking and some scientists say (see also Sean Carroll's essay "Physicists Should Stop Saying Silly Things about Philosophy"). If we can fall into such elementary fallacies at the high end of science, then scientists ought to be employing philosophers on their teams to dig out their unspoken assumptions and expose their fallacious thinking.

~~oOo~~