Showing posts with label Consciousness. Show all posts

04 July 2014

Is Experience Really Ineffable?

What could this possibly be?

There's an old story from India that seems to crop up everywhere. In Buddhist literature it is found in the Udāna (Paṭhamanānātitthiya Sutta) and possibly elsewhere. The story goes that a group of men blind from birth (jaccandhā) are rounded up and asked to participate in an experiment. They are told "this is an elephant" (‘ediso, jaccandhā, hatthī’ti) and allowed to touch part of it. Asked then to describe "an elephant", they assert that it is like a pot (the blind man who felt the elephant's head), a winnowing basket (ear), a ploughshare (tusk), a plough (trunk), a granary (body), a pillar (foot), a mortar (back), a pestle (tail) or a brush (tip of the tail).

The parable is supposed to illustrate a principle something like "a little knowledge is a dangerous thing". It says that we get a hold of part of something and claim to know everything, but we're like the blind men who don't see the big picture. The parable ends there, but it has to because the story would fall apart if it didn't. A while ago I noticed that a physicist, whose blog I read, had this as his Twitter profile bio:
If the blind dudes just talked to each other, they would figure out it was an elephant before too long. @seanmcarroll 
I bloody love this! I'm so sick of smug religious platitudes and I really love it when someone slam dunks one. Sean is responding to the way the story is typically told, in which the blind men have to identify an unknown animal. But as I say, in the Buddhist version the "blind dudes" are told "this is an elephant" and have to describe it. The difference is not crucial.

Part of the reason I love Sean's comment is that I stood right next to an elephant when I was in India in 2004. It was on the road near Kushinagar, where the Buddha is supposed to have died. Elephants are big, smelly animals. If you got a lot of people crowding around an elephant to touch it, the thing would fidget at the very least, and probably shuffle its feet. As a herbivore, an elephant not only eats a lot, but it shits a lot. Many times a day. Chances are it dropped a big load of dung while being examined. Maybe it grumbled in low tones. The elephant's handler would have kept up the constant patter of the mahout: an elephant will do as it's told, but it needs a lot of reminding not to just wander off in search of food. And if you'd grown up in India in the time of the texts you'd know exactly what an elephant was like, sight or no sight. No conferring necessary.

And this is the problem with so many of these smug little parables. We who tell or read these stories are supposed to be much cleverer than those people who are in the cross hairs. But the story itself is... (shall we say) unsophisticated. How naive do we have to be to take this tripe seriously? 

Even so, Sean Carroll has put his finger on something very important about knowledge that is all too often left completely out of philosophical accounts. We don't live in perpetual isolation from other people. We communicate with them incessantly. A blind man is not of necessity unable to communicate just because he can't see.

In the story the elephant is standing still, it makes no sound, has no smell and the blind men get one touch and no chances to confer, and seem to have been kept in isolation for their whole lives. How is this reasonable? It is a poor story designed to make a presupposition sound plausible. Why does everyone nod sagely when they hear this rubbish? Why do they congratulate themselves on not being like the stupid men in the story? The story is self-defeating - it displays the very attitude it is supposed to guard against. To a scientist it's a ludicrous scenario. Scientists work by comparing their observations and coming up with a theory which will explain them all. If the blind men were scientists they'd want to compare notes, to repeat the experiment with another animal and see what happened. If they were presented with various animals at random could they identify which were elephants? And so on. 

The Tennis Match.

When I read philosophers of mind talking about subjectivity, I find myself experiencing cognitive dissonance. Of course we can argue about the ontological status of the objects behind our experiences: do they exist, do other people exist? But take the case of a tennis match before a crowd of some 10,000 people. What we observe is that heads turn to follow the ball. They do not turn at random, they do not turn in an uncoordinated way. 10,000 people's heads turn in unison, at the same time, at the same speed, and they do so without any connection between the people. Are those 10,000 people really having a completely different experience? Would they really struggle to describe why they were turning their head to follow the ball?

True, each person would have had a unique perspective on the ball, but there is considerable overlap. Different people might have supported different players. Some might be elated that their player won, or dejected that their player lost. Does the fact that they had different emotional responses to the experience of watching a ball get batted back and forth mean that they saw an entirely different event? Surely it does not.

If we go to a concert with like-minded friends, afterwards we can talk coherently about what we've seen and experienced during the show. We don't usually find that we heard Arvo Pärt while our friends heard Metallica. We hear the same music. We might have noticed different nuances. My friend might have noticed an out-of-tune French horn, while I was oblivious. Our attention to the details will depend on many factors, but we see and hear the same performance and can talk coherently about it afterwards. If my friend found a particular passage moving and they describe that to me, I may well have responded differently, but I can relate to my friend's account with empathy. Or I might have been moved but not understood why, and when my friend articulates their experience I will suddenly experience understanding and know exactly what they mean.

If I go to a comedy film and find myself laughing along with a few hundred other people, am I truly cut off from them in my own little bubble? Robin Dunbar (of Dunbar's Number fame) has shown that we are 30x more likely to laugh at a film when we are with four people than if we are alone. Laughter is very often a shared experience. Dunbar hypothesises that shared laughter is a sublimation of primate grooming behaviour. Physical grooming in the large group sizes that human beings live in (facilitated by our large neocortex/brain ratio) would take up too much time, so we laugh, dance and sing together, which has a similar physiological effect to physical grooming. See Dunbar's new book Human Evolution (highly recommended).

Thus it seems to me that characterising each person as being in an impenetrable bubble is not accurate. For a social animal like a human being, a good part of our experience is shared.

Private Experience vs Public Knowledge

It's sometimes said that our subjective experience is entirely private. But I don't think the examples above would be possible if this were true. So am I now a proponent of morphogenic fields? No! We know about the emotional state of another person through various cues that the other uses to broadcast their state: facial expression, posture, tone of voice, direction of gaze, etc. And we take these cues and use them to build an internal model: if I were to make my own face and body take on the configuration of the other person's face and body, how would that feel? And this is surprisingly accurate. Indeed we very often go one step further and adopt the posture of the other in solidarity. Less dominant individuals will adopt the body language of dominant individuals, and so on.

Human beings are capable of mentalising to a much greater extent than other animals. So for example Shakespeare wrote a story in which he has us believe that Iago convinces Othello that he (Iago) believes that the love Desdemona feels for Cassio is mutual (and we the audience can understand the first person perspective of each character and how they see all the others). We understand our own minds from a first person perspective. We and many other animals are aware that other individuals also have a first person perspective that is just like ours. This is second order mentalising. But we humans can take this inference to a whole new level. On average humans can manage fifth order mentalising: for example, we (1) might think that he (2) thinks that she (3) thinks that they (4) believe the proponent (5) is a liar. But in order to write such a story the author must be able to stretch to at least one extra order; they must be able to put themselves in our shoes as we take in the story. This is part of why Shakespeare is a remarkable writer: he has an extraordinary ability to see other points of view. The best storytellers place us inside the head of another human being and allow us to experience the world from their point of view. It's a remarkable gift!

We can easily comprehend the inner world of another person, especially if their identity is shaped by the same cultural factors as ours, but even with humans of very different cultures we can do so to a large degree. The capacity is not present in very young children but develops by about age 5. When the capacity does not develop fully, as in Asperger's syndrome, it can be very painful to know that other people have inner lives but not to have easy access to them. It can be a source of considerable anxiety. Which is not to say that people who cannot assess the inner states of another person don't have inner lives themselves. They do.

One of the interesting features of the Buddhist tradition is that it seems to be understood that knowledge follows from experience. Far from being ineffable, for example, the Spiral Path texts suggest that from the experience of liberation (vimutti) comes the knowledge of liberation (vimuttiñāṇa). I've noted in the past that Richard Gombrich makes this distinction also. The experience itself might be ineffable, but having had that experience we can say what it is like to have had it. We can say a lot about how the experience changed us, about how we feel about other things now we've had that experience. And this is why early Buddhist texts are full of descriptions of what it is like to have had the experience of bodhi.

In a recent talk at the University of Cambridge philosopher John Searle made an interesting distinction between ontology and epistemology (Consciousness as a Problem in Philosophy and Neurobiology). He said:
"The ontological subjectivity of the domain [of consciousness] does not prevent us from having an epistemologically objective science of that domain".
So conscious experience is ontologically subjective. Our first person perspective is internal to our own mind. By contrast molecules, mountains and tectonic plates are ontologically objective, they undoubtedly exist independently of our minds. If I say "Van Gogh is a better painter than Gauguin" that is an epistemologically subjective statement. It's something I think I know, but it is an aesthetic judgement that others may disagree with. However if I say "Van Gogh died in France", then this is epistemologically objective - it's knowledge that is external to me, something that everyone knows and there is no disagreement over.

Searle says that the argument that we can never study the mind scientifically is mixing up ontology and epistemology. This is a fallacy of ambiguity. We regularly use our ontological subjectivity to create a class of phenomena about which we can then make statements that are epistemologically objective. There are many examples of this kind of phenomenon. Searle gives the examples of money, property, government, and cocktail parties.

Computation (2+2=4) is another ontologically subjective phenomenon about which we can make epistemologically objective statements. If I have two bananas and you give me two more, then objectively I have four bananas. As a written statement this is epistemologically objective, despite the fact that as a mental operation perceiving bananas, counting and addition are entirely subjective. Despite the subjective nature of these mental operations, there is no barrier to you having objective knowledge of what's just happened in my mind.

Searle uses the example of a falling object. If you drop a pen onto the floor it follows a path which defines a mathematical function: d = ½gt² (where g = the acceleration due to gravity, t = time and d = distance). But nature does not do computation. The pen is simply a mass that travels through space. And close to the earth, space is bent by the mass of the earth (the pen's mass also bends space, but not nearly as much, because the effect is proportional to the quantity and density of matter). The effect looks just like a force of attraction. And that effect is described by the equation given above. But the universe doesn't calculate the distance. Calculation, computation, is purely subjective. Nevertheless the statement d = ½gt² gives us objective knowledge (it allows us to subjectively make objectively accurate predictions); it's independent of our point of view.
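To make the point concrete, here is a minimal sketch (my addition, not Searle's; the function name is an assumption for illustration). The computation is ours, not nature's, yet its output is a prediction anyone can verify:

```python
# d = ½gt²: our calculation of the pen's fall, not nature's.
G = 9.81  # approximate acceleration due to gravity near the Earth's surface, m/s²

def fall_distance(t):
    """Distance fallen in metres after t seconds, ignoring air resistance."""
    return 0.5 * G * t ** 2

print(fall_distance(1.0))  # 4.905 metres after one second
```

The mental (or mechanical) act of computing is subjective, but the resulting knowledge is epistemically objective: my prediction and yours will agree, and a stopwatch will agree with both.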

Thus, according to Searle, the argument that the subjectivity of consciousness precludes any objective knowledge of it, is simply a logical fallacy that stems from confusing ontology and epistemology. And this means that consciousness is not ineffable in the way that some Buddhists argue that it is.

I would add to this that it's now possible, by stimulating individual neurons, to provoke experiences. We discovered this during surgery on the brain. In some forms of brain surgery the patient remains conscious. If a tumour is in a delicate place the surgeon may want the patient to report what happens when a particular part of the brain is stimulated, so as to avoid damaging a crucial function. What patients report under these conditions is entirely dependent on which part of the brain is being stimulated, at times on which particular neuron: the results can be memories, sensory hallucinations (the illusion of sensory stimulation arising from direct neuron stimulation), motor activity, and so on. One could spend hours trawling through the results of a search for "awake during brain surgery". It's fascinating.


We need to think critically about parables that smack of platitude. Are they telling us something important, or are they, as in the case of the elephant and the blind man, simply religious propaganda that in fact blind us to greater truths? The whole arena of discussion about consciousness is fraught with difficulty. If Searle is right then there is widespread confusion over epistemology and ontology (which is one of the problems that plagues Buddhist philosophy too). Thinking clearly under these conditions can be exceedingly difficult.

It's true that an elephant, like any complex object of the senses, is a beast of many parts. It does have an ear like a winnowing basket, tusks like ploughshares, a trunk like a plough, a body like a granary, a leg/foot like a pillar, a back like a mortar, a tail like a pestle, and the tip of its tail is like a brush. Ears, tusks, trunk, legs, body, and tail all contribute to the animal we call "elephant". If we know what an elephant looks like, we know we're looking at one from the slightest clue. Hence the picture accompanying this essay. I don't expect any of my readers to have any difficulty in identifying the elephant in the picture from its legs alone, even if they've never seen a real elephant.

We need not be like the blind men in the story and remain ignorant. We don't live in isolated bubbles. If we just compare notes on experience we come to a collective understanding. Even if there were plausibly a dozen people blind from birth in Sāvatthī, and even if plausibly they had never before had any experience of an elephant, the conversation they had would have revealed the bigger picture. In a sense this is what is implied by Mercier & Sperber's account of reasoning: reasoning is something we do together, and on our own we're rather poor at it (see An Argumentative Theory of Reason). There's no a priori reason why we cannot compare notes, share knowledge and come to a greater understanding. And even if the domain is subjective, by comparing notes we do know that there are similarities which allow us to gain objective knowledge of that subjective domain.

I know some people like to play up the differences and discontinuities, but that story on its own is incomplete and partial. It's the kind of thing the elephant story warns us about. We always only have partial knowledge. Claims to full or ultimate knowledge are far more likely to come from religieux than scientists. Yes, experience is subjective, but this does not mean we can have no objective knowledge about experience. We can and do have partial objective knowledge about experience - else I could not expect anyone to read these words and find them meaningful. To my mind, religious stories like the elephant parable just get in the way of understanding.


10 May 2013

An Argumentative Theory of Reason

This post is a précis and review of:
Mercier, Hugo & Sperber, Dan. 'Why Do Humans Reason? Arguments for an Argumentative Theory.' Behavioral and Brain Sciences. (2011) 34: 57-111. doi:10.1017/S0140525X10000968. Available from Dan Sperber's website.
I'm making these notes and observations in order to better understand a new idea that I find intriguing. I have recently argued that the legacy of philosophical thought may be obscuring what is actually going on in our minds by imposing pre-scientific or non-scientific conceptual frameworks over the subject. I see this rethinking of reason as a case in point. 

In this long article, Mercier & Sperber's contribution runs from pp.57-74. What follows are comments from other scholars titled "Open Peer commentary"  (pp.74-101) and 10 pages of bibliography. The addition of commentary by peers (as opposed to silent peer review pre-publication) is an interesting approach. Non-specialists are given an opportunity to see how specialists view the thesis of the article.

The article begins with a few general remarks about reasoning. Since at least the Enlightenment it has been assumed that reasoning is a way to discover truth through applying logic. As another reviewer puts it:
Almost all classical philosophy—and nowadays, the “critical thinking” we in higher education tout so automatically—rests on the unexamined idea that reasoning is about individuals examining their unexamined beliefs in order to discover the truth. (The Chronicle of Higher Education. 2011)
Over a considerable period now, tests of reasoning capability have documented the simple but troubling fact that people are not very good at discovering the truth through reasoning. We fail at simple logical tasks, commit "egregious mistakes in probabilistic reasoning", and we are subject to "sundry irrational biases in decision making". Our generally poor reasoning is so well established that it hardly needs insisting on. Wikipedia has a wonderful long list of logical fallacies and an equally long list of cognitive biases, though of course Mercier & Sperber cite the academic literature which has formally documented these errors. The faculty ostensibly designed to help us discover the truth, more often than not, leads us to falsehood.

One thing to draw attention to, which is almost buried on p.71, is that "demonstrations that reasoning leads to errors are much more publishable than reports of its success." Thus all the results cited in the article may be accurate and yet still reflect a bias in the literature. The authors attempt to ameliorate this in their conclusions, but if you're reading the article (or this summary) this is something to keep in mind.

However, given that there is plenty of evidence that reason leads us to false conclusions, what is the point of reason? Why did we evolve reason if it's mostly worse than useless? The problem may well be in our assumptions about what reason is and does. The radical thesis of this article is that we do not reason in order to find or establish truth, but that we reason in order to argue. The argument is that viewed in its proper context--social argument--reason works very well.

It has long been known that there appear to be two mental processes for reaching conclusions: system 1 (intuition), in which we are not aware of the process; and system 2 (reasoning), in which we are aware of the process. Mercier & Sperber outline a variation on this. Inference is a process where representational output follows representational input; a process which augments and corrects information. Evolutionary approaches point to multiple inferential processes which work unconsciously in different domains of knowledge. Intuitive beliefs arise from 'sub-personal' intuitive inferential processes. Reflective beliefs arise from conscious inference, i.e. reasoning proper:
"What characterises reasoning proper is indeed the awareness not just of a conclusion but of an argument that justifies accepting that conclusion." (58).
That is to say, we accept conclusions on the basis of arguments. "All arguments must ultimately be grounded in intuitive judgements that given conclusions follow from given premises." (59) The arguments which provide the justification are themselves the product of a system 1 sub-personal inferential system. Thus even though we may reach a conclusion using reason proper, our arguments for accepting the conclusion are selected by intuition.

What this suggests to the authors is that reasoning is best adapted, not for truth seeking, but for winning arguments! They argue that this is its "main function" (60), which is to say the reason we evolved the faculty. Furthermore, reasoning helps to make communication more reliable because arguments put forward for a proposition may be weak or strong, and counter-arguments expose this. Reasoning used in dialogue helps to ensure communication is honest (hence, I suppose, we intuit that it leads towards truth - though truthfulness and Truth are different).

Of course this is a counterintuitive claim, and thus strong arguments must be evinced in its favour. Working with this idea is itself a test of the idea. Anticipating this, the authors propose several features which reasoning ought to have if it evolved for the purpose of argumentation.
  1. It ought to help produce and evaluate arguments.
  2. It ought to exhibit strong confirmation bias.
  3. It ought to aim at convincing others rather than arriving at the best decision.
The authors set out to show that these qualities are indeed prevalent in reasoning by citing a huge amount of evidence from the literature on the study of reasoning. This is where the peer evaluation provides an important perspective. If we are not familiar with the literature being cited, with its methods and conclusions, it is difficult to judge the competence of the authors and the soundness of their conclusions. Even so, most of us have to take quite a lot of what is said on trust. That it intuitively seems right is no reason to believe it.

1. Producing and Evaluating Arguments

On the first point we have already mentioned that reasoning is poor. However, because we see reasoning as an abstract faculty, testing reasoning is often done out of context. In studies of reasoning in pursuit of an argument, or when trying to persuade someone, our reasoning powers improve dramatically. We are much more sensitive to logical fallacies when evaluating a proposition than when doing an abstract task. When we hear a weak argument we are much less likely to be convinced by it. In addition, people will often settle for providing weak arguments if they are not challenged to come up with something better. If an experimenter testing someone's ability to construct an argument offers no challenge, there is no motivation to pursue a stronger line of reasoning. This changes when challenges are offered. Reasoning only seems to really kick in when there is disagreement. The effect is even clearer in group settings. For a group to accept an argument requires that everyone be convinced, or at least convinced that disagreeing is not in their interest. Our ability to reason well is strongly enhanced in these settings - known as the assembly bonus effect.
"To sum up, people can be skilled arguers, producing and evaluating arguments felicitously. This good performance stands in sharp contrast with the abysmal results found in other, nonargumentative settings, a contrast made clear by the comparison between individual and group performance." (62) 
On the first point the literature of reasoning appears to confirm the idea that reason helps to produce and evaluate arguments. This does not prove that reasoning evolved for this reason or that arguing is the "main function" of reasoning, but it does show that reasoning works a great deal better in this setting than in the abstract.

2. Confirmation Bias

Confirmation bias is the most widely studied of all the cognitive biases and "it seems that everybody is affected to some degree, irrespective of factors like general intelligence or open-mindedness." (63). The authors say that in their model of reasoning confirmation bias is a feature.

Confirmation bias has been used in two different ways:
  • Where we only seek arguments that support our own conclusion and ignore counter-arguments because we are trying to persuade others of our view. 
  • Where we test our own existing belief by only looking at positive inference. For example, if I think I left my keys in my jacket pocket, it makes more sense to look in my jacket pocket than in my trouser pockets. "This is just trusting use of our beliefs, not a confirmation bias." (64) Later they call this "a sound heuristic" rather than a bias.
Thus the authors focus on the first situation, since they don't see the second as a genuine case of confirmation bias. The theory being proposed makes three broad predictions about confirmation bias:
  1. It should only occur in argumentative situations
  2. It should only occur in the production of arguments
  3. It is a bias only in favour of confirming one's own claims with a complementary bias against opposing claims or counter-arguments. 
I confess that what follows seems to be a bit disconnected from these predictions. The evidence cited seems to support the predictions, but they are not explicitly discussed. This seems to be a structural fault in the article that an editor should have picked up on. Having proposed three predictions, the authors ought to have dealt with them more specifically.

In the Wason rule discovery task participants are presented with 3 numbers. They are told that the experimenter has used a rule to generate them and are asked to guess that rule. They are able to test their hypothesis by offering another triplet of numbers. The experimenter will say whether or not it conforms to the rule. The overwhelming majority look for confirmation rather than trying to falsify their hypothesis. However, the authors take this to be a sound heuristic rather than confirmation bias. The approach remains the same even when the participants are instructed to attempt to falsify their hypothesis. However, if the hypothesis comes from another person, or from a weaker member of a group, then participants are much more likely to attempt to falsify it and more ready to abandon it in favour of another. "Thus falsification is accessible provided that the situation encourages participants to argue against a hypothesis that is not their own." (64)
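The dynamic of this task can be sketched in code. This is my illustration, not the article's; I use the standard textbook version in which the hidden rule ("any ascending numbers") is broader than the participant's hypothesis ("increments of 2"):

```python
# Toy version of Wason's rule-discovery (2-4-6) task.
# The experimenter's hidden rule is broad; the participant's hypothesis is narrow.

def hidden_rule(triple):
    """Experimenter's secret rule: the numbers simply ascend."""
    a, b, c = triple
    return a < b < c

def my_hypothesis(triple):
    """Participant's guess: each number is 2 more than the last."""
    a, b, c = triple
    return b == a + 2 and c == b + 2

# Confirmatory probes - triples chosen because they FIT the hypothesis.
# Every one also fits the hidden rule, so they teach the participant nothing.
confirming = [(2, 4, 6), (10, 12, 14), (1, 3, 5)]
assert all(hidden_rule(t) and my_hypothesis(t) for t in confirming)

# A falsifying probe - a triple the hypothesis REJECTS. The experimenter's
# "yes, it conforms" instantly reveals the hypothesis is too narrow.
probe = (1, 2, 3)
print(hidden_rule(probe), my_hypothesis(probe))  # True False
```

Confirmation can go on forever without exposing the error; a single falsifying probe exposes it at once, which is what makes participants' reluctance to falsify their own hypothesis so striking.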

A similar effect is noted in the Wason selection task (the link enables you to participate in a version of this task). The participant is given cards marked with numbers and letters which are paired up on opposite sides of the card according to rules. The participant is given a rule and asked which cards to turn over in order to test the rule. If the rule is phrased positively participants seek to confirm it, and if negatively, to falsify it. Again this is an example of a "sound heuristic" rather than confirmation bias. However "Once the participant's attention has been drawn to some of the cards, and they have arrived at an intuitive answer to the question, reasoning is used not to evaluate and correct their initial intuition but to find justifications for it. This is genuine confirmation bias." (64)
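For concreteness, here is a toy version of the selection task (my illustration, using the standard vowel/even-number form of the rule rather than the article's wording). The logically correct choice is to turn over exactly the cards that could falsify the rule:

```python
# Wason selection task, rule: "if a card shows a vowel on one side,
# it has an even number on the other". Which visible faces must be checked?

def could_falsify(visible):
    """True if turning this card over could reveal a counter-example."""
    if visible.isalpha():
        # A vowel card falsifies the rule if it hides an odd number.
        return visible.lower() in "aeiou"
    # An odd-number card falsifies the rule if it hides a vowel.
    # An even-number card can never falsify it (the rule says nothing about it).
    return int(visible) % 2 == 1

visible_faces = ["E", "K", "4", "7"]
print([f for f in visible_faces if could_falsify(f)])  # ['E', '7']
```

Most participants pick "E" and "4" - the confirming cards - whereas "4" is logically irrelevant and the disconfirming "7" is the one that matters.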

One of the key observations the authors make is that participants in studies must be motivated to falsify. They draw out this conclusion by looking at syllogisms, e.g. No C are B; All B are A; therefore some A are not C. Apparently the success rate for dealing with such syllogisms is about 10%. What seems to happen is that people go with their initial intuitive conclusion and do not take the time to test it by looking for counter-examples. Mercier & Sperber argue that this is simply because they are not motivated to do so. On the other hand, if people are trying to prove something wrong--if for example we ask them to consider a statement like "all fish are trout"--they readily find ways to disprove it. Participants will spend an equal amount of time on the different tasks.
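The syllogism's validity can itself be checked mechanically, which makes the roughly 10% success rate all the more striking. This brute-force model check is my illustration, not the authors': it enumerates every possible small "world" and confirms that no countermodel to the syllogism exists (assuming, as traditional syllogistic does, existential import, i.e. that B is non-empty):

```python
from itertools import product

def syllogism_valid(universe_size=3):
    """Model-check: No C are B; All B are A; therefore Some A are not C.

    Each element of the universe is a triple (in_A, in_B, in_C).
    Existential import is assumed: premises only count when B is non-empty.
    """
    for world in product(product([False, True], repeat=3), repeat=universe_size):
        no_C_are_B = all(not in_B for (in_A, in_B, in_C) in world if in_C)
        all_B_are_A = all(in_A for (in_A, in_B, in_C) in world if in_B)
        b_nonempty = any(in_B for (_, in_B, _) in world)
        if no_C_are_B and all_B_are_A and b_nonempty:
            if not any(in_A and not in_C for (in_A, _, in_C) in world):
                return False  # countermodel found: premises hold, conclusion fails
    return True  # no countermodel: the syllogism is valid

print(syllogism_valid())  # True
```

The machine finds the answer by exhaustive search in milliseconds; people armed with intuition alone mostly get it wrong, because nothing motivates them to hunt for counter-examples.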
"If they have arrived at the conclusion themselves, or if they agree with it they try to confirm it. If they disagree with it then they try to prove it wrong." (65) 
But doesn't confirmation bias lead to poor conclusions? Isn't this why we criticise it as faulty reasoning? It leads to conservatism in science, for example, and to the dreaded groupthink. Mercier & Sperber argue that confirmation bias in these cases is problematic because it is being used outside its "normal context: that is the resolution of a disagreement through discussion." (65) When used in this context confirmation bias works to produce the strongest, most persuasive arguments. Scholarship at its best ought to be like this.

The relationship of the most persuasive argument to truth is debatable, but the authors suppose that the truth will emerge if that is the subject of disagreement. If each person presents their best arguments, and the group evaluate them then this would seem to be an advantageous way of arriving at the best solution the group is capable of. Challenging conclusions leads people to improve their arguments, thus the small group may produce a better conclusion than the best individual in the group operating alone. Thus:
confirmation bias is a feature not a bug
This is the result that seems to have most captured the imaginations of the reading public. However, the feature only works well in the context of a small group of mildly dissenting (not polarised) members. The individual, the group with no dissent, and the polarised group with implacable dissent are all at a distinct disadvantage in reasoning! Confirmation bias works well for the production of arguments, but not so well for evaluation, though the latter seemed less of a problem.

Does this fulfil the three broad predictions made about confirmation bias? We have seen that confirmation bias is not triggered unless there is a need to defend a claim (1). Confirmation bias does appear to be more prevalent when producing arguments than in evaluating them, and we do tend to argue for our own claims and against the claims of others (2 & 3). However, the predictions included the word only, and I'm not sure that they have, or could have, demonstrated the exclusiveness of their claims. More evidence emerges in the next section, which deals (rather more obliquely) with convincing others.

3. Convincing Others

Proactive Reasoning in Belief Formation

The authors' thesis is that reasoning ought to aim at convincing others rather than arriving at the best decision. This section discusses the possibility that, while we do tend to favour our own argument, we may also anticipate objections. The latter is said to be the mark of a good scholar, though the article is looking at reasoning more generally. There is an interesting distinction here between beliefs we expect to be challenged and those which are not:
"While we think most of our beliefs--to the extent that we think about them at all--not as beliefs but just as pieces of knowledge, we are also aware that some of them are unlikely to be universally shared, or to be accepted on trust just because we express them. When we pay attention to the contentious nature of these beliefs we typically think of them as opinions." (66) 
And knowing that our opinions might be challenged we may be motivated to think about counter-arguments and be ready for them with our own arguments. This is known as motivated reasoning. Interestingly from my point of view, because I think I have experienced this, one of the examples they give is: "Reviewers fall prey to motivated reasoning and look for flaws in a paper in order to justify its rejection when they don't agree with its conclusions." (66).

The point being that from the authors' perspective it seems that what people are doing in this situation is not seeking truth, but only seeking to justify an opinion.
"All these experiments demonstrate that people sometimes look for reasons to justify ... From an argumentative perspective, they do this not to convince themselves of the truth of their opinion but to be ready to meet the challenges of others." (66)
If we approach a discussion or a decision with an opinion already formed, then our goal in evaluating another's argument is often not to find the truth, but to show that the argument is wrong. The goal is argumentative rather than epistemic (seeking knowledge). We will comb through an argument looking for flaws: finding fault with the study design or the use of statistics, or accusing it of logical fallacies. Thus although there are benefits to confirmation bias in the production of arguments, confirmation bias in the evaluation of arguments can be a serious problem: it may lead to nitpicking, polarisation, or the strengthening of existing polarisation.

Two more effects of motivated reasoning are particularly relevant to my interests: belief perseverance and the violation of moral norms. The phenomenon of belief perseverance (holding onto a belief despite evidence that the belief is ill-founded) is extremely common in religious settings. The argumentative theory sees belief perseverance as a form of motivated reasoning: when presented with counter-arguments the believer focuses on finding fault, and actively disregards information which runs counter to the belief. If the counter-argument is particularly unconvincing--"not credible"--it can lead to further polarisation. And in the moral sphere, reasoning is often used to come up with justifications for breaking moral precepts. Here reasoning can clearly be seen to be in the service of argument rather than of knowledge or truth.

Thus in many cases reasoning is used precisely to convince others rather than to arrive at the best decision, even when this results in poor decisions or immoral behaviour. We use reason to find justifications for our intuitive beliefs or opinions.

Proactive Reasoning in decision making.

The previous section was mainly concerned with defending opinions, while this final section looks at how reason relates to decisions and actions more broadly. On the classical view we expect reasoning to help us make better decisions. But this turns out not to be the case. Indeed, in experiments, people who spend time reasoning about their decisions make choices that are less consistent with their own previously stated attitudes. They also get worse at predicting the results of basketball games. "People who think too much are also less likely to understand other people's behavior." (69). A warning note is sounded here: some of the studies which showed that intuitive decisions were always better than thought-out decisions have not been replicated. So Malcolm Gladwell's popularisation of this idea in his book Blink may have over-stated the case. However, the evidence suggests that reasoning does not necessarily confer an advantage. Which, to my mind, is in line with what I would expect.

The argumentative theory suggests that reasoning should have most influence where our intuitions are weak - where we are not trying to justify a pre-formed opinion. One can then at least defend a choice if it proves to be unsatisfactory later. In line with research dating back to the 1980s this is called reason-based choice. Reason-based choice is able to explain a number of unsound uses of reasoning noted by social psychologists: the disjunction effect, the sunk-cost fallacy, framing effects, and preference inversion.

The connecting factor is the desire to justify a choice or decision. We can see this in action in many countries today with the insistence on fiscal austerity as a response to economic crisis. Evidence is mounting that cutting government spending only causes further harm, but many governments remain committed to it. As long as they can produce arguments for the idea, they refuse to consider arguments against.


Some important contextualising remarks are made in the concluding section, many of which are very optimistic about reasoning. Reasoning as understood here makes human communication more reliable and more potent.
"Reasoning can lead to poor outcomes not because humans are bad at it but because they systematically look for arguments to justify their beliefs or actions.... Human reasoning is not a profoundly flawed general mechanism: it is a remarkably efficient specialized device adapted to a certain type of social and cognitive interaction at which it excels" (72) 
The authors stress the social nature of reasoning. Generally speaking it is groups of people that use reason to make progress, not individuals, though a small number of individuals are capable of being their own critics. Indeed the skill can be learned, though only with difficulty, and one only ever mitigates, never eliminates, the tendency towards justification. Thus though confirmation bias seems inevitable in producing arguments, it is balanced out in the evaluation of arguments by other people.
"To conclude, we note that the argumentative theory of reasoning should be congenial to those of us who enjoy spending endless hours debating ideas - but this, of course, is not an argument for (or against) the theory." (73)


It ought to come as no surprise that a faculty of a social ape evolved to function best in small groups. The puzzle is why we ever thought of the individual as capable of standing alone, apart from their peers. It's a conceit of Western thinking that I think will come under increasing attack.

This review is also a sort of follow-up to an earlier blog, Thinking it Through, sparked by a conversation with Elisa Freschi in comments on her blog post: Hindu-Christian and interreligious dialogue: has it any religious value? I think Mercier & Sperber raise some serious questions about this issue. Reasoning does not work well in polarised environments. And religious views tend to be mutually exclusive.

I think it's unlikely that we'll ever be able to say that we evolved x for the purpose of y, except in a very general sense. Certainly eyes enable us to see, but it is simplistic to say that eyes evolved in order for us to see. We assume that evolution has endowed us with traits for a purpose, even when the purpose is unclear. And we observe that we have certain traits which serve to make us evolutionarily fit in some capacity. In this case the trait--reason--does not perform the function we have traditionally assigned to it. We are poor at discovering the truth through reasoning alone, and much of the time we are not even looking for it. Therefore we must look again at what reason actually does. This is what Mercier and Sperber have done. Whether their idea will stand the test of time remains to be seen. My intuitive response is that they have noticed something very important in this paper.

My own interest in decision making stems from the work of Antonio Damasio, particularly in Descartes' Error. My argument has been that decision making is unconscious and emotional, and that reasons come afterwards. Mercier & Sperber are pursuing a similar idea at a different level. Damasio suggests that we make decisions using unconscious emotional responses to information and then justify our decisions by finding arguments. And we can see the different parts of the process disrupted by brain injuries or abnormalities in specific locations. Thus neuroscience provides a confirmation of Mercier & Sperber's theory and correlates the behavioural observation with brain function. Neither cites the work of the other.

I presaged this review and my reading of this article in my essay The Myth of Subjectivity when I claimed that objectivity emerges from scientists working together. Mercier & Sperber confirm my intuition about how science works, including my note that scientists love to prove each other wrong. However they take it further and argue that this is the natural way that humans operate, and emphasise the social, interactional nature of progress in any field. And after all even Einstein went in search of support for his intuitions about the speed of light. He did not set out to disprove it. Thus we must reassess the role of falsification in science. It may be asking too much for any individual to seek to falsify their own work; but we can rely on the scientific community to provide evaluation and especially disagreement!

Those wishing to comment on this review should read Mercier & Sperber first. There's not much point in simply arguing with me. I've done my best to represent the ideas in the article, but I may have missed nuances, or got things wrong - I'm new to this subject. By all means let us discuss the article, or correct errors I have made, but let's do it on the basis of having read the article in question. OK?


Other reading. 
My attention was drawn to this article by an economist! Edward Harrison pointed to The Reason We Reason by Jonah Lehrer in Wired Magazine. (Be sure to read the comment from Hugo Mercier which follows the article). Amongst Lehrer's useful links was one to the original article.
Hugo Mercier's website, particularly his account of the Argumentative Theory of Reasoning; and
Dan Sperber's website.

18 Aug 2016

Hugo Mercier has uploaded a new paper to

The Argumentative Theory: Predictions and Empirical Evidence. A Social Turn in the Study of Higher Cognition. Trends in Cognitive Sciences, September 2016, 20(9): 689-700.


The argumentative theory of reasoning suggests that the main function of reasoning is to exchange arguments with others. This theory explains key properties of reasoning. When reasoners produce arguments, they are biased and lazy, as can be expected if reasoning is a mechanism that aims at convincing others in interactive contexts. By contrast, reasoners are more objective and demanding when they evaluate arguments provided by others. This fundamental asymmetry between production and evaluation explains the effects of reasoning in different contexts: the more debate and conflict between opinions there is, the more argument evaluation prevails over argument production, resulting in better outcomes. Here I review how the argumentative theory of reasoning helps integrate a wide range of empirical findings in reasoning research.

09 April 2013

What is Consciousness Anyway?

I'm often frustrated by simplistic worldviews, especially when I fall into one myself. A couple of years back I wrote a response to the charge that is frequently levelled at me, namely that I am a materialist (gasp!). The choices in these cases seem to be materialist or non-materialist (where the latter involves believing in a range of supernatural entities and forces). Similarly there seems to be an assumption that if one is a materialist then one considers consciousness to be a mere epiphenomenon. The suggestion is often that if you don't think consciousness is an ineffable supernatural entity then you must believe it to be a mere epiphenomenon. But these are not the only two choices.

A related claim is that science cannot, and does not, tell us anything about consciousness. This is clearly not true, as scientists who study the mind are able to tell us a great deal about it. The idea that science cannot explain consciousness seems to be rooted in particular views rather than based on familiarity with scientific inquiry. In other words it's just an ideological position.

I don't think scientists have fully explained consciousness by any means, but there are some very interesting observations of, and ideas about, the mind, and a lot of really insightful deductive work on how the mind must function in order to exhibit the features it does (aka reverse engineering). At present we have some interesting conjectures about how the mind might work that are guiding our search for more data. Scientists are busy trying to disprove one theory or another.

Now, I happen to be a fan of info-graphic guru David McCandless and recently bought a copy of his book Information is Beautiful. One of his infographics lists 12 explanations for consciousness (including a Buddhist version). Each is represented by a graphic and a sentence. The same information with animations is online here. (At time of writing they are conducting a survey of opinions about consciousness using this set). Below is his set of 12 with a couple of additions. The heading in bold and the summary in italics come from McCandless. I have added a few explanatory comments in each case.

Substance Dualism
Consciousness is a field that exists in its own parallel "realm" of existence outside reality so can't be seen.
Aka Cartesian Dualism. Strict separation between mind and body: consciousness and matter are two distinct types of substance. The problems with this view are legion and almost no one takes it seriously any more. Still, if you believe in ghosts or psychic powers then you have a foot in this camp!

Substance Monism
The entire universe is one substance.
All is one, dude. Included in this is the form of idealism which says that everything is mind, and physical objects don't really exist. Buddhists sometimes flirt with idealism, e.g. the 'mind only' (cittamātra) school. The opposite extreme, more popular in the West, is that everything is just material, which is covered by epiphenomenalism, behaviourism and functionalism.

Emergent Dualism
Consciousness is a sensation that "grows" inevitably out of complicated brain states.
This features in a common science fiction theme: a computer network becomes so complex that it spontaneously develops consciousness. As a philosophy of mind this view relies on observations about complex systems emerging from simpler units interacting. One of the central insights of work on fractals and complexity theory is that simple repeating units can produce patterns and processes of startling complexity. The view accepts that we are constructed from matter, but argues that complex arrangements of matter are capable of displaying properties which are greater than the sum of their parts - consciousness and even a soul are attributable to this by proponents.
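The claim that simple repeating units can generate startling complexity is easy to demonstrate. Below is a minimal sketch of my own (not anything McCandless or the emergentists cite) using Rule 110, an elementary cellular automaton in which each cell's next state depends only on itself and its two neighbours, yet the global pattern that unfolds is famously intricate:

```python
def step(cells, rule=110):
    """Apply one update of an elementary cellular automaton.

    Each cell's 3-cell neighbourhood forms a 3-bit number; the
    corresponding bit of `rule` gives the cell's next state.
    """
    n = len(cells)
    out = []
    for i in range(n):
        # neighbourhood as a 3-bit number (wrapping at the edges)
        idx = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> idx) & 1)
    return out

def run(width=64, steps=20):
    """Evolve a single live cell and return every generation."""
    cells = [0] * width
    cells[width // 2] = 1
    history = [cells]
    for _ in range(steps):
        cells = step(cells)
        history.append(cells)
    return history

if __name__ == "__main__":
    # Print the evolving pattern; local rules, global complexity.
    for row in run(width=64, steps=20):
        print("".join("#" if c else "." for c in row))
```

Nothing in the update rule mentions the large-scale structures that appear; they are, in the emergentist's sense, more than the sum of the parts.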

Property Dualism
Consciousness is a physical property of all matter, like electromagnetism, just not one the scientists know about.
Science is making new discoveries all the time, right? So why should we assume that all the properties of matter have been discovered yet? The idea here is that everything is made of one substance, matter (and this is thus a form of substance monism), but that matter has multiple properties. In particular matter has physical properties and mental properties. In this view all matter has a psychic component.

This is similar to the Jain view of the world, which considers that everything is conscious. Consciousness exists in a hierarchy depending on how many senses the entity possesses. A rock has only the sense of touch, so is only minimally conscious. Some animals have more or different senses than we do.

As a way around both materialism and idealism this view has some merits.

Pan Psychism
All matter has a psychic part. Consciousness is just the psychic part of our brain.
This seems to be a popular view amongst my colleagues. Sometimes it's described in terms of the brain being like a radio that 'picks up' consciousness and tunes it in so we can be aware of it. It is not very different from property dualism; indeed it is sometimes called panpsychic property dualism. However panpsychism treats everything as mind, where mind has physical and mental properties. As I understand it, Theravāda Abhidhamma sees the world in this way. Many Buddhists argue that in our world mind creates the physical world, possibly on the basis of the nidāna sequence in which viññāna is the condition for nāmarūpa.

Identity Theory
Mental states are simply physical events that we can see in brain scans.
Aka type physicalism or reductive materialism. In this view the states and processes of the mind are identical to states and processes of the brain. In other words what you think of as your consciousness is simply the physical states of the brain. This is a form of monism - it doesn't see the mind as substantially different from the brain. 


Functionalism
Consciousness and its states (belief, desire, pain) are simply functions the brain performs.
Consciousness is the sum of the functions of the brain. Mental states are constituted solely by causal relations to other mental states, sensory inputs, and behavioural outputs. Presumably this does away with the hard problem of consciousness? Functionalism has its origins in Aristotle's idea of the soul: that it is just that which enables us to function as a human being. Functionalism can be thought of as behaviourism seen through the lens of cognitive psychology.


Behaviourism
Consciousness is literally just behaviour. When we behave in a certain way, we appear conscious.
Once a very popular view, behaviourism dispenses with the idea of consciousness. Life is just stimulus and response. In higher animals such as humans this is so complex that it appears to be consciousness, but really it isn't. This kind of mechanistic thinking about humans was popular early in the Enlightenment period, when clockwork was the most complex technology of the day. I associate behaviourism with the advent of computers. The mind is often likened to the most complex human creation of the moment: cavemen no doubt thought of the mind as a flint knife, and when computers came along they seemed like a metaphor for the mind. But in practice computers work very differently from the mind. However, the invention of neural networks showed that it is possible to imitate more closely how the human mind works. This is the subject of one of De Bono's lesser known works: I am Right You Are Wrong (which I recommend).


Epiphenomenalism
Consciousness is an accidental side effect of complex physical processes in the brain.
This is the view that seems to get Buddhists most steamed up. It is another form of mechanistic thinking, which downplays the hard problem of consciousness by denying that anything is going on: "Move along folks, there's nothing to see here." It arose out of attempts to get around mind/body dualism.

If this view were to hold then we ought to be able to build a sufficiently complex clockwork device that was indistinguishable from a conscious being.

Quantum Consciousness
Not sure what consciousness is, but quantum physics, rather than classical physics, can better explain it.
There is no reason why the mind should not involve quantum phenomena. But there is no evidence that it does. For some time it has been trendy to invoke quantum mechanics as an explanation for all sorts of things. But those attempting this seem to be philosophers rather than quantum theorists (Dennett for instance) and I'm doubtful. I've attempted to debunk the idea that Buddhism has anything in common with quantum mechanics (see Erwin Schrödinger Didn't Have a Cat).

100 billion cells each with 1000 connections is really very complex, so I don't see an a priori need to invoke quantum mechanics in order to explain or describe consciousness. On the other hand the adaptability of an amoeba might make us think again since it is capable of remarkably sophisticated responses to its environment given its relative simplicity of form. However until there's actual evidence of quantum effects this remains in the realm of speculation. Maybe someone more familiar with Dennett can point to the evidence that he cites?
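For what it's worth, the back-of-envelope arithmetic behind that claim (the round figures quoted above are only rough estimates):

```python
# Rough scale of the brain's wiring, using the round figures quoted
# above: ~100 billion neurons with ~1000 connections each.
neurons = 100 * 10**9              # 1e11 neurons
connections_per_neuron = 1000      # ~1e3 synapses per neuron
synapses = neurons * connections_per_neuron

print(f"{synapses:.0e} synapses")  # on the order of 1e14
```

A hundred trillion connections is ample room for classical complexity before any quantum machinery need be invoked.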


Cognitivism
Consciousness is the sensation of your most significant thoughts being highlighted.
This has quite a lot in common with functionalism, in that it uses insights from cognitive psychology to improve on behaviourism. In a sense it highlights thinking as a distinct kind of behaviour. It incorporates the idea of the mind as a computer which processes information and produces behaviour (behaviourism only acknowledges behaviour).

Higher Order Theory

Consciousness is just higher order thoughts (thoughts about other thoughts)
The approach emerges from the understanding that there are different types of thoughts, and that they operate at different levels of organisation. One of the basic distinctions is between unconscious perception and conscious perception. Another is between intransitive consciousness (mere consciousness) and transitive consciousness (consciousness of some object). Distinctions amongst philosophers of mind often depend on finding the right level at which to describe the mind. Higher Order Theory is primarily concerned with understanding conscious, transitive mental states (in this it is similar to early Buddhism).

Buddhism
Consciousness is a continuous stream of ever-recurring phenomena, pinched, like eddies, into isolated minds.
Clearly McCandless is not that well informed on Buddhist ideas about consciousness, and since he doesn't cite sources we can't get at why he thinks we think like this. The last part sounds more like Hinduism to me.

Early Buddhism 
Consciousness is always consciousness of... 
If consciousness is even a subject of inquiry (and I'm not convinced it is) then the usual way of talking about it is that consciousness arises when sense object meets sense faculty and gives rise to sense consciousness. Early Buddhism focusses on transitive consciousness and has almost no interest in the mind otherwise. The word being translated as 'consciousness' is viññāna, which probably means something more like cognition or awareness. Such a cognition which arises in dependence on conditions is referred to as conditioned (saṅkhata); it can be analysed into five branches (pañcakhandhā ≡ papañca). It is possible to have unconditioned (asaṅkhata) cognition when one sees and knows mental objects (dhammā) as they are (yathābhūta-ñānadassana). It is claimed that the six senses (eye, ear, nose, tongue, body, mind) and their objects make up the totality (sabbaṃ) and that any other proposition about the world is beyond the proper domain (visāya) of inquiry.

Late Buddhism
Consciousness is a manifestation of karmic seeds
Consciousness arises on the basis of a storehouse for the 'seeds' of karma (ālayavijñāna). Floating on top of this layer are the sensory cognitions which produce provisionally valid cognitions (relative truth). The extra layer at the bottom was invented to try to account for difficulties explaining rebirth (the problem of continuity of consequences). However the ālayavijñāna is a kind of permanent substrate and thus suffers from metaphysical problems related to eternalism. I argue that the problem of continuity between births cannot be practically solved without positing some kind of ātman.


Damasio's Model of Consciousness
This is a rubric for ideas in which consciousness is an emergent property of the brain's monitoring of the environment and of the body's own internal states, using virtual representations created in the brain. Combined with memories of previous states, projections of future states (imagination), and a representation of the observing subject as a virtual self, consciousness is the overall effect of these functions. This emerges particularly from the work of Antonio Damasio and Thomas Metzinger and is closest to my own understanding of what consciousness is or does.


Of course it must be said that all of these are the thinnest of glosses on some quite complex ideas, and that, not being expert in any of them, I have probably got some of them wrong. My purpose here is mainly to represent the complexity of the subject matter and encourage readers to take in some of the options that are available. There are more than two choices. Being interested in the science of the mind and uninterested in the supernatural leaves me choices other than epiphenomenalism.

In trying to understand McCandless's categories it becomes obvious that many of them overlap substantially. Some are in fact subsets of broader categories. So I wouldn't put too much store by his list. It illustrates the point that there are a lot of theories, but not much more.

It seems to me that if we are to make any progress in understanding ourselves then we need to begin with observation and allow understanding to emerge. My beef with philosophy is that it starts with theories and searches for facts to fit. Indeed the vast legacy of philosophical speculation of the mind completely divorced from observation would seem to be a major impediment to progress.

My enthusiasm for Thomas Metzinger stems precisely from the fact that he starts with observations and works towards an explanation. I'm also interested in George Lakoff's ideas about how categorisation, metaphor and embodied cognition influence how we see cognition and selfhood. Lakoff's work also stems from observation. I don't mind being presented with a worked-out theory as long as the evidence for and against the theory follows.

I tend towards rejecting any strong form of mind/body dualism. Free-floating, disembodied consciousness simply does not make sense to me. All the evidence I am aware of points to an intimate connection between brain and consciousness. Metzinger's account of his out-of-body experiences was central to undermining the last vestiges of my dualistic thinking in this area, because it showed that unusual phenomena, like religious traditions, don't have to be taken at face value. Yes, it really does seem as if consciousness can leave the body; but no, it doesn't have to literally do so to produce a convincing illusion. Traditional Buddhist ideas about consciousness are compatible with this view, as long as we are not too literalistic.

With Kant I accept the existence of an objective world distinct from my perception of it, along with the caution that we can only infer things about this world, never know it directly (since our only source of information about the world is our senses). However this is not a problem in the foreground of early Buddhist thought. The objective world is a given in early Buddhist texts. Our experience of the world occurs in the space of overlap between a sense-endowed body, a world of objects, and attention to the overlap. The entire focus of early Buddhist practice takes place in this liminal space, where our responses to experience feed back into, and to some extent determine, the quality of our experience.

One of the main criticisms that comes from the anti-physicalist side of the argument is that theories which don't accept a supernatural aspect to mind, i.e. an aspect of mind which operates outside the known laws of nature, can't account for qualia. One of the reasons this claim stands is that such people do not keep up with neuroscience. Some recent research looks promising.
Orpwood, Roger. 'Qualia Could Arise from Information Processing in Local Cortical Networks.' Frontiers in Psychology. 2013; 4: 121. Published online 2013 March 14. doi:10.3389/fpsyg.2013.00121
Jakub Limanowski, and Felix Blankenburg. 'Minimal self-models and the free energy principle.' Frontiers in Human Neuroscience. 2013; 7: 547. Published online 2013 September 12. doi: 10.3389/fnhum.2013.00547

See also
 The Where of What: How Brains Represent Thousands of Objects by Ed Yong (Dec 2012), which summarises the state of research on this subject as of 2012. 
I also recommend A Brain in a Supercomputer, a TED talk by Henry Markram which helps with getting an idea of the complexity of the brain. Follow this up at The Blue Brain Project.
We do not yet fully understand consciousness. But this is no reason to fall back on supernatural explanations.
The route away from superstition and fearful projections onto the world has been long and difficult, but it has been worth it. On the other hand, what we are learning is far more sophisticated than the medieval insights preserved by Buddhists, and if we stick to what's in our ancient texts, at some point we'll become irrelevant. The Mindfulness Therapy movement is already showing how this might work, since they have been far more successful in communicating their version of Buddhist methods in a much shorter space of time.


See also this in the Guardian (10.4.13): Transparent brains reveal their secrets – video. A fly-through of a whole mouse brain where the non-neuronal material has been rendered transparent - every dendrite of every neuron is visible! Selective stains enable neurons of different functionality to be coloured differently. The original article is: Chung, K.,  et al (2013). 'Structural and molecular interrogation of intact biological systems.' Nature. doi:10.1038/nature12107.

I should also have given a nod to the Human Connectome Project. No doubt this new technique used above will advance their work considerably.

Brain as Receiver

One of the options that comes up regularly to explain consciousness in a dualistic frame is the brain-as-TV-receiver analogy. This is ruled out by Steven Novella. He argues that if the brain were merely a TV that displays the information beamed into it, then the analogy would have to answer these questions positively:
A more accurate analogy would be this – can you alter the wiring of a TV in order to change the plot of a TV program? Can you change a sitcom into a drama? Can you change the dialogue of the characters? Can you stimulate one of the wires in the TV in order to make one of the on-screen characters twitch?
Disrupting the reception, via brain damage, does not simply distort the image of the show; it changes the plot and the characters. The brain simply cannot be a passive receiver. The brain creates consciousness. This is the only way to explain the correlations.

27 July 2012

The 'Mind as Container' Metaphor

"Whatever complex biological and neural processes go on backstage, it is my consciousness that provides the theatre where my experiences and thoughts have their existence, where my desires are felt and where my intentions are formed."

Oxford Dictionary of Philosophy
ONE OF THE MOST fundamental metaphors we use when talking about mind is: the mind is a container. The container prototype is very important in terms of how we interact with the world. A container is a physically finite and bounded space, with a clear distinction between inside and outside. Containers often, but not always, have lids which seal the inside from outside, or vice versa.

Our body is physically a container with a sealable opening at either end: a mouth and an anus. We put food into our body via our mouth. The mouth itself is a container, because we put food into it as well. Various things happen inside our body, and shit comes out of our anus. Similarly we breathe into our lungs (which are inside our body) and out again. Virtually all other animals follow a similar body plan and set of biological processes.

But these metaphors have implications which go well beyond the way we talk. George Lakoff and his colleagues, especially Mark Johnson, have shown that abstract thought is always metaphorical, and that the metaphors we draw on for abstract thought are often based on how we physically interact with the world.

So when "a thought comes into our head" there are two metaphorical processes happening. Firstly, we are allowing that our head is a kind of container. This might seem obvious because physically our skull is a hollow chamber of bone filled with our brain. It also has some extrusions attached and several openings. But the head here is also standing for what goes on 'in' the head, a form of metaphor called metonymy: where a part stands for the whole, and sometimes vice versa. Our head is the container of thoughts; that is, the head here stands for the mind as the container of thought: the thought is in [the container of] the mind, which is in [the container of] the head. The head is a particular kind of container, more like a room which we inhabit. Experientially, when the thought comes into my head, I become aware of it because it enters the space "I" also occupy.

The second metaphorical process that is happening is that both "I" and the thought are (metaphorically) solid objects with shape and mass. We can take an idea and turn it over in our heads, kick it around; we juggle priorities, manipulate data and crunch numbers; we weigh alternatives, and can be weighed down by our cares. "I" am the same kind of object because I exist in the same domain as the ideas - thinking goes on in my head, and I am thinking my thoughts.

As metaphors there is absolutely nothing wrong with these abstractions. Abstraction allows us to be much more sophisticated in how we interact with the world and each other. Abstraction allows us to use our imagination to consider how things might be, to think about new ways of using tools, for example, or new ways to modify tools to do a certain job. In part at least this ability to abstract is related to a set of neurons called mirror neurons. These neurons are active when we do an action, but also when we see an action being performed by someone else. If the action is a facial expression, then something interesting happens: observing the action, our own neurons become active and we get a sense of what it would be like to have that facial expression ourselves. This allows us to know how someone is feeling by observing their face (along with their body language and the tone of their voice). This is a very useful facility to have.

However we are not usually aware of what we are doing: we have the result without understanding the working. In fact the working only became visible when we started to use powerful real-time brain scanning techniques. When we respond to a smile we aren't aware of the mechanisms that allow us to parse the visual information, recognise the face and the expression, and translate that into an internal state that we can feel, and then formulate a response. We just smile back or mutter humbug or whatever.

Similarly, when a thought comes into our head we see this in a naive realist way: just as though something with shape and mass has entered a room in which we were already an occupant. For many centuries philosophers took this metaphor as real and asked a lot of questions about the nature of the container. Alternatively, as the ODP definition says, we think of consciousness as a special kind of room (a theatre) in which we observe the experiences we have, with the implication that we are the audience watching the action on the stage. Since we started to learn about the function of the brain, we have extended the metaphor to make the brain the container of consciousness or personality.

But as comfortable as this way of referring to our minds is, it's still just a metaphor, not a reality. Remember that a metaphor is when you describe one thing as though it were something else. It's very important to remember what it is you are actually describing, which is a hard thing to keep in mind. The mind is also sometimes a leaky container!

Most people, I think, would be surprised to learn that this metaphor is not universal. Luhrmann (2011), surveying the ethnographic literature on hallucinations, notes some research conclusions: "The Iban [tribe of Borneo] do not have an elaborated idea of the mind as a container". (p.79) In the context of research on psychosis this means that "the idea that someone could experience external thoughts as placed within the mind or removed from it was simply not available to them."

We also know that people experience the complete breakdown of the sense of in-here and out-there under certain circumstances (e.g. Jill Bolte Taylor's stroke). So the metaphor is not universal, not hardwired. It is a culturally conditioned aspect of a virtual model which our organism generates for the purpose of optimising its interactions with the environment. But the metaphor is so pervasive in English that it's very difficult for me to write a sentence about the virtual model without invoking the mind as container metaphor.

I might add that all of the aspects of consciousness are similarly contingent and plastic.

I came across a quote from Wittgenstein recently which seems apposite: "meaning is use". By this he seems to have meant that a word takes its meaning not from a relationship with the object it names, nor from the ideas the object engenders, but only from the way that speakers use the word. There is something in this. However, I would add some caveats, because the study of sound symbolism and of embodied cognition don't allow for a strict application of 'meaning is use'. Research in sound symbolism tells us that the sounds we use to make words are symbols, and that there is a relationship between the symbols we choose and the objects and events we are observing or thinking of. The case I've been making above is that how we think, the very metaphors we use to represent abstract ideas, is based on how we physically interact with the world. So, yes, 'meaning is use', but use is not arbitrary; it is motivated (to use de Saussure's term) by these existing relationships, i.e. it operates within limits and tends towards pre-determined states.

The thing I wanted to draw out is that the question 'what is consciousness?' might not be a sensible question. We might accurately answer that consciousness is the experience of being aware, of having a sense of agency and a first-person perspective. In effect there might be no 'Problem of Consciousness'. There is certainly an experience, but does it point to a real container, a real theatre in which we experience consciousness? The answer would seem to be that it does not.

I think this would be the Buddhist answer as well, or at least it would be outside of Western Buddhism. As I suggested above, it's very difficult for us Westerners to think of consciousness at all without unconsciously invoking a metaphor which we habitually take to be real. The very terminology we use asserts the reality of the abstractions and metaphors we use to describe the experience. In a targeted, but not comprehensive, search I have not found viññāṇa being used in the sense of a container of experience in Pāli. Indeed, just what viññāṇa refers to is not entirely clear to me, except that it is an essential component of perception; that it is consistently distinguished from the sense objects (rūpa, sadda, etc.) and from the sense organs (cakkhu, sota, etc.); and that it comes in six varieties (cakkhu-viññāṇa, sota-viññāṇa, etc.), including mano-viññāṇa. So the one thing this does not look like is the theatre of experience. If, for instance, the Buddhist texts say that we experience vedanā in viññāṇa, then I have yet to find the passage where they do. In what sense does viññāṇa resemble our Western conceptions of consciousness at all? My response would be that it doesn't resemble them at all.

One can broaden the search quite easily by looking for viññāṇasmiṃ/viññāṇe (the locative singular), which we would expect to translate as 'in consciousness' if viññāṇa were a container. We find many examples of this grammatical form in Pāli. One of them is indeed treating viññāṇa as a metaphorical container. At M iii.18, and in many other places, the assutavā [i.e. the ignorant or uninformed person] seeks (in vain) for self in viññāṇa and viññāṇa in self. But it's clear that the view being described is not one that the knowledgeable Buddhist would subscribe to.

At M i.139 we find another use of the locative (with the sense of 'with reference to'). Here it is the well-informed (sutavā) disciples of the noble ones who become fed up (nibbindati) with reference to rūpa, vedanā, saññā, saṅkhāra and viññāṇa. At M i.230 a materialist says of the khandhas: with viññāṇa [and the others] as self (atta), a person (purisapuggala), from resting in viññāṇa (viññāṇe patiṭṭhāya), produces merit or non-merit. Gotama proceeds to demolish the views of the materialist, treating the khandhas as he customarily does: not mine, not me, not myself.

And that accounts for all of the occurrences of viññāṇasmiṃ/viññāṇe I found in a brief survey of the nikāyas. No doubt there are others, but they don't stand out. Buddhist texts, so far as I can tell, are aware that some misguided people do use the 'mind as container' metaphor, but the Buddhist Theory of Mind does not. For Buddhist thinkers there is no theatre of experience; there is just experience. The implication for us is that the experience of being in a theatre of experience is just another experience. Perhaps the difference lies in the lack of theatres in Iron Age India and the largely outdoor lifestyle of the Buddhists. Virtually all of the action of the Pāli Canon takes place outside.

In any case we think very differently from the ancient Buddhists about the mind. Recall also that they did not see emotion as a separate category of experience, but lumped it in with citta. (Cf. Emotions in Buddhism) Judging by their language, we can see that they lived in a very different world to us. Our conceptions about the world, the mind, and life generally are often not applicable to the past; nor theirs to the present. Our scientists and philosophers have spent time and resources looking for this theatre, and ironically neuroscientists seem to be confirming that our ancient forebears were right: mind as a container is a figment, generated by hypostasizing a metaphor we once used to describe the experience of having experiences.


This essay was inspired by reading: On Containers and Content, with a Cautionary Note to Philosophers of Mind, by Eric Schwitzgebel.

Mind Metaphors in Pāli

iti kho, ānanda, kammaṃ khettaṃ viññāṇaṃ bījaṃ taṇhā sineho... hīnāya dhātuyā viññāṇaṃ patiṭṭhitaṃ (AN i.232)
Thus, Ānanda, action is a field, cognition is a seed, and craving is sap... cognition is established on a low level. 

Seyyathāpi bhikkhave, kūṭāgāraṃ vā kūṭāgārasālā vā uttarāya vā dakkhiṇāya vā pācīnāya vā vātapānā. suriye uggacchante vātapānena rasmi pavisitvā kvāssa patiṭṭhitāti. (SN 12.64)
Suppose, bhikkhus, there were a roofed house or a roofed hall with windows on the north, south or east side. When the sun rises, where do the rays that enter through a window land? 

Yaññadeva bhikkhave paccayaṃ paṭicca uppajjati viññāṇaṃ tena teneva saṅkhaṃ gacchati... Seyyathāpi bhikkhave yaññadeva paccayaṃ paṭicca aggi jalati, tena teneva saṅkhaṃ gacchati. (MN 38)
Bhikkhus, whatever condition cognition arises in dependence upon, it is reckoned after just that... just as whatever condition a fire burns in dependence upon, it is named after just that.

Magic trick
Pheṇapiṇḍūpamaṃ rūpaṃ vedanā bubbuḷupamā
Marīcikupamā saññā saṃkhārā kadalūpamā,
Māyūpamañca viññāṇaṃ dīpitā diccabandhunā.
(SN 22.95)
The kinsman of the Sun has taught that:
form is like a ball of foam, sensation is like a bubble,
perception is like a mirage, intention is like a plantain,
cognition is like an illusion.

27 April 2012

Subjective & Objective

The two terms subjective and objective occur very frequently in discussions of Buddhism. They are used in fairly standard ways, according to psychological or philosophical norms. But there is also the suggestion that bodhi consists in a breakdown of the distinction between subjective and objective. In this essay I will look at some of the philosophical assumptions behind these two words, and suggest that they are not in fact very useful to us as Buddhists, because they don't apply in the domain that most interests us: experience.

The two words are part of a cluster linked by the common element 'ject' (meaning 'to throw out, to spout'), which comes from a Proto-Indo-European root meaning 'to throw, project'. The cluster of English words includes: abject, adjacent, adjective, deject, eject, gist, inject, interject, jet, jetsam, jetty, jut, object, project, reject, subject, trajectory.

Etymologically, an object is something thrown (ject) against (ob-), i.e. something we come into contact with through our senses (Buddhists also saw objects as striking the senses). A subject, by contrast, is something thrown under (sub-), meaning something under our control. A 'subject' of the king is subjected to the king's rule. Similarly, we are said (psychologically) to be 'a subject' because we believe our body and thoughts to be under our control. How far this is true is debatable, but this is what the etymology tells us.

Now, the suffix -ive is used to turn a verb into an adjective. So objective simply means 'of or pertaining to objects' and subjective means 'of or pertaining to control'. But time has extended the simple meaning. In the case of objective the OED suggests "anything external to the mind, and actually real or existent; exhibiting facts without emotion or opinions; objects which are seen by other observers, not just the subject." There are other definitions, but these are the relevant ones. Similarly, subjective is now defined in terms of "the personal, proceeding from idiosyncrasy or individuality; not impartial; belonging to the individual consciousness or perception; imaginary, partial or distorted."

So these two terms have come to represent a fundamental dichotomy: what exists in the world, and what I individually perceive, including my sense of being a unique, independent self. Along with this dichotomy comes the assumption that we can tell the difference between the two domains. A shared experience, for example, is more likely to be considered objective than a private one, though we do also doubt the objectivity of groups. It is thought that scientists who describe objects dispassionately are being objective; that they are describing what really exists, as it exists. There were some notable attacks on this view in the 20th century, but the pendulum is already beginning to swing back from the extreme relativism of French nihilism and distrust of authority. Scholars are once again seeking objectivity (scientists never stopped!), though with more caveats than in the past, so that post-modernism was not a complete loss.

Now, I have described the Buddhist model of consciousness on a number of occasions, most recently in my Rave on Phenomena. Early Buddhism grants that there are objects of the senses; it is dualistic to this extent. It grants that there is a sense faculty, and that this is associated with a locus of experience (the body) and with mental processes such as sensing, apperception and categorisation. When these come together in the light of sense consciousness, then we have an experience. What we are aware of, and respond to, is experience: the complex product of interactions between sense object, sense faculty and sense consciousness. This is similar to the kind of process outlined in recent years by, for example, Thomas Metzinger. In this model we know nothing of objects themselves, nor of ourselves as a subject. What we know is the experience of objects and the experience of ourselves as a subject. This distinction is vitally important to get clear.

Shared and repeatable experience leaves us with only one sensible conclusion: objects exist independently of us. There's every reason to think that the early Buddhists agreed with this, and that early Buddhism was therefore a form of Transcendental Realism. That the self is simply an object of the mental faculty is more difficult to show, but I have summarised and endorsed Thomas Metzinger's ideas on the first-person perspective. I'm convinced largely because of what happens when the first-person perspective breaks down. The self is a dynamic process of self-awareness. Like Metzinger I find Antonio Damasio's accounts of how this might come about quite plausible.

The terms objective and subjective, as they are used today, seem to make assumptions which, if we accept the Buddhist model of consciousness, we must conclude are false. When we say "objective" we cannot be referring to what exists, because it is implicit in our model that we can say nothing about it except how we experience it, and experience contains an irreducible subjective component. Indeed, I've challenged people several times now to come up with an unequivocal reference to the Buddha discussing the nature of objects, and so far no one has come forward to accept the challenge. The objective world becomes a shorthand for what we regularly and repeatedly experience, and what seems to be experienced by other people regularly and repeatedly. And while I do say that it makes sense that these experiences must be based on something independent of the observer, I go no further than that.

The idea of subjectivity also needs to be critiqued. The subjective is said to be private and individual, our sense of being a self and being in control. But if we accept that all experiences are conditioned - i.e. arise in dependence on sense object, faculty and consciousness - then we get into a loop of subject and object. We can't be a subject unless we are simultaneously an object, and vice versa. We've tended to separate so-called "subjective experience" off - and to distrust it as a source of knowledge. But experience arises in the interactions of sense-object, faculty and consciousness. No experience can be subjective or objective, all experience is both at the same time.

One of the most important points we can make is that, far from being under our control, neither the mind nor the body responds easily to our commands. We have limited control at best: we cannot stop our bodies from becoming ill, ageing and dying, for example; there are some reflexes we cannot override; we cannot consciously control our viscera. [1] Similarly with our mind. Thoughts and impulses appear unbidden from nowhere. Measurement has shown that our motor cortex becomes active some time before we consciously come to a decision to move a limb; movements are not actually under our conscious control, despite the persistent illusion that they are. Our mind is more amenable to control, perhaps, but only with rigorous training spanning years. And even then it is so tightly linked to our bodies that as our body ages and becomes ill our minds are involuntarily affected. So many things affect our moods - weather, diet, exercise, social status - and none of them is under our direct control.

So if the terms subjective and objective do not even apply to the Buddhist model of consciousness, then in what sense can bodhi be said to be a breakdown of the distinction between them? We are fortunate in this respect to have the testimony of Jill Bolte Taylor, a neuroscientist who had a massive temporal lobe stroke that deprived her of language and disrupted her sense of self. She described being unable to distinguish where her body ended, and as a result feeling huge and extended. This is a common sensation for meditators, which even I have experienced. For Taylor it was accompanied by bliss and a sense of profundity. This is obviously a very desirable mode of functioning. She had the classic mystical experience of feeling at one with everything, and that everything was one. But in her case the cause was a massive stroke causing extensive brain damage. There can be no doubt that the stroke changed Taylor's life, and that she has dedicated herself to talking about human potential since her rehabilitation (which took many years). But did she experience bodhi? I don't know, and in a way I can't know, but my sense was that, despite her being a likeable person, her experience had some real limitations. My main worry is that, apart from having had a massive stroke, she did not seem to have insights into the processes which might bring about such an experience. I acknowledge the value she found in the experience, and it is interesting and inspiring to hear her talk, but I am reluctant to pursue the experience of having a massive stroke.

I've tried to show that subjective and objective cannot have the same meaning in a Buddhist context as they do in either philosophy or everyday speech; that really, considering the way we use these words, they don't apply. I'm resigned to talking about objects of the senses, but I don't see a role for the term 'subject' at all. I find Metzinger's more descriptive terminology - e.g. sense of self, first-person perspective - less fraught and more useful. We don't have subjectivity or objectivity; we have experiences, arising from being equipped with sensory apparatus in a world of objects to be sensed. However, sometimes it is safe to conclude that an experience was private: if we have a vision, but no one else in the room sees it, then it is a private experience. In this case the object may very well be an internal object, such as a memory.

In the long run early Buddhism seems entirely unconcerned by the nature of objects. The nature of self-awareness gets some attention, but the main thrust of the Buddhist program is to be aware of our responses to sensory experience - of being drawn to, attached to, addicted to and obsessed by pleasure especially. The mainstream of practice seems to be paying attention to what is happening in our field of experience, and monitoring our responses to it.


  1. Most of us have control over the last part of our gastrointestinal tract, and some people do seem able to gain limited control over their body temperature and heart rate. But I've yet to read of anyone with control over, say, their liver or spleen.