
25 August 2017

Rationality

There's been quite a lot of talk of "meta-rationality" lately amongst the blogs I read. It is ironic that this emerging trend comes at a time when the very idea of rationality is being challenged from beneath. Mercier and Sperber, for example, tell us that empirical evidence suggests that reasoning is "a form of intuitive [i.e., unconscious] inference" (2017: 90); and that reasoning about reasoning (meta-rationality) is mainly about rationalising such inferences and our actions based on them. If this is true, and traditional ways of thinking about reasoning are inaccurate, then we all have a period of readjustment ahead.

It seems that we don't understand rationality or reasoning. My own head is shaking as I write this. Can it be accurate? It is profoundly counter-intuitive. Sure, we all know that some people are less than fully rational. Just look at how nation states are run. Nevertheless, it comes as a shock to realise that I don't understand reasoning. After all, I write non-fiction. All of my hundreds of essays are the product of reasoning. Aren't they? Well, maybe. In this essay, I'm going to continue my desultory discussion of reason by outlining a result from experimental psychology from the year I was born, 1966. In their recent book, The Enigma of Reason, Mercier & Sperber (2017) describe this experiment and some of the refinements proposed since.

But first a quick lesson in Aristotelian inferential logic. I know, right? You're turned off and about to click on something else. But please do bear with me. I'm introducing this because, unless you understand the logic involved in the problem, you won't get the full blast of the 50-year-old insight that follows. Please persevere and I think you'll agree at the end that it's worth it.


~Logic~

For our purposes, we need to consider a conditional syllogism. Schematically it takes the form:

If P, then Q.

Say we posit: if a town has a police station (P), then it also has a courthouse (Q). Each proposition has two possible states. A town has a police station (P); it does not have a police station (not P, or ¬P); it has a courthouse (Q); it does not have a courthouse (¬Q). What we are concerned with here is what we can infer from each of these four possibilities, given the rule: If P, then Q.

The syllogism—If P, then Q—in this case tells us that it is always the case that if a town has a police station, then it also has a courthouse. If I now tell you that the town of Wallop in Hampshire has a police station, you can infer from the rule that Wallop must also have a courthouse. This is a valid inference of the type traditionally called modus ponens. Schematically:

If P, then Q.
P, therefore Q. ✓

What if I tell you that Wallop does not have a police station? What can you infer from ¬P? You might be tempted to say that Wallop has no courthouse. But this would be a fallacy (called denial of the antecedent). It does not follow from the rule that if a town does not have a police station it also doesn't have a courthouse. It is entirely possible under the given rule that a town has a courthouse but no police station.

If P, then Q.
¬P, therefore ¬Q. ✕

What if we have information about the courthouse and want to infer something about the police station? What can we infer if Wallop has a courthouse (Q)? It turns out that we cannot infer anything. Trying to infer the first part of the syllogism from the presence of the second leads to false conclusions (affirmation of the consequent).


If P, then Q.
Q, therefore P. ✕

But we can make a valid inference if we know that Wallop has no courthouse (¬Q). If there is no courthouse and our rule is always true, then we can infer that there is no police station in Wallop. This valid inference is the type traditionally called modus tollens.

If P, then Q.
¬Q, therefore ¬P. ✓

So, given the rule and information about one of the two propositions P and Q, we can make inferences about the other. But only in two cases can we make valid inferences, P and ¬Q.

rule            given    inference    validity
If P, then Q.   P        Q            ✓ (modus ponens)
                ¬P       ¬Q           ✕ (denial of the antecedent)
                Q        P            ✕ (affirmation of the consequent)
                ¬Q       ¬P           ✓ (modus tollens)


Of course, one could make other, even less logical, inferences, but these are the ones that Aristotle deemed sensible enough to include in his work on logic. This is the logic that we need to understand. And the experimental task, proposed by Peter Wason in 1966, tested people's ability to use this kind of reasoning.
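
If it helps to see this logic mechanically, here is a minimal sketch in Python (my own illustration; neither the code nor its names come from Wason or from Mercier and Sperber). It treats an inference as valid only if its conclusion holds in every situation compatible with both the rule and the given premise, and it reproduces the pattern in the table above.

from itertools import product

def rule_holds(p, q):
    # "If P, then Q" is violated only when P is true and Q is false.
    return (not p) or q

def valid(given, conclusion):
    # An inference is valid if the conclusion holds in every world where
    # both the rule and the given premise hold.
    worlds = [(p, q) for p, q in product([True, False], repeat=2)
              if rule_holds(p, q) and given(p, q)]
    return all(conclusion(p, q) for p, q in worlds)

cases = [
    ("P, therefore Q (modus ponens)",          lambda p, q: p,     lambda p, q: q),
    ("not-P, therefore not-Q (denial)",        lambda p, q: not p, lambda p, q: not q),
    ("Q, therefore P (affirmation)",           lambda p, q: q,     lambda p, q: p),
    ("not-Q, therefore not-P (modus tollens)", lambda p, q: not q, lambda p, q: not p),
]

for name, given, conclusion in cases:
    print(name, "->", "valid" if valid(given, conclusion) else "invalid")

Running this prints "valid" for modus ponens and modus tollens, and "invalid" for the other two.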


~Wason Selection Task~

You are presented with four cards, each with a letter printed on one side and a number on the other. The visible faces show E, K, 2, and 7.

The rule is: If a card has E on one side, it has 2 on the other.
The question is: which cards must be turned over to test the rule, i.e., to determine whether the cards follow the rule? You have as much time as you wish.
~o~

Wason and his collaborators got a shock in 1966 because only 10% of their participants chose the right answer. Having prided ourselves on our rationality for millennia (in Europe, anyway), the expectation was that most people would find this exercise in reasoning relatively simple. Yet only 1 in 10 got it right. This startling result led Wason and subsequent investigators to pose many variations on the test, almost always with similar results.

Intrigued, they began to ask participants how confident they were in their method before the solution was revealed. Despite the fact that 90% would choose the wrong answer, 80% of participants were 100% sure they had the right answer! So it was not that the participants were hesitant or tentative. On the contrary, they were extremely confident in their method, whatever it was.

The people taking part were not stupid or uneducated. Most of them were psychology undergraduates. The result is slightly worse than one would expect from random guessing, which suggests that something was systematically going wrong.

The breakthrough came more than a decade later when, in 1979, Jonathan Evans came up with a variation in which the rule was: if a card has E on one side, it does not have 2 on the other. In this case, the proportions of right and wrong answers dramatically switched around, with 90% getting it right. Does this mean that we reason better negatively?
"This shows, Evans argued, that people's answers to the Wason task are based not on logical reasoning but on intuitions of relevance." (Mercier & Sperber 2017: 43. Emphasis added)
What Evans found was that people turn over the cards named in the rule. This is not reasoning; but since it is predicated on an unconscious evaluation of the information, it is not quite a guess either. Which is why the success rate is worse than random guessing.

Which cards did you turn over? As with the conditional syllogism, there are only two valid inferences to be made here: Turn over the E card. If it has a 2 on the other side, the rule is true for this card (but may not be true for others); if it does not have a 2, the rule is falsified. The other card to turn over is the one with a seven on it. If it has E on the other side, the rule is falsified; if it does not have an E, the rule may still be true.

Turning over the K tells us nothing relevant to the rule. Turning over the 2 is a little more complex, but ultimately futile. If we find an E on the other side of the 2, we may think it validates the rule. However, the rule does not forbid a card with a 2 on one side from having any letter on the other, E or otherwise. So turning over the 2 does not give us any valid inferences, either.

Therefore, it is only by turning over the E and 7 cards that we can make valid inferences about the rule. And, short of gaining access to all possible cards, the best we can do is falsify the rule. Note that the cards are presented in the same order as I used in explaining the logic. E = P, K = ¬P, 2 = Q, and 7 = ¬Q.
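
To make the same point in code, here is a small sketch in Python (again my own illustration, not anything taken from Wason's experiment). A card is worth turning over only if some possible hidden face could falsify the rule; running it singles out E and 7.

def violates(letter, number):
    # The rule "If a card has E on one side, it has 2 on the other" is broken
    # only by a card pairing E with something other than 2.
    return letter == "E" and number != 2

def worth_turning(visible):
    # A card is worth turning over only if some possible hidden face
    # could make it violate the rule.
    if isinstance(visible, str):      # we see a letter, so the hidden face is a number
        return any(violates(visible, n) for n in range(10))
    else:                             # we see a number, so the hidden face is a letter
        return any(violates(letter, visible) for letter in "ABCDEFGHK")

for face in ["E", "K", 2, 7]:
    print(face, "->", "turn it over" if worth_turning(face) else "leave it")

This prints "turn it over" only for E and 7, matching the modus ponens and modus tollens cases above.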

Did you get the right answer? Did you consciously work through the logic or respond to an intuition? Did you make the connection with the explanation of the conditional syllogism that preceded it?

I confess that I did not get the right answer, and I had read a more elaborate explanation of the conditional logic involved. I did not work through the logic, but chose the cards named in the rule. 

The result has been tested in many different circumstances and variations and seems to be general. Humans, in general, don't use reasoning to solve logic problems unless they have specific training. Even with specific training, people still get it wrong. Indeed, even though I explained the formal logic of the puzzle immediately beforehand, most readers will probably have ignored it and chosen to turn over the E and 2 cards, using intuition rather than logic to infer the answer.


~Reasons~

In a recent post (Reasoning, Reasons, and Culpability, 20 Jul 2017) I explored some of the consequences of this result. Mercier and Sperber go from Wason into a consideration of unconscious processing of information. They discuss and ultimately reject Kahneman's so-called dual process models of thinking (with two systems, one fast and one slow). There is only one process, Mercier and Sperber argue, and it is unconscious. All of our decisions are made this way. When required, they argue, we produce conscious reasons after the fact (post hoc). The reason we are slow at producing reasons is that they don't exist before we are asked for them (or ask ourselves - which is something Mercier and Sperber don't talk about much). It takes time to make up plausible sounding reasons; we have to go through the process of asking, given what we know about ourselves, what a plausible reason might be. And because of cognitive bias, we settle for the first plausible explanation we come up with. Then, as far as we are concerned, that is the reason.

It's no wonder there was scope for Dr Freud to come along and point out that people's stated motives were very often not the motives that one could deduce from detailed observation of the person (particularly paying attention to moments when the unconscious mind seemed to reveal itself). 

This does not discount the fact that we have two brain regions that process incoming information. It is most apparent in situations that scare us. For example, an unidentified sound will trigger the amygdala to create a cascade of activation across the sympathetic nervous system. Within moments our heart rate is elevated, our breathing shallow and rapid, and our muscles flooded with blood. We are ready for action. The same signal reaches the prefrontal cortex more slowly. The sound is identified in the aural processing area, then fed to the prefrontal cortex, which is able to override the excitation of the amygdala.

A classic example is walking beside a road with traffic speeding past. Large, rapidly moving objects ought to frighten us because we evolved to escape from marauding beasts. Not just predators either, since animals like elephants or rhinos can be extremely dangerous. But our prefrontal cortex has established that cars almost always stay on the road and follow predictable trajectories. Much more alertness is required when crossing the road. I suspect that the failure to switch on that alertness after suppressing it might be responsible for many pedestrian accidents. Certainly, where I live, pedestrians commonly step out into the road without looking.

It is not that the amygdala is "emotional" and the prefrontal cortex is "rational". Both parts of the brain are processing sense data, but one is getting it raw and setting off reactions that involve alertness and readiness, while the other is getting it with an overlay of identification and recognition and either signalling to turn up the alertness or to turn it down. And this does not happen in isolation, but is part of a complex system by which we respond to the world. The internal physical sensations associated with these systems, combined with our thoughts, both conscious and unconscious, about the situation are our emotions. We've made thought and emotion into two separate categories and divided up our responses to the world into one or the other, but in fact, the two are always co-existent.

Just because we have these categories, does not mean they are natural or reflect reality. For example, I have written about the fact that ancient Buddhist texts did not have a category like "emotion". They had loads of words for emotions, but lumped all this together with mental activity (Emotions in Buddhism. 04 November 2011). Similarly, ancient Buddhist texts did not see the mind as a theatre of experience or have any analogue of the MIND IS A CONTAINER metaphor (27 July 2012). The ways we think about the mind are not categories imposed on us by nature, but the opposite, categories that we have imposed on experience. 

Emotion is almost entirely missing from Mercier and Sperber's book. While I can follow their argument, and find it compelling in many ways, I think their thesis is flawed for leaving emotion out of the account of reason. In what I consider to be one of my key essays, Facts and Feelings, composed in 2012, I drew on work by Antonio Damasio to make a case for how emotions are involved in decision making. Specifically, emotions encode the value of information over and above how accurate we consider it.

We know this because when the connection between the prefrontal cortex and the amygdala is disrupted, by brain damage, for example, it can impair the ability to make decisions. In the famous case of Phineas Gage, his brain was damaged by an iron tamping rod being driven through his cheek and out the top of his head. He lived and recovered, but he began to make poor decisions in social situations. In other cases, recounted by Damasio (and others), people with damage to the ventromedial prefrontal cortex lose the ability to assess alternatives like where to go for dinner, or what day they would like their doctor's appointment on. The specifics of this disruption suggest that we weigh up information and make decisions based on how we feel about the information.

Take also the case of Capgras Syndrome. In this case, the patient will recognise a loved one, but not feel the emotional response that normally goes with such recognition. To account for this discrepancy they confabulate accounts in which the loved one has been replaced by a replica, often involving some sort of conspiracy (a theme which has become all too common in speculative fiction). Emotions are what tell us how important things are to us and, indeed, in what way they are important. We can feel attracted to or repelled by the stimulus; the warm feeling when we see a loved one, the cold one when we see an enemy. We also have expectations and anticipations based on previous experience (fear, anxiety, excitement, and so on).

Mercier and Sperber acknowledge that there is an unconscious inferential process, but never delve into how it might work. But we know from Damasio and others that it involves emotions. Now, it seems that this process is entirely, or mostly, unconscious and that when reasons are required, we construct them as explanations to ourselves and others for something that has already occurred.

Sometimes we talk about making unemotional decisions, or associate rationality with the absence of emotion. But we need to be clear on this: without emotions, we cannot make decisions. Rationality is not possible without emotions to tell us how important things are, where "things" are people, objects, places, etc. 

In their earlier work of 2011 (see An Argumentative Theory of Reason), Mercier and Sperber argued that we use reasoning to win arguments. They noted the poor performance on tests of reasoning like the Wason task and added the prevalence of confirmation bias. They argued that this could best be understood in terms of decision-making in small groups (which is, after all, the natural context for a human being). As an issue comes up, each contributor makes the best case they can, citing all the supporting evidence and arguments. Here, confirmation bias is a feature, not a bug. However, those listening to the proposals are much better at evaluating arguments and do not fall into confirmation bias. Thus, Mercier and Sperber concluded, humans only employ reasoning to decide issues when there is an argument.

The new book expands on this idea, but takes a much broader view. However, I want to come back and emphasise this point about groups. All too often, philosophers are trapped in solipsism. They try to account for the world as though individuals cannot compare notes, as though everything can and should be understood from the point of view of an isolated individual. So, existing theories of rationality all assume that a person reasons in isolation. But I'm going to put my foot down here and insist that humans never do anything in isolation. Even hermits have a notional relation to their community - they are defined by their refusal of society. We are social primates. Under natural conditions, we do everything together. Of course, for 12,000 years or so, an increasing number of us have been living in unnatural conditions that have warped our sensibilities, but even so, we need to acknowledge the social nature of humanity. All individual psychology is bunk. There is only social psychology. All solipsistic philosophy is bunk. People only reason in groups. The Wason task shows that on our own we don't reason at all, but rely on unconscious inferences. But these unconscious (dare I say instinctual) processes did not evolve for city slickers. They evolved for hunter-gatherers.

It feels to me like we are in a transitional period in which old paradigms of thinking about ourselves, about our minds, are falling away to be replaced by emerging, empirically based paradigms that are still taking shape. What words like "thought", "emotion", "consciousness", and "reasoning" mean is in flux. Which means that we live in interesting times. It's possible that a generation from now, our view of mind, at least amongst intellectuals, is going to be very different.

~~oOo~~



Bibliography

Mercier, Hugo & Sperber, Dan. (2011) 'Why Do Humans Reason? Arguments for an Argumentative Theory.' Behavioral and Brain Sciences 34: 57-111. doi:10.1017/S0140525X10000968. Available from Dan Sperber's website.

Mercier, Hugo & Sperber, Dan. (2017) The Enigma of Reason: A New Theory of Human Understanding. Allen Lane.

See also my essay: Reasoning and Beliefs 10 January 2014





20 July 2017

Reasoning, Reasons, and Culpability.

My worldview has undergone a few changes over the years. Not just because of religious conversion or obvious things like that. It has usually been a book that has shifted my perspective in an unexpected direction. Take, for example, Mercier and Sperber's book The Enigma of Reason: A New Theory of Human Understanding.

We all just assume that actions are explained by reasons. If actions are baffling then we seek out reasons to explain them. What is the reason that someone acted the way they did? Given a reason, we think the action has been explained. But has it? How?

Furthermore, when discussing someone's actions we assume that particular kinds of internal motivations are sufficient to explain the actions. We almost never consider external factors, like, say, peer pressure. It's not that we're not aware of peer pressure, but that we don't see it as a reason.

So, if person P does action A, we expect to find a simple equation: P did A for reason R. R is likely to be expressed as a desire to bring about some kind of goal G; call this R(G). So the calculus of our lives is something like this:

P did A for R(G)

But this is not how reasoning works and it is not how people decide to do things. Most decisions, even the ones that feel conscious, are, in fact, unconscious. The decision-making machinery is emotional and operates below our conscious radar - the result that pops into consciousness is preprocessed and preformed. Essentially, it is what feels right, on an unconscious level.

Having decided, we may either just do it with a conscious sense of it feeling right (so-called "feeling types") and only produce reasons after the fact (post hoc) when asked; or we may first seek a reason (so-called "thinking types") and then act. Both kinds of reasons are post hoc - the decision to act comes first, then we come up with reasons to support that decision. The number of times that someone asks "why did you do that?" and you come up with nothing is a sign of this.

The most extreme examples of this occur in people with no memories due to brain damage. Oliver Sacks described the case of a man who, when asked "What are you doing here?", never knew, because he could not remember. But the part of his brain that still worked would conjure up a likely reason, and since it fit the criteria of a reason, that's what he would say. But he would not remember saying it and, asked again, might come up with another equally plausible answer. He was only ever accurate by accident. He was not consciously lying but, not understanding the deficit caused by his injury, was saying whatever popped into his head.

We are very far from assiduous in generating and selecting reasons. For a start, we all suffer from confirmation bias. We typically only look for reasons to support and justify our decision. Ethics is partly about realising that our actions are not always justified and admitting that. Not only this, but we are also lazy. Once we come up with one reason that fits our criteria, we just stop looking. We typically take the first reason, not the best one, then, having settled on it, will defend it as the best reason.

Of course, we can train to overcome the cognitive biases, but most of us are still bought into the paradigm of P did A for R (G). It's transparent. We don't see it. I know about it and I don't usually see it. It's only when I'm being deliberately analytical that I can retrospectively see the nature of my reasoning. And it is not what we have taken it to be all these centuries. 

I've never been very convinced by so-called post-modernism. Post-modernists make the mistake that I would now call an ontological fallacy - they mistake experience for reality. But the mistake is so common amongst intellectuals that they cannot be singled out. This idea about reasoning might well be the kind of epistemic break that would really constitute our either leaving modernity behind or, more likely, finally becoming truly modern. The idea that modernity represents a break with medieval superstition is also clearly not quite right, because our reasons are no better than superstition in most cases.

And, of course, some of us are able to see more complex networks of cause and effect. We see political complexities, or sociological complexities, for example. These produce more sophisticated reasons, but even these tend to get boiled down into generalisations or interpreted from ideological points of view. And ideologies make sense to people because of reasons.

The whole 2010 UK general election was fought on the basis of a single idea: Labour borrowed too much money. This falsified the situation in a dozen different ways, but because it offered a reason for the disastrous economic crash of 2008 in the UK, and because Labour could not offer a similarly simple reason, it won the day. A lot of the political right appears to be convinced that this explains everything. So the whole world has the same economic problems, and economies are incredibly complex, but it all boils down to "Labour borrowed too much money". And this—this simplistic, fake fact—is widely considered to be plausible. The UK is leaving the EU for reasons. And so on.

But here's the thing. Reasons, on the whole, do not explain behaviour. They are just post hoc rationalisations of decisions made unconsciously on the basis of the value we give to experiences and memories, which are encoded as emotions. The reasons you give for your own actions, let alone the reasons you give for mine, do not explain anything. And as I have said, we simply ignore some of the more obvious reasons that any social primate does what it does (because of social norms). It's not a matter of deliberate deception. After all, we all believe that the reasons we give sufficiently explain our actions and that we can accurately gauge the kinds of reasons that are applicable (and we believe this for reasons). The problem is more that we don't understand reasons or reasoning.

How does this affect the issue of culpability? 

Any student of Shakespeare will be familiar with the problem of people being puzzled by their own actions. Shakespeare might have been the first depth psychologist. But if we are discussing the issue of culpability, then things get really difficult. One could write a book on the actions for which Hamlet might be culpable and to what degree (probably someone has!). 

The whole notion of culpability has taken a beating, lately. Advocates for the non-existence of contra-causal freewill are persuasive because metaphysical reductionism is a mainstream paradigm of reasoning. One hopes that the flaws in such arguments will eventually be exposed—contra-causal freewill isn't relevant or interesting; structure is real; reductionism is less than half the story of reality; etc.—but until they are, discussions of culpability are likely to remain confused. 

Mercier and Sperber's argument about the nature of, and the relationship between, reasoning and reasons is a deeper challenge. We now know that even a sincere answer to the question "Why did you do that?" is a post hoc rationalisation, and very few of us are aware that such reasons are not sufficient to explain any action. Clearly, our will is always involved in deliberate actions, but we ourselves may not understand the direction our will takes. We generate reasons on demand because society has taught us to do so... for reasons. But at root, most of us are mystified by our own actions most of the time.

Legal courts still represent a pragmatic approach to culpability. Did P factually do A? Yes or no? If yes, then punish P in the way mandated by the legislative branch of government. As readers may know, George Lakoff has analysed this dynamic in terms of metaphors involving debts and bookkeeping. If action A incurs a debt to society, then P is expected to repay it. We still largely operate on the basis that the best way to repay a social debt is to suffer pain, but we have created "more humane" ways to make people suffer that are, on the whole, less gross but also more drawn out than physical punishment. Indeed, we consider inflicting physical harm barbaric. And why? Oh, you know, for reasons.

If you're going to make someone suffer, it's better to inflict psychological suffering on them―through extended social isolation, for example, or enforced cohabitation with unsavoury strangers―than to inflict physical harm. Because of reasons. If my choice were between years of incarceration with criminals and being beaten senseless one time, I might well opt for the latter (well, I wouldn't, but some might). Quite a lot of people are beaten and raped in prison, anyway, and a majority are psychologically damaged by the experience, so a one-off payment in suffering might make more sense. It's more economical. Just because you are squeamish about beating me, but not about psychologically torturing me by imprisoning me, doesn't make your squeamishness more ethical. You are still seeking to inflict harm on me in the belief that it will balance out my culpability for acting against the laws of society... for reasons.

Then again, if I am an Afghan, fighting for my homeland against a foreign invader, you might just choose to drop a bomb on me from 40,000 ft, killing me and my entire family, because of reasons.

What happens to justice when reasons are exposed as fraudulent? And they may as well be fraudulent because they're only relevant by accident. We see this happening all the time. The UK no longer has the death penalty; not because British people don't like killing (Britain has been almost constantly fighting wars it has initiated or encouraged for 1000 years!). Rather, we realised that we killed a few too many falsely convicted innocents. That means we have created a debt for which we ought to suffer. D'oh! 

We're for or against capital punishment for reasons. We vote left or right for reasons. We are for or against, this or that for reasons. We love, marry, fight, work, take on religious views and practices, choose our haircut, our friends, etc... for reasons. Good reasons! Sound reasons. Thought out reasons. Wait! We can explain. And you have to take our reasons seriously, because of... other reasons. Don't you see? It all makes sense... doesn't it? 

In other words, our whole lives are based on post hoc rationalisations of decisions we do not understand and cannot explain, but which we are convinced that we do understand and can explain. Not to put too fine a point on it, it's fucked up.

So, how confident should anyone be about their reasons? 

We so often seem very confident indeed (because of reasons), but if there is one other rational person who disagrees with us, then we ought to be at best 50% certain. If it's just a matter of reasons... then 50% seems optimistic, because chances are that neither party has any real idea of why they believe what they do. On most social matters one can usually find a dozen rational opinions based on reasons, and we believe our own reasons (for reasons), or we are persuaded of a different view for other reasons.

What does any of this amount to?

And more to the point, how can we tell what is of value, if reasons are not a reliable guide?

I think Frans de Waal has got the right idea (for reasons). Ethics (i.e., social values) are based on empathy and reciprocity, capacities we and all social mammals evolved in order to make living in big groups possible and tolerable. It all builds from there. Other rational opinions are available, but for reasons, I like this one. I still have no idea what gives something an aesthetic value, but I do believe (for other reasons) that we experience that value as an emotional response. Again, other rational opinions are available.

I cannot help but think that my view, cobbled together from other people's views, makes more sense than any other view I've come across. But then, everyone thinks this already. So then the question is, how do some opinions become popular? And I think Malcolm Gladwell has some interesting things to say on that matter in The Tipping Point. In his terms, I'm a "maven", but not a persuader or connector. 


~~oOo~~

17 February 2017

Experience and Reality

"Our relation to the world is not that of a thinker to an object of thought"
—Maurice Merleau-Ponty, The Primacy of Perception and Its Philosophical Consequences.

Introduction

In this essay and some to follow, I want to look at an error that many philosophers and most meditators seem to make: the confusion of epistemology and ontology; i.e., the mixing up of experience and reality. This essay will outline and give examples of a specific version of this confusion in the form of the mind projection fallacy.

I agree with those intellectuals who think that we do not ever experience reality directly. This is where I part ways with John Searle who, for reasons I cannot fathom, advocates naïve realism, the view that reality is exactly as we experience it. On the other hand, I also disagree with Bryan Magee that reality is utterly different from what we experience and we can never get accurate and precise knowledge about it. He takes this view to be a consequence of transcendental idealism, but I think it's a form of naïve idealism.

The knowledge we get via inference is not complete, but we can, and do, infer accurate and precise information about objects. This makes a mind-independent reality seem entirely plausible and far more probable than any of the alternatives. So, we are in a situation somewhere between naïve realism and naïve idealism. 

This distinction between a mind-independent reality and the mind is not ontological, but epistemological. The set of reality includes all minds. However, the universe would exist, even if there were no beings to witness it. The universe is not dependent on having conscious observers. So by "reality" I just mean the universe generally; i.e., the universe made up from real matter-energy fields arranged into real structures that have emergent properties, one of which is conscious states. And by "mind" I specifically mean the series of conscious states that inform human beings about the universe.

What I don't mean is reality in the abstract. I'm deeply suspicious of abstractions at present. For the same reason, I avoid talking about conscious states in the abstract as "consciousness". Things can be real without there necessarily being an abstract reality. Reality is the set of all those things to which the adjective "real" applies. Things are real if they exist and have causal potential. Members of this set may have no other attributes in common. Unfortunately, an abstract conception of reality encourages us to speculate about the "nature of reality", as though reality were something more than a collection of real things, more than an abstraction. Being real is not magical or mystical.

I'm not making an ontological distinction between mental and physical phenomena. I think an epistemological distinction can be made because, clearly, our experience of our own minds has a different perspective to our experience of objects external to our body, but in the universe there are just phenomena. This is a distinct position from materialism, which privileges the material over the mental. What I'm saying is that what we perceive as "material" and "mental" are not different at the level of being.  

When we play the game of metaphysics and make statements about reality, they arise from inferences about experience. There are three main approaches to this process:
  • we begin with givens and use deduction to infer valid conclusions.
  • we begin with known examples and use induction to infer valid generalisations.
  • we begin with observations and use abduction to infer valid explanations.
We can and do make valid inferences about the universe from experience. The problem has always been that we make many invalid inferences as well. And we cannot always reliably tell valid from invalid.

For example, we know that if you submerge a person in water they will drown. That tells us something about reality. However, for quite a long time, Europeans believed that certain women were in league with the devil. They believed that witches could not be drowned. So they drowned a lot of women to prove they were not witches, and burned the ones who didn't drown. The central problem here was that witches, as understood by the witch-hunters, did not exist. The actions of some women were interpreted through an hysterical combination of fear of evil and fear of women, and from this witches were inferred to be real. It was a repulsive and horrifying period of our history in which reasoning went awry. But it was reasoning. And it was hardly an isolated incident. Reasoning very often goes wrong. Still. And that ought to make us very much more cautious about reasoning than most of us are.

One of the attractions of the European Enlightenment is that it promised that reason would free us from the oppression of superstition. This has happened to some extent, but superstition is still widespread. Confusions about how reason actually works are only now being unravelled. And this meant that the early claims of the Enlightenment were vastly overblown. If our views about the universe are formed by reasoning, then we have to assume that we're wrong most of the time, unless we have thoroughly reviewed both our view and our methods, and compared notes with others in an atmosphere of critical thinking, which combines competition and cooperation. The latter is science at its best, though admittedly scientists are not always at their best. 

Into this mix comes Buddhism with its largely medieval worldview, modified by strands of modernism. Buddhists often claim to understand the "true nature of reality"; aka The Absolute, The Transcendental, The Dhamma-niyāma, śūnyatā, tathatā, pāramārthasatya, prajñāpāramitā, nirvāṇa, vimokṣa, and so on. Reality always seems to boil down to a one-word answer. And this insight into "reality" is realised by sitting still with one's eyes closed and withdrawing attention from the sensorium in order to experience nothing. Or by imagining that one is a supernatural being in the form of an Indian princess, or a tame demon, or an idealised Buddhist monk, etc. Or by any number of other approaches that have in common an attempt to develop a kind of meta-awareness of our experience: to experience ourselves experiencing.

It's very common to interpret experience incorrectly. As we know, the lists of identified cognitive biases and logical fallacies each run to over one hundred items. From these many problems I want to highlight one. When we make inferences about reality, we are biased towards seeing our conclusions, generalisations, and explanations as valid, and towards believing that our interpretation is the only valid interpretation. This is the mind projection fallacy.


The Sunset Illusion

An excellent illustrative example of the mind projection fallacy is the sunset. If I stand on a hill and watch the sunset, it seems to me that the hill and I are fixed in place and the sun is moving relative to me and the hill. Hence, we say "the sun is setting". In fact, we've known for centuries that the sun is not moving around the earth; rather, the hill and I are pivoting away from it on an axis that goes through the centre of the earth. So why do we persist in talking about sunsets?

The problem is that I have internal sensors that tell me when I'm experiencing acceleration: proprioception (sensing muscle/tendon tension), kinaesthesia (sensing joint motion and acceleration), and the inner ear's vestibular system (sensing orientation to gravity and acceleration). I can also use my visual sense to detect whether I am in motion relative to nearby objects. A secondary way of detecting acceleration is the sloshing around of our viscera, creating pressure against the inside of our body.

My brain integrates all this information to give me accurate and precise knowledge about whether my body is in motion. And standing on a hill, watching a sunset, my body is informing me, quite unequivocally, that I am at rest.

I'm actually spinning around the earth's axis of rotation at ca. 1600 km/h, or about 460 m/s. That's faster than the speed of sound! And because velocity is a vector (it has both magnitude and direction), moving in a circle at a uniform speed involves acceleration, because one is constantly changing direction. So why does it not register on our senses? After all, being on a roundabout rapidly makes me dizzy and ill; a high-speed turn in a vehicle throws me against the door. It turns out that the acceleration due to going moderately fast in a very large circle is tiny. So small that it doesn't register on any of our onboard motion sensors. The spinning motion does register in the atmosphere and oceans, where it creates the Coriolis effect.
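
For the sceptical, the arithmetic is easy to check. Treating the rotation as uniform circular motion, the centripetal acceleration is v²/r. A rough calculation (in Python, my own illustration; the figures assume you are standing near the equator) gives a few hundredths of a metre per second squared, a tiny fraction of the 9.8 m/s² of gravity that we feel all the time.

import math

earth_radius = 6.371e6          # metres (mean radius; assume we are near the equator)
sidereal_day = 23.934 * 3600    # seconds for one full rotation

v = 2 * math.pi * earth_radius / sidereal_day   # rotational speed, roughly 465 m/s
a = v ** 2 / earth_radius                       # centripetal acceleration

print(f"speed: {v:.0f} m/s ({v * 3.6:.0f} km/h)")
print(f"acceleration: {a:.3f} m/s^2, about {a / 9.81 * 100:.2f}% of g")

That works out at about 0.03 m/s², roughly a third of one percent of gravity, which is why none of our onboard sensors notice it.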

Everyone watching a sunset experiences themselves at rest and the sun moving. It is true, but counterintuitive, to suggest that the sun is not moving. Let's call this the sunset illusion.

I'm not sure where it comes from, but in the Triratna Order we often cite four authorities for believing some testimony: it makes sense (reason), it feels right (emotion), it accords with experience (memory), and it accords with the testimony of the wise. Before about 1650, seeing ourselves as stationary and the sun as moving made sense, felt right, accorded with experience, and accorded with the testimony of the wise. The first hint that the sunset illusion is an illusion came when Galileo discovered the moons of Jupiter in January 1610.

Even knowing, as I do, that the sunset illusion is an illusion, doesn't change how it seems to me because my motion senses are unanimously telling me I'm at rest. This is important because it tells us that this is not a trivial or superficial mistake. It's not because I am too stupid to understand the situation. I know the truth and have known for decades. But I also trust my senses because I have no choice but to trust them.

The sunset illusion is sometimes presented as a 50:50 proposition, like one of those famous optical illusions where whether we see a rabbit or a duck depends on where we focus. The assertion is that we might just as easily see the sun as still and us as moving. This is erroneous. Proprioception, kinaesthesia, the vestibular system, and sight make it a virtual certainty that we experience ourselves at rest and conclude that the sun is moving. It takes a combination of careful observation of the visible planets and an excellent understanding of geometry to upset the earth-centric universe. If some ancient cultures got this right, it was a fluke.

The sunset illusion exposes an important truth about how all of us understand the world based on experience. Experience and reality can be at odds.

And note that we are not being irrational when we continue to refer to the sun "setting". Given our sensorium, it is rational to think of ourselves at rest and the sun moving. It's only in a much bigger, non-experiential framework that the concept becomes irrational. For most of us, the facts of cosmology are abstract; i.e., they exist as concepts divorced from experience. Evolution has predisposed us to trust experience above abstract facts.


Mind Projection Fallacy

The name of this fallacy was coined by physicist and philosopher E.T. Jaynes (1989). He defined it like this:
One asserts that the creations of [their] own imagination are real properties of Nature, and thus, in effect, projects [their] own thoughts out onto Nature. (1989: 2)
I think it's probably more accurately described as a cognitive bias, but "fallacy" is the standard term. Also, instead of imagination, I would argue that we should say "interpretation". The problem is not so much that we imagine things and pretend they are real, though this does happen, but that we have experiences and interpret them as relating directly to reality (naïve realism).

The sunset illusion tells us that reality is not always as we experience it. 

We all make mistakes, particularly these kinds of cognitive mistakes. We actually evolved in such a way as to make these kinds of mistakes inevitable. However, reading up on cognitive bias, I was struck by how some of the authors slanted their presentation of the material to belittle people. I don't think this is helpful. Our minds are honed by evolution for survival in a particular kind of environment, but almost none of us live in that environment any more. So if we are error-prone, it is because our skill-set is not optimised for the lifestyles we've chosen to live.

This fallacy can occur in a positive and a negative sense, so that it can be stated in two different ways:
  1. My interpretation of experience → real property of nature
  2. My own ignorance → nature is indeterminate
David Chapman has pointed out that there has been considerable criticism of Jaynes' approach in the article I'm citing and has summarised why. He suggests, ironically, that Jaynes suffered from the second kind of mind projection fallacy when it came to logic and probability. But the details of that argument about logic and probability are not relevant to the issue I'm addressing in this essay. It's the fallacy or bias that concerns us here. 


Interpreting Experience
A problem like the sunset illusion emerges when we make inferences about reality based on interpreting our experience. When we make deductions from experience to reality, they invariably reflect the content of our presuppositions about reality. For example, a given for most of us is "I always know when I am moving". In the sunset illusion, I know I am at rest because my motion sensors and vision confirm that it is so. The experience is conclusive: it must be the sun that is moving. My understanding of how the universe works and my understanding of my own situation as regards movement are givens in this case. We don't consciously reference them, but they predetermine the outcome of deductive reasoning. This means the deduction is of very limited use to the individual thinking about reality.

If I watch a dozen sunsets and they all have this same character, then I can generalise from this (inductive reasoning) that the sun regularly rises, travels in an (apparent) arc across the sky and sets. All the while, I am not moving relative to earth. What's more, I've experienced dozens of earthquakes in my lifetime, so I also know what it is like when the earth does move! From my experiential perspective, the earth does not move, but the sun does move. Given our experience of the situation, this is the most likely explanation (abductive reasoning).

So here we see that a perfectly logical set of conclusions, generalisations, and explanations follows from interpreting experience, and is, nonetheless, completely wrong. I am not at rest, but moving faster than the speed of sound. The earth is not at rest. The sun is at the centre of our orbit around it, but it too is moving very rapidly around the centre of the galaxy. Our galaxy is moving away from distant galaxies at an accelerating rate. The error occurs because our senses evolved to help us navigate through woodlands, in and out of trees, and through water. And we're pretty good at this. When it comes to inferring knowledge about the cosmos, human senses are the wrong tool to start with!

A common experience for Buddhists is to have a vision of a Buddha during meditation. And it is common enough for that vision to be taken as proof that Buddhas exist. But think about it. A person is sitting alone in a suburban room, their eyes closed, their attention withdrawn from the world of the senses, their sense experience attenuated down to a single sensation on which their attention is focussed. They undergo a self-imposed sensory deprivation. They've also spent a few years intensively reading books on Buddhism, looking at Buddhist art, thinking about Buddhas, and discussing Buddhas with other Buddhists. We know that sensory deprivation causes hallucinations. And someone saturated in the imagery of Buddha is more likely to hallucinate a Buddha. This is no surprise. But does it really tell us that Buddhas exist independently of our minds, or does it just tell us that in situations of sensory deprivation Buddhists hallucinate Buddhas?

The Buddhist who has the hallucination feels that this is a sign; it feels important, meaningful, and perhaps even numinous (in the sense that they feel they are in the presence of some otherworldly puissance). They are immersed in Buddhist rhetoric and imagery, as are all of their friends. As I have observed before, hallucinations are stigmatised, whereas visions are valorised. So if you see something that no one else sees, then your social milieu and your social intelligence will dictate how you interpret and present the experience. If you mention to your comrades in religion that you saw a Buddha in your meditation, you are likely to get a pat on the back and congratulations. It will be judged an auspicious sign. And all those people who haven't had "visions" will be quietly envious. If you mention it to your physician, they may well become concerned that you have suffered a psychotic episode. On the other hand, in practice, psychotic episodes are rather terrifying and chaotic, and not all hallucinations are the result of psychosis.

Not only do we have the problem of our own reasoning leading us to erroneous inferences, we also have social mechanisms that reinforce particular interpretations of experience, especially in the case of religiously inspired inferences. Our individual experience is geared towards a social reality. One of the faults of human thinking about reality is to assume that reality somehow reflects our social world. A common example is the nature of heaven. Many cultures see heaven as an idealised form of their own social customs, usually slanted towards male experiences and narratives. Medieval Chinese intellectuals saw heaven as an idealised Confucian bureaucracy, for example. If we take Christian art as any indication, then Heaven is an all-male club. The just-world fallacy probably comes about because we expect the world to conform to our social norms, in which each member is responsive to the others in a hierarchy where normative behaviour is rewarded and transgressive behaviour is punished.

So, given the way our senses work, given the pitfalls of cognitive bias and logical fallacies, given the pressure to conform to social norms, the mind projection fallacy can operate freely. As we know, challenging the established order can be difficult to the point of being fatal. And understanding the power of something like the sunset illusion is important. Facts don't necessarily break the spell. Yes, we know the earth orbits the sun. But standing on a hill watching the sunset, that is just not how we experience it (our proprioception and vision tell us a different story that we find more intuitive and credible, even though it is wrong). And this applies to a very wide range of situations where we are reasoning from experience to reality.


If I Don't Understand It...

The second form of this fallacy was rampant in 19th century scholarship. In the first form, one erroneously concludes that one understands something and projects private experience as public reality: mistaking the sunset as resulting from the movement of the sun, for example, because our bodies tell us that we are at rest. This leads to false claims about reality.

In the second case there is also a false claim about reality, but here it emerges from a failure to understand, plus the assumption that the failure occurs because the experience or feature of reality in question cannot be understood. This is a problem which is particularly acute for intellectuals. Intellectuals are often over-confident about their ability to understand everything. These days it is less plausible, but 150 years ago it was plausible for one intellectual to be well informed about more or less every field of human knowledge. So, if such an intellectual comes across something they don't understand, they deduce that it cannot be understood by anyone.

A common assertion, for example, is that we will never understand consciousness from a third-person perspective (leaving aside the problematic abstraction for a moment). Very often such theories are rooted in an ontological mind/body dualism, which may or may not be acknowledged. Many Buddhists who are interested in the philosophy of mind, for example, cannot imagine that we will ever understand conscious states through scientific methods. They argue that no amount of research will ever help us understand. So they don't follow research into the mind and don't see any progress in this area. On the other hand, they hold that through meditation we do come to understand conscious states and their nature. Many go far beyond this and claim that we will gain knowledge of reality, in the sense of a transcendent ideal reality that underlies the apparent reality that our senses inform us about. In other words, meditation takes us beyond phenomena to noumena.

Another common argument is that scientists don't understand 95% of the universe because they don't understand dark matter and dark energy. People take this to mean that scientists don't understand 95% of what goes on here on earth. But this is simply not true. Scale is important, and being ignorant at one scale (the scale that affects galaxies and larger structures) does not mean that we don't understand plate tectonics, the water cycle, or cell metabolism, at least in principle. The popular view of science often seems to point towards a caricature that owes more to the 19th century than the 21st. Criticism of science often goes along with an anti-science orientation and very little education in the sciences.

The basic confusion in both cases is mistaking what seems obvious to us for what must be the case for everyone else, either positively or negatively.


The Confusion
"It's not that one gains insight into reality, but that one stops mistaking one's experience for reality."
The basic problem here is a confusion of what we know about the world (epistemology) with what the world is (ontology). In short, we mistake experience for reality. And this problem is very widespread amongst intellectuals in many fields.

The problem can be very subtle. Another illuminating example is the idea that sugar is sweet. We might feel that a statement like "sugar is sweet" is straightforward. Usually, no one is going to argue with this, because the association between sugar and sweetness is so self-evident. But the statement is false. Sugar is not sweet. Sugar is a stimulus for the receptors on our tongues that register as "sweet". We experience the sensation sweet whenever we encounter molecules that bind with these receptors. But sweet is an experience. It does not exist out in the world, but only in our own conscious states. Sugar is not sweet. Sugar is one of many substances that cause us to experience sweet when they come into contact with the appropriate receptors on our tongue. Equally, there is no abstract quality of sweet-ness, despite the effortless ease with which we can create abstract nouns in English. Sucrose, for example, has nothing much in common with aspartame at a chemical level. And yet both stimulate the experience of sweet. Indeed, aspartame is experienced as approximately 200 times as sweet as sucrose, but this does not mean that it contains 200 times more sweetness. There is no sweet-ness. The experience of sweet evolved to alert us to the high calorific value of certain types of foods, and the enjoyable qualities of sweet evolved to motivate us to seek out such foods.

For Buddhists, the application of this fallacy comes from experiencing altered states of mind in and out of meditation. Meditators may experience altered states of mind that they judge to be more real than other kinds of states, causing them to divide phenomena into more real and less real. And they manage to convince people that this experience of theirs reflects a reality that ordinary mortals cannot see -- a transcendent reality that is obscured from ordinary people.

The problem is that an experience is a mental state; and a mental state is just a mental state. No matter how vivid or transformative the experience was, we must be careful when reasoning from private experiences (epistemology) to public reality (ontology) because we usually get this wrong. I've covered this in many essays, including Origin of the Idea of the Soul (11 Nov 2011) and Why Are Karma and Rebirth (Still) Plausible (for Many People)? (15 Aug 2015), etc.

Most of us are really quite bad at reasoning on our own. This is because humans suffer from an inordinate number of cognitive biases and easily fall into logical fallacies. There are dozens of each and, without special training and a helpful context, we naturally and almost inevitably fall into irrational patterns of thought. The trouble is that we too often face situations where there is too much information and we cannot decide what is salient; or there is too little information and we want to fill the gaps.

Our minds are optimised for survival in low-tech hunter-gatherer situations, not for sophisticated reasoning. The mind helps us make the right hunting and gathering decisions, but in most cases it's just not that good at abstract logic or reasoning. Of course, some individuals and groups are good at it. Those who are good at it have convinced us that it is the most important thing in the world. But, again, this is probably just a cognitive bias on their part.


Conclusion

The whole concept of reason and the processes of reasoning are going through a reassessment right now. This is because it has become clear that very few people do well at abstract reasoning. Most of the time, we do not reason, but rely on shortcuts known as cognitive biases. A lot of the time our reasoning is flawed by logical fallacies. Additionally, we are discovering that most mammals and birds are capable of reasoning to some extent.

    In this essay, I have highlighted a particular problem in which one mistakes experience for reality. Using examples (sunset, visions, sweetness) I showed how such mistakes come about. Unlike others who highlight these errors, I have tried to avoid the implication that humans are thereby stupid. For example, I see the sunset illusion because my senses tell me that I am definitely at rest: they tune out sensations that are too small to affect my body. Social conditioning is a powerful shaping force in our lives, and visions are valuable social currency in a religious milieu.

    In terms of our daily lives, the sunset illusion and the sweetness illusion hardly matter. It's not as though the mistakes cost us anything. Such problems don't figure in natural selection because our lives don't depend on them. We know what we need to know to survive. Although our senses and minds are tuned to survival in pre-civilisation environments, we are often able to co-opt abilities evolved for one purpose to another.

    But truth does matter. For example, when one group claims authority and hegemony based on their interpretation of experience, then one way to undermine them is to point out falsehoods and mistakes. When the Roman Church in Europe was shown to be demonstrably wrong about the universe, the greater portion of its power seeped away into the hands of the Lords Temporal, and then into the hands of the captains of industry. For ordinary people, this led to more autonomy and better standards of living (on average). Democracy is flawed, but it is better than feudalism backed by authoritarian religion.

    But as Noam Chomsky has said:
    “The system protects itself with indignation against a challenge to deceit in the service of power, and the very idea of subjecting the ideological system to rational inquiry elicits incomprehension or outrage, though it is often masked in other terms.”
    In subjecting Buddhism to rational inquiry, I do often elicit incomprehension or outrage. And sometimes it's not masked at all. There are certainly Buddhists on the internet who see me as an enemy of the Dharma, as trying to do harm to Buddhism. As I understand my own motivations, my main concern is to recast Buddhism for the future. I think the urge of the early British Buddhists to modernise Buddhism and, particularly, to bring it into line with rationality was a sensible one. However, as our understanding of rationality changes, Buddhism will have to adapt if it is to continue being thought of as rational. We also have to move beyond taking Buddhism on its own terms and consider the wider world of knowledge. The laws of nature apply in all cases.

    As long as Buddhism is largely shaped by people who mistake experience for reality, it will be hindered in its spread and development. This particular error is one that we have to make conscious and question closely. Just because something makes sense, feels right, and accords with experience doesn't mean that it is true. The sunset illusion makes sense, but it is wrong. It feels right to say that sugar is sweet, but it isn't. It accords with experience that meditative mental states are more real than normal waking states, but they are not. The testimony of the wise is demonstrably a product of culture, and varies across time and space.

    ~~oOo~~

    27 January 2017

    Doctrine & Reason III: Madhyamaka Karma

    4.4 Multiple Versions of Karma

    In a recent online discussion with members of the Triratna Buddhist Order I discovered that we have no common narrative when it comes to karma. A majority believe in karma of some kind, but very often the kind of karma one Order member believes in contradicts the kind that another Order member believes in. "Actions have consequences" is a relatively common way of expressing karma, but as we have seen (Part II), it is inadequate. The traditional idea of karma leading to rebirth is supernatural by its very nature, though, encouragingly, a sizeable minority are reluctant to commit to any supernatural version of "actions have consequences". There is certainly no explanation for karma to be found in nature.

    In a sense, the Order reflects the confused history of karma in Buddhism. Different versions emerged from time to time, presumably in response to perceived needs, and many of them were incompatible with others. More or less the only common features are the word karma and the notion that willed actions are somehow significant.

    I've critiqued some of the main versions of karma, especially in an essay called The Logic of Karma (16 Jan 2015). So, for the purposes of this argument, I will focus on my critique of the Madhyamaka version of karma, particularly as set out in Nāgārjuna's Mūlamadhyamakakārikā. I don't think I've given a detailed critique of this version before. It turns out to be the one most resistant to reasoned argument, and thus the view most in need of effective refutation.


    5. Madhyamaka

    5.1 Nāgārjuna the Nihilist

    The most difficult version of karma to argue against is the one that begins with Nāgārjuna and comes down to us via the various groups that have assimilated elements of his metaphysics (including the schools that claim the label madhyamaka). It took me many years of losing arguments with pseudo-intellectual mādhyamikas to work out what is wrong with Nāgārjuna's explanation of karma. As Nāgārjuna says, near the end of his chapter on karma:
    karma cen nāsti kartā ca kutaḥ syāt karmajaṃ phalaṃ |
    asaty atha phale bhoktā kuta eva bhaviṣyati
    || MMK 17.30 ||
    If action and agent don't exist, how would an action produce a consequence?
    And if the consequence does not exist, who would suffer it? 
    Ultimately, for Nāgārjuna, there is no action (karma) and no agent (kartṛ); thus there is no consequence (phala), no one who experiences it (bhoktṛ), and no rebirth either. At best, these are like an imaginary city in the sky, like a mirage, or a dream (MMK 17.33). So Nāgārjuna rejects the idea of actions having consequences.

    I've read a number of explanations of Nāgārjuna's approach to karma and they all baulk at accepting his dismissal of karma and restate the mainstream Buddhist assertion that actions have real consequences. For example, Kalupahana concluded:
    "The most significant assertion here is that the rejection of permanence and annihilation and the acceptance of emptiness and saṃsāra (or the life-process) do not imply the rejection of the relationship between action (karma) and the consequence." (1986: 55)
    But, clearly, Nāgārjuna does reject the relationship between action and consequence and, what's more, he rejects the more fundamental notions of action, consequence, and relationship per se. To Nāgārjuna, these concepts are not part of paramārtha-satya or ultimate truth. How should we read a statement like Kalupahana's, which is echoed in other academic work? It seems that Nāgārjuna's rejection of karma and rebirth does not sit well with anyone who identifies with more mainstream Buddhist ideas. To say that agent, action, patient, and consequence are all just illusions is a form of nihilism.

    My sense of Nāgārjuna is that he is trapped by his own articles of faith. In maintaining that nothing persists, in the face of plentiful evidence to the contrary, he is left with no choice but to obfuscate and distract us from his dilemma. Ironically, we know this because we still have his actual words. They, at least, have persisted for some eighteen centuries. Mādhyamikas (those who follow madhyamaka ideology) are apt to point out that this is not what commentators have understood him to be saying. However, when the text is clear and the commentary contradicts it, we have little choice but to reject the commentary as driven by motivations unrelated to those of the author.

    Nāgārjuna's view is a pernicious one, because it destroys the basis of morality. If actions do not have consequences at all, let alone appropriate and timely consequences that we can observe and use to modify our future behaviour, then morality is simply not possible. If there is no definite relationship between action and consequence, then there can only be chaos. The view appears to be based on a fundamental confusion.


    5.2 Arguing Against Madhyamaka

    However, this is also a view that is extremely resistant to rational argument, because part of the madhyamaka ideology, at least in its modern versions, is that rational argument has no place in the Buddhist system. Only personal experience counts towards knowledge, and experience is, by definition, not susceptible to logic. Here we see medieval Buddhist folly meshing with Victorian Romantic folly to produce a persistent delusion. Mādhyamikas put any critic at a further disadvantage through the structure of their rhetoric. In the typical conversation about karma, the mādhyamika asserts their view (some variation on MMK 17.30) as though it were ultimate truth (paramārtha-satya). If one disagrees on any grounds, they assign those grounds to relative truth, which is simply an illusion and can be safely ignored. Thus, any argument against the asserted view is defeated solely on the grounds that to dissent from the ultimate truth is always wrong. One cannot argue with ultimate truth. The use of reason to undermine the assertion of ultimate truth is dismissed or even mocked, because the ultimate truth allows no role whatever for reason. Having declined to recognise the validity of any objection, the mādhyamika will often emphatically restate their view and then refuse any further discussion.

    The view itself is irrational, but the defence that any dissent can only be a manifestation of ignorance is potent. It allows the believer to summarily reject any argument without ever having to consider it. One cannot win an argument with a mādhyamika on their terms, so one must shift the terms, and one way to do this is to undermine the foundations; i.e., to point out Nāgārjuna's fundamental errors and argue that the framework itself is flawed.


    5.3 The Two Truths

    The two truths doctrine is completely absent from the early Buddhist suttas. This suggests that the problem which the two truths were supposed to solve did not exist earlier. I see this problem emerging from the confusion of experience and reality. This happened partly because Buddhists took a description of experience and tried to use it to describe reality. At the same time, they singled out certain rarefied meditative experiences and thought of them as reality.

    The early texts are fairly clear that the domain of application of Buddhist practice is experience. There is no word that conveys anything like our word "reality", no discussion of the nature of existence or of the nature of objects. The focus is on the nature of experience. As Bodhi has said:
    “The world with which the Buddha’s teaching is principally concerned is ‘the world of experience,’ and even the objective world is of interest only to the extent that it serves as that necessary external condition for experience.” (Bodhi 2000: 394, n.182)
    This is highlighted in the Kaccānagotta Sutta (SN 12:15), a text which Nāgārjuna appears to cite but to completely misunderstand. The importance of this text is emphasised by Kalupahana, who suggests that MMK is a commentary on it. What the Kaccānagotta Sutta says is that existence (astitā) and non-existence (nāstitā) do not apply to the world of experience (loka). This means that the usual way of looking at objects doesn't apply to experience. When we have an experience, nothing comes into being; when the experience stops, nothing goes out of being. The ontology of experience, especially in the Iron Age Ganges Valley, is difficult to pin down, in a way that the ontology of objects is not.

    Experience is what it is: fleeting, insubstantial, and unsatisfactory. This was important at the time because Buddhists were in an argument with Brahmins about the possibility of experiencing absolute being (brahman/ātman). The Buddhist argument was that, since absolute being is unchanging, ever-changing experience could not allow access to it. We could not perceive something unchanging, because experience is always changing. So, even if an object existed in this absolute sense, our experience of it would constantly change.

    The classical texts say nothing much about the world of objects, except to acknowledge that some objects (particularly our bodies) persist through time. So the world of experience and the world of objects have a different ontology for early Buddhists (to the extent that they have any awareness of ontology). It is only experience that is governed by pratītyasamutpāda. Also, there seem to be no Pāḷi texts that seek to explain karma in terms of dependent arising; but by the early medieval period, when Nāgārjuna was writing, this distinction had been lost. By then, everything was understood to be governed in the same way. The description of mental events arising in the meditative mind was taken to be a universal principle. And this means that nothing whatever in the world might persist even for a second, in a world where objects do persist for years, decades, centuries, and millennia (the universe is currently thought to be about 13.8 billion years old and will continue expanding indefinitely).

    So Nāgārjuna's task was to explain away the ubiquitous evidence of persistence in favour of a reality in which nothing persists, based on an Iron Age theory of how experience works. He had to allow for persistence, because all the evidence of our senses tells us that external objects persist; and yet he could not allow for persistence, because dependent arising, applied universally, rules it out.

    By this time the Brahmanical arguments about absolute being seem to have been a distant memory for Buddhists, which is puzzling because Brahmanical influence is seen everywhere in the development of Buddhism. The problem of absolute being is still present, but it is seen as a mistake that everyone makes with respect to their own experience. Some Buddhist groups were struggling to explain the connection between karma and phala. A Sanskrit term exists for this problem, i.e., karma-phala-saṃbandha, where saṃbandha means "connection".

    Since it was completely implausible to assert that the world did not exist (or that existence did not apply to the world), Nāgārjuna was forced to accept that the world does exist. But he argued that this existence is saṃvṛti, a word meaning 'concealing, covering up, keeping secret'. Saṃvṛti-satya is often translated as "relative truth", but a Sanskrit speaker would be alive to the connotation of "concealing reality". In defiance of early Buddhists' reactions against absolute being, Nāgārjuna contrasted the world with an absolute reality: paramārtha-satya, translated as "ultimate reality", or "ultimate truth".

    Neither saṃvṛti-satya nor paramārtha-satya is true. They are mistaken views that come about when we try to shoehorn dependent arising into everything. This is not to say that the experience of emptiness (śūnyatā) is not profound and transformative, only that it is an experience. It changes the way we perceive the world, which is an epistemological change. Ontology is unaffected by meditation.


    5.4 The Confusion of Experience and Reality

    Nāgārjuna's method is thus the theory tail wagging the evidence dog. And this methodology is one of the reasons his followers are locked into irrational positions. Evidence is made to fit the theory, not the other way around. And since this requires deprecating reason, rational arguments find no purchase. Compare this to the Pāḷi texts, where rational arguments are part and parcel of Buddhism, alongside myth, legend, and inner monologues.

    Nāgārjuna's worldview was one in which all domains are governed by dependent arising. He appears to see no alternative to this, despite being familiar with and valuing the Kaccānagotta Sutta. But this creates many problems for him, precisely because the persistence of the world and of objects in the world is self-evident. Even something as simple as perceiving movement or change becomes problematic for Nāgārjuna. And, frankly, his task is not made any easier by his choice to compose his answers in metered verse.

    The central problem with karma is what I have been calling action at a temporal distance, but which Indian commentators called karmaphalasaṃbandha. Karma requires consequences to manifest long after the conditions for them have ceased. And this is forbidden by the formula of dependent arising.

    Knowledge that we get by reasoning about experience is useful (i.e., an accurate and precise guide to interacting with the world), as long as we are actually reasoning rather than relying on a bias. Accurate and precise ontology requires careful comparing of notes and critical questioning of which assumptions in our worldview are valid. We have to switch to using abduction and eliminate all the impossible premises. We did not begin to get this right until after 1543, when Nicolaus Copernicus published De revolutionibus orbium coelestium (On the Revolutions of the Celestial Spheres). This critical comparing of notes about experience is what enables us to understand the world. Unless we make a strict distinction between experience and reality, and keep a very critical eye out for bias, we are apt to come to erroneous conclusions.

    Nāgārjuna's fundamental mistake was to mix up epistemology and ontology, which is to say that he mistook experience, especially meditative experience, for reality; and the nature of experience for the nature of reality. Meditators I know continue to make this same fundamental error. Buddhists are constantly talking about the "nature of reality", but nothing about how we go about seeking insight could possibly tell us about reality.

    It is entirely possible that we might gain insights into the workings of our minds, seen from the inside; that we might gain insight into the nature of experience. And this kind of knowledge is certainly very useful for avoiding misery. But reality is an over-arching super-set that incorporates the mind and experience; and, as I have tried to show in my previous essays on reality, it is layered: descriptions that work on one scale of mass, length, energy, or complexity may not work on another scale. So a perfect description of experience may still be a faulty description of other kinds of phenomena. In fact, the classical texts were wrong about the persistence of mental states: these do persist for short periods beyond the stimulating sensory contact, else we could not perceive the passage of time or any kind of change. Language and music both depend on this extension in time.

    Nāgārjuna's description of reality is copied from a description of experience. Unsurprisingly, he comes to false conclusions about reality. He takes it as axiomatic that nothing persists. Indeed, he says that if anything were to persist, that would contradict dependent arising (MMK 17.6). Note again that the classical Pāḷi texts don't have this problem, because they do not take dependent arising as a description of the world, only of experience (i.e., they take it to be an epistemology, not an ontology). In order to accommodate his obviously false conclusions, Nāgārjuna has to bifurcate truth into two domains, apparent and ultimate, because, for example, it is self-evident that our bodies and identities do persist over time. He accommodates this by saying that it is true, but only relatively true (saṃvṛti-satya); i.e., true only in the sense that we perceive it to be true. In the ultimate view it is not true. Again this mixes up ontology and epistemology.


    5.5 Compatibility with Reason

    Ironically for modern Western mādhyamikas, our own intellectual tradition, from Heraclitus onwards, tells us that all existence is impermanent. At no point do we assume that if something exists, it is permanent and unchanging, except in the case of God. And since God no longer features in mainstream Western thought, even he is not a problem. For the Western tradition, persistence is not a problem per se because, unlike Buddhists, we do not associate all being with absolute being. We are not forced into the position of explaining away persistence as an illusion, because temporality is built into our notions of the world. We say quite explicitly that we live in a temporal world.*
    * Pedants may be tempted to point out that quantum physics theorists are now suggesting that time might be an emergent property. 1. There is no consensus on this speculation. 2. Even if there were a consensus, descriptions of the quantum level are not relevant to the macro-world that was the whole world until the invention of the telescope and microscope in the early 17th Century. 

    Rather than the classical position, that neither existence nor non-existence applies to any experience, Nāgārjuna is forced into the bizarre assertion that both existence and non-existence apply to everything. Thus, the obviously false conclusions that his philosophy leads to are rationalised away. This is a philosophy in which obviously false conclusions have to be tolerated; the irrational is valorised, and logic is deprecated in favour of a religious ideal. Paradox becomes the sine qua non. And these conditions fit perfectly with the Romantic threads of modernism. The nihilism also fits a zeitgeist in which people feel that they don't matter and have no influence, despite being bombarded with information about events in the world.

    However, in our Western tradition, a paradox usually suggests a deeper flaw in our understanding, one which has led us to make false assumptions or to frame the problem ineptly. Or it is merely a curiosity. For example, "this sentence is not true" is a trivial example of a paradoxical sentence that is both grammatically and semantically well formed, but logically impossible. All it tells us is that there is more to language than grammar and syntax. A glance at anyone's eyebrows as they speak could have told you the same.
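    To spell out why such a sentence is logically impossible, here is a minimal sketch, letting L stand for "this sentence is not true":

    If L is true, then L is not true. ✕
    If L is not true, then L is true. ✕

    Either assumption contradicts itself, so L cannot coherently be assigned a truth value. The paradox exposes a flaw in the framing, not a deep fact about the world.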

    For all these reasons, the Mādhyamika view of karma is not compatible with reason. It's not a rational view. Nor, I argue, is it resolved by insight, because those with insight seem to be beset by the same confirmation bias as the rest of us: they seek and find confirmation of their pre-existing views. Most meditators spend many years absorbing the rhetoric of Buddhism before making any significant progress in developing insight. Thus, when insights arise, confirmation bias prompts us to see them as proof of our views.

    My best informant on the process of having insights suggests that each insight shatters existing views, but also tends to set up an alternative view in their place. One finally sees the truth and is prepared to settle down with it. However, if we persist in practising, the next insight shows the flaws in this new view and points to yet another. One has to go through this "Aha... Oh. Aha... Oh." process many times before one stops taking the views seriously and realises that all views are just different perspectives on experience. It's not that one gains insight into reality, but that one stops mistaking one's experience for reality.

    However, Buddhists tend to treat Nāgārjuna as a god, someone with infallible omniscience. His words, or at least the interpretations of his words by commentators, are seen as ultimate truth. I notice that some people are puzzled that I would argue against Nāgārjuna. It seems to cause cognitive dissonance, because they accept what he says as gospel. To dissent from the "ultimate truth" is almost unimaginable to many Buddhists. It is akin to blasphemy, and they often respond the way theists respond to blasphemy: with hostility.

    So why do modern scholars not take Nāgārjuna to task as someone who mistook experience for reality? After all, they are supposed to bring a certain objectivity to their work, aren't they? But Buddhist Studies is largely about accepting Buddhism on its own terms, rather than taking a critical stance. So in the 21st Century we still find scholars trying to elucidate Nāgārjuna on his own terms, and he is still hailed as probably the greatest Buddhist philosopher. To me, Nāgārjuna is the greatest disaster in Buddhist philosophy, because his mistake continues undetected and his influence is pervasive (it goes far beyond Madhyamaka). This is partly because mādhyamika rhetoric is impervious to reason, but partly also because Buddhists don't use reason when thinking about their views anyway: they seek confirmation, not falsification. Of course, confirmation bias is a feature of argument production generally, but religious argumentation actively discourages doubt and scepticism.

    This critique will most likely not make any impact whatever on the way people see Nāgārjuna or the way his disciples see the world. The way Madhyamaka is set up employs several cult-like features that make adherents particularly hard to reach. Those who do not simply reject the argument out of hand will condescendingly explain that I have misunderstood the ultimate truth. I'm with Richard Feynman, however: "I'd rather have questions that cannot be answered than answers that cannot be questioned."

    This concludes the central argument of this essay. It remains to sum up and conclude.


    6. Compatible With Reason?

    I set out in this essay to explore the idea that the Buddhist belief in karma is compatible with reason. I argued that both karma and reason are complex subjects on which authorities disagree about almost every detail. Karma has few common features across Buddhist sects apart from the proposition that actions cause rebirth. Also, reason and our ability to employ the methods of reasoning have been widely misunderstood. Reasoning is, more often than not, subverted by cognitive biases and logical fallacies. Even so, I tried to set out a coherent account of how reason works and how we might use it to think about karma in general terms. I then critiqued a particular Buddhist view about how karma is supposed to work, by showing how the reasoning in that view is flawed.

    The question I posed in Part I of this essay was: could we come up with the doctrine of karma from first principles? That is, based on experience, can we infer, using deduction, induction, and/or abduction, a doctrine in which our actions lead to rebirth, or the watered-down version in which our actions infallibly lead to appropriate and timely consequences?

    Based on observations across many species of primate, Frans de Waal is able to deduce that we all experience empathy and understand reciprocity. From reciprocity we can induce an understanding of fairness and justice. And from this we can construct a highly plausible, bottom-up theory of morality that has broad applicability and explains a great deal. In this view, morality can be understood as a principle in which the social consequences of actions are appropriate and timely.

    To get to a doctrine of karma, however, we have to go beyond experience and observation, and make a number of unsupported assumptions. Firstly, we have to assume a just world. This assumption is so common that it has its own name: the just-world fallacy. Secondly, we have to assume that a supernatural afterlife exists, in defiance of the laws of nature. Thirdly, we have to assume that this afterlife is cyclic, or a hybrid of cyclic and linear. Many religions have a linear eschatology, a single-destination afterlife, and there is no credible evidence that we can cite to help us choose which is the true version of events. In fact, the way the world seems to work rules out all these possibilities. Fourthly, we have to assume that some mechanism connects our actions to our post-mortem fate.

    None of these assumptions is compatible with reason, since none of them is based on inferences from evidence or experience; i.e., they were not produced by reasoning. They are assumptions that we make so that our doctrine works in the way that we wish it to. All the evidence suggests that these assumptions are simply false (an afterlife is demonstrably false). So assuming that they are true is certainly not compatible with reason. And yet, without these assumptions, there can be no karma doctrine. So karma doctrines, as a class, are not compatible with reason.

    Forms of morality in which the social consequences of our interactions are appropriate and timely are at least possible, even if our social groups seldom attain the ideal. Beyond this, reason fails.

    In my critique of Madhyamaka karma I tried to show that the problem of continuity (saṃbandha) remains unsolved and that it seems insoluble within the traditional Buddhist metaphysics. A completely different approach to ontology would be required because the description of mental-states arising does not work as a general description of the world. In other essays I have proposed such an approach. In my proposed ontology all existence is temporary, both substance and structure are real, and structures (such as our bodies and minds) persist over time, for a time. Morality is explained by bottom-up manifestations of empathy and reciprocity, but karma is ruled out because there is no afterlife, no supernatural, and no just-world.

    Belief in karma fails to meet the standard set in Subhuti's essay (cited in Part I). So, the major conclusion of this long essay is that karma is not compatible with reason. By this I mean that no existing Buddhist version of the doctrine of karma is compatible with reason. I also infer that any theory of karma that involves logical fallacies (such as the just-world fallacy) or supernatural elements (such as an afterlife) cannot ever be compatible with reason. And since neither logical fallacies nor supernatural elements can be demonstrated, karma also appears to fail Subhuti's verifiability criterion.

    ~~oOo~~

    Post Script. 29 Jan 2017. Someone wrote in to say that my understanding of Nāgārjuna's approach to karma was "obviously false", because he talks about karma in more conventional ways in other texts, such as the Ratnāvalī. But the fact that a Buddhist talks about karma in different ways in different contexts is completely consistent with the trend I first identified in 2013. In contexts that emphasise morality, Buddhists maintain a narrative that emphasises continuity between actions and consequences; in the Jātakas, for example, the personal continuity of people across lifetimes is normal. In contexts that emphasise metaphysics, this continuity is denied and persistence of any kind is rejected. These two narratives co-exist, and Buddhists switch between them without even noticing that they are doing so. Our metaphysics denies the possibility of morality; and yet morality is clearly very important to all Buddhists, so karma is maintained in defiance of our metaphysics, without the contradiction ever being resolved. So the fact that Nāgārjuna exhibits this same kind of duplicity is not evidence that he does not deny the reality of karma in the Mūlamadhyamakakārikā.


    Bibliography

    Attwood, Jayarava. (2014). Escaping the Inescapable: Changes in Buddhist Karma. Journal of Buddhist Ethics, 21, 503-535. http://blogs.dickinson.edu/buddhistethics/2014/06/04/changes-in-buddhist-karma

    Barrett, Justin L. (2004). Why Would Anyone Believe in God? Altamira Press.

    Bodhi. (2000). The Connected Discourses of the Buddha: A Translation of the Saṃyutta Nikāya. Boston: Wisdom Publications.

    Kalupahana, David J. (1986). Nāgārjuna, The Philosophy of the Middle Way: Mūlamadhyamakakārikā. SUNY Press.

    Mercier, Hugo & Sperber, Dan. (2011). Why Do Humans Reason? Arguments for an Argumentative Theory. Behavioral and Brain Sciences, 34: 57-111. doi:10.1017/S0140525X10000968. Available from Dan Sperber's website.

    Subhuti. (2007). There are Limits or Buddhism with Beliefs. Privately circulated.

    Subhuti & Sangharakshita. (2013). Seven Papers. Triratna. See also https://thebuddhistcentre.com/triratna/seven-papers-subhuti-sangharakshita

    Yang, J. H., Barnidge, M. & Rojas, H. (2017). The politics of "Unfriending": User filtration in response to political disagreement on social media. Computers in Human Behavior, 70: 22-29.