Agency without rationality


It’s really nice to have these academics here, because this marks my first ten years in Birmingham, hopefully not the last, and I’m really happy to be here. What I’m going to do now is just tell you a little bit about how my research has progressed, from the time I was interested in rationality as a PhD student to the projects that I’m running now in Birmingham. I’m going to say maybe not very much about the past and more about the present: what my current interests are, and how I think they may be relevant to other things that we’re doing in Birmingham and elsewhere.

So the plan is really to look at the relationship between human agency and rationality. One question that philosophers in particular, though not just philosophers but also social scientists and psychologists, have asked about the mind is whether, and to what extent, irrational behaviour undermines agency. So that’s the first question: when we behave in irrational ways, when we have irrational beliefs, are we still agents? Are we still successful agents? My answer will be yes, we are still agents, sometimes even very successful agents. The next step would be to ask whether there are circumstances, and this is much more counter-intuitive, where actually behaving irrationally is good for us as agents, whether it supports our agency in some way. Although I’m at the very beginning of asking this huge question, I’m just going to tell you about two cases where I think an argument can be made for irrationality being at least partially, at least in some contexts, good for us.

The first case is a case of confabulation, and in particular the case where we are asked to explain our own choices and we don’t really know much about how those choices came about, how they were produced, but we tell more than we can know: we give an explanation anyway, even if we are ignorant of some important causal features of the process that led us to make those choices. The other case I want to look at is optimism, especially unrealistic optimism. That is when we have certain beliefs about ourselves and certain predictions about our own future which are too positive given the evidence that is available to us. And my conclusion will be that at least in some cases of confabulation and optimism, our being irrational in this way, not following the evidence, not being constrained by the evidence, is actually a good thing for us as agents, because it helps support our motivation to act.

So, agency and rationality: what is the relationship between them? When philosophers talk about agency, and I think this is probably common in other disciplines as well, they usually talk about a special type of agency, intentional agency, which is doing things for reasons, in contrast with not doing anything at all, doing things randomly, or doing things because we have no choice. Typical humans are agents in this sense: they are intentional agents, they do things for reasons, and the things that they do are constantly being evaluated according to different standards. So we have standards of rationality: is what you’re doing the best thing for you right now? We also have standards that are more pragmatic: can you be successful as an agent? And when I talk about success here I mean the simplest possible sense of success: if you as an agent set certain goals for yourself, you are successful when you fulfil your goals, when you take the means to achieve what you set yourself to do. Irrationality is more complicated.
It’s a heavily value-laden concept, with a clearly negative connotation, but the way in which we apply it to different things, for instance to agents themselves, to what they think, and to what they do, means that very often we are confused about what irrationality means unless we specify it. Here is just a list of some of the situations in which an agent, or what the agent does, can be deemed irrational.

Sometimes an agent is irrational when their behaviour defies expectations, and other agents find it difficult to explain or predict the behaviour of that agent. Sometimes we talk about irrationality in a strange contrast between reason and emotion: when our decisions are driven by emotional factors or intuition or instinct rather than by something like reflective deliberation, rather than by sitting down and thinking about advantages and disadvantages. I think one of the most common senses of irrationality is really about the fact that we sometimes do not conform to principles or norms of good reasoning, or of logic, when we are thinking; and also when our beliefs are not consistent with one another, so that we tend to believe things that clash with one another and sometimes lead to confusion or paralysis of action. Sometimes we have beliefs that are consistent with one another but do not really fit the evidence very well, so that we tend to maintain beliefs that seem to be contradicted by the evidence all the time, maybe because they have a special role for us; we will see examples of that too. And sometimes we are irrational in a more practical sense: our actions or our thoughts are self-defeating and undermine what we want to achieve, undermine our wellbeing, and that is usually a judgment that comes from what other people see us doing and what they think the consequences of what we are doing are.

So each form of rationality, and clearly there are lots of different standards in that list, which was not exhaustive and could go on for much, much longer, is characterised by a set of norms, a set of standards. Some of these standards are really core, basic standards, and some may be more like something we aspire to rather than something we feel we need to be governed by. Basic standards of rationality in the traditional philosophy of mind literature are considered to be conditions for agency: you can be an agent, or in a weaker thesis, you can be a successful agent, only if you are rational in the sense that you conform to some of the very basic standards of rationality.

Everyday experience in dealing with ourselves and with other agents, and also the mounting psychological evidence on reasoning and behaviour gathered over the last fifty years, have actually shown that basic standards of rationality are violated when we act and when we think, and that irrationality doesn’t compromise successful agency either. So not only can you be an agent and have irrational thoughts and behave irrationally, but you can be a successful agent in the sense I specified before: you can achieve what you set yourself to do even when your thoughts and your actions do not seem to conform to these basic standards. It’s not my place here to give you a full argument for that view, but I just want to give an example of how a norm that we care a lot about as scientists, academics and philosophers in particular, seems to be violated all the time, not just occasionally but systematically, and that is one of the fundamental norms of epistemic rationality. Epistemic rationality has to do with the relationship between what we believe and the evidence that is available to us. One of these basic norms is that our beliefs need to be responsive to evidence.
That means that if we have a certain belief and we get evidence that the belief might not be true, we need to ask ourselves whether the evidence is robust, whether we should hold onto the belief, or maybe suspend judgment until we get more evidence to settle the case. Responsiveness to evidence is often conducive to the truth of our beliefs, though of course it does not imply truth. It is very appealing to believe that something like intentional agency requires rationality in this sense. If our beliefs are not responsive to evidence, can we still coordinate our actions with other agents? Can we still explain and predict what goes on around us, understand what goes on around us, plan for our own future? It seems as if it would be a challenge to do so if we had beliefs that did not respond to evidence in the right way. So rational agents adopt beliefs on the basis of evidence in their favour, and are prepared to revise them or reject them when evidence against them becomes available.

The study of human reasoning and decision making suggests that standards of rationality such as responsiveness to evidence are systematically violated by agents, and I’m just going to give you three brief examples of how that can happen in everyday life. These are of course psychological studies done in the lab, but they reflect tasks that we all engage in on an everyday basis.

We have heard, when we have studied the history of science, that sometimes scientists get really committed to a theory that they have developed, for instance, and become quite blind to counter-evidence. It’s quite difficult to persuade them that their theory doesn’t work; they can become quite resistant to amendments to their theory. But it’s interesting to think that we become very conservative about the things we believe even when our commitment is not as strong as that. In this particular study, young people were asked to choose between theories that could explain the extinction of the dinosaurs, one to do with volcanic eruptions, another to do with something else. They just hear about these theories, they hear about the possible evidence for and against, and they make a choice on the day, about something they had probably not thought about very much before then. Then they are presented with evidence for and against the theory they have just chosen, and it’s very interesting that they assess the incoming evidence in completely different ways. If the evidence is consistent, compatible with the theory they’ve just chosen, they tend to think that the evidence is very good, very robust, and strongly supports their theory. But if the evidence goes against the theory they’ve just chosen, they tend to think that the evidence is not robust, it’s not very good evidence, and actually there are alternative explanations. And here the commitment was made just ten minutes ago. So it’s not a big commitment to the theory, but still it shows how conservative we can become pretty quickly about the beliefs that we choose to adopt, and how we protect them from counter-evidence as much as we can.

Probably a much more familiar example is the example of prejudiced beliefs. Prejudiced beliefs are notorious for being very insulated from counter-evidence: we can get lots of evidence against our prejudiced beliefs and still hang onto the prejudice. In this study, which was done in US restaurants, it has been shown that there is something called tableside racism, which is the fact that white waiting staff tend to have a number of negative beliefs about customers who are black or from other ethnic minorities. For instance, one of the common beliefs is ‘no matter how well I serve these customers, they will not tip me well’. Now such beliefs are very often disconfirmed: of course there are black customers who tip very generously. What is really interesting is the kind of reasoning that the waiting staff go through when these things happen. The surprising event is always taken to be something that signals how excellent their service was on that particular day, not how generous the customers were. In this way the prejudice remains, and an alternative explanation is found.

Another case is the case of superstitious beliefs. This is interesting because it merges two common problems with human rationality: the fact that we are sometimes not responsive to evidence, and the fact that our beliefs very often do not form a coherent set but clash with one another. And that’s the lunar effect, right?
Police officers, medics who work in emergency rooms and mental health workers all tend to believe that more crimes, accidents and psychological crises occur during nights of full moon, although there is no evidence suggesting any causal link between the phases of the moon and these kinds of episodes. Not only that, but the people who have been interviewed about these issues of course have good theoretical and practical knowledge of the causes of psychological crises and accidents, yet that doesn’t seem to make them any less convinced of the truth of the superstitious belief. And as an aside, there is also a saying that maternity wards are very busy on nights of full moon, but you’re safe, it’s not the 21st this month, so we can finish the lecture, I think! [laughing]

So these are just some examples where people who are very well educated, very well informed about a particular domain, nonetheless tend to exhibit forms of irrationality that violate standards we would think, just theorising from the armchair, would be quite key, quite essential, to intentional agency. When we uphold a theory that has just been disconfirmed by new data, or refuse to change our prejudiced beliefs in the face of counter-evidence, or maintain beliefs that conflict with other things we know, we do not stop acting for reasons. We are still intentional agents, we still do things because we intend to do them, and we behave in ways that can be made sense of. People may think that we are unreasonable in not dropping our prejudiced or racist beliefs, but they can still understand why we are acting the way we are. So we can be agents, and successful agents, even if we are not responsive to evidence. And standards of rationality, such as responsiveness to evidence, seem to be too demanding to count as preconditions for agency. Clearly they are an aspiration; clearly sometimes they are standards we do apply and do conform to; but they are not a necessary constraint.

After suggesting that not even systematic violations of basic standards of rationality undermine agency, I want to ask something that is much more provocative: do some forms of irrationality support agency? Is it the case that human agency, maybe because of some of its intrinsic limitations, works better when we adopt certain kinds of beliefs? And if this is the case, how does it work? That would be interesting in itself, but what we want to know is exactly what the mechanism is: why is it that sometimes irrationality seems to work in our favour? We will look at two particular cases, which I’ve chosen because they are not pathological, so they concern us all, they concern the non-clinical population in the most widespread way: confabulating reasons for choices, and unrealistically optimistic beliefs and predictions. Not only are these not pathological cases of irrationality, but it has actually been argued that they contribute to mental health. So they may be things that we do just because we do not have any mental pathology.

So, confabulation first. Confabulation comes from ‘fabula’, which in Latin is a very interesting word because it could mean a historical account, a very, very old story, or a fictional story such as a fairy tale or a legend. And when we confabulate we kind of combine the two meanings, because we tell a story that we believe to be true, we have no intention to deceive, but the story itself is fictional in the sense that it’s not based on evidence.
So we don’t know that we’re telling a fictional story; we think we are telling the true story of what we are recounting, but actually the confabulated explanation is ill-grounded, not based on evidence.

One very common situation in which we confabulate is when we explain choices on the basis of reasons that are perfectly plausible given the choices we have made, but that actually played no causal role in bringing about those choices. This has been observed in a variety of contexts, all of them actually more interesting than the one I’m going to talk about today; but because consumer choice, which is what I’m going to discuss today, has been studied much more than the other domains and is less controversial, I will start from there. We can find the same kind of mechanisms in moral preferences, political preferences, and attitudes concerning romantic relationships. So this affects even things that are quite key to our understanding of who we are as people; these are not marginal issues, they are choices and attitudes that define us.

The classic study here, which I’m sure most of you know, is the 1977 study by Nisbett and Wilson. They were interested in the fact that sometimes we make choices but are not aware of the mechanisms responsible for those choices, and when we are asked to give reasons for those choices, we come up with an explanation which is, again, perfectly plausible given the type of choice that we made, but does not seem to fit with the results of the study. In a mall, participants are asked to choose some items as part of a consumer survey. Some participants are asked to choose between four different items, and other participants are asked to choose between four identical pairs of nylon stockings. Then both groups are asked why they made those choices. What Nisbett and Wilson found is that participants’ choices were very heavily influenced by the position of the items: if you had four different items in front of you, most participants would choose the one furthest to the right. That was a systematic preference, a very robust result. But when people were then asked ‘OK, you made your choice, why did you make this choice? Why did you prefer this nightgown?’, they mentioned not the position of the items but features of the items, such as their softness or their colour, the quality of the texture and so on. This happened even when the items, as in the case of the stockings, differed only in their positions: there were four identical pairs of stockings and the only difference was how they were positioned on the table. According to Nisbett and Wilson, this means that there are some mental processes responsible for our choices that we are not aware of. For instance, we are not aware of certain priming effects, such as position effects; we are not aware that if we are right-handed, we have a tendency to prefer items on the right. We might become aware of that if we read about the study, but otherwise it’s not the kind of thing that springs to mind when you’re thinking about why you chose a particular pair of socks. So what do we do when the experimenter, or our friends, ask us ‘why did you choose this pair of socks?’ We come up with an explanation which is perfectly plausible given our background beliefs, softness, colour and so on, but which does not match how the choice actually came about.

What is interesting about confabulation is that it has some common features suggesting that we are being epistemically irrational: we are not really getting clues from the environment, we are not really responding to evidence. The first point is that we are clearly ignorant of some of the key causal factors that lead to the making of our choices, and forming our attitudes is just the same: we do not realise that we are affected by certain priming effects unless we get specific information about it. The second point is that we come up with a causal claim, with an explanation, which is not based on the evidence, even if it’s a plausible claim. And that is kind of interesting, because nobody forces us to give an explanation of that type. We would think that we could just say, ‘why did you choose those socks?’, ‘I don’t know’, right? We could acknowledge ignorance, but we don’t do that. We have a very strong tendency, and this holds in both the clinical and non-clinical population, in both pathological and non-pathological responses, to fill the gap with something, even if that something doesn’t respond to evidence. And further, and I think this is also very interesting, we commit to a further belief about the world that does not fit the evidence.
When I’ve got the four identical pairs of socks in front of me and I say ‘I chose that one because it’s softer’, I’m actually committed to saying that that pair of socks is softer than the other ones, than the alternatives, and that is something false. And here my ignorance is not excusable in the same sense as the ignorance that concerns the mental processes responsible for my choices, which I might not be aware of; this is something that I should just be able to see, you know, that those socks are just the same. But that doesn’t happen, which I think is extremely interesting.

So does confabulation imply a failure of rationality? Yes. Some aspects of it may be more excusable than others, but certainly when we confabulate we tell more than we can know, and we offer ill-grounded causal claims as explanations. We also end up committing, in some cases of confabulation, not in all of them of course, to beliefs that are false when they apply to the situation in which we find ourselves, and that are disconfirmed by evidence that is available to us, that we should be able to read off the environment. But critically, we do not realise that we do not know the mental processes responsible for our choices. So when we provide the explanation, what we would be tempted to call ‘the made-up explanation’, it’s not actually made up, because we genuinely believe that it’s the true explanation, and that is what makes confabulation different from lying or bluffing, right? We truly believe that that’s the explanation for our choice. If that’s the case, then, as some researchers such as William Hirstein and Martha Turner have argued, in confabulation we are not in a position to acknowledge our ignorance, because we do not realise that we are ignorant: we think we know, we think the explanation we give is the right one.

And then I’m interested in a further question. If we could acknowledge our ignorance, if we could just say ‘I don’t know why I chose this pair of stockings’, would that be better for us, or not? Would there be advantages in saying ‘I don’t know’ rather than confabulating? Clearly, from the point of view of going for the truth, confabulating is bad, because I’m providing a causal claim that is not supported by evidence. But are there other considerations? I think there are three.

First, confabulation can be self-enhancing. By that I mean that, compared to replying ‘I don’t know’ when I’m asked about my own choices, which may undermine my confidence and incur social sanctioning (how can you not know why you made a choice, or why you have a certain moral or political belief?), confabulating an explanation can support our sense of ourselves as competent decision makers. Believing that we have done it for a reason makes us think that we are the sort of person who makes choices for a reason, even when the choice is between socks: ‘of course I can tell why I chose this pair of stockings’.

Second, confabulation can help impose coherence on preferences that are often incoherent and fluctuating all the time. By confabulating we can make sense of behaviour that otherwise lacks an explanation, because we may not be aware of the mental processes responsible for our choices, and we can identify threads in our choices that make our commitments more meaningful to us: ‘I always like bright colours in stockings’. In the consumer choice case it’s a bit ridiculous, but again, think about the moral and political case: ‘I’m a person who supports conservative ideas’ or ‘I’m a person who supports pro-choice’. That makes sense of a set of preferences that sometimes may not be fully coherent to start with. So the single choice becomes part of a largely coherent pattern of preferences that contributes to our image of ourselves and can also be used to guide future behaviour: ‘I’m the sort of person who loves bright colours’, ‘I’m the sort of person who next time will make another choice like this one’.

And third, but I think no less important than the previous two points, confabulation helps us to say something rather than nothing, to share the reasons for our choice even when those reasons are misleading. It enables communication and exchange of information with the people around us in a social context. By allowing us to share information about our choice, it allows us to get feedback from other people. Again, in the case of socks not so interesting; in the case of moral and political views, quite a bit more interesting, because it may help us build distance between ourselves and our opinions when we get critical feedback from others. And that’s interesting because Wheatley, for instance, says that the healthy human brain is not a very good recorder of events. That’s not what it is supposed to do.
When we are giving an explanation, we are not supposed to tell other people exactly how we got to that preference, to that choice; rather, the brain is a milling machine that fills in gaps and generates false explanations via available cultural theories. Again, there is a bad side to this, which is that the story we tell is false; but there is also a good side, which is that the story we tell is understandable by the people around us, because it draws on a theory that is shared. It is understandable and plausible that people choose socks because of their softness and their colour; it’s not so understandable, unless you’ve done quite a bit of psychology, to think that you have chosen that pair of socks because it was on your right.

So, to summarise the benefits of confabulation: they play a more important role, of course, when we explain choices that have a more central role in determining our self-image, because the reasons that we offer for our self-defining choices can be a starting point for reflection and for a constructive dialogue with other agents. And by contributing to our sense of ourselves as competent and coherent agents, self-enhancing and coherent agents that are embedded in a social context, the benefits of confabulation, I think, are not just psychological. It’s not just that they make us feel better about ourselves (‘I’m a competent decision maker’); they may also have an impact on our actual agency, our capacity to achieve the goals that we set for ourselves.

Let me go through the other case, the case of optimism. Positive illusions are systematic tendencies that we all have, the non-clinical population, to form beliefs and make predictions that are overly optimistic, and I’m sure you will recognise many of them. The illusion of control is a key one: we believe that we control events that are very often both external to us and independent of us. The superiority illusion: we believe that we are better than average at most things. And the optimism bias: we expect our future to be rosier than warranted by the evidence, or rosier than the future of other people.

Here are some examples. In a betting situation, we tend to bet more money when we are actually rolling the dice, as if the fact that we are rolling the dice could give us a sense that we are in control of what the outcome will be. For the better-than-average effect, I always choose the case of academics because it’s an interesting one: when college professors are asked whether they do above-average work, 94% of them say that they do. The optimism bias is again extremely generalised: we underestimate the likelihood that our marriage will end in divorce, or that we will develop a serious health condition during our lives, and that happens both in an absolute way and in a comparative way. We think it won’t happen to us, but we also think that it’s less likely to happen to us than to other people.

Unrealistically optimistic beliefs and predictions, just like confabulations, are neither well supported by the evidence nor responsive to counter-evidence, and so they are another case of epistemic irrationality. Indeed, the kinds of mechanisms that have been studied and are thought to cause this kind of optimism include: incompetence (sometimes we don’t know the subject we are asked about when we are asked to evaluate our own skills in a certain context); neglect of relevant information (we tend not to think about our failures and just focus on our successes); failure to learn from feedback (people tell us that we are a bit rubbish at something, but we tend not to think of it in that way; we can be quite inventive and creative with negative feedback when we get it); selective updating (we saw an example of this before: if something is evidence against what we believe, we think it’s bad evidence, and if it’s evidence in favour of what we believe, we think it’s excellent evidence); and also motivational factors such as defensiveness (we tend to be defensive when we get negative feedback).

I just want to look at optimism in two interesting cases, which are much more central to our lives than consumer choice. How does optimism affect romantic relationships? You’ve got almost all the types of positive illusions as applied to close relationships.
Take the optimism bias: even those who are very well informed about divorce rates in society tend to underestimate their own likelihood of getting separated or divorced, and to overestimate the longevity of their relationships. My favourite is the ‘love is blind’ illusion: we tend to be blind to our partner’s faults and perceive our romantic partner as better than average in a number of domains, including intelligence and attractiveness. And there is the superiority illusion, which is just like the superiority illusion that you apply to yourself, except that you apply it to your relationship: we tend to perceive our relationship as better than most relationships, more caring, more lasting and so on.

Now, you won’t be surprised by this point, but these kinds of irrational beliefs and predictions have a positive effect on the relationships themselves. Unrealistically optimistic couples have more constructive responses to stressful situations, and this leads them to cope more effectively with a crisis. Their attitude is correlated with greater and more stable relationship satisfaction over time, and this is a very robust result across a number of different types of studies. Moreover, and this is a beautiful quote from Murray and colleagues so I’ll just read it, ‘intimates who idealise one another appear more prescient than blind, actually creating the relationships they wish for as romances progress’. So this is the case where you think your partner is actually better than he is, but you act in such a way that you make your partner better than he was, and so it’s a self-fulfilling prophecy.

Why is optimism so beneficial to relationships? Here are three hypotheses. First, optimism protects us from the potentially disruptive effects of conflict and doubt: positive illusions make it more likely that we deal with problems in an effective way. Second, we see the bright side of our partner’s faults; this is called transformation, which is interesting. It’s like, someone maybe is, I don’t know, very lazy, and you think, oh no, actually, they have a relaxed attitude! So you kind of re-write the story. Third, we are more likely to live up to the high standards of our idealising partners ourselves, increasing our sense of our own self-worth and coming to perceive ourselves as positively as our partners see us. So being considered better than we are makes us want to be better than we are. I’m not sure whether this works, but this is called reflected appraisal, and it’s really the core of the self-fulfilling prophecy hypothesis.

What about health? Just the same happens with health, so optimism can apply to physical health, and there are two cases that are quite striking, both classics now, both studied by Shelley Taylor. The first is a case of optimism bias, of positive prediction: HIV-positive men were found to be significantly more optimistic about not getting AIDS than HIV-negative men, which is clearly an unrealistic prediction. The second combines the illusion of control and superiority: women diagnosed with breast cancer adopt self-enhancing beliefs such as ‘I’m stronger as a result of the illness’, ‘I can cope better than other cancer patients’ (again, the superiority illusion), and ‘I can control my health condition from now on’ (the illusion of control). And interestingly, both groups of people get benefits from those kinds of beliefs. The studies on optimistic HIV-positive men show that even when their optimistic predictions were unrealistic, they brought about positive attitudes, effective coping techniques, and greater practice of health-promoting behaviour. Similarly, the studies on women diagnosed with breast cancer showed that they coped better with their situation when they had self-enhancing beliefs, and behaved more responsibly because they believed that they had the power to avoid or delay their illness. The illusion of control made them take certain precautions and change their lifestyle to a greater extent than women who did not have those self-enhancing beliefs and those illusions. So in these cases, optimism led people to make lifestyle changes in order to restore or maintain their health, and it helped them manage stress and was conducive to them facing rather than avoiding their problems.

OK, so where have we got to so far? In philosophy, rationality is often considered a precondition for agency, or for goal satisfaction, but evidence on systematic tendencies in human behaviour suggests that standards of rationality can be, and often are, violated without compromising agency, or successful agency. Moreover, and this is a suggestion that needs to be explored in much more depth, some forms of irrationality, such as perhaps confabulation and unrealistic optimism, seem to support successful agency, seem to improve our performance. In what way? What is the mechanism? I think the key word here is ‘motivation’.
When our choices are integrated into a coherent pattern of behaviour that makes sense to us and can be shared with others, we see ourselves as competent and coherent agents. Thanks to overly optimistic beliefs and predictions, we are more productive, more resilient, better at planning, and more effective at problem solving. We are more likely to persist in pursuing our goals in the face of adversities and obstacles, for instance changing our lifestyle after a diagnosis of a very serious illness, and at least in some domains we are also more likely to perform satisfactorily and fulfil our goals, leading to success, as in the relationship case where we overcome conflict in a positive way. Motivation really seems to be the link between these forms of confabulation and self-enhancement and successful performance, and this link between self-enhancement and motivation in particular has psychological backing from different research programmes. Psychologists who like to use different constructs, a sense of coherence, self-efficacy, self-determination, hardiness, optimism, all seem to agree that confabulation and self-enhancing strategies are beneficial because they sustain motivation. Thinking of myself as a more coherent and competent agent makes me more likely to overcome obstacles and persevere in pursuing my goals, even if the sense of competence and coherence is illusory to start with, even if it’s an illusion.

There are also some interesting studies in neuroscience, though this is extremely speculative; I just find it quite interesting that there are other ways, other angles, from which we can get at the same relationship between self-enhancement and motivation. These studies suggest that one of our self-enhancing strategies, maybe one of the most common, the self-serving bias, involves the same evolutionarily old regions of the brain that are responsible for controlling goal-directed behaviour. So there is a suggestion that self-enhancement is automatically linked, because of our evolutionary past, to an increase in motivation. I am not sure whether that’s true, or even how to evaluate that idea, but it seems interesting, because one implication would be that certain forms of self-enhancement are almost automatic and need quite a lot of deliberation to be counteracted. Our default position is to self-enhance, and it’s something that we do without realising that we do it; when we don’t want to do it, we have to spend a lot of computational effort and conscious attention in trying not to do it. So rationality is hard work.

As for implications, I think this kind of literature has a lot of them. Some are purely theoretical, about the relationship between rationality and agency; some are much more practical. One thing that interests me is how we can revisit the perceived gap between normal and abnormal cognition in the light of this new take on the relationship between rationality and agency. The presence of irrationality alone does not determine whether some agents are psychologically unwell, or whether they are unsuccessful in achieving their goals, as all agents are irrational and only some are successful. So it’s not legitimate to explain certain forms of mental pathology, for instance, by appeal to irrationality. Irrationality doesn’t explain psychological distress in any way, because it is much more widespread than something we find just in the clinical population. Rather, the forms of irrationality that we are vulnerable to, and the extent of our vulnerability, may be important factors to consider in establishing how psychologically healthy and agentially successful we are, meaning that certain forms of irrationality may lead to psychological distress, but there are other forms of irrationality that, as I have suggested, may actually be good for us and contribute to our mental health.

Thank you very much.