Hindsight bias
https://www.parsingscience.org/wp-content/uploads/2020/09/ParsingScience084-Howes-Kausel.mp3#t=03:42,07:09
Kausel: Hindsight bias is whenever, after an event occurs, you think that it was obvious. Or that you could have predicted [it] all along. If you’re thinking about a soccer game – like before the event, before the game – you’re not sure, maybe you have some preference, but there’s a lot of uncertainty. And after, like, a soccer team wins, you say, “Oh, yeah, that was obvious. I mean, of course, it would happen. Right?” And yeah, so in the context of our research, we focus especially on hiring decisions. And so, before – when you’re deciding which candidate would perform better – you may have, like, some doubts about that. But then after the fact – after you see [their] performance – “Oh, yeah, of course, of course. He or she performed better, because she did great in her, or his, interview,” and so on.
Howes: Yeah, I always think about it … so, Ed uses soccer because he’s from Chile, and … but I love the whole March Madness, and was devastated that we couldn’t have March Madness again so that I could beat Edgar in our bracket. But I think about the hindsight bias with: everybody fills out their brackets, and you pick who you want to go to the Final Four, and to win it all. And after the fact – after some team wins – people start to say, “Oh, I knew it all along, I knew that team was going to go all the way.” And yet they didn’t put it in their bracket. And you just kind of think, “Well, if you knew it all along, you would have put it in your bracket, moron.” But they don’t. That’s what I think of as hindsight bias.
Kausel: Well, and I think Americans have the “Monday Morning Quarterback,” right?
Howes: Yeah.
Kausel: We didn’t have – we don’t have that in Chile, because we don’t have quarterbacks here in Chile. But anyway.
So, the hindsight bias, I think it’s related to learning or perceived learning, which is what we measured in our paper. This is an example that I give my students. So let’s say you’re studying for, like, a math test. And you’re … you have a set of exercises, and you’re practicing them. And you have the solution. And then you look at the math problem, and then you look at the solution. And you say, “Oh, probably, yeah, I could have done that.” Right? And so you don’t study much, because you could have done that anyway, but then you’re confronted with a test where there’s no solution, right? And then you cannot solve it. So that’s the problem with saying, “Oh, yeah, of course this would happen,” because you put less effort in[to] trying to think about what could have happened. And so that’s how it’s related to less learning.
[ Back to topics ]
Narcissism
https://www.parsingscience.org/wp-content/uploads/2020/09/ParsingScience084-Howes-Kausel.mp3#t=07:10,09:13
Leigh: Tori and Ed’s study sought to determine if people who exhibited greater narcissism would also more readily fall victim to hindsight bias. So Ryan and I were interested in hearing how they characterized what narcissism is.
Howes: So when we think of a narcissist, it’s that quintessential “the world revolves around me” person. They have these grandiose views about themselves. They think they’re better than others on every element. They don’t want to seek advice from other people, because they don’t trust them … they see others as more incompetent than themselves, right? From a clinical perspective, if you’re looking at the DSM, there is an element of low self-esteem in there. And what we’re looking at is really more about that … almost over-confidence and over-arrogance, if you will. And it lies on this continuum of self-perception.
Kausel: Yeah, so the difference between narcissism and like the NPD – narcissistic personality disorder – is that narcissism is a continuum, right? It’s like a personality trait. And everybody has a little bit of it, maybe. It’s like a bell curve – a normal distribution in the population – of narcissism. It’s not necessarily, like, a disorder or something necessarily bad. And probably, you need to have some, a little bit of narcissism in order to deal with life. But the trademark of narcissism is, like, self-enhancement. It’s, like, this view of feeling superior to others, a motivation for constant self-affirmation …
Howes: Yeah. And it’s not like it’s just narcissists who do it. There is the self-protection element. If we didn’t take credit for things we got right – or we blamed ourselves too harshly for everything we got wrong – we’d all be walking around needing Prozac. It is a self-protective thing, but we just see it to an extreme degree with narcissists. And I think that’s a really important point too.
[ Back to topics ]
Should counterfactual thinking
https://www.parsingscience.org/wp-content/uploads/2020/09/ParsingScience084-Howes-Kausel.mp3#t=09:14,12:03
Watkins: The influence of hindsight bias on some people’s ability to learn from their successes and failures was central to Tori and Ed’s project. And this type of learning can come about by what they refer to as “should counterfactual thinking.” This includes such after-the-fact thoughts as, “I should have done something different.” They contrasted this against “could counterfactual thinking,” which is epitomized by the now all-too-familiar phrase, “No one could have seen this coming, and nothing could have been done.” Doug and I were curious what the benefits and pitfalls of these modes of thinking can be.
Howes: A key point, really, is this notion of the should counterfactual thinking: that after the fact, you know, “I should have done this, I should have done that.” And that’s what we’re finding that narcissists aren’t really doing: the … questioning themselves afterward, you know … if they get something right, they’re not saying, “Oh, what should I have done differently?” Because why would they do anything differently? They got it right. And even if they got something wrong, they’re not thinking they should have done something different. And so, they’re not questioning themselves, because they don’t think that’s necessary. And then it leads to even perceived learning not happening. Does that sound about right, Ed?
Kausel: Yeah, absolutely. So the key – the key issue, what we argue – is should counterfactual thinking, which is, “I should have done this.” So I do poorly on a test: “Oh, I should have studied better,” or “I should have done” this or that. Or, like, you make your hiring decision, and the person you hired doesn’t work out very well. You … “Oh, I should have put more emphasis on some aspect of the decision-making process.”
Howes: Right. And we’re saying you should still say “should” even if you got the decision right. It’s sort of – you might have just gotten lucky.
Kausel: Yeah. So one thing is to think that something “could have happened,” and a different thing is to think it “should have happened.” In the case of “should,” it means that you ought to have done that in order to have a better outcome. “Could” – “it could have happened” – is like a precondition for thinking about “should.” But in the case of “should,” there’s more of a, maybe, a moral aspect.
Howes: Right. There’s almost a control element, in that if you should have done something, it’s almost like you’re at fault, because you didn’t; whereas “could” almost implies that there are other possibilities, but there’s not that “ought to,” there’s not that need. And so that is even more important when we’re talking about narcissists, because the “should” is all about what you ought to have done, or what you had in your power to have done and you didn’t do it. So there’s almost a blame. If you got it wrong, it was that you should have done something different. Whereas “could” has probably less of that blame component, and more of a, “Well, yep, that was another possibility.”
[ Back to topics ]
Narcissism, hindsight bias, and prediction accuracy
https://www.parsingscience.org/wp-content/uploads/2020/09/ParsingScience084-Howes-Kausel.mp3#t=12:04,15:28
Leigh: Tori and Ed found that due to their exaggerated self-enhancement and tendencies toward self-protection, narcissists show stronger hindsight bias when their predictions are accurate … as well as a sort of reverse hindsight bias when their predictions are inaccurate. And in their paper, they elaborate on a few well-known cases of narcissistic behavior getting in the way of learning. So Ryan and I wanted to hear more about some of these infamous examples. We’ll hear what they had to say, after this short break.
ad: Altmetric‘s podcast, available on Soundcloud, Apple Podcasts & Google Podcasts
Leigh: Here again are Tori Howes and Edgar Kausel.
Kausel: Yeah, so this research started, like, nine years ago. Trump was around but was not as huge a figure as he is today. But I have to say that I personally think that he’s a very good example of narcissism. Whether you like him or not … you may like him, but still, we have to recognize that the guy has some narcissistic traits. I mean, there’s no denying that, right?
So one example that we cite in our paper is that he stated in 2016 or 15, that he had predicted the Iraq War better than anybody. And if you go back to 2004 … I mean, he was not that sure, to be honest. So yeah, so that’s a good example of the hindsight bias, and the person who might have some high degree of narcissism. So what we argue is that people when they’re right, they exhibit some hindsight bias, especially narcissists. But when they’re wrong, they do the opposite, which is like, “Nobody could have predicted this.” So after you’re right, you tend to say, “Oh, yeah, of course, of course, I predicted this before.”
We give another example in our paper in which he said that it would be a very easy thing to make a deal on healthcare. But then he realized it was difficult. And so he said, “Nobody knew healthcare could be so complicated.” Right? So it’s like a reversal: saying, “Hey, I didn’t, but nobody could have done that.” The interesting thing is that, if you have that mindset, you may not learn from things, right? Because it’s … like, every time you’re right you say, “Oh, yeah, of course, I was sure that would happen.” So, whatever this event was, “I don’t have much to learn from it.” But then when you’re wrong, you say, “Eh! Nobody could have predicted this.” And then you cannot learn from that event, either, because if it was some kind of random thing that nobody could have predicted, then of course, there’s nothing to learn from it.
Howes: There’s almost a question of: why would you question yourself if you were right? Why would you ask the “should” – “What should I have done differently?” – if you were right? So that’s why we thought, well, prediction accuracy moderates the relationship between should counterfactual thinking and hindsight bias. You know, so there’s a lot of thinking and theory that went into it.
[ Back to topics ]
Mediators and moderators
https://www.parsingscience.org/wp-content/uploads/2020/09/ParsingScience084-Howes-Kausel.mp3#t=15:30,17:14
Watkins: As Tori just mentioned, should counterfactual thinking mediates the relationship between narcissism and hindsight bias. They also found that this mediation is itself moderated by prediction accuracy, such that the relationship is negative when predictions are accurate, but positive when inaccurate. Doug and I asked Tori and Ed how it is that they describe this type of moderated mediation model to others.
Howes: I guess I always think about it, with mediation, as: the mediator is telling you why one thing is related to the other thing. So with narcissism and hindsight bias, if it’s mediated by should counterfactual thinking, we’re saying why narcissism and hindsight bias are related. And then a moderator just kind of tells you under what conditions: is it high or low, or positive or negative? That’s how I usually conceptualize it.
Kausel: Right. So one mediator could be … students who are more conscientious have better grades. And so, why? Well, because people who are more conscientious study more. And so conscientiousness is related to hours studying, and then hours studying is related to better grades. And so that’s the “why,” and that’s a mediator. A moderator is under which conditions the relationship could be different. And so you can take talent and earnings, for example.
And I have some data showing … well, this is not a finished paper, but … we find that talent – [however] we define talent – is more strongly related to earnings for men than for women. And so that means that a man who has more talent tends to earn more, while a woman who has more talent doesn’t earn as much more as a man would. And there are several reasons that happens … they may be in different jobs, or it might be an issue of gender discrimination. But whatever the reason is, the relationship between talent and earnings is different for women and for men – and that’s a moderator.
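The two ideas Ed walks through can be sketched with simulated data. This is a minimal illustration, not the paper’s actual analysis: the variable names and effect sizes below are made up to match his examples (conscientiousness → hours studied → grades as mediation; a talent-to-earnings slope that differs by group as moderation).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# --- Mediation: conscientiousness -> hours studied -> grades ---
consc = rng.normal(size=n)
hours = 0.6 * consc + rng.normal(size=n)    # mediator: conscientious people study more
grades = 0.5 * hours + rng.normal(size=n)   # outcome depends only on hours (no direct path)

def ols(y, *xs):
    """Return OLS coefficients: intercept first, then one slope per predictor."""
    X = np.column_stack([np.ones_like(y), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0]

total = ols(grades, consc)[1]          # total effect of conscientiousness, ~0.6 * 0.5 = 0.3
direct = ols(grades, consc, hours)[1]  # direct effect, controlling for the mediator, ~0
# The total effect vanishes once hours is controlled: hours is "why" the two are related.

# --- Moderation: talent -> earnings, with a slope that depends on group ---
talent = rng.normal(size=n)
group = rng.integers(0, 2, size=n)     # 0/1 grouping variable (e.g., two demographic groups)
earnings = (0.3 + 0.4 * group) * talent + rng.normal(size=n)

slope_g0 = ols(earnings[group == 0], talent[group == 0])[1]   # ~0.3
slope_g1 = ols(earnings[group == 1], talent[group == 1])[1]   # ~0.7
# Equivalently, the talent x group interaction term recovers the slope difference (~0.4):
interaction = ols(earnings, talent, group, talent * group)[3]
```

The mediator answers “why” (the indirect path carries the whole effect), while the moderator answers “under which conditions” (the same predictor has different slopes in different groups).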
[ Back to topics ]
Sampling and recruitment
https://www.parsingscience.org/wp-content/uploads/2020/09/ParsingScience084-Howes-Kausel.mp3#t=17:15,21:23
Leigh: Though they conducted several other experiments not included in the final manuscript, Tori and Ed’s article describes the results from four research studies related to narcissism, should counterfactual thinking, and hindsight bias … as well as how they impact – or are impacted by – the accuracy of people’s predictions and perceived learning. They tested their various hypotheses by seeing if participants would apply self-critical thinking about what they should have done – or what they should have known – as well as whether narcissists would tend to do so less often. As their experiments involved an array of international participants who were recruited both in person and online via the crowdsourcing platform Prolific, Ryan and I were interested in hearing more about what led them to seek out so diversified a sample of respondents.
Howes: We wanted to have the generalizability. And so, we had graduate students or MBA students from Chile as a sample, we had undergraduate students from the United States, and we had the Prolific sample. Really, the use of Prolific versus MTurk is sort of one of those – just maybe – personal findings: we get better data quality from Prolific than from MTurk in terms of just what we get on the output side of things. And maybe it’s just a preference. And the snowball sampling was just trying to get, yeah, those normal individuals, but get them from anybody.
At a certain point nobody wants to fill out surveys anymore, or do studies. And it almost comes to a “desperate times call for desperate measures” and begging and pleading, or just paying, in the sense of Prolific. Sometimes it feels like a ridiculous attempt just to get people to answer questions. But when they do, we’re so, so happy.
Kausel: Yeah, and I think we used Prolific – maybe you said this, but I was not paying attention – also because we could have, like, a professional sample; it was easier with Prolific than with MTurk, I think. That was one of the reasons. But every study has its weaknesses. And so a potential weakness that we have in our studies is that they’re not actual decisions. Right? These are experiments … so this is in the lab, and these are our scenarios, and so on.
Howes: Yeah. So participants were given just this description of some individuals who were going to be hired for a job, and they were asked to rate them. They were given personality scores, some unstructured interview scores, handwriting analysis: so things that weren’t really valuable information, as well as stuff that was. And they were asked to say who should be hired. And then after they made their decision, they were told, “Oh, actually, you know what, we wound up hiring both. And here’s how well they’ve performed.” And in that way, they were either given information that, “Oh, look, you made the right choice, because who you said to hire performed better than the other person,” or information showing their prediction was wrong: they would have hired the person who actually ended up performing worse.
Kausel: Yeah, so of course, there are some strengths. Reviewers were worried about how generalizable our results are. And so that’s one of the reasons: okay, we do have scenarios, these are not real decisions. But we have, like, a wide variety of different people, and they all converge on the same result. So, even if these are our scenarios, at least we can say that it’s not specific to, like, you know, a Chilean population or just undergrads. We have people from, you know, different contexts, and we find the same result. So at least from that perspective, we were kind of sure of the validity of the results.
[ Back to topics ]
Measuring narcissism
https://www.parsingscience.org/wp-content/uploads/2020/09/ParsingScience084-Howes-Kausel.mp3#t=21:24,23:38
Watkins: Psychologists have been interested in measuring narcissism since that initial study from 1975, which I mentioned at the top of the episode. By far, the most common method for assessing narcissism is to use standardized self-report measures. In other words, the types of personality questionnaires that the field of psychology is famous for. Since Ed and Tori used this method as well, Doug and I were interested in hearing more about how they weighed their options as to which measure to use.
Kausel: The most well-known measure is the NPI-40 – the Narcissistic Personality Inventory-40, where 40 is the number of items. By the time we were conducting these studies – and again, we started this like eight or nine years ago – this was the most accepted measure of normal narcissism. There are others, like the MMPI 14, I think, or 15, and then others that are newer. It’s not like a one-to-five Likert scale; you have to choose between two potential behaviors, or ways of thinking, and so on. There are some psychometric properties that have been questioned by some researchers. On the other hand, it’s very complete; it has 40 questions, and so on.
So within the paper, we talk about “narcissists” and “non-narcissists” because it’s easier to write about it in that way. But we did use a continuum measure and continuum score. The findings that we have were linear: the more narcissism you had, the less should counterfactual thinking you had. And so we didn’t have, like, cut scores or whatever. If you have a huge sample, you might detect that kind of stuff, but it’s harder to do it with fewer people. And again, we have about 170 participants per study – a bit more or less – which is fine for detecting the stuff that we were testing. [But] to find [a] nonlinear relationship, you need a bigger sample.
[ Back to topics ]
Non-conscious priming
https://www.parsingscience.org/wp-content/uploads/2020/09/ParsingScience084-Howes-Kausel.mp3#t=23:39,25:44
Leigh: In their final study, Tori and Ed primed participants to focus on should counterfactual thinking by giving them a sentence-unscrambling task. This technique was first advocated by John Bargh, a Yale social psychologist who published a paper in 1996 with the striking finding that students walked more slowly when they were primed with elderly-related words, such as “bingo” and “Florida.” As this study has repeatedly failed replication, we were curious about their thoughts on the veracity of such non-conscious priming.
Kausel: From my perspective, it depends on the priming and your dependent variable. If you’re telling me that, because I’m reminded that I might be old or something, I’m gonna walk slowly, then I say, “Gee, I’m not sure.” I would be kind of … I’m not saying that it didn’t happen, right? I’m just saying that that’s … this is a huge kind of leap.
But in this case, there are two issues. First, some people that I know and trust conducted [the] experiments. So one is Jochen: he did a study on regret. And then Lisa Ordóñez and David Welsh – I know them from Arizona – did something on moral standards. And these primings are more about thinking, right? So I might prime you into thinking about moral issues, or in our case to think more in should counterfactual terms, and in Jochen’s case to think about regret. And then I ask you to make a decision and see if there is, like, a bias there in the way you’re thinking. And so there’s … I would say that there’s a match between what you’re manipulating or priming and the dependent variable, which is like a decision or potential bias. And so my problem with the priming literature is when there are these huge leaps – saying that if you think about, I don’t know, like a tennis player, you’re going to play better tennis or something. I’m not saying those effects don’t happen, but I am more suspicious.
[ Back to topics ]
Workplace implications
https://www.parsingscience.org/wp-content/uploads/2020/09/ParsingScience084-Howes-Kausel.mp3#t=25:44,28:37
Watkins: Evidence from their research suggests that people who score higher on narcissism are, in fact, less likely to apply should counterfactual thinking when their predictions are wrong, and therefore suffer more greatly from erroneous hindsight bias. Given that Tori and Ed are industrial-organizational psychologists, Doug and I were interested in learning what applications they feel these findings might have in the workplace.
Howes: I think of it as the after-action review: the decisions are made, we know the outcomes; now go back, check, and decide. You know, “Did you do everything that you should have done? What did you do well, and what did you not do well?” And so, encouraging people to consider alternatives – to consider what they should have done differently – even if the outcome was favorable, or in line with their predictions. And it’s just: after major decisions, do that after-action review.
Kausel: Yeah, so from my perspective, there are two things that are linked to our paper. One is that there’s some evidence suggesting that narcissists, or people high on narcissism, tend to climb more easily in organizations in terms of promotions and stuff, because they know how to sell their stuff really well, and so on.
Howes: They’re confident …
Kausel: Right, and that doesn’t necessarily mean that they’re good at those tasks. But they’re very good at showing that they’re good. They tend to do well, and …
Howes: Right, we want a confident person in an interview. We want people to tell us, they’re good, and why not believe them?
Kausel: Right. And so one direct implication is: try to have a good performance appraisal system in which you assess not only how confident people are, but also how well they actually do their jobs. And if you need confident people, that’s fine. But then be aware that some types of people come with some problems too, right? So that’s one thing. And the other thing, I would say, related to should counterfactual thinking, is that, if you think about it, it’s also related to, like, having a learning culture within an organization. A learning culture is: “Oh, we did well. We did well, but maybe we could have done things better anyway. Even if we did well, let’s try to learn from the process,” whatever that was – the hiring decision or whatever. And so, if you have that learning mindset, you are going to have more should counterfactual thinking, because it’s, like, related to thinking, “Oh, things could have been done differently, we should have done something different,” and so on. And so having a learning culture within a team or an organization – I think that’s a really good thing for a team-[oriented] organization to have.
[ Back to topics ]
Links to manuscript, bonus audio and other materials
https://www.parsingscience.org/wp-content/uploads/2020/09/ParsingScience084-Howes-Kausel.mp3#t=28:38,29:21
Watkins: That was Tori Howes and Edgar Kausel, discussing their article “When and why narcissists exhibit greater hindsight bias and less perceived learning,” co-authored with Alex Jackson and Jochen Reb, and published on June 4, 2020 in the Journal of Management. You’ll find a link to their paper at parsingscience.org/e84, along with transcripts, bonus audio clips, and other materials we discussed during the episode.
Leigh: If you’ve participated in our 2020 listener survey, thanks! If not, and you’ve got five to ten minutes to spare, you can do so until the end of the week at parsingscience.org/survey. You’ll be helping us better understand our listeners and better serve your interests in future episodes. No personally identifiable information is requested or recorded, and you can skip any question you want.
[ Back to topics ]
Preview of next episode
https://www.parsingscience.org/wp-content/uploads/2020/09/ParsingScience084-Howes-Kausel.mp3#t=29:22
Watkins: Next time, in Episode 85 of Parsing Science, we’ll talk with Kyesha Jennings from North Carolina State University about her research into what the wildly popular meme “hot girl summer” – based on the lyrics by hip hop phenomenon Megan Thee Stallion – tells us about changes in the ways in which Black women cultivate community in digital spaces.
Jennings: Black women were creating lists: “This is what it means to have a hot girl summer.” So we saw all this engagement, whether they were trying to remix, to improve, enhance … just kind of speak to what a hot girl summer meant specifically for them. But there was just a lot of engagement in terms of how it’s related to our identity.
Watkins: We hope that you’ll join us again.
[ Back to topics ]