This is the fifth in a series of blogposts on Peter C. Brown, Henry L. Roediger III & Mark A. McDaniel’s seminal book on the science of learning, Make It Stick: The Science of Successful Learning (Harvard University Press, 2014). This week focuses on the fifth chapter, ‘Illusions of Knowing’. For our blogposts on previous chapters, see our blog. Several of the key terms discussed in what follows are defined in those blogposts.
We recently hosted a webinar panel discussion with the book’s authors. This is available on our podcast.
Jonathan Beale, Researcher-in-Residence, CIRL
This chapter focuses on metacognition: knowledge of our own cognitive processes. The chapter concerns our awareness of what we do and do not know, and what we have and have not learned. It discusses psychological obstacles that stand in the way of effective metacognition, such as perceptual illusions, cognitive biases and distortions of memory to which all human beings are susceptible. These can be particularly problematic in learning, especially in our assessments of our own learning and knowledge.
The authors call such obstacles ‘illusions of knowing’ or ‘knowledge illusions’: cognitive illusions in which we believe that we know or have learned more than we actually know or have learned, often brought about by engaging in ineffective learning strategies. Knowledge illusions are examples of poor metacognition (pp. 15-16). The authors offer suggestions on how to avoid such illusions, or at least diminish their force (pp. 125-7).
Daniel Kahneman on analytic systems of thought
In his best-selling 2011 book, Thinking, Fast and Slow, Daniel Kahneman, winner of the 2002 Nobel Memorial Prize in Economic Sciences, puts forward a theory of how human beings think. A central theme of the book is metacognition: Kahneman argues that all human beings are prone to overestimate their knowledge and understanding of the world. His theory holds that human beings possess two ‘analytic systems’ of mental processing, which he calls ‘System 1’ and ‘System 2’ (p. 105). Kahneman’s theory underpins the arguments the authors put forward in this chapter.
System 1, or the ‘automatic system’, is unconscious, intuitive and immediate. It draws upon our accumulated experience, emotions, sensory information and memories to assess situations instantaneously. Its automaticity is the mastery we reach through practising particular skills for thousands of hours. It gives us reflexes and the ability to make decisions immediately (p. 106).
System 2, or the ‘controlled system’, is our slower faculty of mental processing, in which we engage in conscious analysis and reasoning. It involves decision-making, analysis and exerting self-control over System 1. System 1 is instinctive and reflexive, and it detects danger; consequently, it can be very hard to overrule using System 2 (p. 108).
We use System 2 to ‘train’ System 1 to ‘recognise and respond to particular situations that demand reflexive action’ (p. 105). We depend on System 2 to manage our lives: for example, checking our impulses, planning, and considering the potential outcomes of decisions through counterfactual reasoning. Most importantly for the purposes of Make It Stick, we use System 2 when we are learning.
System 1 is automatic and extremely influential in our actions, but it is susceptible to knowledge illusions. When the conclusions we reach through System 1 are the result of such illusions, our perceptions, assessments and situational understanding are put at risk, and so, consequently, are other mental processes, such as decision-making. We therefore need to learn when to trust our intuitions, when to suspend judgement and what degree of scepticism to apply to our intuitions. An interesting case discussed at length by the authors is that of pilots, who are trained to recognise certain perceptual illusions to which they are particularly susceptible in their line of work, and to adjust their flight instruments accordingly (p. 106).
There is interplay between the systems, in which our reflexive responses to situations are balanced against and often clash with our situational analysis. (On this interplay, the authors refer to Malcolm Gladwell’s 2005 book, Blink.)
The lesson to which the authors primarily draw our attention is that we need to beware of those knowledge illusions that can influence System 1. Examples of such illusions are perceptual illusions, cognitive biases, constructing faulty narratives of events, and distortions of memory (p. 109). To avoid knowledge illusions, or at least to diminish their force, we need to carefully regulate System 1 using System 2 (pp. 108-9). (For more on Kahneman’s theory, see this New York Times review of Thinking, Fast and Slow, by Jim Holt.)
Illusions of knowing
We are all prone to being ‘readily misled by illusions, cognitive biases, and the stories we construct to explain the world around us and our place within it’ (pp. 104-5). In other words, we are ‘all hardwired to make errors in judgment’. Consequently, the ability to form good judgements is a skill we have to acquire and develop, to become ‘astute observer[s] of [our] own thinking and performance’ (p. 104).
Our susceptibility to knowledge illusions is largely due to subjectivity: the weight we place on our subjective experience for informing our understanding of the world. The authors write:
‘[I]t is nearly impossible to avoid basing one’s judgments on subjective experience. Humans do not give greater credence to an objective record of a past event than to their subjective remembering of it, and we are surprisingly insensitive to the ways our particular construals of a situation are unique to ourselves’ (p. 111).
The authors outline many knowledge illusions, several of which focus on illusions concerning memory. The illusions are summarised below, with those concerning memory under a separate sub-heading later.
When we are incompetent, ‘we tend to overestimate our competence and see little reason to change’ (p. 104). This overestimation leads to a false belief about one’s level of competence which can prevent one from seeing any need to improve (p. 121).
An example of this is the ‘Dunning-Kruger effect’ (p. 121). Named after the social psychologists David Dunning and Justin Kruger, who first described it, this is a form of cognitive bias in which a person who is incompetent at something fails to recognise their incompetence and greatly overestimates their competence.
Dunning and Kruger argue that one of the reasons this occurs is because incompetent people fail to learn through experience that they are unskilled, which is largely a result of the lack of negative feedback we tend to give one another. Moreover, on the rare occasions when we receive negative feedback, many of us tend to search for alternative explanations for why things went wrong (pp. 122-3).
The authors write that an incompetent person ‘can be taught to raise their competence by learning the skills to judge their own performance more accurately’ – in other words, they can be taught how to improve their metacognition (p. 121). To gain competence, the authors argue, ‘we must learn to recognize competence when we see it in others, become more accurate judges of what we ourselves know and don’t know, adopt learning strategies that get results, and find objective ways to track our progress’ (p. 105).
The curse of knowledge
The ‘curse of knowledge’ is our ‘tendency to underestimate how long it will take another person to learn something new or perform a task that we have already mastered’. The authors suggest that teachers ‘often suffer this illusion’ (p. 115).
Hindsight bias
‘Hindsight bias’, also known as the ‘knew-it-all-along effect’, is the tendency to ‘view events after the fact as having been more predictable than they were before they occurred’ (p. 116).
Feeling of knowing
The ‘feeling of knowing’ occurs when false information sounds familiar, engendering the feeling that we know it to be true and so leading us mistakenly to believe it. Examples can be found in repeated campaigns by advertising companies or political parties. The authors give an example from propaganda known as the ‘big lie’ technique, whereby ‘even a big lie told repeatedly can come to be accepted as truth’ (p. 116).
False consensus effect
The ‘false consensus effect’ is a commonplace illusion which occurs as a result of our tendency to ‘generally fail to recognize the idiosyncratic nature of our personal understanding of the world and interpretation of events’, and how this differs from the experiences of others (p. 117).
Fluency illusions
‘Fluency illusions’ occur when we mistake our fluency with a text for mastery of its content (p. 116).
This notion sheds light on one of the questions raised in previous blogposts. We have seen that the authors argue that there is a danger in making explanations too clear, because doing so risks making learning less effortful. For example, ‘When [students] hear a lecture or read a text that is a paragon of clarity, the ease with which they follow the argument gives them the feeling that they already know it and don’t need to study it … [but] when put to the test, they find they cannot recall the critical ideas or apply them …’ (p. 17). One reason why this may be a problem emerges in this chapter: such clarity can generate fluency illusions, where students mistake their fluency in following a crystal-clear lecture for mastery of the lecture’s content.
Fluency illusions also offer a possible answer to a question raised in the previous blogpost about what the previous chapter called ‘interference’: a counterintuitive strategy in which learning is deliberately disrupted; although the interference is unwelcome, it is nonetheless beneficial for learning (p. 86). Interference may be a useful strategy for preventing fluency illusions, or at least diminishing their force. An example of interference would be constructing a lecture in a different order to the textbook. This can be beneficial because ‘the effort to discern the main ideas and reconcile the discrepancy produces better recall of the content’ (p. 87).
Memory illusions
The other knowledge illusions concern memory. There are various ways in which our memories can become distorted and unreliable (p. 112), some of which can be identified as specific cognitive illusions. For example, our confidence in a memory ‘is not a reliable indication of its accuracy’ (p. 117), although we often take it to be so. We also remember things that are implied but not explicitly stated (pp. 112-3).
A general strategy for improving our memories and avoiding or diminishing the force of memory illusions is to create more retrieval cues for retrieving memories, by connecting what we learn with what we already know:
‘[T]he more you connect what you learn to what you already know, and the more associations you make to a memory … the more mental cues you have through which to find and retrieve the memory again later. This capacity expands our agency’ (p. 112).
On retrieval cues, see our previous blogpost.
Narrative and memory
One of the reasons we fall into memory illusions is that we construct narratives of our lives and the things we learn. ‘Our understanding of the world is’, the authors write, ‘shaped by a hunger for narrative’ (p. 109). For example, we tend to ‘remember those elements that have greatest emotional significance for us, and we fill in the gaps with details of our own that are consistent with our narrative’ (p. 112). Narrative and memory are so intertwined that the authors describe the two as becoming one and the same:
‘We gravitate to the narratives that best explain our emotions. In this way, narrative and memory become one. The memories we organize meaningfully become those that are better remembered’ (p. 110).
Our discomfort with ambiguity and the arbitrariness of certain events often leads us to try to construct a rational, narrative understanding of events, which cannot always be mapped onto the actual course of events (p. 110).
A striking example the authors give is of an experiment in which subjects believed they were being assessed for their reading comprehension and their ability to solve anagrams, while simultaneously being exposed to a background phone conversation of which they could hear only one side. In a second condition, subjects took the same assessment while exposed to both participants in the conversation, so they could hear it in full. The subjects exposed to one half of the conversation remembered it better than those exposed to the full conversation, because their minds constructed a narrative for the other half. But inferring this narrative impeded their focus on the task at hand more than it did for the other group (pp. 109-110).
‘Imagination inflation’ is the tendency of people, when asked to imagine an event vividly, sometimes to come to believe later that the event actually occurred (p. 113). The authors give several fascinating (but shocking) examples (p. 112) in which vividly imagined hypothetical events ‘seat themselves in the mind as firmly as memories of actual events’ (p. 113).
‘Suggestion’ is a type of memory illusion which can arise in the way questions are asked (p. 113). For example, it can be brought about as a result of being asked leading questions (p. 114).
‘Interference’ occurs when memory is distorted by a similar event or by something that clouds our judgement of what happened. The authors give the example of a witness who, having seen a person in a photograph, came to believe that that person had committed the crime (pp. 114-5).
This concept of ‘interference’ is different to the concept discussed in the previous chapter, where it was used to denote a desirable difficulty: interfering with a learning process in order to make learning more effortful and hence more effective. Examples of that form of ‘interference’ were making explanations less clear or making the font of lecture slides harder to decipher.
‘Social influence’ denotes the ways in which our memories are influenced by the people with whom we interact. Our memories are ‘subject to social influence and tend to align with the memories of the people around us’ (p. 116). This psychological process is known as ‘memory conformity’ or the ‘social contagion of memory’ (p. 117).
Here are four questions we might consider.
1. Should we spend a lot more time teaching students about knowledge illusions and how to avoid them?
As the authors point out, pilots receive training in particular perceptual illusions they are especially susceptible to in their line of work (p. 106). While there may not be any particular illusions students are more susceptible to just in virtue of being learners, it would surely be beneficial to students to learn more about knowledge illusions and how to avoid them, to improve metacognition.
2. Incompetence and negative feedback
We saw above that Dunning and Kruger, after whom the Dunning-Kruger effect is named, argue that one reason this phenomenon occurs is that incompetent people fail to learn through experience that they are unskilled, which happens partly because we rarely give one another negative feedback. What are the implications of this for how we give feedback? Should we give more negative feedback?
3. On the importance of peer learning and assessment
The authors discuss the views of physicist and educator Eric Mazur. Mazur argues that ‘the person who knows best what a student is struggling with in assimilating new concepts is not the professor, it’s another student’ (p. 119). If this is correct, what are its implications for peer learning and assessment?
The authors recommend making more use of peer assessment and giving corrective feedback (pp. 126-7) and recommend strategies for peer instruction (pp. 125-6). (For more on this, see the earlier blogpost on formative assessment, which includes discussion of peer assessment.)
This is related to the previous question. One example of negative feedback the authors give is of negative peer assessment, where team captains select players from among their peers for their teams.
4. On ‘mental models’ and the ‘curse of knowledge’
Another of Mazur’s points to which the authors draw attention is his view that the ‘better you know something, the more difficult it becomes to teach it’ (p. 119). The authors argue that this is because as you increase your expertise in a complex area, ‘your [mental] models in those areas grow more complex, and the component steps that compose them fade into the background of memory’. This is an example of the illusion they call the ‘curse of knowledge’. As an example, they consider an expert physicist teaching a first-year undergraduate class, who, when teaching, forgets ‘that her students have yet to master the underlying steps she has long ago bundled into one unified mental model’. This is a metacognitive error: ‘a misjudgment of the matchup between what she knows and what her students know’ (ibid.).
We can well imagine this happening to an academic researcher who spends little time teaching. But anyone who teaches regularly – including academic researchers with regular teaching commitments – will often be reminded of the fundamental steps in some area of complex knowledge by repeatedly returning to and re-teaching them, and will thereby maintain contact with the basics. If their teaching did not remind them of these steps, that would seem less a metacognitive error than a failure to recognise important and obvious aspects of teaching.
Relatedly, on Mazur’s claim that the ‘better you know something, the more difficult it becomes to teach it’: surely this is the case only if you teach it rarely, rather than often. If you regularly teach an area on which you are continually developing higher levels of knowledge and expertise, surely it can become (and usually does become) easier to teach it.
As we explore the book further, we may return to address these questions.