Virtually everyone would agree that a primary, yet insufficiently met, goal of schooling is to enable students to think critically. In layperson’s terms, critical thinking consists of seeing both sides of an issue, being open to new evidence that disconfirms your ideas, reasoning dispassionately, demanding that claims be backed by evidence, deducing and inferring conclusions from available facts, solving problems, and so forth. Then too, there are specific types of critical thinking that are characteristic of different subject matter: That’s what we mean when we refer to “thinking like a scientist” or “thinking like a historian.”
This proper and commonsensical goal has very often been translated into calls to teach “critical thinking skills” and “higher-order thinking skills” and into generic calls for teaching students to make better judgments, reason more logically, and so forth. In a recent survey of human resource officials1 and in testimony delivered just a few months ago before the Senate Finance Committee,2 business leaders have repeatedly exhorted schools to do a better job of teaching students to think critically. And they are not alone. Organizations and initiatives involved in education reform, such as the National Center on Education and the Economy, the American Diploma Project, and the Aspen Institute, have pointed out the need for students to think and/or reason critically. The College Board recently revamped the SAT to better assess students’ critical thinking, and ACT, Inc. offers a test of critical thinking for college students.
These calls are not new. In 1983, A Nation At Risk, a report by the National Commission on Excellence in Education, found that many 17-year-olds did not possess the “‘higher-order’ intellectual skills” this country needed. It claimed that nearly 40 percent could not draw inferences from written material and only one-fifth could write a persuasive essay.
Following the release of A Nation At Risk, programs designed to teach students to think critically across the curriculum became extremely popular. By 1990, most states had initiatives designed to encourage educators to teach critical thinking, and one of the most widely used programs, Tactics for Thinking, sold 70,000 teacher guides.3 But, for reasons I’ll explain, the programs were not very effective — and today we still lament students’ lack of critical thinking.
After more than 20 years of lamentation, exhortation, and little improvement, maybe it’s time to ask a fundamental question: Can critical thinking actually be taught? Decades of cognitive research point to a disappointing answer: not really. People who have sought to teach critical thinking have assumed that it is a skill, like riding a bicycle, and that, like other skills, once you learn it, you can apply it in any situation. Research from cognitive science shows that thinking is not that sort of skill. The processes of thinking are intertwined with the content of thought (that is, domain knowledge). Thus, if you remind a student to “look at an issue from multiple perspectives” often enough, he will learn that he ought to do so, but if he doesn’t know much about an issue, he can’t think about it from multiple perspectives. You can teach students maxims about how they ought to think, but without background knowledge and practice, they probably will not be able to implement the advice they memorize. Just as it makes no sense to try to teach factual content without giving students opportunities to practice using it, it also makes no sense to try to teach critical thinking devoid of factual content.
In this article, I will describe the nature of critical thinking, explain why it is so hard to do and to teach, and explore how students acquire a specific type of critical thinking: thinking scientifically. Along the way, we’ll see that critical thinking is not a set of skills that can be deployed at any time, in any context. It is a type of thought that even 3-year-olds can engage in — and even trained scientists can fail in. And it is very much dependent on domain knowledge and practice.
Why is thinking critically so hard?
Educators have long noted that school attendance and even academic success are no guarantee that a student will graduate an effective thinker in all situations. There is an odd tendency for rigorous thinking to cling to particular examples or types of problems. Thus, a student may have learned to estimate the answer to a math problem before beginning calculations as a way of checking the accuracy of his answer, but in the chemistry lab, the same student calculates the components of a compound without noticing that his estimates sum to more than 100%. And a student who has learned to thoughtfully discuss the causes of the American Revolution from both the British and American perspectives doesn’t even think to question how the Germans viewed World War II. Why are students able to think critically in one situation, but not in another? The brief answer is: Thought processes are intertwined with what is being thought about. Let’s explore this in depth by looking at a particular kind of critical thinking that has been studied extensively: problem solving.
Imagine a seventh-grade math class immersed in word problems. How is it that students will be able to answer one problem, but not the next, even though mathematically both word problems are the same, that is, they rely on the same mathematical knowledge? Typically, the students are focusing on the scenario that the word problem describes (its surface structure) instead of on the mathematics required to solve it (its deep structure). So even though students have been taught how to solve a particular type of word problem, when the teacher or textbook changes the scenario, students still struggle to apply the solution because they don’t recognize that the problems are mathematically the same.
Thinking tends to focus on a problem’s “surface structure”
To understand why the surface structure of a problem is so distracting and, as a result, why it’s so hard to apply familiar solutions to problems that appear new, let’s first consider how you understand what’s being asked when you are given a problem. Anything you hear or read is automatically interpreted in light of what you already know about similar subjects. For example, suppose you read these two sentences: “After years of pressure from the film and television industry, the President has filed a formal complaint with China over what U.S. firms say is copyright infringement. These firms assert that the Chinese government sets stringent trade restrictions for U.S. entertainment products, even as it turns a blind eye to Chinese companies that copy American movies and television shows and sell them on the black market.” Background knowledge not only allows you to comprehend the sentences, it also has a powerful effect as you continue to read because it narrows the interpretations of new text that you will entertain. For example, if you later read the word “Bush,” it would not make you think of a small shrub, nor would you wonder whether it referred to the former President Bush, the rock band, or a term for rural hinterlands. If you read “piracy,” you would not think of eye-patched swabbies shouting “shiver me timbers!” The cognitive system gambles that incoming information will be related to what you’ve just been thinking about. Thus, it significantly narrows the scope of possible interpretations of words, sentences, and ideas. The benefit is that comprehension proceeds faster and more smoothly; the cost is that the deep structure of a problem is harder to recognize.
The narrowing of ideas that occurs while you read (or listen) means that you tend to focus on the surface structure, rather than on the underlying structure of the problem. For example, in one experiment,4 subjects saw a problem like this one:
Members of the West High School Band were hard at work practicing for the annual Homecoming Parade. First they tried marching in rows of 12, but Andrew was left by himself to bring up the rear. Then the director told the band members to march in columns of eight, but Andrew was still left to march alone. Even when the band marched in rows of three, Andrew was left out. Finally, in exasperation, Andrew told the band director that they should march in rows of five in order to have all the rows filled. He was right. Given that there were at least 45 musicians on the field but fewer than 200 musicians, how many students were there in the West High School Band?
Earlier in the experiment, subjects had read four problems along with detailed explanations of how to solve each one, ostensibly to rate them for the clarity of the writing. One of the four problems concerned the number of vegetables to buy for a garden, and it relied on the same type of solution necessary for the band problem: calculation of the least common multiple. Yet, few subjects — just 19 percent — saw that the band problem was similar and that they could use the garden problem solution. Why?
When a student reads a word problem, her mind interprets the problem in light of her prior knowledge, as happened when you read the two sentences about copyrights and China. The difficulty is that the knowledge that seems relevant relates to the surface structure — in this problem, the reader dredges up knowledge about bands, high school, musicians, and so forth. The student is unlikely to read the problem and think of it in terms of its deep structure — using the least common multiple. The surface structure of the problem is overt, but the deep structure of the problem is not. Thus, people fail to use the first problem to help them solve the second: In their minds, the first was about vegetables in a garden and the second was about rows of band marchers.
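To make the band problem’s deep structure concrete, here is a minimal worked sketch (my illustration; the experiment itself presented only the word problems): rows of 12, 8, or 3 always leave Andrew over, so the band size is one more than a multiple of lcm(12, 8, 3) = 24; rows of 5 come out even, so it is also a multiple of 5.

```python
from math import lcm

# Rows of 12, 8, or 3 always leave Andrew alone: N % 12 == N % 8 == N % 3 == 1.
# Rows of 5 are filled exactly: N % 5 == 0. And 45 <= N < 200.
step = lcm(12, 8, 3)  # 24, so every valid N has the form 24k + 1
candidates = [n for n in range(45, 200) if n % step == 1 and n % 5 == 0]
print(candidates)  # [145]
```

The garden problem shares exactly this structure; only the cover story differs.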
With deep knowledge, thinking can penetrate beyond surface structure
If knowledge of how to solve a problem never transferred to problems with new surface structures, schooling would be inefficient or even futile — but of course, such transfer does occur. When and why such transfer occurs is complex,5 but two factors are especially relevant for educators: familiarity with a problem’s deep structure and the knowledge that one should look for a deep structure. I’ll address each in turn. When one is very familiar with a problem’s deep structure, knowledge about how to solve it transfers well. That familiarity can come from long-term, repeated experience with one problem, or with various manifestations of one type of problem (i.e., many problems that have different surface structures, but the same deep structure). After repeated exposure to either or both, the subject simply perceives the deep structure as part of the problem description. Here’s an example:
A treasure hunter is going to explore a cave up on a hill near a beach. He suspects there might be many paths inside the cave, so he is afraid he might get lost. Obviously, he does not have a map of the cave; all he has with him are some common items such as a flashlight and a bag. What could he do to make sure he does not get lost trying to get back out of the cave later?
The solution is to carry some sand with you in the bag, and leave a trail as you go, so you can trace your path back when you’re ready to leave the cave. About 75% of American college students thought of this solution — but only 25% of Chinese students solved it.6 The experimenters suggested that Americans solved it because most grew up hearing the story of Hansel and Gretel, which includes the idea of leaving a trail as you travel to an unknown place in order to find your way back. The experimenters also gave subjects another puzzle based on a common Chinese folk tale, and the percentage of solvers from each culture reversed. (The puzzle based on the Chinese folk tale, and the tale itself, are available at www.aft.org/pubs-reports/american_educator/index.htm.)
It takes a good deal of practice with a problem type before students know it well enough to immediately recognize its deep structure, irrespective of the surface structure, as Americans did for the Hansel and Gretel problem. American subjects didn’t think of the problem in terms of sand, caves, and treasure; they thought of it in terms of finding something with which to leave a trail. The deep structure of the problem was so well represented in their memory that they immediately saw that structure when they read the problem.
Looking for a deep structure helps, but it only takes you so far
Now let’s turn to the second factor that aids in transfer despite distracting differences in surface structure — knowing to look for a deep structure. Consider what would happen if I said to a student working on the band problem, “this one is similar to the garden problem.” The student would understand that the problems must share a deep structure and would try to figure out what it is. Students can do something similar without the hint. A student might think “I’m seeing this problem in a math class, so there must be a math formula that will solve this problem.” Then he could scan his memory (or textbook) for candidates, and see if one of them helps. This is an example of what psychologists call metacognition, or regulating one’s thoughts. In the introduction, I mentioned that you can teach students maxims about how they ought to think. Cognitive scientists refer to these maxims as metacognitive strategies. They are little chunks of knowledge — like “look for a problem’s deep structure” or “consider both sides of an issue” — that students can learn and then use to steer their thoughts in more productive directions.
Helping students become better at regulating their thoughts was one of the goals of the critical thinking programs that were popular 20 years ago. These programs were not very effective; their modest benefit was likely due to teaching students to use metacognitive strategies effectively. Students learned to avoid biases that most of us are prey to when we think, such as settling on the first conclusion that seems reasonable, only seeking evidence that confirms one’s beliefs, ignoring countervailing evidence, overconfidence, and others.7 Thus, a student who has been encouraged many times to see both sides of an issue, for example, is probably more likely to spontaneously think “I should look at both sides of this issue” when working on a problem.
Unfortunately, metacognitive strategies can only take you so far. Although they suggest what you ought to do, they don’t provide the knowledge necessary to implement the strategy. For example, when experimenters told subjects working on the band problem that it was similar to the garden problem, more subjects solved the problem (35% compared to 19% without the hint), but most subjects, even when told what to do, weren’t able to do it. Likewise, you may know that you ought not accept the first reasonable-sounding solution to a problem, but that doesn’t mean you know how to come up with alternative solutions or weigh how reasonable each one is. That requires domain knowledge and practice in putting that knowledge to work.
Since critical thinking relies so heavily on domain knowledge, educators may wonder if thinking critically in a particular domain is easier to learn. The quick answer is yes, it’s a little easier. To understand why, let’s focus on one domain, science, and examine the development of scientific thinking.
Is thinking like a scientist easier?
Teaching science has been the focus of intensive study for decades, and the research can be usefully categorized into two strands. The first examines how children acquire scientific concepts; for example, how they come to forgo naive conceptions of motion and replace them with an understanding of physics. The second strand is what we would call thinking scientifically, that is, the mental procedures by which science is conducted: developing a model, deriving a hypothesis from the model, designing an experiment to test the hypothesis, gathering data from the experiment, interpreting the data in light of the model, and so forth. Most researchers believe that scientific thinking is really a subset of reasoning that is not different in kind from other types of reasoning that children and adults do.8 What makes it scientific thinking is knowing when to engage in such reasoning, and having accumulated enough relevant knowledge and spent enough time practicing to do so.
Recognizing when to engage in scientific reasoning is so important because the evidence shows that being able to reason is not enough; children and adults use the proper reasoning processes on some problems, yet fail to use them on others that seem similar. For example, consider a type of reasoning about cause and effect that is very important in science: conditional probabilities. If two things go together, it’s possible that one causes the other. Suppose you start a new medicine and notice that you seem to be getting headaches more often than usual. You would infer that the medication influenced your chances of getting a headache. But it could also be that the medication increases your chances of getting a headache only in certain circumstances or conditions. In conditional probability, the relationship between two things (e.g., medication and headaches) is dependent on a third factor. For example, the medication might increase the probability of a headache only when you’ve had a cup of coffee. The relationship of the medication and headaches is conditional on the presence of coffee.
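In probability notation, the pattern in this example can be written out as follows (a sketch of the example above, not the article’s own formalism):

```latex
% The medication-headache link holds only in the presence of coffee:
\begin{aligned}
P(\text{headache} \mid \text{medication},\ \text{coffee})
  &> P(\text{headache} \mid \text{no medication},\ \text{coffee}) \\
P(\text{headache} \mid \text{medication},\ \text{no coffee})
  &= P(\text{headache} \mid \text{no medication},\ \text{no coffee})
\end{aligned}
```

Spotting a relationship like this requires comparing headache rates across all four combinations of medication and coffee, not just noticing that headaches followed the medication.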
Understanding and using conditional probabilities is essential to scientific thinking because it is so important in reasoning about what causes what. But people’s success in thinking this way depends on the particulars of how the question is presented. Studies show that adults sometimes use conditional probabilities successfully,9 but fail to do so with many problems that call for it.10 Even trained scientists are susceptible to pitfalls in reasoning about conditional probabilities (as well as other types of reasoning). Physicians are known to discount or misinterpret new patient data that conflict with a diagnosis they have in mind,11 and Ph.D.-level scientists are prey to faulty reasoning when faced with a problem embedded in an unfamiliar context.12
And yet, young children are sometimes able to reason about conditional probabilities. In one experiment,13 the researchers showed 3-year-olds a box and told them it was a “blicket detector” that would play music if a blicket were placed on top. The child then saw one of two sequences in which blocks were placed on the blicket detector (the sequences are described below). At the end of the sequence, the child was asked whether each block was a blicket. In other words, the child was to use conditional reasoning to infer which block caused the music to play.
Note that the relationship between each individual block (yellow cube and blue cylinder) and the music is the same in sequences 1 and 2. In either sequence, the child sees the yellow cube associated with music three times, and the blue cylinder associated with the absence of music once and the presence of music twice. What differs between the first and second sequence is the relationship between the blue and yellow blocks, and therefore, the conditional probability of each block being a blicket. Three-year-olds understood the importance of conditional probabilities. For sequence 1, they said the yellow cube was a blicket, but the blue cylinder was not; for sequence 2, they chose equally between the two blocks.
This body of studies has been summarized simply: Children are not as dumb as you might think, and adults (even trained scientists) are not as smart as you might think. What’s going on? One issue is that the common conception of critical thinking or scientific thinking (or historical thinking) as a set of skills is not accurate. Critical thinking does not have certain characteristics normally associated with skills — in particular, being able to use that skill at any time. If I told you that I learned to read music, for example, you would expect, correctly, that I could use my new skill (i.e., read music) whenever I wanted. But critical thinking is very different. As we saw in the discussion of conditional probabilities, people can engage in some types of critical thinking without training, but even with extensive training, they will sometimes fail to think critically. This understanding that critical thinking is not a skill is vital. It tells us that teaching students to think critically probably lies in small part in showing them new ways of thinking, and in large part in enabling them to deploy the right type of thinking at the right time.
Returning to our focus on science, we’re ready to address a key question: Can students be taught when to engage in scientific thinking? Sort of. It is easier than trying to teach general critical thinking, but not as easy as we would like. Recall that when we were discussing problem solving, we found that students can learn metacognitive strategies that help them look past the surface structure of a problem and identify its deep structure, thereby getting them a step closer to figuring out a solution. Essentially the same thing can happen with scientific thinking. Students can learn certain metacognitive strategies that will cue them to think scientifically. But, as with problem solving, the metacognitive strategies only tell the students what they should do — they do not provide the knowledge that students need to actually do it. The good news is that within a content area like science, students have more context cues to help them figure out which metacognitive strategy to use, and teachers have a clearer idea of what domain knowledge they must teach to enable students to do what the strategy calls for.
For example, two researchers14 taught second-, third-, and fourth-graders the scientific concept behind controlling variables; that is, keeping everything in two comparison conditions the same, except for the one variable that is the focus of investigation. The experimenters gave explicit instruction about this strategy for conducting experiments and then had students practice with a set of materials (e.g., springs) to answer a specific question (e.g., which of these factors determine how far a spring will stretch: length, coil diameter, wire diameter, or weight?). The experimenters found that students not only understood the concept of controlling variables, they were able to apply it seven months later with different materials and a different experimenter, although the older children showed more robust transfer than the younger children. In this case, the students recognized that they were designing an experiment and that cued them to recall the metacognitive strategy, “When I design experiments, I should try to control variables.” Of course, succeeding in controlling all of the relevant variables is another matter: that depends on knowing which variables may matter and how they could vary.
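As an illustration of what the strategy demands, here is a hypothetical sketch of the springs task in code (the factor names follow the example above; the code is my illustration, not the researchers’ materials): to test one factor, construct two setups that are identical except for that factor.

```python
# Control-of-variables strategy: to test one factor, compare two
# spring setups that differ in that factor alone.
factors = {
    "length": ("short", "long"),
    "coil_diameter": ("narrow", "wide"),
    "wire_diameter": ("thin", "thick"),
    "weight": ("light", "heavy"),
}

def contrast(factor, baseline):
    """Return two setups differing only in `factor`."""
    low = dict(baseline, **{factor: factors[factor][0]})
    high = dict(baseline, **{factor: factors[factor][1]})
    return low, high

baseline = {name: levels[0] for name, levels in factors.items()}
for f in factors:
    low, high = contrast(f, baseline)
    print(f"To test {f}: compare {low} with {high}")
```

The strategy itself is just the contrast function; knowing which factors belong in the list, and which levels they can take, is the domain knowledge the strategy cannot supply.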
Why scientific thinking depends on scientific knowledge
Experts in teaching science recommend that scientific reasoning be taught in the context of rich subject matter knowledge. A committee of prominent science educators brought together by the National Research Council put it plainly: “Teaching content alone is not likely to lead to proficiency in science, nor is engaging in inquiry experiences devoid of meaningful science content.”15
The committee drew this conclusion based on evidence that background knowledge is necessary to engage in scientific thinking. For example, knowing that one needs a control group in an experiment is important. Like having two comparison conditions, having a control group in addition to an experimental group helps you focus on the variable you want to study. But knowing that you need a control group is not the same as being able to create one. Since it’s not always possible to have two groups that are exactly alike, knowing which factors can vary between groups and which must not vary is one example of necessary background knowledge. In experiments measuring how quickly subjects can respond, for example, control groups must be matched for age, because age affects response speed, but they need not be perfectly matched for gender.
More formal experimental work verifies that background knowledge is necessary to reason scientifically. For example, consider devising a research hypothesis. One could generate multiple hypotheses for any given situation. Suppose you know that car A gets better gas mileage than car B and you’d like to know why. There are many differences between the cars, so which will you investigate first? Engine size? Tire pressure? A key determinant of the hypothesis you select is plausibility. You won’t choose to investigate a difference between cars A and B that you think is unlikely to contribute to gas mileage (e.g., paint color), but if someone provides a reason to make this factor more plausible (e.g., the way your teenage son’s driving habits changed after he painted his car red), you are more likely to say that this now-plausible factor should be investigated.16 One’s judgment about the plausibility of a factor being important is based on one’s knowledge of the domain.
Other data indicate that familiarity with the domain makes it easier to juggle different factors simultaneously, which in turn allows you to construct experiments that simultaneously control for more factors. For example, in one experiment,17 eighth-graders completed two tasks. In one, they were to manipulate conditions in a computer simulation to keep imaginary creatures alive. In the other, they were told that they had been hired by a swimming pool company to evaluate how the surface area of swimming pools was related to the cooling rate of their water. Students were more adept at designing experiments for the first task than the second, which the researchers interpreted as being due to students’ familiarity with the relevant variables. Students are used to thinking about factors that might influence creatures’ health (e.g., food, predators), but have less experience working with factors that might influence water temperature (e.g., volume, surface area). Hence, it is not the case that “controlling variables in an experiment” is a pure process that is not affected by subjects’ knowledge of those variables.
Prior knowledge and beliefs not only influence which hypotheses one chooses to test, they influence how one interprets data from an experiment. In one experiment,18 undergraduates were evaluated for their knowledge of electrical circuits. Then they participated in three weekly, 1.5-hour sessions during which they designed and conducted experiments using a computer simulation of circuitry, with the goal of learning how circuitry works. The results showed a strong relationship between subjects’ initial knowledge and how much they learned in the later sessions, in part due to how the subjects interpreted the data from the experiments they had conducted. Subjects who started with more and better integrated knowledge planned more informative experiments and made better use of experimental outcomes.
Other studies have found similar results, and have found that anomalous, or unexpected, outcomes may be particularly important in creating new knowledge, and particularly dependent upon prior knowledge.19 Data that seem odd because they don’t fit one’s mental model of the phenomenon under investigation are highly informative. They tell you that your understanding is incomplete, and they guide the development of new hypotheses. But you could only recognize the outcome of an experiment as anomalous if you had some expectation of how it would turn out. And that expectation would be based on domain knowledge, as would your ability to create a new hypothesis that takes the anomalous outcome into account.
The idea that scientific thinking must be taught hand in hand with scientific content is further supported by research on scientific problem solving; that is, when students calculate an answer to a textbook-like problem, rather than design their own experiment. A meta-analysis20 of 40 experiments investigating methods for teaching scientific problem solving showed that effective approaches were those that focused on building complex, integrated knowledge bases as part of problem solving, for example by including exercises like concept mapping. Ineffective approaches focused exclusively on the strategies to be used in problem solving while ignoring the knowledge necessary for the solution.
What do all these studies boil down to? First, critical thinking (as well as scientific thinking and other domain-based thinking) is not a skill. There is not a set of critical thinking skills that can be acquired and deployed regardless of context. Second, there are metacognitive strategies that, once learned, make critical thinking more likely. Third, the ability to think critically (to actually do what the metacognitive strategies call for) depends on domain knowledge and practice. For teachers, the situation is not hopeless, but no one should underestimate the difficulty of teaching students to think critically.