heuristic

heuristic, in cognitive psychology, a process of intuitive judgment, operating under conditions of uncertainty, that rapidly produces a generally adequate, though not ideal or optimal, decision, solution, prediction, or inference. Heuristics function as mental shortcuts that produce serviceable results quickly and with little if any effort on the part of the thinking subject, who is most often unaware of their operation. Because they do not determine an ideal or optimal result but merely one that is “good enough,” heuristics significantly reduce the cognitive load of the problems to which they are applied. Although the options they generate are usually adequate, in some circumstances heuristics give rise to cognitive biases and may lead to grossly suboptimal or even incorrect results.

Heuristics reduce the complexity of a decision, problem, or question by neglecting to take into account all relevant and available information. Often called “mental shortcuts” or “rules of thumb,” heuristics are catchall strategies used in a variety of scenarios. They are commonly contrasted with algorithms, which are comprehensive step-by-step processes that reliably end with a correct solution for specific situations. For example, when baking a pie, following a recipe exactly would be considered an algorithmic approach, whereas using trial and error to determine proper ingredients or baking time would involve the use of one or more heuristics.
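
The difference can be made concrete in code. The sketch below is a hypothetical illustration (not drawn from the article): it contrasts an exhaustive, algorithmic way of making change with a greedy rule of thumb that always takes the largest coin that fits. The heuristic is fast and usually serviceable, but for some sets of denominations it settles for a result that is merely “good enough.”

```python
def make_change_exhaustive(amount, coins):
    """Algorithmic approach: systematically search every combination
    and return one that uses the fewest coins (guaranteed optimal)."""
    best = None

    def search(remaining, used):
        nonlocal best
        if remaining == 0:
            if best is None or len(used) < len(best):
                best = list(used)
            return
        if best is not None and len(used) >= len(best):
            return  # prune branches already worse than the best found
        for c in coins:
            if c <= remaining:
                used.append(c)
                search(remaining - c, used)
                used.pop()

    search(amount, [])
    return best


def make_change_greedy(amount, coins):
    """Heuristic approach: always take the largest coin that still fits.
    Fast and usually adequate, but not guaranteed to be optimal."""
    result = []
    for c in sorted(coins, reverse=True):
        while c <= amount:
            result.append(c)
            amount -= c
    return result


# With denominations 1, 3, and 4 and a target of 6, the heuristic settles
# for three coins (4 + 1 + 1), while the exhaustive algorithm finds two (3 + 3).
print(make_change_greedy(6, [1, 3, 4]))      # [4, 1, 1]
print(make_change_exhaustive(6, [1, 3, 4]))  # [3, 3]
```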

Although heuristics are generally applied unconsciously or automatically, they are also sometimes used deliberately as an explicit strategy of judgment.

History

The term heuristic is derived from the Greek heuriskein, which means “to discover.” An early figure in the study of heuristics was the American social scientist Herbert A. Simon, who was awarded the Nobel Prize for Economics in 1978 for his research into decision making within economic organizations. Simon coined the term satisficing to describe the process of finding the first solution that is satisfactory and sufficient instead of one that is fully optimized, and he characterized the limited reasoning capacities of the human mind as “bounded rationality.”

The bulk of the foundational research on heuristics in the 20th century was conducted by the Israeli American psychologist Daniel Kahneman and the Israeli psychologist Amos Tversky beginning in the 1960s. The pair outlined the mental shortcuts used in unconscious decision making and the cognitive biases that result from their operation. Kahneman and Tversky’s research had an immense impact on psychology and other social sciences, particularly economics, law, and political science. The theory of heuristics and the notion of bounded rationality offered a formidable challenge to the traditional model of rational choice, presupposed in classical economic theory and other social sciences, according to which human beings are rational actors whose decisions are determined by informed assessments of the probability and utility of possible outcomes. Kahneman was a corecipient of the 2002 Nobel Prize for Economics “for having integrated insights from psychological research into economic science, especially concerning human judgment and decision-making under uncertainty,” in the words of the Royal Swedish Academy of Sciences. Kahneman shared the award with the American economist Vernon L. Smith.

The heuristic theory of Kahneman and Tversky

In their influential paper “Judgment Under Uncertainty: Heuristics and Biases” (1974) and in later works, Kahneman and Tversky examined the various biases that can result from three main heuristics, which they called representativeness, availability, and anchoring and adjustment.

The representativeness heuristic is a mental shortcut that is used in judgments of the probability that a particular object belongs to a particular class, that a particular event resulted from a particular process, or that a particular process will result in a particular event. The basis of the judgment is the degree to which the object or event is representative or prototypical of, or resembles, the corresponding class or process—the greater the degree of perceived representativeness or resemblance, the higher the probability. For example, an individual who is perceived as being shy, withdrawn, meek, and tidy and having a need for order and a passion for detail will be judged more likely to be a librarian than to be, say, a salesperson, because the individual’s personality more closely resembles the stereotype of a librarian than it does the stereotype of a salesperson. The representativeness heuristic can lead to biased judgments in some contexts, because it does not take into account objective factors that affect the probability of the outcome in question, such as, in the present case, the small proportion of librarians to salespersons in the general population. Other relevant factors that tend to be ignored or misunderstood in judgments reliant upon representativeness are sample sizes, the nature of chance (specifically, the fact that a short sequence of events produced by a random process may not appear to have been randomly produced), and the degree to which certain kinds of events, such as a change in the value of a stock or the outcome of a sporting event, can actually be predicted from the varied evidence available.
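
The role of the neglected base rate can be shown with a short worked calculation. The figures below are invented for the sketch (the article gives none); only the reasoning pattern, Bayes’ rule applied to the two occupations, is the point: even when the description fits the librarian stereotype far better, the much larger number of salespersons can make “salesperson” the more probable answer.

```python
# Hypothetical base rates: the share of each occupation in the population.
p_librarian = 0.002
p_salesperson = 0.06

# Assumed likelihoods: how strongly the "shy, tidy, detail-loving"
# description fits a typical member of each group.
p_desc_given_librarian = 0.8
p_desc_given_salesperson = 0.1

# Bayes' rule, restricted to these two occupations:
# P(occupation | description) is proportional to
# P(description | occupation) * P(occupation).
weight_librarian = p_desc_given_librarian * p_librarian
weight_salesperson = p_desc_given_salesperson * p_salesperson
total = weight_librarian + weight_salesperson

print(f"P(librarian | description)   = {weight_librarian / total:.2f}")
print(f"P(salesperson | description) = {weight_salesperson / total:.2f}")

# Despite the strong resemblance to the librarian stereotype, the far
# larger number of salespersons makes "salesperson" the better bet
# (about 0.79 vs 0.21 with these made-up figures).
```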

The availability heuristic is used in judgments of the size or frequency of a class of objects or the probability of an event. The basis of the judgment is the ease or accuracy with which instances of the class or occurrences of the event are recalled, imagined, or otherwise brought to mind. Classes whose instances are easily brought to mind tend to be judged as being larger or more numerous than classes whose instances are less retrievable, and salient or recent events tend to be judged as being more common or more likely to recur than less impressive or earlier events. For example, experiments have shown that people will judge words whose first letter is r to be more numerous in the English language than words whose third letter is r—because it is easier to think of or search for words by their first letter than by their third letter—even though words of the latter type are in fact more common in English than words of the former type. Kahneman and Tversky also note, regarding the influence of salience and recency, that “the impact of seeing a house burning on the subjective probability of such accidents is probably greater than the impact of reading about a fire in the local paper” and “it is a common experience that the subjective probability of traffic accidents rises temporarily when one sees a car overturned by the side of the road.” The availability heuristic leads to biased judgments of the foregoing sorts as well as to miscalculations of the probability of imaginable events—the more easily imaginable events being judged more likely to occur than those less easily imagined—and to mistaken beliefs in the natural correlation of events that have been associated in the limited experience of individuals.
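
For comparison, the letter-position question is the kind that yields to counting rather than recall. The sketch below is a minimal illustration, assuming a plain-text sample of English at a hypothetical path; it tallies words with r in the first and in the third position directly, which sidesteps the retrieval asymmetry that the availability heuristic exploits.

```python
import re
from collections import Counter

def letter_position_counts(path, letter="r"):
    """Count words with `letter` in the first and in the third position
    across a plain-text corpus (a direct tally, not a memory search)."""
    counts = Counter()
    with open(path, encoding="utf-8") as f:
        for word in re.findall(r"[a-z]+", f.read().lower()):
            if len(word) >= 3:
                if word[0] == letter:
                    counts["first position"] += 1
                if word[2] == letter:
                    counts["third position"] += 1
    return counts

# "english_sample.txt" is a placeholder for any large sample of running
# English text.
# print(letter_position_counts("english_sample.txt"))
```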

Kahneman and Tversky’s final heuristic, anchoring and adjustment, is used in judgments or estimates of unknown quantities in a variety of contexts. The judgment begins with an initial numerical value (the anchor) that is tentatively established by the context or the nature of the problem, by a partial computation, or arbitrarily; it then proceeds through a series of adjustments by which the number is raised or lowered until a final value is reached. However, the adjustments are typically incomplete, particularly when the anchor is arbitrary, with the result that estimates are systematically biased toward their initial values. Anchoring also distorts judgments of the probability of a series of conjunctive or disjunctive events: because people anchor on the probability of a single component event and adjust insufficiently, they tend to overestimate the probability that every event in the series will occur and to underestimate the probability that at least one of them will.
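
One demonstration from the 1974 paper asked subjects to estimate, within a few seconds, the product of the numbers 1 through 8 presented in either ascending or descending order; the partial product computed in the first moments serves as the anchor. The sketch below (Python is used here only for illustration) works out the true value and the kind of partial-computation anchors involved.

```python
from math import prod

ascending = [1, 2, 3, 4, 5, 6, 7, 8]
descending = list(reversed(ascending))

true_value = prod(ascending)              # 40320, whichever way it is written

# A plausible anchor: the product of only the first few factors that a
# person has time to compute before guessing.
anchor_ascending = prod(ascending[:4])    # 1 * 2 * 3 * 4 = 24
anchor_descending = prod(descending[:4])  # 8 * 7 * 6 * 5 = 1680

print(true_value, anchor_ascending, anchor_descending)

# Because adjustment away from the anchor is insufficient, the ascending
# sequence (small anchor) yields much lower estimates than the descending
# one (large anchor): Kahneman and Tversky reported median estimates of
# roughly 512 and 2,250, both far below the true value of 40,320.
```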

In later research several additional and alternative general-purpose heuristics were proposed, including affect (a valuation based on the subject’s positive or negative mood or emotional reaction), scarcity (an excessive valuation of desirable objects perceived to be in short supply), familiarity (an excessive valuation of familiar objects, people, or places), and trial and error.

Heuristics and mental systems

The average person makes thousands of decisions every day. The vast majority of these choices are made unconsciously on the basis of heuristics. In his later work Thinking, Fast and Slow (2011), Kahneman argued that these decisions are generally completed in one of two parallel and simultaneously operating mental systems for thinking, problem solving, and decision making. In system 1, thinking relies on general observations and unconscious processing to arrive at conclusions more or less effortlessly and automatically. Such decisions are monitored and possibly supplemented, revised, or overridden in system 2, where thinking is slow and effortful and draws upon conscious evaluations of evidence in order to arrive at sound conclusions. Other two-system models have posited that the heuristics of system 1 are intentionally employed in cases where the subject regards the decision as being relatively less interesting or less important and that system 2 is reserved for judgments that are compelling and consequential.

Kahneman and Tversky’s theory of heuristics and biases has prompted a variety of criticisms. Among them are that the theory undervalues the “ecological validity,” or overall usefulness, of heuristic-guided reasoning—a characteristic understood to have developed through the course of human evolution; that the biased judgments experimentally identified by heuristics researchers amount to laboratory-created curiosities that do not accurately reflect the functioning of heuristics in the real world; and that the theory misrepresents human rationality itself by treating it as independent of human intuitions.

Michael McDonough
The Editors of Encyclopaedia Britannica