The role of chaos and fractals
One of the most pernicious misconceptions about complex systems is that complexity and chaotic behavior are synonymous. On the basis of the foregoing discussion of emergence, it is possible to put the role of chaos in complex systems into its proper perspective. Basically, if one focuses attention on the time evolution of an emergent property, such as the price movements of a stock or the daily changes in temperature, then that property may well display behavior that is completely deterministic yet indistinguishable from a random process; in other words, it is chaotic. So chaos is an epiphenomenon, one that often goes hand-in-hand with complexity but does not necessarily follow from it in all cases. What is important from a system-theoretic standpoint are the interactions of the lower-level agents—traders, drivers, molecules—which create the emergent patterns, not the possibly chaotic behavior that these patterns display. The following example illustrates the difference.
Chaos at the El Farol bar
El Farol was a bar in Santa Fe, New Mexico, U.S., at which Irish music was played every Thursday evening. As a result, the Irish-born economist W. Brian Arthur, who created the artificial stock market described above, was fond of going to the bar each Thursday. But he was not fond of doing so amid a crowd of pushing and shoving drinkers. So Arthur’s problem each Thursday was to decide whether the crowd at El Farol would be too large for him to enjoy the music. Arthur attacked the question of whether to attend by constructing a simple model of the situation.
Assume, said Arthur, that there are 100 people in Santa Fe who would like to listen to the music at El Farol, but none of them wants to go if the bar is going to be too crowded. To be specific, suppose that all 100 people know the attendance at the bar for each of the past several weeks. For example, such a record might be 44, 78, 56, 15, 23, 67, 84, 34, 45, 76, 40, 56, 23, and 35 attendees. Each individual then independently employs some prediction procedure to estimate how many people will come to the bar on the following Thursday evening. Typical predictors might be:
- the same number as last week (35);
- a mirror image around 50 of last week’s attendance (65);
- a rounded-up average of the past four weeks’ attendance (39);
- the same as two weeks ago (23).
Suppose each person decides independently to go to the bar if their prediction is that fewer than 60 people will attend; otherwise, they stay home. Once each person’s forecast and decision to attend have been made, people converge on the bar, and the new attendance figure is published the next day in the newspaper. At this time, all the music lovers update the accuracies of the predictors in their particular set, and things continue for another round. This process creates what might be termed an ecology of predictors.
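The weekly cycle of forecasting, attending, and updating can be sketched in a short simulation. This is a minimal illustration, not Arthur’s actual program: the predictor pool is a hypothetical handful of simple rules (including the four listed above), each person holds a private random subset of them, and accuracy is scored by accumulated absolute forecasting error.

```python
import random

random.seed(1)

N_PEOPLE = 100   # music lovers in Santa Fe
THRESHOLD = 60   # go only if forecast attendance is below this
WEEKS = 100

# A small illustrative pool of deterministic predictors, each mapping the
# attendance history to a forecast for next Thursday.
POOL = [
    lambda h: h[-1],                 # same as last week
    lambda h: 100 - h[-1],           # mirror image around 50
    lambda h: -(-sum(h[-4:]) // 4),  # rounded-up four-week average
    lambda h: h[-2],                 # same as two weeks ago
    lambda h: (h[-1] + h[-3]) // 2,  # average of one and three weeks ago
    lambda h: h[-5],                 # same as five weeks ago
]

# The published attendance record from the article.
history = [44, 78, 56, 15, 23, 67, 84, 34, 45, 76, 40, 56, 23, 35]

# Each person holds a private subset of three predictors plus running
# error scores (lower is better).
people = [{"preds": random.sample(range(len(POOL)), 3),
           "scores": [0, 0, 0]} for _ in range(N_PEOPLE)]

for week in range(WEEKS):
    # Each person forecasts with their currently best-scoring predictor
    # and goes to the bar if the forecast is below the threshold.
    attendance = 0
    for person in people:
        best = min(range(3), key=lambda i: person["scores"][i])
        if POOL[person["preds"][best]](history) < THRESHOLD:
            attendance += 1
    # The figure is published; everyone updates all their predictors'
    # accuracies against it.
    for person in people:
        for i, p in enumerate(person["preds"]):
            person["scores"][i] += abs(POOL[p](history) - attendance)
    history.append(attendance)

avg = sum(history[14:]) / WEEKS
print(f"long-run average attendance: {avg:.1f}")
```

Note that the dynamics are entirely deterministic once the predictor subsets are assigned; the only randomness is in that initial assignment, yet the week-to-week attendance series typically looks irregular.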
The problem faced by each person is then to forecast the attendance as accurately as possible, knowing that the actual attendance will be determined by the forecasts others make. This immediately leads to an “I-think-you-think-they-think” type of regress—a regress of a particularly nasty sort. For suppose that someone becomes convinced that 87 people will attend. If this person assumes other music lovers are equally smart, then it is natural to assume they will also see that 87 is a good forecast. But then they all stay home, negating the accuracy of that forecast! So no shared, or common, forecast can possibly be a good one; in short, deductive logic fails.
It did not take Arthur long to discover that it is difficult to formulate a useful model in conventional mathematical terms. So he decided to create the world of El Farol inside his computer. He hoped to obtain an insight into how humans reason when deductive logic offers few guidelines. As an economist, his interest was in self-referential problems—situations in which the forecasts made by economic agents act to create the world they are trying to forecast. Traditionally, economists depend on the idea of rational expectations, where homogeneous agents agree on the same forecasting model and know that others know that others know that, etc. The classical economist then asks which forecasting model would be consistent, on average, with the outcome that it creates. But how agents come up with this magical model is left unsaid. Moreover, if the agents use different models, a morass of technical—and conceptual—difficulties will arise.
Arthur’s experiments showed that, if the predictors are not too simplistic, the number of people who attend will fluctuate around an average of 60. And, in fact, whatever threshold level Arthur chose seemed always to be the long-run attendance average. In addition, the computational experiments turned up an even more intriguing pattern—at least for mathematicians: The number of people going to the bar each week is a purely deterministic function of the individual predictions, which themselves are deterministic functions of the past attendance. This means that there is no inherently random factor dictating how many people show up. Yet the actual number going to hear the music in any week looks more random than deterministic. The graph in the figure shows a typical record of attendance for a 100-week period when the threshold level was set at 60.
These experimental observations lead to fairly definite and specific mathematical conjectures:
Under suitable conditions (to be determined) on the set of predictors, the average number of people who actually go to the bar converges to the threshold value over time.
Under the same set of suitable conditions on the sets of predictors, the attendance is a “deterministically-random” process; that is, it is chaotic.
Here then is a prototypical example of a complex, adaptive system in which a global property (bar attendance) emerges from the individual decisions of lower-level agents (the 100 Irish-music fans). Moreover, the temporal fluctuation of the emergent property appears to display chaotic behavior. But it is not hard to remove the chaos. For example, if every person always uses the rule, “Go to the bar regardless of the past record of attendance,” then the attendance will stay fixed at 100 for all time. This is certainly not chaotic behavior.
Fractals
A common first step in analyzing a dynamical system is to determine which initial states exhibit similar behavior. Because nearby states often lead to very similar behavior, they can usually be grouped into continuous sets or graphical regions. If the system is not chaotic, this geometric decomposition of the space of initial states into discrete regions is rather straightforward, with the regional borders given by simple curves. But when the dynamical system is chaotic, the curves separating the regions are complicated, highly irregular objects termed fractals.
A characteristic feature of chaotic dynamical systems is the property of pathological sensitivity to initial positions. This means that starting the same process from two different—but frequently indistinguishable—initial states generally leads to completely different long-term behavior. For instance, in American meteorologist Edward Lorenz’s weather model (see the figure), almost any two nearby starting points, indicating the current weather, will quickly trace diverging trajectories and will quite frequently end up in different “lobes,” which correspond to calm or stormy weather. The Lorenz model’s twin-lobed shape gave rise to the somewhat facetious “butterfly effect” metaphor: The flapping of a butterfly’s wings in China today may cause a tornado in Kansas tomorrow. More recent work in engineering, physics, biology, and other areas has shown the ubiquity of fractals throughout nature.
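This sensitivity is easy to demonstrate numerically. The sketch below integrates Lorenz’s three equations with a simple fixed-step Euler scheme (the step size, horizon, and the 10⁻⁸ perturbation are illustrative choices, and the classic parameter values σ = 10, ρ = 28, β = 8/3 are used): two initial states differing by one part in a hundred million end up far apart.

```python
# Sensitivity to initial conditions in the Lorenz system,
# integrated with a basic forward-Euler scheme.

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0
DT = 0.005  # integration step (illustrative choice)

def lorenz_step(x, y, z):
    """Advance the Lorenz equations by one Euler step."""
    dx = SIGMA * (y - x)
    dy = x * (RHO - z) - y
    dz = x * y - BETA * z
    return x + DT * dx, y + DT * dy, z + DT * dz

def evolve(state, steps):
    for _ in range(steps):
        state = lorenz_step(*state)
    return state

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-8)  # practically indistinguishable start

steps = 5000  # roughly 25 time units
xa, ya, za = evolve(a, steps)
xb, yb, zb = evolve(b, steps)
sep = ((xa - xb) ** 2 + (ya - yb) ** 2 + (za - zb) ** 2) ** 0.5
print(f"separation after {steps} steps: {sep:.4f}")
```

Both trajectories remain bounded on the butterfly-shaped attractor, but the tiny initial difference grows by many orders of magnitude—the numerical face of the butterfly effect.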