Kodi Arfer / Writing / Introduction to Chaos

Some Philosophy and Definitions

How much hope do we mortals have to achieve true understanding and mastery of nature? Doubtless our own strengths and limitations are what primarily determine the answer to this question. That, essentially, is why I'm a psychology major. But while I have my math-major hat on, I'd like to approach this issue from the other end and ask: how tractable is the universe?

The outlook is actually quite good. After millennia of sluggish progress, humans have made great strides since the seventeenth century in physics, technology, medicine, and biology. Science is finally catching up with mathematics. Mathematics played a large role in making all this possible, of course. We mustn't neglect to be grateful to the universe for being so wonderfully mathematical, for accomplishing all its dizzying complexity with a bunch of unyieldingly consistent and hence eminently knowable rules. Even in the nastiest cases, when a natural process is nondeterministic down to the participating elementary particles, the process remains governed by laws of probability.

Yet actually, there is no guarantee that mathematics is always tractable. The existence of unprovable true statements, undecidable formal languages, and undefinable real numbers demonstrates how any formal system strong enough to be useful is also strong enough to formulate questions that it's too weak to answer. This may be understood as an intractable characteristic of mathematical entities in theory. My concern in these pages is with an intractable characteristic of mathematical entities in practice, a way in which the properties of certain systems interact with the unavoidable shortcomings of applied mathematics to royally screw us over. We're going to talk about chaotic functions, functions concerning which, in a particular but important sense, approximation is useless.

Okay, Can We Do Some Actual Mathematics Now?

If you insist.

Definition
Given f : X → X, f^n is defined as f ∘ f ∘ ⋯ ∘ f for n instances of f; that is, f composed with itself n - 1 times. f^0 is defined to be the identity function.
Definition
Given f : X → X, the dynamical system defined by f is the sequence (f^n)_{n=0}^∞.

Why is it called "dynamical"? Because we imagine that the successive images of X under f show how X changes over time. Indeed, many real-world applications of dynamical systems are models of how things actually do change over time. For example, the doubling function on [0, ∞) defines a dynamical system that models unconstrained population growth. Each application of f represents the passage of a certain amount of time in the model. In the population-growth example, this amount is equal to the doubling time of the population.
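
If you like to see such things in code, here's a minimal Python sketch (the names `iterate` and `double` are my own) of computing f^n(x) by brute iteration, with the doubling function standing in for f:

```python
def iterate(f, n, x):
    """Return f^n(x): the result of applying f to x a total of n times."""
    for _ in range(n):
        x = f(x)
    return x

double = lambda x: 2 * x  # the doubling function on [0, infinity)

# After 10 doubling times, a population of 1 has grown to 2^10 = 1024.
print(iterate(double, 10, 1))  # -> 1024
```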

Given that a dynamical system is defined recursively, some can be expressed in a simple closed form and others can't. The dynamical system on ℝ defined by f(x) = x^2 is just f^n(x) = x^(2^n). But given f(x) = 3x^2 + sin x,

f^0(x) = x,
f^1(x) = 3x^2 + sin x,
f^2(x) = 27x^4 + 18x^2 sin x + 3(sin x)^2 + sin(sin x + 3x^2),
etc.
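
In case you don't trust my algebra, the closed form of f^2 is easy to spot-check numerically. A Python sketch (the test points and tolerances are arbitrary choices of mine):

```python
import math

f = lambda x: 3 * x**2 + math.sin(x)

# The expanded form of f^2 given above.
f2 = lambda x: (27 * x**4 + 18 * x**2 * math.sin(x)
                + 3 * math.sin(x)**2
                + math.sin(math.sin(x) + 3 * x**2))

# f(f(x)) and the expansion agree to within floating-point rounding.
for x in [-2.0, -0.5, 0.0, 1.0, 3.7]:
    assert math.isclose(f(f(x)), f2(x), rel_tol=1e-12, abs_tol=1e-12)
```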

We can learn about the global behavior of a dynamical system by examining what happens to individual points as f is repeatedly applied to them.

Definition
Given f : X → X and x ∈ X, the orbit of x under f, denoted O(x), is the sequence (f^n(x))_{n=0}^∞. We call x a periodic point if f^m(x) = x for some m ≥ 1. The least such m is called the period of x.

For example, under the function f : ℝ → ℝ defined by f(x) = -(3/2)x^2 + (5/2)x + 1, 0 is a periodic point with period 3, since f(0) = 1, f(1) = 2, and f(2) = 0.
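
Orbits and periods like these are easy to find by brute force. A Python sketch (the helper names are mine):

```python
def orbit(f, x, length):
    """Return the first `length` terms of the orbit of x under f."""
    out = [x]
    for _ in range(length - 1):
        out.append(f(out[-1]))
    return out

def period(f, x, max_m=1000):
    """Return the least m >= 1 with f^m(x) == x, or None if none is found."""
    y = x
    for m in range(1, max_m + 1):
        y = f(y)
        if y == x:
            return m
    return None

f = lambda x: -3 * x**2 / 2 + 5 * x / 2 + 1

print(orbit(f, 0, 7))  # -> [0, 1.0, 2.0, 0.0, 1.0, 2.0, 0.0]
print(period(f, 0))    # -> 3
```

(Exact equality is fine here because the orbit of 0 stays on numbers that floating point represents exactly; in general one would compare with a tolerance.)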

Periodicity is a nice property. If we know a point is periodic, we can predict its entire orbit from finitely many of its successive images. You might think, then, that if the set of periodic points P is dense in X—that is, if X is just the closure of P—and if f is continuous, then f must be exceptionally well behaved. However, you would be dead wrong.

Consider the tent function, which is the function T : [0, 1] → [0, 1] defined by T(x) = 2x for x ≤ 1/2 and T(x) = 2 - 2x for x ≥ 1/2. (This example is from Adams and Franzosa, 2008.) I hope you agree that T is continuous. You can think of T's effect on [0, 1] as stretching the interval out to twice its length and then folding it back onto itself.

[An illustration thereof]

Here's a graph of T:

[Graph of T]

Here's a graph of T^3:

[Graph of T^3]

You get the idea: for each n, T^n has 2^(n-1) peaks. As n increases, T^n gets more and more crowded. And yet, the periodic points of this dynamical system are dense in [0, 1]. A theorem I won't prove here says that any number whose binary representation is of the form 0.(a_1 a_2 … a_n 0), where the parenthesized block repeats forever, is a periodic point. (The trailing 0 matters: 1/3 = 0.(01), whose block ends in 1, maps to the fixed point 2/3 = 0.(10) and so never returns to itself.) The idea is that for any repeating binary x, the effect of T on x is as follows: T shifts the expansion one digit to the left, and if the digit shifted off was a 1, it also complements every remaining digit, since 2 - 1.a_2 a_3 … = 1 - 0.a_2 a_3 …, and subtracting from 1 = 0.111… flips each digit.

For example, 0.(10110) = 22/31 is a periodic point:

T^0(0.(10110)) = 0.(10110);
T^1(0.(10110)) = 0.(10010);
T^2(0.(10110)) = 0.(11010);
T^3(0.(10110)) = 0.(01010);
T^4(0.(10110)) = 0.(10100);
T^5(0.(10110)) = 0.(10110).

It's easy to show that numbers of the form 0.(a_1 a_2 … a_n 0) are dense in [0, 1].
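
For the skeptical, here's a Python sketch that checks the orbit of 22/31 with exact rational arithmetic, so no rounding can sneak in:

```python
from fractions import Fraction

def T(x):
    """The tent function on [0, 1]."""
    return 2 * x if x <= Fraction(1, 2) else 2 - 2 * x

orb = [Fraction(22, 31)]  # 22/31 = 0.(10110) in binary
for _ in range(5):
    orb.append(T(orb[-1]))

# The numerators run 22, 18, 26, 10, 20, 22: after five steps we're back
# where we started, so 22/31 is a periodic point of period 5.
print([q.numerator for q in orb])  # -> [22, 18, 26, 10, 20, 22]
```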

The graphs of T make plain another important property of T: topological transitivity. This means that, given any nonempty open U, V ⊆ [0, 1], there are some x ∈ U and n such that T^n(x) ∈ V. Essentially, the image of any open set, no matter how small, eventually covers the entire space. We'd show this for T by picking an n large enough that U contains an interval on which T^n is surjective.
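
Here's a numerical illustration of transitivity (a sketch, not a proof: I sample a tiny U at finitely many points, and n = 16 is large enough that U contains a dyadic interval of length 2^-16 on which T^16 is surjective):

```python
def T(x):
    """The tent function on [0, 1]."""
    return 2 * x if x <= 0.5 else 2 - 2 * x

# U: a tiny open interval around 0.3, sampled at 10,001 points.
lo, hi = 0.29997, 0.30003
points = [lo + (hi - lo) * i / 10000 for i in range(10001)]

# After 16 applications of T, the images stretch across essentially
# all of [0, 1], so they land in any given open V.
for _ in range(16):
    points = [T(x) for x in points]

print(min(points), max(points))
```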

This dynamical system has predictable aspects everywhere you look, and it's certainly deterministic, but its images are all over the place: no two open sets can be reliably distinguished. In a nutshell, it's chaotic.

Definition
A function f : X → X is chaotic if its periodic points are dense in X and it's topologically transitive.

There are many competing definitions of topological chaos, but I like this one because it makes explicit the strange duality of a chaotic function. Also, the requirements it places on f aren't too strong. Consider the following property.

Definition
Suppose (X, d) is a metric space. A function f : X → X sensitively depends on initial conditions if there exists δ > 0 such that for any x ∈ X and ε > 0, there exist y ∈ B_ε(x) and n with d(f^n(x), f^n(y)) > δ.

Sensitive dependence on initial conditions is what I was referring to when I spoke earlier of approximation being useless. You can think of δ as a kind of minimum error of approximating a point in X: no matter how good your approximation is, so long as it isn't exact, f may eventually bring x and your approximation more than δ units apart. Look how widely T scatters a few numbers in the narrow interval [0.29997,0.30003]:

T^0    0.29997  0.29998  0.29999  0.30000  0.30001  0.30002  0.30003
T^1    0.59994  0.59996  0.59998  0.60000  0.60002  0.60004  0.60006
T^2    0.80012  0.80008  0.80004  0.80000  0.79996  0.79992  0.79988
T^3    0.39976  0.39984  0.39992  0.40000  0.40008  0.40016  0.40024
T^4    0.79952  0.79968  0.79984  0.80000  0.80016  0.80032  0.80048
T^5    0.40096  0.40064  0.40032  0.40000  0.39968  0.39936  0.39904
T^6    0.80192  0.80128  0.80064  0.80000  0.79936  0.79872  0.79808
T^7    0.39616  0.39744  0.39872  0.40000  0.40128  0.40256  0.40384
T^8    0.79232  0.79488  0.79744  0.80000  0.80256  0.80512  0.80768
T^9    0.41536  0.41024  0.40512  0.40000  0.39488  0.38976  0.38464
T^10   0.83072  0.82048  0.81024  0.80000  0.78976  0.77952  0.76928
T^11   0.33856  0.35904  0.37952  0.40000  0.42048  0.44096  0.46144
T^12   0.67712  0.71808  0.75904  0.80000  0.84096  0.88192  0.92288
T^13   0.64576  0.56384  0.48192  0.40000  0.31808  0.23616  0.15424
T^14   0.70848  0.87232  0.96384  0.80000  0.63616  0.47232  0.30848
T^15   0.58304  0.25536  0.07232  0.40000  0.72768  0.94464  0.61696
T^16   0.83392  0.51072  0.14464  0.80000  0.54464  0.11072  0.76608
T^17   0.33216  0.97856  0.28928  0.40000  0.91072  0.22144  0.46784
T^18   0.66432  0.04288  0.57856  0.80000  0.17856  0.44288  0.93568
T^19   0.67136  0.08576  0.84288  0.40000  0.35712  0.88576  0.12864
T^20   0.65728  0.17152  0.31424  0.80000  0.71424  0.22848  0.25728
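
This table takes only a few lines of Python to regenerate (a sketch; `rows[n]` is my name for the row of values of T^n):

```python
def T(x):
    """The tent function on [0, 1]."""
    return 2 * x if x <= 0.5 else 2 - 2 * x

# The seven starting points from the table.
points = [0.29997 + 0.00001 * i for i in range(7)]
rows = [points]
for _ in range(20):
    rows.append([T(x) for x in rows[-1]])

for n, row in enumerate(rows):
    print("T^%-2d" % n, "  ".join("%.5f" % x for x in row))

# By T^20, the seven starting points, initially within 0.00006 of one
# another, have been spread across more than half of [0, 1].
spread = max(rows[20]) - min(rows[20])
```

(Floating-point error is itself doubled by each application of T, but over 20 steps it stays many orders of magnitude below the divergence we're watching.)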

It turns out that for continuous functions, chaos implies sensitivity. But I won't ask you to take my word for it this time. We'll prove it.