Elegance and Complexity

Chaitin and others refer to the minimal-representation issue in algorithmic information theory (AIT) as "elegance"--the situation where we have an expression with the property that no smaller expression yields the same result.
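To make that concrete in Mathematica terms (this is my sketch, not Chaitin's formalism), take two held expressions that evaluate to the same list, using LeafCount as a crude stand-in for expression size.

e1 = Hold[Range[10]];
e2 = Hold[{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}];
ReleaseHold[e1] === ReleaseHold[e2]  (* True: both yield the same result *)
LeafCount /@ {e1, e2}                (* {3, 12}: e1 is the more "elegant" one *)

By this crude yardstick, Range[10] wins: it is a smaller expression with the same output.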

In cognitive science we're often interested in the complexity of a stimulus or system. Complexity in this particular case is roughly equivalent to the inverse of elegance, in an Occam's razor fashion. The (very, very rough) idea is that as something becomes more "complex," it becomes more difficult to do / harder to remember / harder to interpret / insert your limitological position here. It caught on quite well in the fields of attention, perception, and, oddly enough, aesthetics. The mathematician George Birkhoff believed that the aesthetic value of an object could be determined as a simple ratio M = O/C, where O represents the amount of order in the object and C represents the amount of complexity [10]. The rather obvious problem with this ratio is that its two constituents are quite difficult to define in a truly operational and meaningful way. Complexity in this sense has been defined in an almost countless number of ways. One example from the world of shape is the number of vertices in a polygon.
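Here is a sketch of that vertex-count measure, along with Birkhoff's ratio; the function names, the example polygons, and any numbers fed to birkhoffM are illustrative inventions, not definitions from the literature.

polygonComplexity[Polygon[pts_List]] := Length[pts]  (* C: the vertex count *)
birkhoffM[order_, complexity_] := order/complexity   (* Birkhoff's M = O/C *)

square    = Polygon[{{0, 0}, {1, 0}, {1, 1}, {0, 1}}];
irregular = Polygon[{{0, 0}, {1, .1}, {.8, 1}, {.1, .6}}];
polygonComplexity /@ {square, irregular}  (* {4, 4}: identical by this measure *)

That {4, 4} output is exactly the trouble raised next: two quite different-looking quadrilaterals score identically.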

Two shapes, four vertices each. Is one more "complex" than the other? Depends how you look at it. Is one more aesthetically pleasing than the other? Again, perspective is everything. (See Komar and Melamid's Scientific Guide to Art for further details [11].) I'm sure you'd agree that a vertex count probably isn't quite an adequate description of the situation. There have been coding-based approximations: in the above case, you may have a better chance of the measure working out if you described the rectangle in a raster-encoding sort of way, sort of like the following.

Thickness 1
Draw a black line from 0 to .
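In Mathematica, that two-line description might look something like the code below; the endpoint lost from the text above is assumed here to be 1, and Thickness[1] (a stroke as wide as the whole plot) is the trick that lets a single line primitive fill a solid region.

(* One stroke at maximal thickness: the whole rectangle in one primitive.
   The endpoint {1, 0} is an assumption; the original value was lost. *)
Graphics[{Thickness[1], Line[{{0, 0}, {1, 0}}]}]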

The economy is clear when you look at the metacode you'd need to encode the polygon on the right.

Draw a black line from .2 to .3.
Draw a black line from .21 to .32.
Draw a black line from .22 to .35.
...
Draw a black line from 0 to 0.01.
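One way to see this economy numerically (a rough sketch, with made-up stand-in shapes and Compress as a proxy for description length) is to compare the compressed sizes of two rasters: a solid square and a ragged region whose boundary wanders from row to row.

square = Table[If[10 <= i <= 50 && 10 <= j <= 50, 1, 0], {i, 64}, {j, 64}];
ragged = Table[If[10 <= i <= 50 && 10 <= j <= 20 + Mod[7 i, 23], 1, 0],
   {i, 64}, {j, 64}];
StringLength /@ Compress /@ {square, ragged}
(* the ragged raster typically compresses to a much longer string *)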

But what about the following?

Where does the complexity lie now? Is it in the shape? That depends on whether our notion of shape includes color. If I were to turn them both into monochrome images and apply a little Gaussian blur to the right-hand image, as a nearsighted person might see it, all of a sudden the difference in apparent complexity vanishes. We have two different potential measures depending on which representation we choose. This is why information theory has had tough times in cognitive science--it is difficult to settle on a representation, and hence a measure, in a meaningful way.
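A minimal sketch of that "nearsighted" transform: convert to grayscale, then blur. The striped image below is a stand-in for the article's right-hand figure; after blurring, its fine structure averages out to nearly the same uniform gray as the solid image.

solid   = Image[ConstantArray[0.5, {64, 64}]];
striped = Image[Table[If[EvenQ[j], 0., 1.], {i, 64}, {j, 64}]];
nearsighted[img_] := GaussianFilter[ColorConvert[img, "Grayscale"], 4]
nearsighted /@ {solid, striped}  (* both now read as near-uniform gray *)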

But what of Chaitin and his ideas of algorithmic information theory? Luckily, complexity is a far better-defined concept in the case of an algorithm--how many symbols do you need to define the algorithm? To simplify matters, we can define some arbitrary Turing machine that interprets some arbitrary symbol system to give us a sort of starting point.
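Here is one crude way to play with the idea in Mathematica (my sketch; characters of program text stand in for the tape symbols of some fixed machine, with the Mathematica evaluator playing the role of the interpreter).

progs = {"Range[100]",
   StringRiffle[ToString /@ Range[100], {"{", ",", "}"}]};
StringLength /@ progs             (* program size, counted in symbols *)
SameQ @@ (ToExpression /@ progs)  (* True: both programs give the same output *)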


