The Chaitin Trilogy

I first became aware of Chaitin's work while reading John Horgan's excellent book, The End of Science [2]. In it, Horgan plays the part of a science reporter cum voyeur and, in the chapter "The End of Limitology," attends a Santa Fe Institute conference titled "The Limits of Scientific Knowledge," a gathering that features, among its participants, Chaitin in the role of agent provocateur. It reads as a superb encounter, full of the usual scientific angst and posturing that is so entertaining to observe. From this introduction it was only a matter of time before I started down the unknowable road myself.

Chaitin's three latest books [3, 4, 5] form a nice triangular base from which to support and explore the concepts underlying algorithmic information theory (AIT)--a clever blend of Gödel, Turing, and Shannon that Chaitin developed in his late teens, independently of the similar work of Kolmogorov and Solomonoff. Indeed, there are several prequel works [6, 7, 8], but this set of three volumes packages the material in a quite digestible fashion--even for an easily addled cognitive scientist.


For the uninitiated, the general ideas of information theory revolve around representations--how much information do you need to represent a given thing? It goes beyond the mere semiotics of the situation by also looking at compression--how much can we compress something and still recover the original? Does the information rely on the accuracy of the sender, the receiver, or both? Can the compression be "lossy," as in JPEG image compression, where some of the original information is discarded in favor of a visually adequate re-representation? Or must it be "lossless," as with an algorithm that generates the digits of π on demand? Areas of coding theory and complexity theory come into play here as well.
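
Since this is, after all, The Mathematica Journal, the lossless case is easy to play with directly. Here is a minimal sketch of my own (assuming a reasonably current Mathematica; none of it comes from Chaitin's books): a highly patterned string compresses dramatically, a pseudorandom one barely at all, and Uncompress recovers each original exactly. And, in the spirit of the π example, a one-line program can stand in for arbitrarily many digits.

    (* a patterned string and a pseudorandom digit string, each 10,000 characters *)
    regular = StringJoin[Table["ab", {5000}]];
    random = StringJoin[ToString /@ RandomInteger[9, 10000]];

    (* Compress is lossless: Uncompress returns the original exactly *)
    Uncompress[Compress[regular]] === regular   (* True *)
    Uncompress[Compress[random]] === random     (* True *)

    (* the pattern shrinks dramatically; the random string hardly at all *)
    StringLength[Compress[regular]]
    StringLength[Compress[random]]

    (* the pi example: this one-liner (a hypothetical helper of mine) is itself
       a "compressed" representation of as many digits as you like *)
    piDigits[n_] := StringJoin[ToString /@ First[RealDigits[N[Pi, n]]]]
    StringLength[piDigits[10000]]

That last point is the heart of AIT: the digits of π pass every statistical test for randomness, yet their algorithmic information content is no larger than the little program that produces them.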

From the viewpoint of a cognitive scientist, "information theory" is an all too familiar subject, having driven one of the paradigm shifts in my field of choice (since overtaken by connectionism, which is just behaviorism revisited, albeit at a smaller scale). If you've dialed a phone recently (or ever, actually), you've dealt with the field's concepts writ large. Claude Shannon of Bell Labs essentially originated the field while working out how much stuff the phone company could cram down its wires--"stuff" being, in essence, information [9]. His work was wildly influential well beyond signals and systems and, to be sure, invaded cognitive science as well.
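
Shannon's "stuff" has a precise measure: the entropy of a message, in bits per symbol, computed from the relative frequencies of its symbols. The definition below is my own illustration (the name entropy is mine, not standard notation), but the formula is the familiar -Σ p log2 p.

    (* Shannon entropy in bits per symbol, from symbol frequencies;
       entropy is a hypothetical helper name, not a built-in *)
    entropy[s_String] :=
      With[{p = N[(Last /@ Tally[Characters[s]])/StringLength[s]]},
        -Total[p Log[2, p]]]

    entropy["aaaaaaaa"]   (* 0.: a message with no surprises *)
    entropy["abababab"]   (* 1. bit per symbol *)
    entropy["abcdefgh"]   (* 3. bits per symbol *)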

