Overweening Generalist

Friday, April 5, 2013

Stephen Wolfram's Model of Information

In the 1940s, John von Neumann and Stanislaw Ulam began playing around with the idea of modeling natural systems as simple rules applied to initial conditions, then playing out as a sort of cellular automaton. And I remember when I first read about cellular automata - James Gleick's book Chaos: Making A New Science had just come out - and it was filled with mind-spaghettifying ideas. Ideas like artificial life, the now-famous "Butterfly Effect," chaos mathematics and Benoit Mandelbrot and fractal geometry and fractal art, and it was - much of it - way over my Generalist's head, but exciting. Cellular automata were in there. I had never heard of them.

[Image: Wolfram]

Years later I picked up Stephen Wolfram's book after it came out in 2002: A New Kind of Science was about 1,300 pages long, and was the manifesto of a guy who earned a PhD in particle physics from Caltech when he was 20, then received one of the first MacArthur "Genius" awards at age 21. This guy had a way to model just about everything: syntactic structures, social systems, particle physics. Just about everything. It turns out he was a big-time guy in cellular automata, carrying on in the tradition of another "Martian," John von Neumann.

Wolfram's math was over my head, but books like this make me excited just to be in the presence of this sort of compendious mind. It's the kind of book I take off the shelf and open at random and read, hoping for some sort of inspiration. It usually works. Wolfram models information in our world upon his forays into cellular automata, in which you have a very basic system under initial conditions and watch it evolve. He developed a taxonomy of the sorts of systems that arise, which he called "Class 1," "Class 2," and so on. These first two classes exhibit a low order of complexity; they tend to reach a level of constancy and repetition that's sorta boring. There are no surprises. They go on and on, ad nauseam, or die. A system like this? A clock.

His Class 3 level I think of as "noise." You can't predict anything. It's seemingly entirely random, like being bombarded by cosmic rays. If there's any structure at all, it's too complex to discern. It seems akin to entropy. A system like this? Your TV tuned to a dead channel: all static and noise.

[Image: cellular automata being simulated, played out]

Wolfram's Class 4 is where the action is: these systems turn out lots of surprises. They're complex but there's structure; you can model from them and make a certain sense out of what's going on. Systems like this are intellectually exciting and basically describe any theory or "law" in the sciences. They're surfing the edge, almost falling into "noise" but never quite. It reminded me of Ilya Prigogine's ideas about dissipative structures and negative entropy: how life flourishes despite how "hot" it burns and uses resources. It creates information, structure, patterns, complexity. Indeed, Prigogine and Wolfram seem compatible enough to me...
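If you want to see what "watch it evolve" actually looks like, here's a minimal sketch in Python (my choice; Wolfram works in Mathematica, and this is not code from A New Kind of Science) of the elementary one-dimensional cellular automata he catalogs. The rule numbers are the standard Wolfram codes; the class labels in the comments follow his taxonomy as I've described it above.

# Elementary cellular automaton: one row of cells, each 0 or 1, and a
# rule number (0-255) whose bits say what a cell becomes given itself
# and its two neighbors.

def step(cells, rule):
    """One update: each cell's next state is the bit of `rule` indexed
    by the 3-cell neighborhood (left, self, right), wrapping at edges."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(rule, width=64, steps=32):
    """Start from a single 'on' cell in the middle and print each generation."""
    cells = [0] * width
    cells[width // 2] = 1            # the very basic initial condition
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells, rule)

if __name__ == "__main__":
    run(250)   # Class 1/2: quickly settles into a boring repeating pattern
    run(30)    # Class 3: looks like static on a dead channel
    run(110)   # Class 4: structured but surprising; proven computation-universal

Run it and the three printouts make the taxonomy visceral: rule 250 is the clock, rule 30 is the noise, rule 110 is the interesting edge-surfing case.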

[Image: Shannon's basic equation for information theory: world-shattering stuff, turns out]


My Other Information Systems
Probably because of my intellectual temperament - which includes not being particularly adept at math - I have always been very impressed with guys like Wolfram and what they were able to do with math, but I have also been suspicious that they're somehow operating from the conceit...or rather, the flawed assumption that numbers can describe everything and that everything that's interesting to us is really just stuff that's interacting with the environment and doing computations. I thought these weird math geniuses had become hamstrung by the computer metaphor, and as I saw how different the human brain was from what they had asserted it was - a "biological computer" - I felt my suspicions confirmed.

I remember Timothy Leary giving a talk in Hollywood. He had been reading a recent book and was very enthusiastic about it. It was titled Three Scientists and Their Gods, by Robert Wright. So of course I had to read it. It's about Ed Fredkin, E.O. Wilson, and Kenneth Boulding. Leary seemed taken by Fredkin especially. Fredkin's was an Everything-Is-Information-Processing, in a digital way, sort of view. Leary's psychedelic intellectual friend Robert Anton Wilson seemed interested in this view too, but never committed to it. RAW always seemed more committed to Claude Shannon's mathematical theory of communication - which is the gold standard for quantifying information - but Shannon's theory gives information no necessary semantic component; RAW made a heady brew from combining Shannon with Korzybski, who was all about semantics and our environment and how we make meanings.
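For reference, the "basic equation" in the caption above is Shannon's entropy - the standard measure of how much information a source produces, defined purely from the probabilities of its symbols, with no reference whatsoever to what any symbol means:

$$ H = -\sum_{i} p_i \log_2 p_i $$

Here $p_i$ is the probability of the $i$-th possible symbol or message, and $H$ comes out in bits. A fair coin flip carries one bit; a rigged coin that always lands heads carries zero. Meaning doesn't enter into it, which is exactly what bugged me and what RAW patched over with Korzybski.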

Earlier, the originator of pragmatism, Charles Sanders Peirce, had developed a theory of semiotics that took into consideration the content of information: signs, and a mind interacting with signs. He had begun to work out a system for defining, quantifying, and taking into account the evolution of a piece of information. This was the "pragmatic theory of information," but it hasn't gone all that far. Shannon's 1948 paper blew it off the map. But still, "information" had to have some sort of semantic component to it, or I had difficulty grasping it. Shannon's and von Neumann's and Fredkin's and Wolfram's and Leary's ideas about "information" felt too disembodied to me; my intuition told me this couldn't be right. But I'm starting to come over to their side. Let me explain.


Modeling Natural Processes
Via cellular automata theory and the gobs of other stuff a Mind like Wolfram has, he says you can only get so far by modeling life as atoms, or genes, or natural laws, or as matter existing in curved space at the bottom of a gravity well. More fruitfully, we can model any natural process as computation. Big deal, right? Yeah, but think of what this implies: Wolfram thinks we can model a redwood tree as a human mind as a dripping faucet as a wild vine growing along a series of trees in a dense jungle thicket in the Amazon. Why? Because all of these systems are "Class 4" systems, and these are the only really interesting things going on. All of these systems exhibit the behavior of "universal computation" systems. (If this reminds you of fractals and art and Jackson Pollock, you're right: I see all of this stuff as a Piece. And so, apparently, does the math.)

Also: you cannot develop an algorithm that jumps ahead to predict where the system will be at Time X; this follows from Alan Turing's 1936 work on undecidability, and Wolfram calls it "computational irreducibility." You can't predict faster than the natural process itself. You have to wait and see what the system does, and this blows to smithereens any Laplacian Demonic idea about knowing all the initial conditions and being able to predict everything. So what about guys like Ray Kurzweil - who has become more and more a sort of Prophet for quantifying the acceleration of information and making bold, even bombastic predictions about what will happen to our world, our society? Wolfram/Turing say no. There are no shortcuts, and our natural world is irreducible to anything close to Laplace's Demon. The system is too robust to reduce to even what Kurzweil seems to think it is. Robert Anton Wilson used the term "fundamentalist futurism" to criticize those groups of intellectuals in history that Karl Popper had called the enemies of the Open Society. I think the term may apply to Kurzweil too, but I'm not sure. Certainly it seems to apply to Hegelian historicism, most varieties of Marxism, Plato's Republic, and Leo Strauss and the Wolfowitz/Bush/Cheney Neo-Cons.

As I read Wolfram and Kurzweil, the latter seems to see our world, modeled within Wolfram's classificatory scheme, as something like a Class 2 system: complex, but fairly predictable if you know enough about the algorithm that undergirds the whole schmeer.

Arrogance? Aye, but human, all-too human, as Fred N wrote.

[Image: Drew Endy, now at Stanford]

Synthetic Biology
Leary, with his penchant for neologizing, had in his 1970s book Info-Psychology defined "contelligence" as "the conscious reception, integration and transmission of energy signals." There were eight fairly discrete levels of this reception ---> integration ---> transmission dynamic (modeled on the synaptic actions of the neuron). All well and good and trippy, but a team at Stanford led by Drew Endy has made a computer out of living cells.

Engineers at Stanford, MIT, and a bunch of other places have made biological computers. You know how a computer must be able to store lots of data? Well, it turns out storing data in DNA is insanely, wildly doable, and DNA has more storage density than you can imagine. Perhaps you heard that some more of these everything-is-a-computer types stored all of Shakespeare's Sonnets in DNA. But that's small taters: it looks like we'll be able to store entire libraries, TV shows, movies, and CDs in DNA. Read THIS and see if you don't feel your mind getting a tad spaghettified.
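To get a feel for why DNA is such dense storage, here's a toy sketch - my own illustration, emphatically not the encoding scheme the sonnet researchers actually used - that maps ordinary bytes onto the four bases, two bits per base:

# Toy illustration of digital data in DNA: each base (A, C, G, T) can
# carry 2 bits, so one byte becomes four bases. Real schemes add error
# correction and avoid long runs of a single base; this sketch skips all
# of that.

BITS_TO_BASE = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
BASE_TO_BITS = {b: n for n, b in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Turn bytes into a DNA-letter string, two bits per base."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(BITS_TO_BASE[(byte >> shift) & 0b11])
    return "".join(bases)

def decode(dna: str) -> bytes:
    """Invert encode(): read bases back four at a time into bytes."""
    out = bytearray()
    for i in range(0, len(dna), 4):
        byte = 0
        for base in dna[i:i + 4]:
            byte = (byte << 2) | BASE_TO_BITS[base]
        out.append(byte)
    return bytes(out)

if __name__ == "__main__":
    line = b"Shall I compare thee to a summer's day?"
    strand = encode(line)
    print(strand[:40], "...")
    assert decode(strand) == line   # round-trips losslessly

Four bases per byte, in molecules a few nanometers across: that's where the "entire libraries in a test tube" talk comes from.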

So: a silicon chip uses transistors to control the flow of electrons along a path; Endy and his team at Stanford have developed a "transcriptor" to control the flow of a protein - RNA polymerase - along a strand of DNA, using Boolean Integrase Logic gates (or "BIL gates," so there's your geek humor for the day!). Endy says their biological computers are not going to replace the thing you're using to read this, but they will be able to get into tiny, tight quarters and feed back info and manipulate data inside and between cells...something your Smart Phone cannot do.
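Here's a cartoon of the logic, as I understand the published descriptions: integrases flip stretches of DNA (such as transcription terminators) so that RNA polymerase either gets blocked or reads through to the output gene. This is a toy abstraction in Python for intuition only, not the Stanford team's actual genetic design.

# Cartoon model of a Boolean Integrase Logic ("BIL") gate: each input
# signal drives an integrase that flips a terminator; the output gene is
# transcribed only when the polymerase's path is clear.

from dataclasses import dataclass

@dataclass
class Terminator:
    blocking: bool = True          # in the "blocking" orientation it stops polymerase

    def flip(self):
        self.blocking = not self.blocking

def and_gate(signal_a: bool, signal_b: bool) -> bool:
    """Output is transcribed only if BOTH integrase signals have flipped
    their terminators out of the polymerase's path."""
    t1, t2 = Terminator(), Terminator()
    if signal_a:
        t1.flip()
    if signal_b:
        t2.flip()
    polymerase_gets_through = not (t1.blocking or t2.blocking)
    return polymerase_gets_through

if __name__ == "__main__":
    for a in (False, True):
        for b in (False, True):
            print(a, b, "->", and_gate(a, b))   # prints the AND truth table

Swap the geometry around (one terminator in series, or flipped by either signal) and you get OR, NAND, and the rest: Boolean logic built out of enzymes rearranging DNA.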

Endy sees his biological computers as inhabiting a cell and telling us if a certain toxin is present there. They could also tell us how often that cell has divided, giving us early info on cancer, for example. They could also tell us how a drug is interacting with the cell, making therapeutic drugs more individually tailored.

In a line that reminded me of dear old Crazy Uncle Tim, Endy told NPR: "Any system that's receiving information, processing information, and then using that activity to control what happens next, you can think of as a computing system."

For more on bio-computing, see HERE and HERE.

I'm starting to swing more with Wolfram. But there are many other little snippets that are swaying me. I still like older forms of "information," more human-scaled and poetic and embodied.

But then there are the intelligent slime-molds, which I will leave you with. Grok in their fullness. Don't say I ain't never gave ya nuthin'!

How Brainless Slime Molds Redefine Intelligence.
