Sunday, June 29, 2008

Singularity follies

I saw Disney/Pixar's "WALL-E" yesterday with my son. Fun movie, excellent animation, some good laughs. A bit heavy-handed on the overarching messages about society... but that's Disney for ya. B+

Based on the film, I was going to write a quick post about how, apparently, in the film, the singularity is achieved through waste management. Go read the Wikipedia article on "technological singularity" so I don't have to do a crappy job summarizing here. [pause] Thanks.

Machine intelligence is a wonderful topic for when you're hanging out waiting for a movie to start, or sitting around drinking wine coolers on the deck on a nice, early summer evening. It's fun to discuss the differences between creativity, computation, cognition, recognition, etc. and go on about how men and machines may differ -- both now and in the future -- in terms of thinking-type activities.

My point, from watching WALL-E, was going to be that we equate (especially as children) emotional goals very specifically with self-awareness. You can have an animal (or a plant, a teapot, a statue, a car, etc.) in a movie be, essentially, a prop, and have no "feelings." Or they may have rudimentary feelings that reflect back from the main characters. But for a creature to be "alive," it needs to do thinky things that have more to do with its own well-being (usually emotional) than with sheer computing power. Thus, though WALL-E may be able to do many computational things, what makes him "thinking," what has pushed him beyond the singularity, is his ability to formulate his own goals.

Interestingly, the "bad guy" in the movie [very minor spoiler] seems alive, too... but has received his goals as part of a program; i.e., they are not his own goals, per se, but are direct instructions from a human.

That was about it for my original post idea... the thought that we base our idea (at least in a shallow, entertaining sense) of what is "real person thinking" on the ability not to solve problems, but to come up with them. To decide, "This situation isn't ideal for me... I can envision another possibility." Person-hood based not on survival (which requires all kinds of problem solving, and which animals do all the time), but on idealism.

That was the extent of it. But then I read a new post at Kevin Kelly's The Technium about "The Google way of science." The basic idea being that a new kind of cognition (or at least, thought-work) is being done through super-fast evaluations of super-huge data sets. The example I like is the one about how Google provides on-the-fly Web site translation. They don't have a translation algorithm; they just compare enormous sets of already-translated documents.

This is, as Kelly and others point out, a fantastic way to solve problems. You don't worry about a model, you don't worry about a theory or an equation. You just put trillions of cycles of computing power to work examining billions of data points, and then you figure out where new data points would line up.
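To make the contrast concrete, here's a toy sketch (in Python, with a made-up, four-entry phrase table standing in for the billions of human-translated sentence pairs Google actually has) of what lookup-based translation amounts to. There's no grammar and no model of either language in it; it just matches new input against chunks someone has already translated.

# Toy sketch of translation-by-corpus-lookup rather than by linguistic model.
# The phrase table below is hypothetical and tiny; the real systems build
# theirs from enormous sets of already-translated documents.
PHRASE_TABLE = {
    ("good", "morning"): "bonjour",
    ("thank", "you"): "merci",
    ("the", "cat"): "le chat",
    ("is", "sleeping"): "dort",
}

def translate(sentence):
    """Greedy longest-match lookup: no grammar, no theory of either
    language, just reuse of what humans have already translated."""
    words = sentence.lower().split()
    out, i = [], 0
    while i < len(words):
        # Try the longest phrase starting at position i, then shorter ones.
        for length in range(len(words) - i, 0, -1):
            chunk = tuple(words[i:i + length])
            if chunk in PHRASE_TABLE:
                out.append(PHRASE_TABLE[chunk])
                i += length
                break
        else:
            out.append(words[i])  # unknown word: pass it through untouched
            i += 1
    return " ".join(out)

print(translate("Good morning the cat is sleeping"))
# -> "bonjour le chat dort"

Scale that table up by ten orders of magnitude and you get something genuinely useful. But notice that nowhere in it is there anything you'd be tempted to call understanding, which is roughly where my disagreement with Kelly starts.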

Fascinating, important stuff, yes. But Kelly goes on to suggest that this kind of computation disproves Searle's riddle of the Chinese room, whereas I think it actually *proves* Searle's point in that thought experiment. If I had access to all the (let's say) Chinese-to-English-and-back documents that Google does, I, too, could translate between the languages without understanding both of them. Maybe without understanding either. If you've ever tried Google's spot-translation facilities and seen what it does to metaphor, you know that quite a bit of understanding is lost (ahem) in translation.

Kelly goes on to quote George Dyson in a response he (Dyson) made to an article Chris Anderson wrote in Wired on this subject:
For a long time we were stuck on the idea that the brain somehow contained a "model" of reality, and that AI would be achieved by constructing similar "models." What's a model? There are 2 requirements: 1) Something that works, and 2) Something we understand. Our large, distributed, petabyte-scale creations, whether GenBank or Google, are starting to grasp reality in ways that work just fine but that we don't necessarily understand. Just as we will eventually take the brain apart, neuron by neuron, and never find the model, we will discover that true AI came into existence without ever needing a coherent model or a theory of intelligence. Reality does the job just fine.

By any reasonable definition, the "Overmind" (or Kevin's OneComputer, or whatever) is beginning to think, though this does not mean thinking the way we do, or on any scale that we can comprehend. What Chris Anderson is hinting at is that Science (and some very successful business) will increasingly be done by people who are not only reading nature directly, but are figuring out ways to read the Overmind.

Now... I love science fiction. But I really don't buy that dipping into enormous pools of data to look for correlations counts as any kind of "thinking" that we would recognize as being of an order even close to that of animals, to say nothing of the cute (yet not cuddly) WALL-E. Dyson himself says, "... though this does not mean thinking the way we do, or on any scale that we can comprehend." Well... why call it "thinking" if it's something completely different than what we call "thinking," and on a totally different scale? Mama always said, "Life is like a box of semantics." If I can call what the weather does "thinking" because it moves enormous numbers of things around and effects changes and is involved in activities based on ultra-complex rules, then OK. What Google etc. does could be called "thinking," too. If we open it up that far, though, we've lost the original intention of what we mean when we use the term to apply to us man-apes.

When you challenge a child who has done something stupid or dangerous and ask, "What were you thinking?" you're not looking for an answer in terms of their problem solving abilities. If the boy-child has emptied 25 cans of shaving cream into the kiddie pool and is making "summer-time snow angels," you may love the creative spirit, hate the waste of money (and how he smells afterward), but your chat with him will be about making choices, not about air pressure and aroma. You want to know what led him to the choice to do the unwise thing, so that you can teach him not to lead himself there. You want to help him create better problems for himself, not, in many cases, solve them.

I can't tell time anywhere near as accurately as a watch. But that doesn't mean that a watch is thinking. Or, if you want to say it is, it is only ever thinking about what time it is.

* * * * *

PS: Irony of the week. The last line of dialogue in WALL-E was clipped slightly at my showing by the "pop" you get during a slightly crappy jump from one reel to another. A movie created using advanced, computerized digital effects about an advanced, computerized digital creature... partly f'd up by an analog zit. I was amused.
