Giant, long introduction to the point I'll get around to making eventually...
Alan Turing -- who invented the idea of the modern computer (sometimes called a "Turing machine"), and whose first real stab at it is shown here -- basically said that given a recording medium big enough, and enough time, you could record and solve any problem that could be stated clearly. That's a gross oversimplification, but it'll do for my modest blog.
Last October, we hit the 60th anniversary of John von Neumann's initial proposal for the "universal computing machine" -- i.e., the computer. Von Neumann took Turing's ideas and turned them into a reality -- a machine that could process different equations, rather than solving only one. A programmable computer, that is. Its first use was to work on the equations for the atomic bomb.
60 years ain't a very long time, and look what we've got today.
George Dyson has written two really interesting articles, one called "Turing's Cathedral" and one called "The Universal Library," that deal with how far we've come since Turing's initial thoughts on the subject, and where we're headed -- specifically with regard to what Google, and Internet search in general, is doing with our "thoughts" on the Web.
Dyson does a good -- and admirably brief -- job of describing the history of how Turing and von Neumann's ideas have gotten us to modern computing, the Web and Google. But I want to pull out a couple passages in order to make a point.
First:

Google is building a new, content-addressable layer overlying the von Neumann matrix underneath. The details are mysterious but the principle is simple: it's a map. And, as Dutch (and other) merchants learned in the sixteenth century, great wealth can be amassed by Keepers of the Map.
OK. So Google is mapping the Web. Big deal. We knew that. I've heard the metaphor before, and I ain't really surprised to hear it again. I'm not sure I buy it, as a map is fixed, while search results change -- not just based on criteria, but daily, based on changes to the data landscape. But Dyson goes on to talk about the three types of computing calculations that can be done, and how most computers are built to deal with "computable problems": those with questions that can be easily asked and solved (if not easily solved, at least predictably solvable). The second type, "non-computable problems," have questions that can be asked, but that we know we have no way to solve.
The third type is the most interesting, the most fecund, and the most appropriate for creative types like us:
...questions whose answers are, in principle, computable, but that, in practice, we are unable to ask in unambiguous language that computers can understand.
The example he gives is the question, "What makes something look like a cat?" A child can draw a circle, six lines and a couple dots, and almost anyone will say, "That's a nice cat." But to get a finite answer that would distinguish that solution from, say, "What makes something look like a mouse," would be very hard. In this case, as Dyson puts it:
A solution finds the problem, not the other way around. The world starts making sense, and the meaningless scribbles (and a huge number of neurons) are left behind. This is why Google works so well. All the answers in the known universe are there, and some very ingenious algorithms are in place to map them to questions that people ask. (emphasis mine)
And at this point, while reading his essay, my brain had a "Rubik's Cube" moment. Which is what I call it when various things all start twisting around and reassembling into a different array than they were in a few moments ago. I'm not saying all the colors line up (in my brain, sometimes the yellow side and green side do, but rarely any more than that), but something certainly changes.
I studied some child psychology and development in school. Not much. Just a few courses. But I do remember that the human brain starts out with lots more open neural pathways than it ends up with. Babies have (if I remember correctly) something like 10 times as many neural connections as adults. As they grow and learn and try to do things, certain pathways become strengthened -- i.e., "putting spoon in mouth to get food" beats out "putting spoon in ear to get food," and the latter set of neural paths eventually dies out.
[Aside: we also learned that the part of the brain responsible for processing the "don't do that!" response to painful activities is the same part that processes the response to trying to do things in a new way after all those initial, baby-to-youngster, extra neural pathways have died out. That is, our response to change is physiologically very similar to our response to pain. We don't want to do things that might hurt us, and we don't want to do things in a new way, because it might hurt us. My prof explained that this is a survival mechanism: if you do things the way you've done them before, it probably won't kill you, because it hasn't already. Problem is, from a creativity standpoint, doing the same thing might as well be death.]
Dyson goes on to talk about machine intelligence, the possibility that Google may be the basis for the first worldwide artificial intelligence, dogs and cats living together... mass hysteria. OK. Not those last two. But, while I think it's interesting and, as a sci-fi fan, neither boring nor laughable, AI isn't inevitable, and it's not what I want to focus on.
I do, however, want to focus on the idea that Google is providing a worldwide brain already. Not an intelligence, per se. But a digital analog (I love saying that) to the physical, juicy meat and chemicals that make up our own grey matter and allow us to process our own biological questions, searches, answers and thoughts.
For the love of Pete, Havens... Get to the point.
OK. OK. Calm down. Here's the point.
I have a friend at work whom I respect very much. She's one of our web team managers. Loves wikis (as do I), but hates blogs. Because, to her, they are, basically, unrestrained "thoughts," posted to the Web. I'm paraphrasing her, but she finds most of what she reads on blogs to be drivel.
As do I. But I love blogs. Why?
Because they let everyone post their drivel, and some of that drivel ain't drivel to me. I don't care about the 9,000,000 teens who are blogging on what they wore to the blah blah blah. Or about many entertainment blogs. Or about hundreds of millions of other blogs out there. But I care what John Battelle says about search. And I care what Bill Ives says about KM. And I care what my friend Jenn writes in her poetry.
Again, I hear you say, "Get on with it. What's the point here? That everybody likes different stuff on the Web? We knew that."
Yes, but...
If Google, by searching the Web in finer and finer increments -- and, more recently, printed materials and other media -- provides a methodology for me to determine which thoughts (for words, which are mostly what we're searching for and through, are thoughts) out there are going to help me be more productive, creative, happy, healthy, etc... then isn't Google acting as a kind of meta brain, by which everyone will be connected to those thoughts?
I'm not positing artificial intelligence here. I'm not imagining some great, Ozymandian force, rising up under the Google campus and causing us to buy more porn, redo our mortgages and connect with classmates. I'm theorizing that this "new brain" is making the aggregate cognitive abilities of everyone connected to it more... something.
Faster? Happier? Productive? Worried? Distracted? Creative?
I'm not sure yet. Some people I know are very distracted by the Web. I know I can be. Some are very empowered in their jobs and personal lives and hobbies. There's so much that we can now know in a few seconds or minutes that, even a few years ago, we couldn't know at all, or only in a prohibitive span of time. And it keeps getting better. Or at least faster, more, funkier, distractiver, etc.
Was that the big point?
Almost. Sort of. Yeah. In brief:
- Google (and search in general) is a way to connect our thoughts across time and distance.
- Tools like blogs and wikis allow more people to put thoughts out there.
- As Dyson says, providing a robust way to search a "von Neumann matrix" (the Web) in a random fashion is a good way to solve the "third kind" of logic problem; i.e., rather than try to program a computer to answer the asked question, "What does a cat look like?" you search the Web for descriptions or pictures of cats until you have an idea in your head that satisfies your personal contextual need.
- By searching others' thoughts, we find ways to use them to solve our own problems.
- By posting our thoughts, we incrementally improve the matrix (i.e., we help the Web "learn").
But here's an ancillary point.
How appropriate is it that advertising is Google's "dopamine" in this "big brain" metaphor?
I know, I know. The big search window isn't "advertising supported." The "natural" search results are based on some insanely complex calculations that are based on key words, inbound links, how often your page changes, etc. etc. That being said, Google pays for all that with advertising. That and that being said, many of the best "natural" links are buoyed by SEO strategy that is, essentially, advertising (or at least marketing) supported.
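Those "insanely complex calculations" are, at their core, link analysis. Here's a toy sketch -- my own illustration, not Google's actual code -- of the PageRank-style idea that a page's score depends on the scores of the pages linking to it. The graph, damping factor, and iteration count are all made up for the example:

```python
# Toy PageRank-style scoring: a page's rank is built from the ranks of
# the pages that link to it. Purely illustrative, not Google's algorithm.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with equal scores
    for _ in range(iterations):
        # every page keeps a small baseline score...
        new_rank = {p: (1.0 - damping) / n for p in pages}
        # ...and passes the rest of its current score to the pages it links to
        for page, outlinks in links.items():
            if not outlinks:
                continue
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

# Three pages: both "a" and "b" link to "hub", so "hub" scores highest.
ranks = pagerank({
    "hub": ["a"],
    "a": ["hub"],
    "b": ["hub"],
})
```

On this tiny graph, "hub" ends up with the highest score because both other pages link to it -- which is exactly the dynamic that SEO strategies try to game by manufacturing inbound links.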
In our metaphor, then: advertising/marketing = positive neural reinforcement.
Which is true in reality when we examine how the Web works. Those sites that are visited more often are more likely to survive. More traffic equates to either more revenue -- for commercial sites, that's the definition of life -- or more interest. If you have readers, sponsors, friends, authors, contributors... whatever... your site will be much more likely to flourish than if you have fewer.
I'm not saying that the model is bad. I use Google a couple dozen times a day at least. It's a great tool. Once you learn about how to narrow and expand your searches, you can get around a lot of the crap that's force-fed by SEO "strategies." But I am saying that if we're going to have a global brain that's going to help us connect to each other's thoughts, maybe we need to be thinking about what the chemical is that stimulates that brain.
Because, if we work the metaphor backwards, an advertising model might be akin to a lima bean advertiser telling your kid, "I'll give you a dollar to stick your forkful of peas in your ear," every time he's trying to eat peas.