Sunday, November 8, 2009

I would have preferred "Infovore" to "Informavore," but still...

There's an essay at Edge.org called "The Age of the Informavore." We'll get to my thoughts on the essay in a minute. My first thought was, "Wow. This guy doesn't make up new words very often." He is German, and English is his second language... so mad props to him for writing an interesting essay in another language. I can't do that, so I don't mean to be hypercritical of his writing in general. But it really should be "Infovore." Technically, in a Latin-root sense, he's correct. The Latin "informare" means "to give form to the mind" or "to teach." The root is "inform," not "info." But we use "info" in English as the short form of the noun "information." And we tack it on to other things to come up with "infotainment" and "infolicious" and "infotastic." And by "we," I mean, of course, me.

Be. That. As. It. May. (Ooh. Should that be a new ROFLMAO? BTAIM...)

The author of the essay, Frank Schirrmacher (WorldCat Identity), is a journalist, writer and publisher. Edge.org often has great essays, and this one is worth reading all the way through. Some major points considered below. Schirrmacher begins with this thesis:
We are apparently now in a situation where modern technology is changing the way people behave, people talk, people react, people think, and people remember. And you encounter this not only in a theoretical way, but when you meet people, when suddenly people start forgetting things, when suddenly people depend on their gadgets, and other stuff, to remember certain things. This is the beginning, it's just an experience. But if you think about it and you think about your own behavior, you suddenly realize that something fundamental is going on.

There it is in a nutshell. The tools we've been using for a couple hundred millennia to give our physical bodies leverage over our environment are now giving way to mental tools. I would agree that we depend on outside entities to manage, store and process our thoughts. But I'd argue that we've been doing so, to some degree, since language was invented. I don't process every thought I have need of; I often let others do lots of my "thinking" for me. There are people at work, for example, who are much better versed in all kinds of skills, knowledge and wisdom which I have no regular need of, but enjoy regular access to. I can walk 20 steps to the legal department and ask them questions. I can pick up my phone and ask our IT folks questions. I can follow instructions, and then promptly forget what I once "knew" for the 30 seconds it took me to interact with that information in my own sphere.

Obviously writing and printing aid that process. I don't have any idea what the capital of Assyria was (though I do know my favorite color). If I need that information, I look it up.

Quantitatively different? Sure. But we've been doing these things for a long time.


I think what Schirrmacher is saying is different is two things. First, quantitatively, we simply have quicker access to a crapload more information. I am a firm believer that, at some point, quantitative differences can cause qualitative effects. So I'll certainly buy into the idea that being able to carry around a device the size of a deck of cards that can access most of the world's information wirelessly is a cognitive game-changer in and of itself. Second, though -- and what I think he's really getting at -- is that we are now offloading information about ourselves. He says:
Gerd Gigerenzer, to whom I talked and who I find a fascinating thinker, put it in such a way that thinking itself somehow leaves the brain and uses a platform outside of the human body. And that's the Internet and it's the cloud. And very soon we will have the brain in the cloud. And this raises the question of the importance of thoughts. For centuries, what was important for me was decided in my brain. But now, apparently, it will be decided somewhere else.

Hmmm... I'm back, a bit, to my original argument: "what is important for me" hasn't always been decided in my brain. It is often decided by my boss, my wife, my government, students, son, friends, doctor, lawyer, traffic cop, plumber, etc. Thinking is often a group activity.

He goes on to make the point, as did I, that we are now using "brain" tools as opposed to "body" tools. And just as the industrial revolution made us think of physical effort in more technical ways (so as to be measured, optimized, etc.), now the mental tools cause us to think about thinking in technical ways. He says:
The idea that thinking itself can be conceived in technical terms is quite new. Even in the thirties, of course, you had all these metaphors for the human body, even for the brain; but, for thinking itself, this was very, very late. Even in the sixties, it was very hard to say that thinking is like a computer.

I agree. And I still think that thinking is nothing like what is currently happening in computers, and won't be for quite some time, Ray Kurzweil's (WC Identity) singularity notwithstanding. That's a whole other essay, though. But then Schirrmacher goes on to say:
But now, when you have a generation — in the next evolutionary stages, the child of today — which are adapted to systems such as the iTunes "Genius", which not only know which book or which music file they like, and which goes farther and farther in predictive certain things, like predicting whether the concert I am watching tonight is good or bad. Google will know it beforehand, because they know how people talk about it.

What will this mean for the question of free will? Because, in the bottom line, there are, of course, algorithms, who analyze or who calculate certain predictabilities. And I'm wondering if the comfort of free will or not free will would be a very, very tough issue of the future.

Hold on... that's not thought, per se. That's decision making and pattern recognition and cultural zeitgeist stuff which have been cognitively offloaded from individuals in many forms for millennia [note: need a word for "cognitive offloading." Will come back to that]. When I went into Tower Records in 1989, I didn't have a universe of choice and only mine own wee haid to sort things out. There was an entire ecosystem of publishers, buyers, merchandisers, store workers, marketers and friends to help me narrow down my millions of choices. Sometimes I went in for a specific album, having heard it on the radio (cognitive offload!). Sometimes a friend would want to point out some stuff they'd heard or bought recently (cognitive offload!). Sometimes I just browsed and looked at stuff on end-caps (cognitive offload!) or with cool covers (cognitive offload!) or that were being perused by cute girls (non-cognitive offload!).

Is that any different, on a "thought level," from iTunes or Amazon recommendations? Not to me. I still get to make the decision. It's just that now I get to make a decision using more (or different) decision engines. It may be "turtles all the way down" as far as the origin of the universe, but when it comes to choosing music, TV or movies... my choices are rarely going to be made solely based on what's in my head.

Why? Because those are "social thoughts." Always have been. What do I want to eat for dinner? Depends on the folks with whom I'm eating. What movie is OK to take the boy to? Depends on what my friend, Bob, says. Bob's got good taste when it comes to parsing the OK-ness of movies for young lads. How much baking soda goes into the pancakes? Betty Crocker, help a guy out. It's all external cognition.
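
To make that concrete, here's a minimal sketch (in Python, with made-up source names and albums) of what "more decision engines" looks like: several sources each suggest a few records, the suggestions get merged into a shortlist, and the person still makes the final call. This is not Genius's or Amazon's actual algorithm; it's just the shape of the thing.

```python
from collections import Counter

def shortlist(sources, top_n=5):
    """Merge suggestions from several "decision engines" into one shortlist.

    sources maps a source name ("friend", "radio", "genius", ...) to a list
    of suggested albums, best-first. Albums suggested by more sources, and
    ranked higher by each source, score higher.
    """
    scores = Counter()
    for suggestions in sources.values():
        for rank, album in enumerate(suggestions):
            scores[album] += 1.0 / (rank + 1)   # earlier suggestions count more
    return [album for album, _ in scores.most_common(top_n)]

# The engines only narrow the field; the person still picks.
candidates = shortlist({
    "friend": ["Doolittle", "Paul's Boutique"],
    "radio":  ["Pretty Hate Machine", "Doolittle"],
    "genius": ["Disintegration", "Doolittle", "Paul's Boutique"],
})
print(candidates)             # "Doolittle" tops the list: three sources point at it
final_choice = candidates[0]  # in real life: whatever catches your eye at the end-cap
```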

Useful prediction. I don't see it coming any time soon (ha ha)


Does Schirrmacher have something interesting to say about the offloading of private cognition? Well... not yet, anyways. He goes on to talk about prediction:
The question of prediction will be the issue of the future and such questions will have impact on the concept of free will. We are now confronted with theories by psychologist John Bargh and others who claim there is no such thing as free will. This kind of claim is a very big issue here in Germany and it will be a much more important issue in the future than we think today. The way we predict our own life, the way we are predicted by others, through the cloud, through the way we are linked to the Internet, will be matters that impact every aspect of our lives.

Again... let's wait a second. Prediction isn't cognition either. Or it is, at best, a very specific type of cognition. And we know from quantum physics that observing a system changes it. So you can try to predict human behavior -- and I will agree that access to holycrapabytes of more and more personal data may provide some frightening attempts at prediction -- but, again, we do that now. I'm in marketing; it's kinda what we do. Predict and, to a degree, direct behavior. Until, however, we can solve traffic jams and make sure that there is always some Life Cereal on the shelf at Walmart (what the heck? it's popular! stock up, you guys!), I will be more worried about ham-handed attempts at prediction than about somebody actually knowing what's going to happen (see: Barack Obama's chances to become president back in the primaries).

Digital Darwinism, Communism, Taylorism. Not for shism, I don't think.


Then he jumps into "three issues" from the 19th century that are important for our new millennium:


    1. Darwinism: who and which thoughts survive on the net; who gets more traffic

    2. Communism: questions of "free," and of who does work for free or offers value for free

    3. Digital Taylorism: the application of scientific, technical management principles to human behavior



      That last, I assume, means that we can't manage cognitive workflows any better (or more humanely) than we can industrial or technical ones. We end up with dehumanization, etc., but on the level of thought-processing rather than work-processing.

      I would pretty much disagree with all of those. More on that later. But he jumps right from this to talking about multitasking... which is, again, an entirely different issue from cognitive offloading (extracognition? no...). He says:
      Now, look at the concept, for example, of multitasking, which is a real problem for the brain... you meet many people who say, well, I am not really good at it, and it's my problem, and I forget, and I am just overloaded by information...

      It's a kind of catharsis, this Twittering, and so on. But now, of course, this kind of information conflicts with many other kinds of information. And, in a way, one could argue — I know that was the case with Iran — that maybe the future will be that the Twitter information about an uproar in Iran competes with the Twitter information of Ashton Kutcher, or Paris Hilton, and so on. The question is to understand which is important. What is important, what is not important is something very linear, it's something which needs time, at least the structure of time. Now, you have simultaneity, you have everything happening in real time. And this impacts politics in a way which might be considered for the good, but also for the bad.

      Because suddenly it's gone again. And the next piece of information, and the next piece of information... And they all should be, if people want it, shared. And all the thoughts expressed in any university, at this very moment, there could be thoughts we really should know. I mean, in the nineteenth century, it was not possible. But maybe there is one student who is much better than any of the thinkers we know. So we will have an overload of all these information, and we will be dependent on systems that calculate, that make the selection of this information.

      Respectfully, I don't buy it.

      Hunting is different than gathering. But it ain't new to us.


      We're back to my whole thesis on information hunting vs. gathering. Are the systems different now? Yes. Clearly. Are they going to produce (and have already produced) enormous shifts in economies, politics, culture, education, entertainment? Yes, of course. But to say that in the 19th century you could know what thoughts you should know? Sorry, but... no. Just no. Lots less input? Sure. I buy that. Remember that in the 1800s, even through the late 1800s, the vast majority of the world's population was working on farms. To suggest that they could process all the information available and decide what was important is only true because, for the most part, there was so little information to be accessed (relative to our time). But, even then, if you wanted to "know what is best to know," you would have had to spend a lifetime reading and talking and studying. All of it in the midst of others, who would help you do so. Information has always been there. We now have more meta-information and more choices about which information to spend time on.

      He goes on to say:
      What did Shakespeare, and Kafka, and all these great writers — what actually did they do? They translated society into literature. And of course, at that stage, society was something very real, something which you could see.

      In Shakespeare's time, "society" in England -- an island the size of, well, England -- would have been about ten different societies. Most people never traveled more than 20 miles from the town of their birth. Religious and cultural activities varied greatly from region to region and the language itself was still more a collection of dialects than what we'd recognize as English. Within another 100 years, Shakespeare and the King James Bible would help provide a unifying, literate center for English on a going-forward basis. But, by then, the explosion of printed material would probably have meant that no one person could ever have read (much less fully grasped or memorized) all the works being printed. And that's just in English.

      He closes with:
      You will never really understand in detail how Google works because you don't have access to the code. They don't give you the information. But just think of George Dyson's essay, which I love, "Turing's Cathedral." This is a very good beginning. He absolutely has the point. It is today's version of the kind of cathedral we would be entering if we lived in the eleventh century. It's incredible that people are building this cathedral of the digital age. And as he points out, when he visited Google, he saw all the books they were scanning, and noted that they said they are not scanning these books for humans to read, but for the artificial intelligence to read.

      I agree about Dyson's essay. Go read that. And that final point is interesting; the Google Book project did scan the books so that, essentially, they became part of a "machine intelligence," if we want to get all SciFi and call it that. Google now "knows" that of all those billions of pages, about 1,019 have the combination of the words "poison" and "banana" on them. And being able to access those very quickly is important and different and game-changing and scary and fun and helpful and weird.

      But I don't think it means what Schirrmacher thinks it means. I don't believe we are offloading key cognitive processes in a way that is fundamentally different than we do with books, experts, pads of paper and fridge notes. There are just more choices.

      And there are more "meta choices." My wife and I were talking about this yesterday, and she believes there is value in memorization. Multiplication tables for kids. Poetry for anyone. Vocabulary. I agreed. Of course there is. If you have nothing memorized, you'd be, essentially, incapacitated. If you don't remember the words or grammar for, "I don't understand what you're saying," you won't be able to do much.

      But at another level, real thinking -- the kind that leads to interesting discoveries, creativity, conversation -- is less about learning the pieces, and more about what you do with those pieces. Is a certain set of [memorized intellectual] tools necessary? Of course. Will a larger range of possible tools mean more stress about which ones are most important? Sure. But we've had to make those choices for thousands of years at this point, certainly since widespread printing came about. Now we may also be making meta-information choices, deciding not just what pieces of data to memorize, but what systems to become expert in.

      I'm really not trying to downplay the importance of the Web, always-on communication and media, mobile computing, large-scale search and prognostication systems, etc. etc. They are hugely important. But when we focus on (as of yet) unproven assertions about how this change is qualitatively different... we lose the ability to deal with those changes using current (or past) thought processes.

      How does this translate to reality?


      I spend (roughly) 2 hours a day on information hunting tasks. RSS feeds, email, scanning articles, blogs, news sites, Twitter, Facebook, etc. During that time I make (I would estimate) 200-600 decisions about where information falls in a kind of hierarchy, as I read and process data:


        1. Unimportant to me: immediately forget

        2. Short-term, "immediate action required" (e.g., "boss needs something now" email): interrupt information hunt, deliver action.

        3. Short-term, "that's good to know, but no action required:" set into short/medium term memory

        4. Short-term, "this is very interesting, but doesn't really require action from me:" possibly pass along to other interested folks.

        5. Short-term, "that's good to know, no action required, but further/future thought might prove fruitful:" attach metadata, breadcrumb, to-do, link, bookmark, post-it note, and file.

        6. Short-term, "need more information now:" do specific information search

        7. Medium-term, "need this for current projects, but don't need to do anything immediately:" file information appropriately

        8. Medium-term, "need this for current projects, should forward or take action:" forward or take action at appropriate time

        9. Medium-term, "this is interesting, but I don't have time now:" file for possible future reading

        10. Long-term, "this is very important, and requires action, but not immediately:" note and file for appropriate activity time.

        11. Long-term, "this is interesting, but doesn't require any action on my part:" put in the part of my brain for interesting things that will fight, in a Darwinian sense, for long-term survival.



          This does look complicated, yes. And it seems almost like a computer program with a bunch of "If/then" loops. But -- as I've argued before -- I think we do this all the time. It's more like hunting behaviors than gathering, but it's already inside us, both individually and culturally. We just need to get better at it again.
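
          If you squint at that list, it really is just a branching routine. Below is a minimal sketch of it in Python, purely illustrative: the item attributes (relevant_to_me, horizon, and so on) are made-up stand-ins for the judgment calls above, not fields any real tool exposes.

```python
from types import SimpleNamespace

def triage(item):
    """Return a handling decision for one incoming piece of information.

    `item` is assumed to carry a few attributes describing it: whether it's
    relevant, whether action is needed, its time horizon, and so on. The
    numbered comments map back to the list above.
    """
    if not item.relevant_to_me:
        return "forget it"                                        # 1
    if item.action_needed and item.horizon == "now":
        return "interrupt the hunt, act now"                      # 2
    if item.horizon == "short":
        if item.worth_forwarding:
            return "pass along to interested folks"               # 4
        if item.worth_future_thought:
            return "tag it (bookmark, to-do, post-it) and file"   # 5
        if item.needs_more_info:
            return "do a specific information search"             # 6
        return "hold in short/medium-term memory"                 # 3
    if item.horizon == "medium":
        if item.action_needed:
            return "forward or act at the appropriate time"       # 8
        if item.for_current_project:
            return "file it with the project"                     # 7
        return "file for possible future reading"                 # 9
    # long-term
    if item.action_needed:
        return "note and file for a planned work session"         # 10
    return "let it fight, Darwin-style, for long-term survival"   # 11

# A "good to know, no action, but worth thinking about later" item (category 5):
note = SimpleNamespace(relevant_to_me=True, action_needed=False, horizon="short",
                       worth_forwarding=False, worth_future_thought=True,
                       needs_more_info=False)
print(triage(note))   # -> tag it (bookmark, to-do, post-it) and file
```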

          "Yes, but I have information overload! How does all that pointy-headed yabba yabba help me?"


          To be blunt, I think lots of people are in a place where they simply haven't thought about their thinking in a while. We (folks my age and older, and even many younger people) have been raised in an "information gathering" society. We go to classes that are about "Subject A," and then "Subject B," then "C," etc. In Class A, students in Major A learn from Professor A about Subject A. Round pegs, round holes. Assembly line methodology. At work we have Entry Level Job A reporting to Manager A who does Job A and reports to Director A, all the way up to VP A. You often don't get an "information generalist" until you get to the CEO, and they usually have come up through a particular discipline.

          I'm not saying this is bad or wrong. It just prepares us to live as information gatherers. "Go there, to that row, in that patch of information, get the data related to your job, and bring it back and do something very specific with it."

          That's much different than, "The forest is there. It has everything you need. Figure it out."

          Schirrmacher, and many others, are -- I think -- experiencing some of the fear and uncertainty that comes with a change in environment. I don't mean to minimize it; I just think we need to concentrate on ways to become better hunters.

          To use Schirrmacher's term, we're already informavores. Have been since we started relying on tools, agriculture and group systems. We just need to make a transition from informasheep to informaraptors.
