Sunday, June 29, 2008

Singularity follies

I saw Disney/Pixar's "WALL-E" yesterday with my son. Fun movie, excellent animation, some good laughs. A bit heavy-handed on the overarching messages about society... but that's Disney for ya. B+

I was going to write a quick post about how, apparently, in the film, the singularity is achieved through waste management. Go read the Wikipedia article on "technological singularity" so I don't have to do a crappy job summarizing here. [pause] Thanks.

Machine intelligence is a wonderful topic for when you're hanging out waiting for a movie to start, or sitting around drinking wine coolers on the deck on a nice, early summer evening. It's fun to discuss the differences between creativity, computation, cognition, recognition, etc. and go on about how men and machines may differ -- both now and in the future -- in terms of thinking-type activities.

My point, from watching WALL-E, was going to be that we equate (especially as children) emotional goals very specifically with self-awareness. You can have an animal (or a plant, a teapot, a statue, a car, etc.) in a movie be, essentially, a prop, and have no "feelings." Or they may be rudimentary feelings that reflect back from the main characters. But for a creature to be "alive," it needs to do thinky things that have more to do with its own well-being (usually emotional) than with sheer computing power. Thus, though WALL-E may be able to do many computational things, what makes him "thinking," what has pushed him beyond the singularity, is his ability to formulate his own goals.

Interestingly, the "bad guy" in the movie [very minor spoiler] seems alive, too... but has received his goals as part of a program; i.e., they are not his own goals, per se, but are direct instructions from a human.

That was about it for my original post idea... the thought that we base our idea (at least in a shallow, entertaining sense) of what is "real person thinking" on the ability not to solve problems, but to come up with them. To decide, "This situation isn't ideal for me... I can envision another possibility." Person-hood based not on survival (which requires all kinds of problem solving, and which animals do all the time), but on idealism.

That was the extent of it. But then I read a new post at Kevin Kelly's The Technium about "The Google way of science." The basic idea being that a new kind of cognition (or at least, thought-work) is being done through super-fast evaluations of super-huge data sets. The example I like is the one about how Google provides on-the-fly Web site translation. They don't have a translation algorithm; they just compare enormous sets of currently translated documents.

This is, as Kelly and others point out, a fantastic way to solve problems. You don't worry about a model, you don't worry about a theory or an equation. You just put trillions of cycles of computing power to work examining billions of data points, and then you figure out where new data points would line up.
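For the code-inclined among my several readers: here's a toy sketch of what translation-by-lookup means. This is nothing like Google's actual system (their corpus is billions of documents; mine is four invented English-to-Spanish phrase pairs), but the principle is the same: no grammar, no model of either language, just matching new text against already-translated material. Note what happens to the idiom in the second example.

```python
# A tiny "parallel corpus" of already-translated phrases (invented for
# illustration -- a real system learns these pairings from enormous
# sets of human-translated documents).
parallel_corpus = {
    "good morning": "buenos dias",
    "thank you": "gracias",
    "it is raining": "esta lloviendo",
    "cats and dogs": "gatos y perros",
}

def translate(sentence):
    """'Translate' by greedy longest-phrase lookup. No understanding involved."""
    words = sentence.lower().split()
    out = []
    i = 0
    while i < len(words):
        # Try the longest phrase starting at position i that the corpus knows.
        for j in range(len(words), i, -1):
            phrase = " ".join(words[i:j])
            if phrase in parallel_corpus:
                out.append(parallel_corpus[phrase])
                i = j
                break
        else:
            out.append(words[i])  # unknown words pass through untranslated
            i += 1
    return " ".join(out)

print(translate("good morning thank you"))       # buenos dias gracias
print(translate("it is raining cats and dogs"))  # esta lloviendo gatos y perros
```

The second output is the interesting one: the idiom "raining cats and dogs" comes out as literal animals falling from the sky, because the lookup has no idea a metaphor is in play. Quite a bit of understanding lost (ahem) in translation.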

Fascinating, important stuff, yes. But Kelly goes on to suggest that this kind of computation disproves Searle's riddle of the Chinese room, whereas I think it actually *proves* Searle's point in that thought experiment. If I had access to all the (let's say) Chinese-to-English-and-back documents that Google does, I, too, could translate between the languages without understanding both. Maybe even neither. If you've ever tried Google's spot-translation facilities and seen what it does to metaphor, you know that quite a bit of understanding is lost (ahem) in translation.

Kelly goes on to quote George Dyson in a response he (Dyson) made to an article Chris Anderson wrote in Wired on this subject:
For a long time we were stuck on the idea that the brain somehow contained a "model" of reality, and that AI would be achieved by constructing similar "models." What's a model? There are 2 requirements: 1) Something that works, and 2) Something we understand. Our large, distributed, petabyte-scale creations, whether GenBank or Google, are starting to grasp reality in ways that work just fine but that we don't necessarily understand. Just as we will eventually take the brain apart, neuron by neuron, and never find the model, we will discover that true AI came into existence without ever needing a coherent model or a theory of intelligence. Reality does the job just fine.

By any reasonable definition, the "Overmind" (or Kevin's OneComputer, or whatever) is beginning to think, though this does not mean thinking the way we do, or on any scale that we can comprehend. What Chris Anderson is hinting at is that Science (and some very successful business) will increasingly be done by people who are not only reading nature directly, but are figuring out ways to read the Overmind.

Now... I love science fiction. But I really don't buy that dipping into enormous pools of data to look for correlations counts as any kind of "thinking" that we would recognize as being of an order even close to that of animals, to say nothing of the cute (yet not cuddly) WALL-E. Dyson himself says, "... though this does not mean thinking the way we do, or on any scale that we can comprehend." Well... why call it "thinking" if it's something completely different from what we call "thinking," and on a totally different scale? Mama always said, "Life is like a box of semantics." If I can call what the weather does "thinking" because it moves enormous numbers of things around and effects changes and is involved in activities based on ultra-complex rules, then OK. What Google etc. does could be called "thinking," too. If we open it up that far, though, we've lost the original intention of what we mean when we use the term to apply to us man-apes.

When you challenge a child who has done something stupid or dangerous and ask, "What were you thinking?" you're not looking for an answer in terms of their problem solving abilities. If the boy-child has emptied 25 cans of shaving cream into the kiddie pool and is making "summer-time snow angels," you may love the creative spirit, hate the waste of money (and how he smells afterward), but your chat with him afterward will be about making choices, not about air pressure and aroma. You want to know what led him to the choice to do the unwise thing, so that you can teach him not to lead himself there. You want to help him create better problems for himself, not, in many cases, solve them.

I can't tell time anywhere near as accurately as a watch. But that doesn't mean that a watch is thinking. Or, if you want to say it is, it is only ever thinking about what time it is.

* * * * *

PS: Irony of the week. The last line of dialogue in WALL-E was clipped slightly at my showing by the "pop" you get during a slightly crappy jump from one reel to another. A movie created using advanced, computerized digital effects about an advanced, computerized digital creature... partly f'd up by an analog zit. I was amused.

Thursday, June 26, 2008

The Happening: mysterious moviegoing madness

Something is "Happening."

Massive spoilers below about the newest M. Night film. But, if you don't feel like having the movie spoiled, know at least this... it's really, really pointless.

Thursday, June 19, 2008

Mad Stupid

So I downloaded the free trial of "Spore: Creature Creator." I've been drooling in anticipation of the full game of "Spore" now for... I don't know, Will... how long? 3 years? 5? Something like that.

Anyway... played this little mini-preview game-y thing where you create creatures using one of the engines that will be in the final game. It's fun. And my son really enjoyed it. I registered the trial online so that I could see other folks' creations, get updates, etc. Registration, as per normal, requires an email address (cue ominous music... why would he point out an obvious bit o' stuff like that? hmmmmm....)

The free trial of "Creature Creator" only gives you access to like 1/8th of all the pieces-parts. And my boy liked it enough that I decided to upgrade to the full version (never mind that I think this is essentially a marketing tease for the full game, now slated to come out in September, and that, IMHO, the "full version" of this little mini game should be free).

Clicking the "upgrade" button from within the game takes you to the purchase site for EA. OK.... Not exactly what I expected, as I'd already downloaded the large install file. Will they make me go through that again? I'd assumed I'd just pay and get an unlock code. A trick that 3rd-rate shareware peddlers perfected years ago. We'll see...

So I add the full version of the program to my cart, fill in all my info for checking out...

And get an error.

"That email address is already in use."

Bwa? BWAAAA? The email address I gave EA as part of the registration process for a piece of trial software is already in use... Well, DUH! It's in use by me, who registered earlier today. And now I want to upgrade... but you won't let me, because my email address is already in use by you.

Mad stupid. Mad-5 stupid. I expected more from EA and Spore and Will and Maxis. This does not bode well...

Wednesday, June 11, 2008

Great idea for a story, novel, poem

Abe Books has posted a neat article about stuff that used-book sellers have found in books. Money, baseball cards, airline tickets and lots of personal notes.

Would make a cool scene for a story or novel, or a neat moment in a poem.

Sunday, June 8, 2008

Triple movie review: You don't mess with the kung fu skull

One of the things I grew up doing while waiting for movies -- once the multi-plex took over from the single-screen cinemas -- was to play "mix up the movie titles." Nothing better to do while hanging out in the lobby on a freezing cold February day in Boston, eh?

So, I saw three movies over the last week, and the mixed-up title would have been the subject of this blog.

Other possible mixed-up movie titles based on what was playing at my local AMC theater:

  • What Happens in Narnia...

  • Iron Stranger

  • Indiana Jones and the Kingdom of Sex

  • Made of Panda

  • Sex and the Panda

  • Iron Panda

  • Panda Racer


The word "Panda" in a movie title is simply irresistible. So...

Kung Fu Panda: A- Quite a lot of fun with my 8-year-old son. Jack Black's voicing of the main character is excellent, and they allowed his personality to mold the character animation quite a bit; how could you not? Lots of laugh-out-loud moments, and really great, somewhat unusual (which is nice) art. Best bit -- when Po, the titular panda, performs the super-killer-hyper move involving the minor twitch of his pinkie, he says, "Ska-doosh." Very Jack Black. Very funny. Half-grade off for pulling out almost every trope in the kung fu sack.

You Don't Mess with the Zohan: B Solid "B" movie. Ha ha. If you go to this movie expecting Adam Sandler and Co. to engage in an enormous amount of over-the-top pseudo-hasidic silliness, making fun of Jews, Arabs, gays, straights, rednecks, etc... you won't be disappointed. Sandler is consistently goofy and never misses a chance to refer to sex as, "making the sticky." If that line revolts you, please avoid the movie. My favorite dialogue (note: Zohan is attempting to escape his Israeli super-star anti-terrorist status in the states, and so is pretending to be...)
Zohan: I'm from Australia [with a glottal, "hutzpah" "h," somehow, at the beginning of "Australia."]
Friend's Mom: Oh, it must be so much nicer there since they got rid of apartheid.
Zohan: Oh, yes. Much cooler.

That's about the size of it. Nice cameos from Henry Winkler (if you ever dreamed of watching the Fonz puke out of a limo, this film's for you), Dave Matthews as the violent, anti-everything redneck, Chris Rock as a taxi driver from Cameroon, Mariah Carey as herself (the low point of the film, frankly... she's just not funny), George Takei, Kevin Nealon (why?), Rob Schneider (of course) and John McEnroe. Full point off for not having English subtitles for all the Yiddish or semi-Israeli slang or whatever it was. That woulda been cool.

Indiana Jones and the yadda yadda yadda: C+ It's an OK movie. What sucks about it is that it's just an OK movie. No really funny lines or memorable bits. No really amazing action sequence... They go down a couple waterfalls in a car/boat/thing? Really? That's all you got for me, Indy? Oh... and riding a motorcycle through the college library? Seriously? That counts as action these days? Yeesh. No really good interaction between Indy and his... uh... young friend. No really villainous villain. It all felt very phoned-in to me. I'd place it 3rd in the pantheon, behind (duh) the original and "Last Crusade." Major points off for [minor spoiler alert] the big "Ohhhh...." near the end being, "The city of gold doesn't literally mean gold... it means treasure... and knowledge was their treasure." Yeah. Right.

Comforthood

Today's journey of metaphoric bliss: Alzheimer's, buses, jewelry, YouTube.

Patients with Alzheimer's and other cognitive troubles who wander out of their nursing homes are a danger to themselves, of course. And with short-term memory issues, folks can go as little as a block away and then forget how to get back or why they're out. To help with this, some German nursing homes have put "phantom" bus stops outside their facilities. Patients remember the distinctive look of the bus stops and associate them with "going home." So they stop, rest, and the workers from the home come and get them (link).

Paco Underhill did absolutely groundbreaking work in the science of retail shopping behaviors. The New York Times called him "the anthropologist of the dressing room." He wrote "Why We Buy: The Science of Shopping" (Google, WorldCat) and has consulted all over the place. In a 1996 New Yorker article (by Malcolm Gladwell, no less), titled "The Science of Shopping," the concept of the "butt brush" theory is discussed. Full article here.

The quote that I'm most interested in today, though, is, "...the likelihood of a woman being converted from a shopper to a buyer is inversely proportional to the likelihood of her being brushed on her behind while she's examining merchandise." Which is the explanation for giant, wide aisles around the jewelry, perfume and watch displays in stores like Lord and Taylor, Macy's, etc. When pondering a pretty purchase, we get into a kind of dreamy, fugue state. Being bumped on the behind takes us out of that state and puts us back into the reality of, "Holy crap... that watch costs as much as three car payments."

[Note: I share this story with all my marketing and advertising students, male and female. It's a good trick, and not just for guys with wives and girlfriends. Men go into this same state, I believe, when shopping for power tools, HDTVs, boats, video games, etc. My non-scientific assumption, though, is that men are more likely to break out of Shopper's Fugue if you bump them in the testicles.]

What's the connection between degenerative brain disorders and shopping for jewelry? Well... let's move on to YouTube.

Douglas Galbi, over at the ever-intelligent and interesting "purple motes" blog, has an excellent recent post titled, "Stories largely missing in online video." His conclusion, after going over some good stats, is that online video is not successful in telling stories. While I agree with him that the "short form" video -- with YouTube as its major example -- isn't doing much storytelling, I'm going to point out some details that, I think, are important with regards to online viewing habits.

First, Doug is 100% right that the majority of YouTube videos are short, and a large percentage are repurposed music videos that, in the past, would have run on MTV or VH1 or a similar network. A research study I was involved with at my day job provided much the same insight ("The YouTube Phenomenon," page 2-16 of "Our Social Spaces," from "Sharing, Privacy and Trust in Our Networked World.") Our survey indicated that 49% of the top 100 YouTube videos were music videos. Also, 63% of the top 100 videos were "professional" in nature. This segment of the material is clearly not "user created content," but is maybe best described as "user uploaded."

Doug also points out that online video viewing time only amounts to 3% of traditional TV viewing time. When considering this, let's remember that TV is, and has been for 50 years, the dominant communication medium in our country. It's only over the past few years that even a decent minority of the U.S. population (23.3% as of December 2007, according to the OECD) has had access to broadband Internet service, which is pretty much a requirement for watching online video.

My two points, and they relate back to comfort -- which relates to bus stops and butt touching --  are simply as follows.

First, we currently regard TV as, largely, a "comfort medium." We sit down to watch, don't interact much, and enjoy it largely as entertainment. There are good stories on TV, yes. Because stories are a big part of how we like to be entertained, especially in "comfort" mode. I would remind my several readers, however, that lots and lots of TV is also "short form" entertainment, lacking in real storytelling elements. We have talk shows, sports, game shows, reality TV, news, weather and informational shows that don't have traditional narrative. And many of these have parallel elements in Web video. I just watched, for example, Clinton's "campaign suspension" speech on the NYT site. It was very, very nice to have the transcript and a TOC right next to the video. I think that as more online video becomes nested within other activities, it will gain more usage. I also think that as broadband becomes more the norm, non-narrative video will seem much more natural online, both in aggregate and compared to TV viewing.

As to when we'll get more narrative, storytelling content on the Web... well, it's starting. Hulu provides free (ad supported) access to narrative TV and movies. I missed an episode of Battlestar Galactica a few weeks ago and watched the hour-long show on the SciFi channel's site to make up for my DVR behaving badly. I now have a desk chair in my home office for working on the computer... and a comfy chair nearby for relaxing and watching DVDs and long Web-videos. But, even when I choose to watch long-form video on my computer, there are issues. My spam-blocker, anti-virus software pops up in front of the movie screen and tells me it's finished updating. Super. My IM pings, unless I've remembered to turn it off. My screen saver kicks in sometimes. Geez. I'm trying to watch TV on my computer and it keeps behaving like a computer.

The boundaries are melting. Slowly, yes. I agree with Doug that, at the moment, there's not a lot of storytelling going on specifically within online video. I do think, though, that it's beginning. And, also, that many online "stories" have video as one element, with other media embedding video as part of the story.

We like our comfort zones, and TV is a *HUGE* comfort zone for Americans. We head to the bus stop of our La-Z-Boy lounger because it means, "Here there be relaxation." Major changes in how we watch long-form video will take time, and will require computers to become something other than "working machines," and to stop touching us on our collective butts when we're trying to enjoy a story.

Saturday, June 7, 2008

Is the Web convex or concave? A meditation on dillweediness

[Note note: the draft of this post was written months ago. I'm not sick anymore, thanks for asking.]

Note: I am sick as heck. Bad cold. This is Day 4 of what, at work, is being called affectionately, "The Pox." I read an interesting post on Lifehacker about "Presenteeism," the opposite of absenteeism. The idea that going to work, regardless of consequences, is necessary. We're all the stars in our own life drama. So the idea that I'd put my own work requirements above the health and welfare of my coworkers isn't completely unreasonable; especially when we take into account the fact that we don't know what facts to take into account in terms of where/how we get sick. All this being apropos of nothing, except that I did stay home from work Thursday and worked from home on Friday, and now consider those acts to be somewhat selfless and communal. Whereas before, I would have considered myself lazy and weak. New wine, old skins. Yea.

Meanwhile... having been sick, I've been waking up early and watching The Daily Show on the DVR. One of the episodes from last week featured an interview with Lee Siegel, author of "Against the Machine: Being Human in the Age of the Electronic Mob." I did not read the book, and don't plan on it. This is a review of a couple things Lee said on the show.

First, he made the claim that relationships mediated by electronics -- the Web, that is -- aren't really as real as those in real life and (?) those conducted over the phone. Hmmm... Odd that he wouldn't consider the phone part of the machine of the electronic mob. When it debuted, critics believed that the phone would end civilized discourse, as it allowed for communication without physical presence and, therefore, without possible physical repercussions. That is true (I suppose), as you can call somebody a dillweed on the phone and not worry about him/her cracking you on the melon.

Lee went on to say that because of the lack of real presence on the other end of the digital line, we tend to imbue "the other" with our own characteristics, thus making the relationship both shallow and somewhat fictional. That's not a bad point. It is easier, certainly, to create a web (ha ha) of assumption when there is more left to the imagination. He then started talking about bad behavior on blogs and bulletin boards, what with the ranting and raving and flaming and invective and... and... and...

And he lost me. Even as an interesting antagonist to my own view... he lost me. Because you can't have it both ways, Lee. If the machine is bad because it is a concave lens that diminishes our perception of "other," that's one thing; if it is a convex lens that exaggerates the bad behavior of others... hold on. Can it be both?

Well, here's the thing: it can, if you're being a dillweed.

I tend to expect the best of people, regardless of circumstance. I assume that they, like me, want to get along, be friendly, be smart, do the right thing, etc. That holds true online as well as in RL. I've had very cool, long, intelligent disagreements with people in both places. Where it stops (again, regardless of media), is when someone clearly just wants to rant on their own, and has no interest in discourse; no interest in the voice of "the other."

Does that happen on the Web more than in RL? Perhaps. Comments on blogs are often not set up as discussion points, but more as stand-alone statements. And it is certainly possible to read such a comment as if it were aimed right at you, thus making it seem like a churlish response, rather than a simple statement.

And so we're back to the Web, as Lee said, distorting relationships because of our tendency to put ourselves in the center of the whole thing. We either assume closeness that isn't there (because we want to see it), or assume animosity that isn't there (because we read everything as personal).

At least we do when we're being dillweeds. I've done it, for sure. A disagreeable statement that, in RL, might have been mitigated with a shrug and eyebrow-raise, comes across as totally hot-headed and unreasonable. And I've flamed back, too. But... but but but (this is the big but, and I like big buts, and I can not lie)... because of this tendency, signs and appeals to reason come across even more strongly, too. I've made some very good friends over the Web -- some of whom I've never met in person. And in almost every case, it is because their online voice is one that I want to hear more of.

Which is the same as in RL. We seek out those people whose presence is pleasant. And that's the case online, too.

Yes, there are more cranky, shallow statements on the Web. But there are also more chances for rare and beautiful flowers to spring up, in stark contrast with the dillweeds.

Friday, June 6, 2008

Another new poem: Where there's smoke

Where there's smoke

The thunder came back for a third time last night.
Explosive light spattered behind and beyond,
too far up the county to preview the drums
with a white, sharpened, spark bone
jammed into your eyes.

Sitting, not sleeping (for how could we sleep?),
as the fists of the clouds beat down on the tent
that night stretches over our streets and our eyes
now pointless as shelter
from violent light.

Each rumble is different, a fingerprint boom.
One feels like a train rolling over our graves.
While the next is a branch cracking under your foot
in a forest of black fingered
dry-as-dust wood.

The first wakes us up and the next pulls us out
of our beds with a fist of sound gripping the sheets.
By the third... we've relaxed, and got milk for the wait
while mountains of air
converse with the heat.

They talk to us, too, of course. Querulous bombs.
The volume is such that it's hard to make out
what the words are. But listening, closely, we hear:
"Don't fear us -- we're only
the gentlest of signs."

* * * * *

[with thanks to Shannon whose comment improved this]

Wednesday, June 4, 2008

New poem: Bad pun

Bad Pun

he defines "untied"
as "tied to nothing"

no hope of hope
no jump into a lake of cool
sweet summer peace
no rope swing leap
from earth to air to water

boys fly free

men tire
mourn
hang rubber
on a dying tree

Sunday, June 1, 2008

The perils of self-knowledge

First of all, let me explain the use of the word "perils" in the post title. It's an arcane word, and clearly out of conventional usage. We'll most often see it in modern language used in a slightly ironic way, often with alliteration: "The perils of puppies," "Perambulator Perils," etc.

I use it here very specifically, rather than its near synonym, "danger." Why? Because "The danger of self-knowledge" implies future harm. If something is dangerous, you can avoid it or not. "Peril" is more about the activity itself, already undertaken.

And self-knowledge is like that. You can go into almost any situation and come back with self-knowledge. But by then, it's too damned late. You can't say, for example, "There's a danger of increased self-knowledge at this year's Thanksgiving dinner with my wife's family... I'll stay away." You go, no thought of peril, and you learn that your tolerance for various kinds of bad behavior has lessened since you all got together 10 years ago. You come away with new self-knowledge. [Note: this is an erroneous, facile example -- I get along just swimmingly with my wife's family. Not that they read this blog, but if they do, they'll recognize the fiction; we never have Thanksgiving with them. So, ha.]

You just can't tell when a trip will turn into an adventure in discernment.

But sometimes you should.

About 12 years ago, my then-boss had our whole team take the Myers-Briggs Type Indicator (MBTI). It was so we could learn about ourselves and each other and work together better. Those of you who have read my stuff before know that I have a low tolerance for corporate hoo-hah. Being told, "You're going to learn about yourself," makes me feel like a small child being led by the hand. I already understand myself very well, thank you. And if you don't think so, then clearly *you* don't know me very well. ;-)

So... The MBTI. I'm an ENTP, if you care. Which, in general, made sense to me at the time. Read the linked description, if you know me, and (I think) it's not too far off.

But... when you take the test, you get a score from "0" (meaning dead center between two of the paired functions) to "100" (meaning extremely one way). I was very near zero for the first three classifications. Which, when it came to splitting the difference between Sensing v. iNtuition and Thinking v. Feeling, was fine by me. I like balance on those things, and would have been surprised to find a test that scored me much higher in either of those pairs.

But an "Extrovert" score of only 4? That's mad! I'm the f'in life of the party! I love public speaking and teaching. I have no fear of strangers and of approaching people I don't know for help, advice, directions, bottled water, sunscreen, etc. I like working on a team. All kinds of extroverty stuff. What's with the "4?" That's nearly balanced!

Well, come to find out, I'm a closet introvert. What the trainer we had (she was quite good) explained is that for the MBTI, the categories are less about activity than attitude. From the Wikipedia definition:
People with a preference for Extraversion draw energy from action: they tend to act, then reflect, then act further. If they are inactive, their level of energy and motivation tends to decline. Conversely, those whose preference is Introversion become less energized as they act: they prefer to reflect, then act, then reflect again. People with Introversion preferences need time out to reflect in order to rebuild energy. The Introvert's flow is directed inward toward concepts and ideas and the Extravert's is directed outward towards people and objects.

Gulp. New self-knowledge came flowing in as the trainer explained this. I am introverted, in that sense, at many times. I like to reflect before acting. Sometimes several times. Sometimes to the point where it seems like procrastination, even to me. And that one line -- need time out to reflect in order to rebuild energy. Yikes! Totally me, totally on the spot.

So. Hmmm.... Yes. I went in skeptical, and came out having learned something about myself that has, ever since, been helpful to some degree, yet painful, too. Because self-knowledge doesn't necessarily imply actively working on anything based on that knowledge. Now, when I get funky and low after having spent too much time in "extrovert mode," I understand that it's my introverted need to reflect and recharge. I know, now, that I'm not an extrovert with periods of waning energy; I'm an introvert with occasional bursts of energy.

The point of all this being that I just took another one of these kinds of assessments at work. And I went in with a bit of the same attitude: "Yeah, it might be fun and/or interesting. Yeah, I'm sure it'll tell me some stuff I already know. But it'll be no big deal."

Indiana Jones would've known: there's always snakes in that cave.

I'm still processing what I learned. It took a couple of years after the MBTI for me to get comfy with the results. We'll see about this latest batch of understanding and maybe, later, I'll share the results.

But maybe I won't. As they say about ENTPs, "...less interested in generating and following through with detailed plans than in generating ideas and possibilities."