In a recent post on the Media Ecology Association's official listserv, Steve Weinstock (who is also the father of an autistic child) brought to our attention research on a new technology being developed under the designation XPod. The idea is that it would be an iPod-like device equipped with sensing technology to detect what kind of activity the user is engaged in and, more importantly, what kind of mood the user is in, along with programming that would choose the music it plays based on the user's emotions and behavior. In other words, it would be a "smart" technology that would be "trained" by the user. Steve supplied the following links for further information:
http://www.livescience.com/scienceoffiction/060121_xpod.html
http://ebiquity.umbc.edu/_file_directory_/papers/234.pdf
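Just to make the concept concrete, here is a rough sketch in Python of the kind of feedback loop being described. To be clear, the sensor stub, the genre labels, and the learning rule below are purely my own illustrative assumptions, not anything taken from the actual XPod research:

```python
# A toy version of the XPod idea: infer the user's activity and mood,
# pick music to suit, and learn from listen-through vs. skip feedback.
import random
from collections import defaultdict

ACTIVITIES = ["resting", "walking", "exercising"]
MOODS = ["calm", "agitated", "sad", "happy"]
GENRES = ["ambient", "pop", "rock", "classical"]

def sense_context():
    # Stand-in for the sensing layer; the real device would presumably
    # draw on accelerometers, heart rate, skin conductance, and the like.
    return random.choice(ACTIVITIES), random.choice(MOODS)

class XPod:
    def __init__(self):
        # One learned preference score per (activity, mood, genre) triple.
        self.scores = defaultdict(float)

    def pick_genre(self, activity, mood):
        # Play the best-scoring genre for the current context, with random
        # tie-breaking so the untrained device starts out exploratory.
        return max(GENRES, key=lambda g: (self.scores[(activity, mood, g)],
                                          random.random()))

    def learn(self, activity, mood, genre, liked, rate=0.1):
        # The "training by the user": nudge the score toward +1 when a
        # track is listened through, toward -1 when it is skipped.
        key = (activity, mood, genre)
        target = 1.0 if liked else -1.0
        self.scores[key] += rate * (target - self.scores[key])

pod = XPod()
for _ in range(500):
    activity, mood = sense_context()
    genre = pod.pick_genre(activity, mood)
    liked = (mood == "calm" and genre == "ambient")  # a stand-in user taste
    pod.learn(activity, mood, genre, liked)

print(pod.pick_genre("resting", "calm"))  # usually "ambient" after training
```

Note that even in this crude sketch, the device is not merely recording preferences; its choices shape what the user hears next, which is the two-way training I take up below.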
In an insightful response that raised some important questions, Davis Foulger supplied another link:
http://www.livescience.com/scienceoffiction/051012_ipod.html
To echo some of the concerns that have been raised, it is perhaps typical of inventors and technologists that they have not considered some essential aspects of this new technology, such as its consequences. This is an example of what Arthur Koestler wrote about in The Sleepwalkers: A History of Man's Changing Vision of the Universe, the point being that discoverers typically do not understand the ramifications of their discoveries.
So, the sleepwalkers working on the XPod do not consider the effects this type of technology will have: as much as it will be trained by the individual, it will be training that individual in turn, as a kind of biofeedback mechanism. As John Culkin put it (and McLuhan echoed), we shape our tools and thereafter they shape us. With technologies such as these, however, the process of shaping and being shaped is exactly what the technology is all about.
The XPod reminds me in many ways of Muzak, which is an example of programming the acoustic environment for specific purposes, such as encouraging shopping at a store, facilitating turnover at a fast food restaurant, calming nerves in a waiting room, or increasing productivity in the workplace. When I was a teenager growing up in Kew Gardens (a neighborhood in the New York City borough of Queens) back in the early 1970s, a friend of mine named Robert Wilson (who lived in the adjoining neighborhood of Forest Hills) took a few of us (I remember Marty Friedman was there) to the offices of Muzak, Inc. in Manhattan, where his father worked. His dad gave us all promotional record albums of Muzak, and the back of the album cover went into a detailed description of how Muzak was based on scientific research on human behavior and how music could be used to modify that behavior. (This connected to our fascination with psychology, which was very big in the 70s. Marty Friedman went on to major in psych, pursue a career in the field, and has recently been teaching psychology on the college level; I myself took some courses in the subject and, as an undergraduate, did some volunteer work doing peer crisis counseling along Rogerian lines.)
Anyway, while Muzak tries to influence groups of people, an effort that by its very nature can only yield limited success, the XPod tailors its programming to the individual. But to what end? Even if there is no specific intent to modify mood and behavior along the lines of Muzak, one cannot not do so (see my recent post on Paul Watzlawick).
If the XPod is not programmed to change the user's mood, but rather to reflect it, then it will reinforce whatever the individual is already feeling. On an individual level, this opens up the possibility that a depressed person might have his or her depression reinforced by the XPod. If that same person were to subsequently commit suicide, it would no doubt lead to the accusation that the suicide was a consequence of the technology. Lawsuits would be all but inevitable under the circumstances, as a product that programs moods would leave the programmers vulnerable to being held accountable for whatever followed in the wake of the programming. And even reinforcing pre-existing moods is a form of influence. By the same token, if someone is impulsive and engaged in active behavior, reinforcing that behavior may lead to destructive and/or self-destructive acts. I'm not saying that the programmers would necessarily be at fault in such instances, but there would be no way to entirely avoid responsibility and liability for what they have introduced into the social system.
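To see the dynamic in miniature, consider a toy model in which mood is a single value between -1 (despondent) and +1 (elated), the device plays music whose emotional valence follows some policy, and mood is pulled toward the music while naturally recovering toward neutral. All of the numbers and the update rule here are invented purely to illustrate the feedback loop, not drawn from any study:

```python
def step(mood, music_valence, pull=0.10, recovery=0.05):
    # Mood recovers slightly toward neutral each step, but is pulled in
    # the direction of the music's emotional valence.
    mood = mood - recovery * mood + pull * music_valence
    return max(-1.0, min(1.0, mood))

def simulate(policy, start=-0.3, steps=50):
    mood = start
    for _ in range(steps):
        mood = step(mood, policy(mood))
    return mood

reflect = lambda mood: mood          # mirror the current mood
neutral = lambda mood: 0.0           # play emotionally neutral music
counter = lambda mood: -0.5 * mood   # lean gently the other way

for name, policy in [("reflect", reflect), ("neutral", neutral),
                     ("counter", counter)]:
    print(name, round(simulate(policy), 3))
```

In this little model, the mood-mirroring policy drives a mild initial gloom of -0.3 all the way down to the floor of -1.0, while the neutral and countering policies let the mood recover toward zero. That, in miniature, is the worry.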
On a larger scale, reflecting back what people already feel can be taken to be a further extension of what Christopher Lasch has termed The Culture of Narcissism in his insightful analysis of contemporary American culture. The XPod ends up as a reflecting pool, just making us more self-absorbed than ever.
On the other hand, the technology may be used to encode a kind of human relations agenda of keeping people happy, well-adjusted, and satisfied with life, thereby serving the function of what Jacques Ellul calls integrative propaganda in his classic book Propaganda (a book that should be required reading for media ecologists).
This all relates to a theme I dealt with in Part II of my book Echoes and Reflections: On Media Ecology as a Field of Study (and in previous articles that it was based on): that contemporary society is characterized by the presence of both narcissism and echolalia, by a tension between these two extremes. What we need is a healthy balance and a new synthesis of these opposing tendencies. To that end, I suspect that the XPod will not be helpful.