Showing posts with label Facebook. Show all posts

Sunday, May 24, 2015

Abuse on Twitter

Back on February 7th, I was quoted in a Tech Times article entitled, Three Immediate Things Twitter Must Do To Curb Online Abuse And Win Back Users, written by Christian de Looper. I had been asked to comment on the problem of abusive tweets and the general issue of online abuse, and on what Twitter needs to do to address the problem, so let me begin by providing you with my full response to the query:

Twitter is not the only social media platform that has a problem with online abuse on the part of its users—YouTube, for example, is notorious for the problem. The fact that it has become a public relations concern for Twitter reflects the fact that it is second only to Facebook when it comes to communication and social interaction on the internet.

Twitter ought to be concerned, and can learn a lesson from the fall of MySpace, where a sense of "lawlessness" and "anything goes" contributed to the mass migration to Facebook, where controls over user behavior are more stringent, creating a very safe, you might say suburban, white bread kind of environment. First and foremost, to follow Facebook's lead, Twitter would have to become much more responsive to user complaints, essentially issuing warnings and shutting accounts down whenever any user is accused of abusive behavior. 

The problem they face is that Twitter has gained much from its wide open atmosphere, not so much an open frontier like the old MySpace, but more like an urban and urbane melting pot of myriad voices. Crack down too much on offensive tweets, and you lose the open atmosphere that makes Twitter so attractive, and you alienate users who view that sort of activity as censorship. Twitter will need to engage in a very tricky balancing act between the need to provide a safe and attractive environment and otherwise show users they are in control and care about abusive behavior, while maintaining their status as an open and democratic platform. The problem they face in the virtual world is exactly the same problem we face in open societies in real life, between security and freedom, between the needs of the community and the rights of the individual.


Longtime readers of Blog Time Passing may find my comparison of Twitter, Facebook, and the old MySpace familiar, expressed for example in my post of March 1, 2009, About Face(book).

Be that as it may, let's turn now to de Looper's recent article about Twitter, which begins with the following introduction:


A memo written by Twitter CEO Dick Costolo has been leaked, with Costolo admitting that Twitter "sucks at dealing with trolls."

The memo also resolved that Twitter needs to fight online abuse head-on. But what exactly can Twitter do to put an end to harassment on Twitter?

Now come the three things Twitter can do to curb online abuse, and the first one is where my quotes, taken out of the larger commentary I provided, come in:


1. Twitter Needs To Get Ruthless

Twitter has been very passive about its online abuse problem. The company has not made real efforts to combat the issue.

"To follow Facebook's lead, Twitter would have to become much more responsive to user complaints, essentially issuing warnings and shutting accounts down whenever any user is accused of abusive behavior," said Lance Strate, professor of communications and media studies at Fordham University, in an email with Tech Times.

Twitter, however, is in the middle of a balancing act. The service currently has a very "open" vibe about it, and cracking down too hard on users could take away from this.

"The problem they face in the virtual world is exactly the same problem we face in open societies in real life, between security and freedom, the needs of the community and the rights of the individual," continued Strate.

So, now, that's it for me, but let's get the other two bits of advice, shall we?


2. Twitter Needs to Be Public About Its Abuse Battle

Online abuse is not a problem unique to Twitter. YouTube is notorious for Internet trolls that comment on videos with the idea of wanting to take people down a peg. Often comments aren't even related to the videos being posted.

The problem for Twitter, however, is a little different because of the public relations issues involved. If Twitter is going to successfully put an end to abuse on its platform, it needs to be public about it. The issues surrounding public abuse on Twitter could end tomorrow, but that doesn't mean that users will think that it has. Twitter's reputation is important when it comes to gaining users. In fact, many users have left Twitter because of its online abuse problem, and it's likely that they won't come back until Twitter is able to successfully deal with the problem.

3. Twitter Needs To Do It In-House

In the past, Twitter has rather famously left dealing with online abuse to third parties, most notably Women, Action and the Media (WAM!).

The fact that Twitter has not dealt with the problem internally shows a lack of caring from the company. It seems, however, as though the company will finally be stepping up.

Twitter must deal with the problem in-house, and must hire people to deal with it. The tools for the company to be able to combat online abuse have been put in place -- there is a "blocked accounts" page where users can see who has been blocked from their feed, for example, and users can more easily report issues. Now Twitter needs to have employees that deal with those problems.


The conflict between safety and growth lies at the heart of Abraham Maslow's humanistic psychology of motivation, as famously represented by his Hierarchy of Needs diagram.


So, once again, it comes down to a fundamental conflict. Openness is democratic, and facilitates originality, creativity, and growth, but brings with it risk and danger, in this instance the possibility of abuse, as well as scams and spam, and various forms of cybercrime. Closing things off provides more of a margin of safety and security, but at the cost of the ferment that made the platform, group, or society interesting and vibrant in the first place. How to find the balance, to avoid being boring as well as to curb abuse, that is the question. 


Monday, October 27, 2014

Not Quite An AboutFace(book)

So, back at the beginning of this month, I was quoted in TechNewsWorld on the subject of Facebook's announcement of new guidelines regarding how they should go about manipulating and experimenting on their users. You may remember that I was previously quoted on the subject in the Christian Science Monitor back in July, as discussed in my previous post, Facebook Follies. This serves as something of a follow-up to that post, so maybe you want to go read that one first if you haven't already?

Either way, this post concerns a story by Erika Morphy published on October 3rd, entitled Being Facebook Means Never Having to Say You're Sorry. And the subtitle/blurb for it reads as follows:


Facebook may seem apologetic about the way it manipulated users in a study to gauge their emotional responses to certain types of posts, but it stopped short of actually making an apology. It also didn't quite say it wouldn't do it again. In fact, the guidelines Facebook has promised to follow in conducting future research are general, vague and, well, they're guidelines.

The article then begins in earnest with a brief summary of the most recent installment in Facebook's soap opera relationship with the public:


Facebook on Thursday announced it had developed a framework for conducting research on its 1.3 billion or so users.

Although Facebook so far has revealed only the general outlines, this framework clearly is a response to the onslaught of criticism the company received this summer, when it blithely reported the findings of a study about how News Feed content affected a user's mood.

In carrying out that research, Facebook withheld certain posts and promoted others to see how users would react.

And then on to how Facebook's actions were received by most of us:


When its methodology became public, reactions were immediate and harsh.
"While all businesses of this scale constantly experiment with the factors that influence their customers and users, there's something especially spooky about the idea of being experimented on in our digital lives," said Will McInnes, CMO of Brandwatch.
Facebook apparently was caught off guard by the vitriol. 

At this point, a new section begins with the heading, "A Look at the Framework":


The new framework includes giving researchers clearer guidelines and, in certain cases—such as when dealing with content that might be considered deeply personal—putting the project through an enhanced review process before the research begins.

Further review is required if the work involves collaboration with someone in the academic community.

Toward that end, Facebook has created a panel comprised of its most senior subject-area researchers, along with people from its engineering, research, legal, privacy and policy teams to review projects.

Facebook also will put its new engineers through a six-week training boot camp on privacy and research related issues.

Now that all of the basic information is established, it's time for some discussion, analysis, and evaluation, and this begins a new section under the heading of "Informed Consent?":


Facebook's new guidelines appear to be missing some fundamental ingredients, starting with actual policies on what will or won't be permissible.

For instance, it is unclear whether Facebook would repeat this summer's study under the new guidelines.

The company has not exactly said that it shouldn't have tinkered with users' News Feeds—just that it should have considered other, perhaps nonexperimental, ways to conduct its research. Facebook acknowledged that its study would have benefited from review by a more senior group of people. It also owned up to having failed to communicate the purpose of its research.

And now it's time to hear from the peanut gallery, aka moi:


Facebook has not promised to inform users the next time it conducts a research project, noted Lance Strate, professor of communications and media studies at Fordham University.

"Instead, Facebook is in effect saying, 'I'm sorry, I made a mistake, I won't do it again, I can change, I promise—just trust me,' while giving their users absolutely no concrete reason why they should be trusted," he told TechNewsWorld.

The irony is that Americans usually are very willing to participate in consumer research and divulge all sorts of personal, private information in focus groups, interviews, surveys and opinion polls—as long as they are asked whether they are willing to take part in the study first, Strate pointed out.

Indeed, asking permission to conduct such studies goes beyond privacy and business ethics to common courtesy and basic human decency, he said. "It's the sort of thing we teach children long before they enter kindergarten—to ask for permission, to say, 'Mother, may I' and 'please' and 'thank you.'"

Facebook's apparent sense of entitlement regarding the collection of user data and the violation of user privacy is one reason for the extraordinary amount of buzz surrounding the launch of Ello as an alternative social network, Strate added.

It is funny how there seems to be an overall decline in civility in American society that correlates with the rise of new media, one that first starts to appear in relation to the electronic culture of television. So perhaps it should not come as a surprise that Facebook would be a part of this trend, and that it would be reflected in the ways in which Facebook treats its users.

Thinking about the term user itself, it does suggest what Martin Buber would call an I-It relationship. And while the alternative, an I-You relationship, is based on a sense of mutual respect between participants, a mutual recognition of each other as entities, the I-It relationship is fundamentally asymmetrical. And calling us users suggests that we are the I in the relationship, and Facebook is the It. But Facebook's behavior indicates that they see the situation as quite the opposite: The users are the It, and Facebook is the I.

So maybe, rather than referring to us as Facebook users, we should be called the Facebook used?

Anyway, here's how the article ends:


That thought surely has occurred to Facebook's executive team, which might have been one factor behind the release of the guidelines, McInnes told TechNewsWorld.

"Facebook's greatest fear and business risk is a user exodus, and so it knows that the trust of users is crucial," he said. "This move represents Facebook stepping up and looking to close down the risk of such a backlash again."

So, how about we set up a pool to see when the Facebook exodus will actually occur? I'm thinking some time around 2019, but it could be as soon as 2016. I guess it all depends when the New Media Moses arrives to part the Web(2.0) Sea...

Anyway, just for the record, here's the original comment that my quotes were taken from:


The essential ethical principle for research involving human subjects is informed consent, and that is exactly what Facebook has failed to address, or apparently to accept. Instead, Facebook is in effect saying, "I'm sorry, I made a mistake, I won't do it again, I can change, I promise, just trust me," while giving their users absolutely no concrete reason why they should be trusted. The irony is that Americans by and large are quite willing to participate in consumer research and divulge all sorts of personal, private information in focus groups, interviews, surveys and opinion polls, as long as they are given a choice, as long as they are asked whether they are willing to take part in the study first. This goes beyond issues regarding privacy and business ethics to common courtesy and basic human decency. It's the sort of thing we teach children long before they enter kindergarten, to ask for permission, to say "mother may I" and please and thank you. Facebook's apparent sense of entitlement regarding the collection of user data and the violation of user privacy is undoubtedly the reason for the extraordinary amount of buzz surrounding the launch of Ello as an alternative social network, and for this reason we can expect to see continued erosion of Facebook's near monopoly over the social media sector.

Not much different from the article, Morphy did a good job with it I thought, but this way you can see my complete thought. And since then, aside from Ello, I've heard a little about another alternative, MeWe. And no doubt there are dozens of folks out there working on alternative social media platforms, hoping that theirs will be the next one to catch fire and take over as the internet's burning bush.

Monday, September 15, 2014

Which Media Ecologist Are You?

So, back in June I saw a post on the Accrinet company's blog, written by the company president Jeff Kline, and I should add that the blog is a wonderful resource on online communication, especially for nonprofits—I highly recommend it!

Anyway, so back in June, Jeff Kline posted on How To Make Online Quizzes For Your Nonprofit, which I somehow found intriguing, even though I rarely partake in those quizzes that seem to pop up almost daily on Facebook.

Now, I know what you're thinking, that as a professor I am more than a little familiar with the practice of quizzing and testing, and you're right, I am. But that sort of thing is far from my favorite part of the job, I hasten to add, and anyway, what we do in the classroom is a far cry from the kinds of quizzes that circulate on social media. I was going to call them "fun quizzes" but that would be presumptuous, and maybe suggestive of a subset of amusing (and amazing) ourselves to death, quizzing ourselves to death? I know sometimes it may feel like that over on Facebook.

Of course, as someone who teaches about new media, among other things, it never hurts to try things out for myself, and anyway no one, not even Neil Postman, said that you can't have a little fun once in a while. You just have to be aware of the distinction between entertainment and serious discourse.

And pertaining to testing, it is also important to remember that a test is only a test, and it may be a measurement of some sort, but we shouldn't confuse the measurement with the phenomenon being measured. An example Postman frequently pointed to was intelligence testing, which is supposed to measure some "thing" called intelligence which we're not even really sure exists, at least not as a singular phenomenon, let alone a quantifiable one. So, we don't exactly know what intelligence is, but we come up with tests for it anyway, and then say that the score you get on the test is your intelligence. That's exactly the kind of problem with the word "is" that Alfred Korzybski criticized long ago, the kind of problem that general semantics is meant to counter. And as Postman pointed out, saying that the score is your intelligence is an example of reification, of making real something that is only a measure, symbol, or representation. 

So, testing is inherently problematic, and intelligence testing especially so, since so much can be riding on it, from placement in schools to whether or not an individual is involuntarily institutionalized. And it has also been used to support bias and discrimination based on race, ethnicity, gender, and socioeconomic class. A great book on this subject was written by the well known scientist, Stephen Jay Gould, entitled The Mismeasure of Man. It was required reading in Postman's media ecology doctoral program.





So, anyway, maybe these online quizzes serve an important critical function in getting us to think a little bit about testing in general, and not take them too seriously?

Be that as it may, to get back to what I was writing about before I got off on this tangent, Kline's blog post directed me to qzzr, a site where you can create your own quiz. Like many such sites, LinkedIn for example, qzzr offers two tiers of service, one for free, and a premium option that lets you capture leads and otherwise drive social media traffic to your organization's website, as well as providing some tracking data. And having tried qzzr out, let me say right up front that I second Kline's recommendation, creating a quiz was relatively easy and enjoyable, and the quiz I made got a great response from folks I shared it with. And it works perfectly on mobile devices as well as on computers. Also, I had some email interactions with the folks at qzzr and I found them to be responsive, entirely helpful, and quite pleasant to deal with.

Wait a minute, you made a quiz?, you may be saying to yourself, or saying out loud if you have no filters. And yeah, well, I did. I just had this idea to make a quiz on (can you guess?), Which Media Ecologist Are You? So I did it. It took a little bit of work, but I really did get into creating the quiz, and the end product was even more gratifying than I imagined. And you can take the quiz for yourself over on the qzzr site by clicking on Which Media Ecologist Are You?, but another cool feature they provide is the ability to embed the quiz, as I've done below:






So, what do you think? And who did you get? And if you don't mind, please share the result on Facebook and Twitter.

My intent was to have some fun with it, and most of the responses I got were along those lines, of folks enjoying the quiz, and happy or intrigued by the results. My intent was to promote media ecology and the Media Ecology Association, and I think the quiz did the job. What I didn't expect, and found especially gratifying, was that some folks said they found the quiz to be thought-provoking, in getting quiz takers to be aware of and think about the wide range of subject matter covered within the field of media ecology. I didn't give too much thought to the quiz's educational value when I created it, but I think it's great that it can work in that way. Some folks said they are going to use it with their students, again, something I never considered, but actually I think it isn't a bad idea, and I'm going to ask my students to take it as well.

I should add that there were a few people who were critical of one aspect of the quiz or another. You can't please everybody, after all, and some folks had issues with the questions on politics and geography in particular, or just were displeased with the outcome. For this reason, I will not reveal all of the possible results of the quiz (there are 26 media ecologists you might end up with), because I know some people will question why I included one or another, or why I didn't include someone they would have included. To which I can only say, hey, I did the quiz my way, based on my understanding of our field and on what I thought would work best in this format, and maybe you shouldn't take the quiz so seriously after all, hmmm?

I'll note that in response to one critical comment over on the Media Ecology Association's discussion list, I responded with the following:

while you may consider the questions and results "dubious" in some way, let me assure you that the quiz was prepared utilizing the most rigorous of scientific methodologies to render results that are entirely unassailable. While you may have found some questions to have more than one possible answer that you would want to choose, and some where none of the answers strike you as acceptable, understand that the formulation of the wording of each item was extremely precise, while the accompanying images were deliberately chosen to evoke subtle and subliminal responses, so that even when you were uncertain or unhappy with an answer, the choice you made provided data derived from your unconscious that aided in assessing you intellectually, professionally, and in regard to your personality profile. The quiz underwent extensive pretesting and refinement in order to ensure the highest degree of validity and reliability. In short, as all good media ecologists know, there is no arguing with science, and if the results say that you are Tony Schwartz, then that is in fact who you are.

I hope you get the fact that my response was a bit of satire, bringing us back to my earlier point about reification and testing. Oh, and as for who I got when I took the quiz, it was Neil Postman. And you?

Thursday, July 31, 2014

Facebook Follies

My last post was on The Future of Facebook? so I might as well follow up with another post on Mark Zuckerberg's creation, what some might say is the Frankenstein monster of social networks. 





And speaking of electric shocks to the system, but in this case not so much it's alive! as he might be dead, there is an eerie resonance between Facebook's experimental manipulation of users' emotions in 2012 and Stanley Milgram's famous obedience to authority experiments that were conducted over half a century earlier. Click on the link to read the summary of the study over on Wikipedia.

About a decade after the experiments, Milgram published a book entitled Obedience to Authority, which went over the experiments, and contextualized them in light of the excuse used by many Nazis after World War Two that they were only following orders—Milgram significantly referenced Hannah Arendt's Eichmann in Jerusalem—and controversies concerning our own conduct of the Vietnam War, notably the My Lai Massacre. Milgram's book was required reading in Neil Postman's media ecology doctoral program.





I've included links to the Milgram book here, as well as Arendt's famous report on the Eichmann trial, where she coined the phrase, the banality of evil, for your convenience. And we certainly can make an equation of it:

Obedience to Authority = Banality of Evil

Anyway, back when I was a doctoral student, Postman also showed us the documentary about Milgram's experiments, which is a little hokey, maybe laughable for being so, but also quite disturbing. It occurred to me that I should check on YouTube to see if it was there, and lo and behold:







For Milgram, the moral of the story was how willing so many of us are to obey authority even while disagreeing with what we are being told to do: most of the subjects who went all the way with the electric shocks were not sadists, and in fact were quite disturbed by what was happening. When asked about it after the experiment, they said they wanted to stop, but the experimenter wouldn't let them, this despite the fact that no force was used, no coercion or persuasion, beyond the experimenter's insistence that the subject continue to give the victim electric shocks. Milgram referred to this as agentic shift: the subjects ceded their independence as agents, abandoning their sense of responsibility for what was happening, and their freedom of choice.

Put another way, the relationship, in this case an authority relationship, was more powerful than, and overwhelmed, the content of the situation, the fact that an innocent person was being harmed. This is consistent with our understanding of relational communication as established by Gregory Bateson and Paul Watzlawick, and parallels McLuhan's famous dictum, the medium is the message.

The results would no doubt be different today than they were back in the day when respect for authority and a desire for conformity were quite powerful. It was the kind of culture we associate with the fifties, but it extended into the early sixties, at least. The point that Milgram missed, however, was that he himself had conducted a cruel set of experiments, inflicting psychological damage on some of his subjects, all in the name of a more abstract higher authority: Science. There are many forms of obedience, after all.

In response to these experiments, and others like them, but especially these, rules were put in place governing the use of human subjects. An experiment like this probably could not be conducted today, at least not at a university, where any study involving human subjects has to be reviewed and approved by an Institutional Review Board. So I really have to wonder how in the world my undergraduate alma mater, Cornell University, approved the participation of two of its faculty in an experiment where the emotional states of Facebook users were manipulated without their knowledge, without even their awareness that they were subjects in an experiment?

The study was carried out in 2012, and the results were recently published in the Proceedings of the National Academy of Sciences of the United States of America. The article is authored by Adam D. I. Kramer of Facebook, co-authored by Cornell faculty Jamie E. Guillory and Jeffrey T. Hancock, and entitled, Experimental evidence of massive-scale emotional contagion through social networks. I'm not sure whether that link will work if you're not at a university that subscribes to the journal, so here is the abstract:


Emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. Emotional contagion is well established in laboratory experiments, with people transferring positive and negative emotions to others. Data from a large real-world social network, collected over a 20-y period suggests that longer-lasting moods (e.g., depression, happiness) can be transferred through networks [Fowler JH, Christakis NA (2008) BMJ 337:a2338], although the results are controversial. In an experiment with people who use Facebook, we test whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed. When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks. This work also suggests that, in contrast to prevailing assumptions, in-person interaction and nonverbal cues are not strictly necessary for emotional contagion, and that the observation of others’ positive experiences constitutes a positive experience for people.


There also is a shorter paragraph summarizing the significance of the study:


We show, via a massive (N = 689,003) experiment on Facebook, that emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. We provide experimental evidence that emotional contagion occurs without direct interaction between people (exposure to a friend expressing an emotion is sufficient), and in the complete absence of nonverbal cues.

Now, if you can access the article, you can see in the comments section a number of individuals expressing concerns about the ethics, and even the legality of the study, as well as some defense of it, stating that it was a minimal risk study that did not require informed consent. There are other criticisms as well, concerning the methodology and reasoning used to interpret the data, but let's hold that aside for now. Instead, let's go to an article published by the Christian Science Monitor on July 3rd, where I was one of the new media scholars asked to comment on the revelations regarding Facebook's questionable research.

The article, written by Mark Guarino, is entitled Facebook experiment on users: An ethical breach or business as usual? and it starts with the following blurb: "Many Internet companies collect user data. But privacy experts and Internet users question whether Facebook's 2012 experiment marked a breach of corporate ethics and public trust." And here's how it begins:


It's not yet clear if Facebook broke any laws when it manipulated the news feed content of nearly 700,000 users without their explicit consent to test whether social networks can produce "emotional contagion."

(It turns out, to a modest extent, they can.)

But the uproar after release of the results of this 2012 study is raising new questions on how pervasive such practices are–and the extent to which they mark a breach of corporate ethics.


While it is generally known that Internet companies such as Facebook, Google, Microsoft, Twitter, and Yahoo, claim the right to collect, store, access, and study data on their users, the Facebook experiment appears to be unique.

Unique is a bit of an understatement. Facebook users do willingly provide the social network with a great deal of personal information, while at the same time making that information accessible to others, to Facebook friends of course, and often at least some of it being made open to public view, as well as to any third party whose applications users may be willing to approve. That's understood. It is also no secret that Facebook makes money from advertising, and delivers advertising targeted to users, based on the personal information we all provide to them. And neither does anyone try to disguise the fact that Facebook can track when people click on a link, that advertisers and marketing professionals can know how many people clicked on their advertisement, and of that group, how many actually made a purchase. We may or may not approve of all or some of this, and some of us may not be aware of the extent to which this all works, but none of it is kept hidden from users. As they used to say on the X-Files, the truth is out there.






But this is a horse of another color, and by this I mean both Facebook and the experiment, as the article proceeds to make clear:


Not only is the company the largest social network in the world, the kind of information it accumulates is highly personal, including user preferences spanning politics, culture, sport, sexuality, as well as location, schooling, employment, medical, marriage, and dating history. The social network algorithms are designed to track user behavior in real time – what they click and when.

The Information Commissioner's Office in the United Kingdom announced the launch of an investigation to determine whether Facebook broke data protection laws governed by the European Union. The Federal Trade Commission in the US has not yet said whether it is launching a similar probe or not. On Thursday, the Electronic Privacy Information Center, a civil liberties advocacy group in Washington, filed a formal complaint with the FTC, urging action.

The experiment, conducted over a week in January 2012, targeted 689,003 users who were not notified that their news feed content was being manipulated to assess their moods in real time. The study determined that an increase in positive content led to users posting more positive status updates; an increase in negative content led to more negative posts.
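As a technical aside, the mood assessment described above was reportedly done by counting emotion words in users' posts (the researchers used the LIWC word lists). Here is a rough sketch of that kind of word-count scoring, with made-up word lists standing in for the actual dictionaries:

```python
# Illustrative word-count sentiment scoring, in the spirit of the
# LIWC-style method reported for the 2012 Facebook study.
# The word sets below are stand-ins, not the real LIWC dictionaries.

POSITIVE = {"happy", "great", "love", "wonderful", "excited"}
NEGATIVE = {"sad", "angry", "terrible", "hate", "lonely"}

def mood_score(post: str) -> int:
    """Positive minus negative word count; > 0 reads as a positive post."""
    words = post.lower().split()
    pos = sum(w.strip(".,!?") in POSITIVE for w in words)
    neg = sum(w.strip(".,!?") in NEGATIVE for w in words)
    return pos - neg

posts = ["What a wonderful day, I love it!", "Feeling sad and lonely today."]
print([mood_score(p) for p in posts])  # [2, -2]
```

The study's finding, in these terms, was that shifting the mix of high- and low-scoring items in a user's feed shifted the scores of that user's own subsequent posts.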

So now that the facts of the matter have been established, it's time to raise the question of ethical conduct, or lack thereof:


What alarmed many Internet activists wasn't the use of metadata for a massive study, but rather the manipulation of data to produce a reaction among users, without their knowledge or consent, which they see as a violation of corporate ethics.


Just to interrupt again for a moment, why specifically activists? Doesn't this pretty much marginalize the concern? We could instead say that this alarmed citizens' groups, which might sound a little better, but still. Doesn't this alarm Facebook users in general? Or as we used to refer to them, citizens? Just asking, mind you... Okay, back to the article now:

“It’s one thing for a company to conduct experiments to test how well a product works, but Facebook experiments are testing loneliness and family connections, and all sorts of things that are not really directed toward providing their users a better experience,” says James Grimmelmann, a law professor and director of the Intellectual Property Program at the University of Maryland Institute for Advanced Computer Studies in College Park. “These are the kinds of things that never felt part of the bargain until it was called to their attention. It doesn’t match the ethical trade we felt we had with Facebook,” Professor Grimmelmann says. Many academics studying tech and online analytics worry about the ethics involving mass data collection. A September 2013 survey by Revolution Analytics, a commercial software provider in Palo Alto, Calif., found that 80 percent of data scientists believe in the need for an ethical framework governing how big data is collected.


So now it's academics and activists, that's a little better, but academics are not exactly part of the mainstream, or part of what Nixon used to call the Silent Majority. Oh well, let's hear what Facebook had to say in response to all this:


Facebook leaders expressed remorse, but they stopped short of apologizing for the experiment, which reports show reflect just a small portion of the studies that the company regularly conducts on its nearly 1 billion users. On Wednesday, Facebook COO Sheryl Sandberg told The Wall Street Journal the study was merely “poorly communicated.... And for that communication, we apologize. We never meant to upset you.”

In response to its critics, Facebook notes that policy agreements with users say that user data can be used for research. However, the term “research” was added in May 2012, four months after the study took place. Others say the complexities of the tests require stricter oversight, now that it is known the company has been conducting hundreds of similar experiments since 2007 without explicitly notifying the public.


Oh yeah, don't forget to read the fine print, right, and be sure to carefully review every update to the policy agreement that comes out. How about a different point of view, one that reflects a little bit of common sense?


“Burying a clause about research in the terms of use is not in any way informed consent,” says Jenny Stromer-Galley, an associate professor who studies social media at the School of Information Studies at Syracuse University in New York.

“The issue is that people don’t read terms of use documents, and ethical principles mandate that people involved in basic research must be informed of their rights as a participant,” she adds.


Some say Facebook could have avoided the controversy simply if it had provided more transparency and allowed its users to opt out.

Transparency would be a start, but if they had been open and clear about the experiment, basically they would not have been able to carry it out. It would have been like Stanley Milgram telling his subjects, I only want to see if you'll do what this guy says, even though no one's forcing you, and by the way, the other guy isn't really getting any electric shocks. No doubt, had he done that, the Obedience to Authority experiments would have been just as effective and elucidating (please note I am being sarcastic here, obviously those experiments would have been useless and pointless).

Well now, we come to the end of the article, and guess who gets the last word?


Lance Strate, professor of communications and media studies at Fordham University in New York City, says that the revelations, which are among many such privacy violations for Facebook, suggest social networks have outlived their purpose because they no longer adhere to the Internet values of “openness, honesty, transparency, and free exchange.”

“With this move, Facebook has violated the essential rules of online culture, and now begins to appear as an outsider much like the mass media industries. It is almost impossible to recover from the breaking of such taboos, and the loss of faith on the part of users. Zuckerberg started out as one of us, but now we find that he is one of them,” Professor Strate says.



What do you think? Too melodramatic, too extreme, too much? I felt the situation called for a strong comment, a strong condemnation, and if you think that quote is harsh, here is the entirety of the comment I provided:


With the revelations concerning Facebook's latest social experiment, the social network is now dead. By dead, I mean that it has squandered its most precious resource, its credibility and the trust of its users, and is now an antisocial network. Facebook has previously run into repeated privacy concerns regarding its users, but most users have significantly reduced expectations for their privacy in an online environment. What individuals do not expect is to be manipulated, and in fact when attempts at psychological manipulation are unveiled, they often have a boomerang effect, resulting in individuals doing the opposite of what they are expected to do, thereby rejecting the manipulation, and the manipulators.

There is nothing new about conducting behavioral research on audiences on the part of mass media organizations and the advertising industry, but such research usually involves subjects who are willing participants in answering surveys, being a part of focus groups, and consenting to be subjects in psychological experiments. Some of the research was necessary simply because mass media had no way to directly measure the size and characteristics of their audience, hence for example the famous Nielsen ratings. But social media made much of that sort of research unnecessary, as it was possible to track exactly how many people signed in to a particular site, clicked on a given advertisement, and subsequently purchased a particular product. It is well known that Facebook delivers ads tailored to different users based on the information they provide, willingly, in their profiles, and that adding various applications, on Facebook and elsewhere, allows other commercial interests to gain access to that same information, information otherwise freely available online anyway. The point is that all this gathering of data is done with the users' consent, but Facebook's manipulation of the users' news streams was carried out without anyone's permission, or awareness. This is where they crossed the line.
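The kind of tracking described above, from sign-in to click to purchase, comes down to simple funnel arithmetic. A sketch with made-up numbers, just to make the measurement concrete:

```python
# Illustrative ad-funnel arithmetic for the kind of tracking described above.
# All figures are invented for the example.
impressions = 100_000   # times the ad was shown
clicks = 2_400          # users who clicked the ad
purchases = 120         # clickers who went on to buy

ctr = clicks / impressions        # click-through rate
conversion = purchases / clicks   # conversion rate among clickers
print(f"CTR: {ctr:.2%}, conversion: {conversion:.2%}")
# CTR: 2.40%, conversion: 5.00%
```

Measurement of this sort is consented-to and passive; the experiment, by contrast, actively altered what users saw.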

Whatever the corporate ethics may be, and some may go so far as to say that the phrase "corporate ethics" is an oxymoron, there is an ethos that has long dominated the internet that emphasizes the values of openness, honesty, transparency, and free exchange. With this move, Facebook has violated the essential rules of online culture, and now begins to appear as an outsider much like the mass media industries. It is almost impossible to recover from the breaking of such taboos, and the loss of faith on the part of users. Zuckerberg started out as one of us, but now we find that he is one of them.

The result may not be the immediate demise of our leading social network, but it is the beginning of the end for Facebook as the dominant force in the online environment. Facebook may hang on as a kind of online registry for folks to find each other, but its other functions are being usurped by a variety of new services, and even that basic function of connecting with others is no longer a monopoly for Facebook. Younger users will be more inclined to leave Facebook altogether than older users, and it may be that many will continue to maintain profiles on the service, use it for messaging perhaps, but will stop checking their news streams.

In the short run, Google has much to gain from Facebook's loss, as users may turn to the Google+ service as a less objectionable alternative to Facebook. Indeed, Google+ has in effect been waiting in the wings, ready to usurp Facebook's place as the dominant social networking site, waiting in other words for Facebook to stumble and fall. And the decline of Facebook seems all but inevitable, based on past experience of the decline and fall of so many other companies that once dominated the new media landscape, from IBM to AOL to Microsoft to Yahoo to MySpace. Ultimately, though, the door is open for newer services to provide alternatives to Facebook based on entirely new models of connection, perhaps in a more complex mode involving many specialized sub-networks. By the end of the decade, some new service that is only getting started just now may well be the dominant force in online communications.


So, agree or disagree? Either way, you should know that this is not an isolated incident. Just recently, on July 29th, TechNewsWorld ran an article on a similar incident involving an online dating site: OkCupid's Confessed Hijinks Get Thumbs-Down. In it, my colleague and friend Paul Levinson had this to say:


"I think use of customers and users without their consent in experiments, for any reason, is unethical and outrageous," said Paul Levinson, professor of communications and media studies at Fordham University.

"People have a right to know if they're participating in an experiment—if their information is being deliberately controlled and manipulated to test a theory, if their activity is monitored beyond just general statistics of how often they log on, etc. The surest way to get people angry and wanting to have nothing to do with social media is to make them unwitting guinea pigs," he told TechNewsWorld.

Paul and I often differ in our views on new media, and technology in general, but on this we are very much in agreement. To borrow a phrase from Jorge Luis Borges, the internet may be a garden of forking paths, but it is not a maze (although it has much to do with our amazing ourselves to death), and we are not lab rats trying to run through it as part of some social media experiment. A warning to all would-be web-experimenters out there: watch out, because these lab rats bite. In a big way, like megabite and gigabite. Do not count on us being obedient to your online authority. Experiments have a funny way of backfiring on their experimenters. Just ask Dr. Frankenstein.

Tuesday, July 29, 2014

The Future of Facebook?

So back in March, I received a request from a Columbia University journalism student, Anne Bompart, to comment on reports of the unexaggerated death of Facebook coming about in the near future, or at least a drastic decline in the social networking site's popularity. The article was published, ironically enough in one of the last print issues of the Ivy League school's student paper, the Columbia Spectator, under the title of Interfacing the Future, with the subtitle, Is the social media giant's monopoly in danger? It was dated March 27th, and here's how the article began:


With the advent of Facebook, other social networking websites have become obsolete. Myspace? So yesterday. Even new ones that have attempted to launch during Facebook’s reign, like Google Plus, don’t measure up. It seems apparent that social networking websites have a limited lifespan—they’re born, are popular for a few years, and then die.

Is Facebook destined to suffer the same fate? A Forbes article by Gene Marks in 2013 discussed Facebook’s diminishing appeal to teenagers. And a January 2014 study by Princeton researchers John Cannarella and Joshua A. Spechler concurred: “Facebook will … lose 80 percent of its peak user base between 2015 and 2017.”

Yet, Facebook’s recent purchases of WhatsApp and Instagram, along with its acquisition of several other popular apps, suggest otherwise.

So what do I think of all this, you may want to know, or even if you don't, here goes:


Lance Strate, professor of communication and media studies at Fordham University, says, “I believe that Facebook would like to be the main platform through which most people access the Internet.”

Clearly, Facebook is hinting at its conquest in monopolizing the social networking industry by manipulating its users to share everything through a single platform. Strate adds, “Facebook’s purchases of Instagram, WhatsApp, etc. is not just about an ambition to rule the online world, so to speak, but also motivated by fear of losing the primary position they hold at this moment in time.”

Should Facebook become the “home page” of the Internet, it would definitely guarantee its success—not only through popularity, but also monetarily. As Strate describes, “By providing a universal interface, Facebook would be guaranteed a massive and reliable amount of advertising revenue.”

Okay, how about some facts to go with my opinion?

According to a 2014 New York Times article, Facebook’s total revenue in its last quarter was $2.59 billion. Mobile advertising generated more than half, while three-quarters of its 757 million users logged on using mobile devices. Facebook’s acquisitions of popular apps, then, are its attempts at garnering interest among teens, whose usage generates the most revenue.

Also interesting is Facebook’s addition of 50 new gender options in February. In addition to “male” and “female,” users can now choose a “custom” option that allows them to choose identifications like “transgender,” “cisgender,” “intersex,” and more.

By increasing the identities it encompasses, Facebook hopes to increase and satisfy its target audience. Its newest addition establishes itself as a progressive venture—its message is that it is the only social networking site relevant to millennials.

Facebook’s appeal to youth may be a response to the 2013 Forbes article’s prediction of its eventual decline. The article claims that the popularity of websites like Tumblr shows that people are moving to networks where they can use pseudonyms and avatars instead of real names and faces.

However, despite Facebook’s lack of anonymity, it has striven to allow self-expression as much as possible. In 2013, it tweaked privacy settings so that users ages 13 to 17 would be able to share posts publicly. Its addition of a mini-newsfeed provided another medium for users to constantly keep up with what their friends are doing.

And back to me for some thoughts on what this all means:

As Strate explains, “Apart from trying to remain cool and trendy, adding gender options also provides Facebook with additional data that can be used for advertising and marketing purposes.” By giving its users another way to identify themselves, Facebook can better provide users with ads that cater to their interests—resulting in a more personalized experience.

And Facebook appears to be taking preemptive steps to ensure its success. It constantly updates its aesthetic, the most notable being its introduction of the timeline in 2011. This constant revolutionizing creates the idea that Facebook is evolving alongside its users. Strate says, “[Mark] Zuckerberg and his colleagues are all too well aware of the volatility of a dynamic, rapidly evolving media environment. They are desperately trying to avoid having some other service do to them what they did to Myspace.”

And at this point, Bompart poses the big question:

But is such an environment sustainable? Online social networking is a relatively new phenomenon; the first online information-sharing board launched in 1978. Social media has developed rapidly and gained much popularity since, and there appears to be a viable future for the industry.

After all, social networking did exist before the Internet. As Laudone says, “The global reach of social networking sites, like Facebook and Twitter, is enabling connections across borders. Social networking sites—and technology more generally—will help facilitate face-to-face social networking.”

But social media’s volatility could mean that Facebook’s eventual demise is inevitable. Despite its attempts to advertise its modernity, a day might come when it cannot keep up with trends.

And let's hear from another scholar now:

Thomas DiPrete, professor of sociology at Columbia, adds, “Facebook’s fear is that some new company might emerge that offers features that lure enough users away from their platform to create a bandwagon effect. So they not only try to improve their own system, but they look to buy out potential competitors.”

More likely, Facebook might one day be unable to purchase most popular apps. DiPrete concurs, “Eventually some new company will avoid being bought out when small, and will emerge as an attractive alternative to Facebook.”

So, what's the conclusion?

Even so, the number of Facebook users increased by 22 percent from 2012 to 2013. Today, there are 1.3 billion monthly active Facebook users, of whom 48 percent log in on any given day. Every 20 minutes on Facebook, there are 1 million links shared, 2 million friends requested, and 3 million messages sent.

With numbers like that, it’s easy to see why Facebook is the social media industry’s behemoth. And much to the dismay of procrastinators pulling all-nighters in Butler, Facebook is not going away anytime soon.

And now, to round things off, contextualize my quotes, and clarify the process by which quotes are extracted from comments, here are my original set of answers to four questions posed to me about the future of Facebook:

1. I believe that Facebook would like to be the main platform through which most people access the internet. By providing a universal interface, they would be guaranteed a massive and reliable amount of advertising revenue. But their purchases of Instagram, WhatsApp, etc., are not just about an ambition to rule the online world, so to speak, but also motivated by fear of losing the primary position they hold at this moment in time. Zuckerberg and his colleagues are all too well aware of the history of new media, and how companies such as AOL, Yahoo, IBM, and Microsoft all appeared to dominate the industry at one time, only to be left behind due to innovation and the volatility of a dynamic, rapidly evolving media environment. In particular, they are aware of how quickly they passed by MySpace, once the major force in social media, and they are desperately trying to stay on top of the changes going on, and avoid having some other service do to them what they did to MySpace.

2. Adding new options to the category of gender, and elsewhere, is an attempt to appeal to users, and particularly necessary because Facebook offers a very structured environment, which means that it constantly has to be updated or become outmoded. But apart from trying to remain cool and trendy, adding options also provides them with additional data that can be used for advertising and marketing purposes.

3. I agree that it will one day be obsolete. As I mentioned, this is a volatile industry, and we've seen it happen to others, such as AOL, Yahoo, and MySpace. In particular, younger individuals want to find a space of their own, separate from their parents, and the more that Facebook becomes known as an advertising medium, and as one that violates privacy, the more that individuals will sign off from the service. But the main thing is that innovation will eventually result in something else that will take Facebook's place, maybe a service that better utilizes mobile technology.

4. Social networking existed before we became technologically interconnected, simply in the form of personal connections, and technology has only intensified the fact. We are a social species, so social networking as a phenomenon will continue via whatever technologies are made available to us. McLuhan famously spoke of a global village, and new media are fully realizing that concept, for good and for ill, in increasingly more intense ways. There is no question that the degrees of separation within the world's population are decreasing dramatically through our technologies, and that the degrees of connection are increasing every day.


And there you have it. I can't say when we all will start forgetting about Facebook, but I can say that there are quite a few folks out there who would say that it can't happen soon enough. Maybe we need to start a pool and take bets on when it'll happen? I call dibs on 2018...