Facebook researchers recently apologized for intentionally manipulating what people saw in their newsfeeds as they conducted a study on user interaction. The researchers filtered the feeds to contain fewer positive emotional words. They wanted to see if that would lead users to post their own content with fewer positive words.
Interactive Computing Professor Amy Bruckman says the Facebook study raises hard questions. If Facebook had run only the happy case (filtering people’s newsfeeds to have fewer negative words), she wonders, would people still have concerns about the study and privacy? She says it’s not that users lost control in this one instance; it’s that they never had it in the first place. Facebook’s newsfeed is curated by an algorithm that is not transparent.
But these issues are not new—Bruckman has published on this topic since the late 1990s, and did an empirical study on how Internet users react to being studied more than ten years ago.
How do people on social media feel about being studied by researchers? To shed some light on this topic, Jim Hudson and I conducted a study in which we entered chatrooms and told people we were studying them. What we were really studying, though, was whether they kicked us out of the chatroom.
This is not something we did lightly—we got IRB approval for our study, and it went to full board review. For those of you keeping score, this was a full waiver of consent for a deceptive study (we said we were studying language online). The study was approved because our IRB felt there was minimal risk to participants and the study could not practicably be carried out without the waiver.
In the chat groups we entered, we waited one minute and then took one of four actions: posting no message (control), posting a message saying we were recording the chat, posting the same message with a way to opt out of recording, or posting the same message with a way to opt in. So what happened? In short, almost no one opted in or out, and in the conditions where we posted a message, we were kicked out of the chatroom within five minutes 63.3% of the time. In the control condition, where we entered but posted nothing, we were kicked out 29% of the time.
Intriguingly, for every additional 13 people in a chatroom, our chance of getting kicked out went down by half. Our hunch is that the smaller the group, the more intimate it feels, and the more our presence felt like an intrusion. That is, three friends discussing their favorite sports team are a quite different social context than 40 strangers playing a chatroom-based trivia game.
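To get a feel for what the group-size effect implies, here is a minimal sketch. It assumes (as an illustration only, not from the published paper) that the reported halving follows a simple exponential decay, and that the 63.3% ejection rate applies at the smallest groups we might enter; the `base_size` anchor of 3 is a hypothetical choice for the sketch.

```python
# Toy model of the reported group-size effect: the chance of being
# kicked out halves for every 13 additional people in the chatroom.
# The 63.3% figure and the 13-person increment come from the article;
# the exponential-decay form and the base group size are assumptions.

def kick_out_chance(group_size, baseline=0.633, base_size=3, halving=13):
    """Estimated probability of being ejected after posting a study message."""
    return baseline * 0.5 ** ((group_size - base_size) / halving)

for n in (3, 16, 29, 42):
    print(f"{n:2d} people: {kick_out_chance(n):.1%}")
```

Under these assumptions, a three-person group ejects the researcher about 63% of the time, while a sixteen-person group does so only about half as often.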
I believe this study was ethical. What we learned from it—how people really feel about being studied, in that context—outweighs the annoyance we caused. This is a judgment call, and something we considered carefully. The irony of annoying people in order to show how annoying our actions were is not lost on me. Would I do similar studies again? Yes, if and only if the benefit outweighed the harm.
This research was published in 2004. Our chatroom study makes the points that:
- It is possible to do ethical research on Internet users without their consent.
- It is possible to do ethical research that annoys Internet users.
- The potential benefit of the study needs to outweigh the harm.
- Annoying people does some harm, and most users are annoyed by being studied without their consent.
I deliberately annoyed Internet users without their consent in the name of science, and I would do it again.
See more information on Bruckman’s experiment in online social interactions here.
For more information, or to schedule an interview, please contact: