Facebook and their psychology experiment

The AV Club reported today that Facebook (or rather its researchers) has recently submitted a paper to a psychology journal exploring the impact of social media on mood. It's quite an interesting finding, if perhaps an expected one: people get happier or sadder depending on whether the mood of their news feed is light or heavy.

But the ethics are a bit problematic. Facebook conducted this experiment by skewing users' feeds to be either more positive or more negative and then seeing whether a user's subsequent posts were more negative or more positive as a result. In other words, they manipulated users' emotions, which is somewhat concerning. It's entirely legal under the terms of the agreement you accept with Facebook when you open an account, but is it ethical? After a conversation with my girlfriend, I rather think not: informed consent is the key thing here, and the users in this experiment did not give informed consent, whatever agreements they might have clicked 'OK' on.

What really interests me, however, is the question of utility. There's an argument that if Facebook can make its users happier by manipulating their feed, then it should. But this exposes the question of whether Facebook is an information service or an entertainment service: is the primary goal to keep you updated about your friends and what they're doing, or is it to entertain users and make them happy? This research makes me think Facebook is more interested in the latter, whereas Twitter seems to focus more on the former[1]. That's potentially an insight into Facebook's goals and what it's aiming to do in the future.

The thing I’m most curious about isn’t the utility but rather the way in which the experiment was conducted. I don’t get why Facebook used the methodology they did. They manipulated the feed to give more positive and more negative messages and measured the change in mood of the resulting status updates: that’ll get you the results you want, sure. But why not the following methodology?

  1. Choose a user.
  2. Let the existing algorithms automatically generate that user’s feed.
  3. Monitor those algorithms’ output over a month and determine the average happiness of their feed and their statuses.
  4. Keep monitoring, now looking for a period during which the feed deviates from that average happiness by some amount.
  5. When such a deviation occurs, measure the change in the happiness of the user's own status posts.

This would surely allow researchers to explore the same question without needing to manipulate users' emotions at all, neatly removing any ethical qualms whilst still allowing interesting research to be conducted. The level of happiness might be hard to quantify, but they've already quantified it in the study, so I don't see that as a huge stumbling block. Is there a flaw I'm not seeing here, and if so, what is it?
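To make that concrete, here is a minimal sketch of what steps 2 to 5 might look like in code. It assumes the hard part has already been done elsewhere: that each user has a daily, date-indexed series of feed happiness and of status happiness (the study has already quantified happiness, so that seems a fair assumption). The function names, the 30-day baseline and the 2-sigma threshold are purely illustrative, not anything Facebook actually uses.

```python
# Illustrative sketch only: assumes daily, date-indexed pandas Series of
# sentiment scores already exist for one user's feed and their own statuses.
import pandas as pd

def find_deviation_days(feed_happiness: pd.Series, n_sigma: float = 2.0) -> pd.DatetimeIndex:
    """Steps 3-4: compare each day's feed happiness with a rolling monthly
    baseline and return the days that deviate by more than n_sigma sigma."""
    baseline = feed_happiness.rolling("30D").mean()
    spread = feed_happiness.rolling("30D").std()
    deviates = (feed_happiness - baseline).abs() > n_sigma * spread
    return feed_happiness.index[deviates]

def change_in_status_happiness(status_happiness: pd.Series, day: pd.Timestamp) -> float:
    """Step 5: the user's own status happiness in the week after a deviation,
    minus the week before."""
    before = status_happiness[day - pd.Timedelta("7D"):day].mean()
    after = status_happiness[day:day + pd.Timedelta("7D")].mean()
    return after - before
```

Run that over enough users and enough naturally occurring deviations and you have the same correlation the study measured, without anyone's feed being touched.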

Edited on 29 June 2014: Thanks to the commenters for their intriguing discussion of this idea! Several people have pointed out that this is not the equivalent of an intervention study, both in the comments on this piece and on Twitter by @tedsvo and @masnick (who wrote a follow-up to this post discussing my idea).

I’m aware that this is the case, but my background is in solar-terrestrial physics and so none of my research can be based on intervention studies. As a result, the scientists in my field have to develop techniques to explore causality based on observed correlations. What I’m essentially proposing, above, is a superposed epoch analysis in which users’ happiness is expressed for a number of days on either side of observed impacts on newsfeed positivity. It isn’t as definitive as Facebook’s method, but it is a more ethical way to conduct research which would hopefully shed light on the questions at hand.

I would be fascinated to see whether the same result could be reached using a non-intervention-based method, and I wonder whether the advent of large datasets from companies like Facebook and Google could be an opportunity for physicists to utilise their data analysis skills over large timescales.


  1. This might also be why businesses are starting to find their outreach on Twitter is much more effective than on Facebook: Facebook is focusing on users to the detriment of business posts appearing in feeds, whereas Twitter isn't doing quite the same thing. (Both companies allow sponsored posts, so the monetisation strategy is equal between the two.)
  • fredtilley

    Almost everything Facebook does provokes an emotional response from its users; I guess whether people have a problem with it in this case could come down to whether they think the end justifies the means.

    To your informed consent point: with some psychological experiments there always has to be a measure of cloak and dagger, to ensure you don't let participants know what you're looking for.

    The method you put forward only works if you see a deviation from the background noise, and any such deviation would be uncontrollable and may skew the results (I imagine most students will have a large number of positive statuses in their feed at this time of year, and their own statuses may be equally positive, but not really as a result of their Facebook feed).

  • Sure, but informed consent would rather imply being informed you were being experimented on or that you had a chance of being experimented on. That’s what the method I propose fixes.

    You’re correct that populations might have simultaneous upwellings of positive or negative emotion that aren’t driven by Facebook, but that could be relatively well eliminated by looking at the populations that User X is a member of and seeing whether they had a similar explosion of emotion. If all people in University Z have a deviation from the background on the same day, you can say that’s probably an external driver and ignore it. Basically you want to look for periods in which any observed deviation in happiness is spread across demographics (age/gender/workplace/education/fans of Y) and then ignore any in which the deviation is focused on a single group. That wouldn’t be hard to do.

  • Alas, randomised intervention is absolutely the *only* way to prove a causal relationship. You can look for fluctuations that correlate with external factors, digging down and down to control for confounding factors, but you can never state definitively that you’ve dealt with them *all*.

    Say a group of friends is adversely affected by some external event; there may be no evidence of that factor in Facebook's data. Conversely, you can deal with any and every confounding factor by simply randomly assigning subjects to groups. There's no way round this, which is why Randomised Controlled Trials (RCTs) are the gold standard of evidence-based medicine.

    So, Facebook did this because they wanted to definitively demonstrate a causal relationship.

    But for blindingly obvious ethical reasons, they should have stuck to an observational approach, and put up with the limitations.

  • LutherZBlissett

    There are two main ethical requirements for social psychology research that involves deception, and I’d argue that putting your thumb on the algo counts as deception. One is informed consent; the other is a post-study debriefing that explains what has been done, allows subjects to raise concerns (and withdraw consent if they choose) and addresses any effects of the study. There’s no indication in the paper that any kind of debriefing happened here.

  • djbtak

    Actually, RCTs only prove causal relationships in certain medical fields. In many parts of science, particularly those with more reflexive methodological orientations, the terms “all” and “definitive” will always be provisional, as the history of science shows that paradigms become outmoded once their imperceptible limitations are later revealed. That aside, I applaud the ethical concern.

  • The main point I was making is that observational studies can never demonstrate a causal relationship. I’m afraid it’s not clear to me from your comment whether you’re arguing against that? (Alas I have no idea what you mean by ‘reflexive methodological orientations’, and a quick bit of googling has left me none the wiser…!)

    I wrote my comment in a hurry and admit I muddied the matter by bringing up RCTs in that way, but they do illustrate the point that only intervention studies can demonstrate causality.

    Perhaps if I’d skipped the 2nd paragraph entirely my point would have been clearer. The way facebook conducted the study was necessary to demonstrate a causal relationship, but to perform an experiment like that without informed consent is extremely dubious!

  • tedsvo

    Nick is completely right. You could say that FB should have made users aware of the possibility of being randomized into two groups, but to say that it would have been easy to demonstrate this without an explicit intervention is, to be frank, just false.

  • It’s definitely more rigorous to do an intervention study. However, I work in solar-terrestrial physics, in which it is rather hard to intervene. As such, we tend to turn to things like superposed epoch analyses and data binned by solar wind conditions. These allow us insights into how the solar wind-magnetosphere system works. Are our insights invalidated by the inability to perform an intervention study? No. Our results are sound, but they come with the caveat that we have been unable to perform such a study, and insights in the field are discussed within that context.

    As such, you’re right – Facebook’s method is definitely more scientifically rigorous than my suggestion. But, again, Nick is right – “they should have stuck to an observational approach, and put up with the limitations”. My proposal gives an ethical way to gain insights into this system, with certain caveats. If those caveats are unacceptable, then asking for volunteers would have been another way to do it, but then surely you’re introducing bias in your dataset? That seems worse than an purely observational approach to me, but like I say, I’m a physicist and not a psychologist.
