Dhiraj Murthy is a Reader in Sociology at Goldsmiths, University of London. His current research explores social media, virtual organizations, and big data quantitative analysis. His work on social networking technologies in virtual breeding grounds was funded by the U.S. National Science Foundation, Office of Cyberinfrastructure. Dhiraj has also written a book about Twitter, the first on the subject, published by Polity Press. His work on innovative digital research methods has been cited widely. For further information, visit his website.
@dhirajmurthy
The Facebook psychology ‘experiment’, which manipulated the emotional content of nearly 700,000 users’ news feeds, provides evidence
that corporations need the kind of ethical review procedures that universities have been developing for some years around social media research.
In a university context, Institutional Review Boards (IRBs)
are responsible for monitoring the ethics of any research
conducted at the university. The US government’s Department of Health and Human Services publishes
very detailed guidance for human subjects
research. Section 2(a) of their IRB guidelines states that “for the IRB to approve research […] criteria include, among other things […] risks, potential
benefits, informed consent,
and safeguards for human subjects”.
Most IRBs take this mission quite seriously
and err on the side of caution
as people’s welfare is at stake.
The reason for this is simply to protect human subjects. Indeed, IRB reviews also evaluate whether particularly vulnerable populations (e.g. minors, people with mental/physical disabilities, women who are pregnant, and various other groups depending on context) would be additionally harmed by the research being conducted. Animal research protocols
follow a similar logic. Before university researchers conduct social research, the ethical implications of the research are broadly evaluated against these and other criteria.
If any human subject is participating in a social experiment or any other social research, most studies either require signed informed consent or a similar protocol that informs participants of any risks associated with the research and allows them to opt out if they do not agree with the risks or any other parameters of the research.
Therefore, I was tremendously saddened
to read the
Proceedings
of the National Academy of Sciences (PNAS) paper co-authored by Facebook data scientist Adam D. I. Kramer, Jamie E. Guillory of the University of California, San Francisco, and Jeffrey T.
Hancock of Cornell
University titled ‘Experimental evidence of massive-scale emotional contagion through
social networks’. The authors of this study argue that agreement to Facebook’s ‘Data Use Policy’
constitutes informed consent
(p. 8789). The paper uses a Big Data (or, in their words, ‘massive’) perspective to evaluate the emotional behavior of 689,003 Facebook users.
Specifically, the authors designed an experiment with control and experimental groups in which they manipulated the emotional sentiment of a selection of Facebook users’ feeds, omitting either positive or negative text content. Their conclusion was that the presence of positive emotion
in feed content
encouraged the user to post more positive
emotional content. They also found
that the presence of negative emotion in feed content encouraged
the production of negative
content (hence the disease metaphor
of contagion). In my opinion, whatever scientific value these findings may hold is outweighed by gross ethical negligence.
This experiment should never have gone ahead. Why? Because manipulating people’s emotional behavior
ALWAYS involves risks. Or, as Walden succinctly put it, ‘Facebook intentionally made thousands
upon thousands of people sad.’
In some cases, participants may consider emotional interventions justifiable. But it is potential research subjects who should (via informed consent) make that decision.
Without informed consent,
a researcher is playing God. And the consequences are steep. In the case of the Facebook
experiment, hundreds of thousands of users were subjected to negative content
in their feeds. We do not know whether the experimental group included suicidal users or individuals with severe depression, eating disorders, or histories of self-harm. We will never know what harm this experiment did, which could range from low-level malaise to suicide.
Some users had a higher percentage of positive/negative content omitted (between 10% and 90%, according to Kramer and his co-authors). Importantly, some users had up to 90% of positive content stripped out of their feeds, which is significant. And users whose feeds were stripped of negative content could argue they were subjected to social engineering.
To conduct a psychological experiment that is properly
scientific, ethics
needs to be central. And that is truly not the case here. Facebook and its academic co-authors have conducted bad science and given the field of data science a bad name. PNAS is a respected journal
and anyone submitting should have complied
with accepted ethical
guidelines regardless of the fact that Facebook is not
an academic institution.
Additionally, two of the authors are at academic
institutions and, as such, have professional ethical standards to adhere to. In the case of the lead author from Facebook, the company’s
Data Use Policy has been used as a shockingly poor proxy for a full human subjects
review with informed
consent. What is particularly upsetting
is that this was an experiment that probably did real harm. Some have argued that at least Facebook published
their experiment while other companies
are ultra-secretive. But rather than earning Facebook praise, such experiments cast light on the major ethical issues behind corporate research on our online data and on our need to bring these debates into the public sphere.