Monday, 16 October 2017

Social media research, ‘personal ethics’, and the Ethics Ecosystem

Gabrielle Samuel: Research Fellow, Lancaster University / Research Associate, King’s College London. Research interests: research ethics; ethical and social issues surrounding innovative biotechnologies; social media ethics; qualitative research.
Gemma Derrick: Lecturer, Department of Educational Research, Lancaster University. Research interests: research evaluation; qualitative research; group/committee decision-making.
Ethics review may seem overly bureaucratic to some, but in this blog we argue that a more collaborative process between researchers and committees, rather than a gatekeeper ‘tick-box’ role, may help with navigating the ‘ethics ecosystem’ when using new research tools such as social media (SM) data.
The ethics ecosystem is an inter-related membership of academic bodies that, when fully functional, acts to reinforce a high level of ethical behaviour among researchers and to guard against academic misconduct. Specifically, the ethics ecosystem comprises all the individuals (researchers), organisations (research institutions/research ethics committees (RECs)) and external bodies (publishing houses, funding bodies, professional associations) that promote ethically responsible research behaviour in the academy. Ordinarily, the academy’s ethics ecosystem works well because of a shared understanding of what ethically responsible research behaviour is. However, the system breaks down when new ideas, methods or approaches are introduced to its members, and each player interprets and enforces these ideals of ethical behaviour differently. This forces each member to re-examine concepts previously thought to be set in ethical stone. Such is the case with SM research.
Currently this system is failing SM research
In our research we have spoken to members at all levels of the ethics ecosystem: researchers using SM data, research ethics committee members, universities, funding bodies, publishing houses and journal editors. We found that members held inconsistent understandings of the ethics applicable to the use of SM in research. There were different interpretations of the established ethical notions of consent (should we ask for it? shouldn’t we? when and how should we?) and privacy (how, or even whether, SM users’ data should be protected, and to what degree); some members viewed SM data as ‘fair game’, while others were more cautious; and only some shouldered responsibility for protecting SM users’ perceived privacy. What was lacking was an overarching understanding, reinforced by a larger governance body, to fuel a wider, community-led understanding of ethical conduct (and misconduct) in SM research.
At the research level of the ecosystem, researchers were monitoring their own decisions about how best to act ethically. However, when researchers are left to their own devices, this over-reliance on subjective monitoring of behaviour risks the development of a form of “personal ethics”, different for each researcher within the ecosystem:
Interviewer: Are there any guidelines that you follow in your own research?
Researcher: It’s my guidelines. Everybody has their own definition of ethics…. 
This became dangerous when the acceptability of these decisions depended on how strongly researchers justified them, rather than on the checks and balances afforded by a wider, community-led ethical understanding of SM research:
You’ve got to develop the sense of what’s right…then put that across and make your case
The differing interpretations of personal ethics converged at the institutional level of the ecosystem, when researchers had to, or chose to, submit their research proposal to a REC for consideration. Committee members, as actors at this level of the ecosystem, spoke about their lack of experience in reviewing this type of research, simply because so few proposals are submitted (owing to researchers’ differing interpretations of whether ethical review was required). As such, REC judgements of ethical conduct relied heavily on researchers’ justifications of their ethical decision-making within the application:
We…sometimes make different decisions even for projects that look pretty similar. It’s how they build up their case doing that particular project 
The same held true for other members of the ethics ecosystem, such as journal editors and, by extension, peer reviewers.
To summarise, what does this wide disagreement around SM research mean for the ethics ecosystem? After all, there is nothing wrong with ethical norms being driven by researchers’ differing subjective justifications of their personal ethics, a.k.a. ethical pluralism. However, for SM research and similar new research tools, reliance on researchers’ justifications of ethical behaviour can be dangerous: it risks leaving important ethical decisions in limbo, and allows ethically problematic research to fall between the cracks.
What is needed is more governance within the ethics ecosystem. Only then can enough checks and balances exist to ensure best practice, promote a shared understanding of SM research ethics, and provide the audits necessary to protect against scientific misconduct.
One step towards this is to require researchers to submit their work for ethics review, providing an extra layer of scrutiny. More importantly, doing so gives REC members the tacit knowledge necessary to act as this larger arbiter of ethical conduct for SM research.

Friday, 29 September 2017

Westminster Student Blog Series

We will be posting a series of short vlogs produced by University of Westminster postgraduate students, all based on their research on social media. We will be posting one a week for the next few weeks, so keep your eyes peeled!

The Internet as Playground and Factory

Author: Remigijus Marciulis



In recent years, the labour theory of value has been a field of intense interest and debate, particularly the use of Marxist concepts in the digital context. There is clear evidence that giant online companies like Facebook and Google have accumulated enormous amounts of capital by selling their users’ data to advertisers. The phenomenon of a ‘Social Factory’ is discussed by different scholars: the value we create goes beyond actual factory walls to include the online sphere. “The sociality is industrialised and industry is socialised” (Jarrett, 2016: 28). Trebor Scholz refers to the Internet as both a ‘playground’ and a ‘factory’. His argument is based on the fact that being online is part of having fun. Does uploading a video to YouTube really count as digital exploitation? On the other hand, Christian Fuchs argues that there is a direct connection between time spent on the Internet and capitalist exploitation, free labour and surplus-value. Kylie Jarrett tackles the subject from the metaphorical angle of the Digital Housewife, applying Marxist concepts and feminist approaches to investigate the digital world. According to her, digital or immaterial labour is profoundly exploited by capitalism.
The interview with Dr Alessandro Gandini explores the subjects of digital labour and ‘playbour’, and the use and appropriateness of Marxist concepts. To sum up, the subject of digital labour and exploitation is complex and diverse. It requires a more profound study that distinguishes between “real” digital work and time spent online for leisure. Scholars agree that communicative action and activism are the key instruments in fighting digital capitalist inequalities.

Friday, 22 September 2017

Call for speakers: Answering social science questions with social media data

Thursday 8th March 2018, The Wellcome Collection, London, NW1 2BE

After several successful events, we’re pleased to say that the NSMNSS network (http://nsmnss.blogspot.co.uk/) and Social Research Association (www.the-sra.org.uk) are again teaming up to deliver a one-day conference on ‘Answering social science questions with social media data’.

As social media research matures as a discipline, and methodological and ethical concerns are addressed, focus is increasingly shifting to the role that it can and should play in the social sciences – what are the questions it can help us to answer?

We are looking for speakers who have completed a piece of social research using social media data to present their findings and discuss how this has made a difference:
  • How has it impacted policy, best practice, or understanding?
  • How has it answered a question that would have been unfeasible using conventional research methods alone?
This research could be in any substantive area, from health or crime to politics or travel, as long as it is ‘social’ research. It can also include any type of analysis – quantitative or qualitative analysis, big data or small – as long as it involves some form of data collection via a social media platform. We want to encourage a range of different methods and topics to help demonstrate the diversity of the methodology and the role it can play.

Are you interested in presenting?
If you have completed a piece of research using social media research methods, or have any suggestions of whom we should contact, please complete the submissions template and send it to nsmnss@natcen.ac.uk by Monday 27th November. Let us know the name and topic of the research study, which social media platform was used, a brief description of the methodology, and the findings and impact of the study.

This event is being set up by the SRA and NSMNSS network. We want to keep the event accessible and ticket prices reasonable, but we need to cover the costs of venue hire and refreshments, so we cannot pay presenters. However, there will be one free place per presentation, and we will be able to cover reasonable ‘within UK’ public transport travel expenses.


The #NSMNSS & SRA teams







Friday, 15 September 2017

Westminster Student Blog Series

We will be posting a series of short vlogs produced by University of Westminster postgraduate students, all based on their research on social media. We will be posting one a week for the next few weeks, so keep your eyes peeled!

Digital Review: Public Sphere and the Exclusion of Women

Author: Karolina Kramplova


The creation of new forms of digital social media during the first decade of the 21st century has completely changed the way in which many people communicate and share information. When we think about social media as a space where the public can discuss current affairs and politics, it is interesting to consider it through the theory of the public sphere. Ever since Habermas established this concept, it has been criticised by scholars such as Nancy Fraser, who argues that the theory was built on a number of exclusions and discriminations. I focused on the exclusion of women from political life. Andy Ruddock, author of the book Youth and Media, also talks about the lack of representation of women in subculture studies, and how social media is not about democratisation and public debate but rather about people picking what they like. The activist Hannah Knight acknowledges the discrimination women face to this day. When it comes to the public sphere and social media, even though Knight argues there is a space for public debates, she says people are not listening to everyone. Social media empowers movements such as the Women’s March, but does it contribute towards democratisation, or do we just want to believe it does? Both the scholar Ruddock and the activist Knight have persuaded me that the concept of the public sphere is no longer relevant when it comes to social media.

Friday, 1 September 2017

Westminster Student Blog Series

We will be posting a series of short vlogs produced by University of Westminster postgraduate students, all based on their research on social media. We will be posting one a week for the next few weeks, so keep your eyes peeled!

Journalism, the Filter Bubble and the Public Sphere

Author: Mick Kelly



“The influence of social media platforms and technology companies is having a greater effect on American journalism than even the shift from print to digital.”
(Bell and Owen, 2017)

This is the conclusion of a study released in March 2017 by researchers from Columbia University’s Graduate School of Journalism who investigated a journalism industry reacting to controversies about fake news and algorithmic filter bubbles that occurred at the time of the US presidential election. The report noted the following key points:

• Technology companies have become media publishers
• Low-quality content that is shareable and of scale is viewed as more valuable by social media platforms than high-quality, time-intensive journalism
• Platforms choose algorithms over human editors to filter content, but the ‘nuances of journalism require editorial judgment, so platforms will need to reconsider their approach’.

The report states that news might currently reach a bigger audience than ever before via social media platforms such as Facebook, but readers have no way of knowing how data influences the stories they read or how ‘their online behaviour is being manipulated’. (Bell and Owen, 2017)

This video assignment reveals that the debate has existed since 2011, when Eli Pariser wrote The Filter Bubble, which explained how data profiling leads to personalisation and the algorithmic filtering of news stories. The theme of this video is the impact of this automated process on journalism within the public sphere; it includes an interview with Jim Grice, Head of News and Current Affairs at London Live.


REFERENCE
Bell, E. and Owen, T. (2017) The Platform Press: How Silicon Valley reengineered journalism. The Tow Centre for Digital Journalism at Columbia University’s Graduate School of Journalism. Available from:
https://www.cjr.org/tow_center_reports/platform-press-how-silicon-valley-reengineered-journalism.php [Accessed 30 March 2017]

Friday, 25 August 2017

Westminster Student Blog Series

We will be posting a series of short vlogs produced by University of Westminster postgraduate students, all based on their research on social media. We will be posting one a week for the next few weeks, so keep your eyes peeled!


The Act of Sharing on Social Media

Author: Erxiao Wang



Social media has been the defining theme of the Information Age we live in. It is of considerable importance to understand, utilise, and engage wisely with social media platforms within our contemporary networked digital environment. This video is a theoretically inspired social media artefact; it introduces some of the key arguments in University of Westminster Professor Graham Meikle’s book “Social Media: Communication, Sharing and Visibility”. The book covers theories such as media convergence, the business model of online sharing, mediated online visibility, and the exploitable data that is subject to online surveillance. Professor Meikle also analyses the commercial Internet of Web 2.0, which enabled user-generated content and brought together public and personal communication, and the always-on mobile connectivity that allows for social change. To bring these key theories into our everyday practice and conversations about social media, I had a chat with a good friend of mine, Junchi Deng, an accordion player at the Royal Academy of Music here in London who also posts and vlogs regularly. We talked about the act of sharing on social media and the idea of promoting oneself on social networking sites, especially for a musician like himself. Our discussion centres on the everyday use of social media and our individual observations of these platforms.

Friday, 4 August 2017

Terrorism and Social Media Conference Part Two

This is the second part of a two-part series on the #TASM conference. You can read the first part here.

The 27th and 28th of June saw some of the world’s leading experts in counter-terrorism and 145 delegates from 15 countries descend on Swansea University’s Bay Campus for the Cyberterrorism Project’s Terrorism and Social Media conference (#TASMConf). Over the two days, 59 speakers presented their research into terrorists’ use of social media and responses to this phenomenon. The keynote speakers were Sir John Scarlett (former head of MI6), Max Hill QC (the UK’s Independent Reviewer of Terrorism Legislation), Dr Erin Marie Saltman (Facebook’s Policy Manager for counter-terrorism and counter-extremism in Europe, the Middle East and Africa), Professor Philip Bobbitt, Professor Maura Conway and Professor Bruce Hoffman. The conference spanned a diverse range of disciplines, including law, criminology, psychology, security studies and linguistics. Amy-Louise Watkin and Joe Whittaker take us through what was discussed (blog originally posted here).

Both Dr Weeda Mehran, and Amy-Louise Watkin and Sean Looney, presented on children in terrorist organisations and their portrayal through videos and images. Dr Mehran analysed eight videos and found that children help to create a spectacle, as they generate memorability, novelty, visibility and competitiveness, and display high levels of confidence while undertaking executions. Watkin and Looney, on the other hand, found in their analysis of images in online jihadist magazines that there are notable differences between IS and AQ in their use of children: IS focuses on displaying brutality through images of child soldiers, while AQ tries to create shame and guilt in its Western followers through images of children as victims of Western-backed warfare. They concluded that these differences need to be taken into account when creating counter-messages and foreign policy.

Joe Whittaker presented his research on online radicalisation. He began with a literature review of the field, concluding that the academic consensus is that the Internet is a facilitator, rather than a driver, of radicalisation. He then offered five grounds for doubting this consensus: the lack of empirical data; how old the data is relative to the growth of the Internet; the few dissenting voices in the field; the changing online threat since 2014; and the wealth of information that can be learned from other academic fields (such as Internet studies and psychology). He then offered three case studies of individuals radicalised in the previous three years to test whether the academic consensus still holds, finding that although it does in two cases, there may be good reason to believe that social media could drastically change the nature of some individuals’ radicalisation.

On the topic of corporate social responsibility in counter-terrorism, Chelsea Daymon and Sergei Boeke discussed different aspects of private entities engaging in policing extremist content on the Internet. Daymon drew upon the different projects and initiatives conducted by industry leaders, such as Google’s Jigsaw projects and the shared database between Microsoft, Twitter, Facebook, and YouTube. She warned, however, against the excessive use of predictive technology for countering violent extremism, suggesting that it could raise practical and ethical problems in the future. Drawing on Lawrence Lessig’s models, Boeke outlined four distinct categories of regulation that can be applied to the Internet (legal, architectural, market-based, and altering social norms), before offering suggestions for how these can be used in the context of countering terrorism.

The final panel related to creating counter-narratives, and included Dr Paul Fitzpatrick, who discussed different models of radicalisation and how they relate to his work as Prevent Coordinator at Cardiff Metropolitan University. He began by critiquing a number of prevalent models, including Moghaddam’s staircase and all multi-stage, sequential models, observing that, having seen over one hundred cases first-hand, no-one had followed the stages in a linear fashion. He also highlighted the particular vulnerabilities of students coming to university, who have their traditional modes of thinking deliberately broken down and are susceptible to many forms of extreme thinking. Sarah Carthy, who presented a meta-analysis of counter-narratives, followed Dr Fitzpatrick. She observed that specific narratives are particularly powerful because they are simple, present a singular version of a story, and are rational (but not necessarily reasonable). Importantly, Carthy noted that despite the common assumption that counter-narratives can do little harm – the worst that can happen is that they are ignored – some were shown to have a detrimental effect on the target audience, raising important ethical considerations. The final member of the counter-narrative panel was Dr Haroro Ingram, who presented his strategic framework for countering terrorist propaganda. Ingram’s framework, which draws on findings from the field of behavioural economics, aims to disrupt the “linkages” between extremist groups’ “system of meaning”. Dr Ingram observed that the majority of IS propaganda leverages automatic, heuristic-based thinking, and that encouraging more deliberative thinking when constructing a counter-narrative could yield positive results.

The last day of the conference saw keynote Max Hill QC argue that counter-narratives have a strong role to play in discrediting extremist narratives, and he spoke of his experiences visiting British Muslims affected by the recent UK terrorist attacks. He told of the powerful counter-narratives that these British Muslims hold and argued for their importance in countering extremist propaganda both online and offline. Hill also argued against criminalising tech companies who ‘don’t do enough’, asking how we measure ‘enough’. His presentation was followed by Dr Erin Marie Saltman, who discussed Facebook’s advancing efforts in countering terrorism and extremism. She argued that both automated techniques and human intervention are required to tackle this and to minimise errors on a site that sees visits from 1.28 billion people daily. Saltman gave an overview of Facebook’s Violent Extremism Policies and spoke of the progress the organisation has made in identifying actors’ attempts to make new accounts. Overall, Saltman made it crystal clear that Facebook is strongly dedicated to eradicating all forms of terrorism and violent extremism from its platform.

With the wealth of knowledge shared by the academics, practitioners and private sector companies that attended TASM, and the standard of the research proposals that followed from the post-TASM research sandpit, it is clear that TASM was a success. The research presented made it very clear that online terrorism is a threat that affects society as a whole, and that solutions will need to come from multiple directions, multiple disciplines, and multiple collaborations.
You can find Max Hill QC’s TASM speech in full here and follow us on Twitter @CTP_Swansea to find out when we will be releasing videos of TASM presentations.


Tuesday, 1 August 2017

Terrorism and Social Media Conference Part One

This is the first part of a two-part series on the #TASM conference. Please look out for the next part of this series later this week!

The 27th and 28th of June saw some of the world’s leading experts in counter-terrorism and 145 delegates from 15 countries descend on Swansea University’s Bay Campus for the Cyberterrorism Project’s Terrorism and Social Media conference (#TASMConf). Over the two days, 59 speakers presented their research into terrorists’ use of social media and responses to this phenomenon. The keynote speakers were Sir John Scarlett (former head of MI6), Max Hill QC (the UK’s Independent Reviewer of Terrorism Legislation), Dr Erin Marie Saltman (Facebook’s Policy Manager for counter-terrorism and counter-extremism in Europe, the Middle East and Africa), Professor Philip Bobbitt, Professor Maura Conway and Professor Bruce Hoffman. The conference spanned a diverse range of disciplines, including law, criminology, psychology, security studies and linguistics. Amy-Louise Watkin and Joe Whittaker take us through what was discussed (blog originally posted here).

Proceedings kicked off with keynotes from Professor Bruce Hoffman and Professor Maura Conway. Professor Hoffman discussed the threat from the Islamic State (IS) and al-Qaeda (AQ). Among the issues he covered was the quiet regrouping of AQ, arguing that its presence in Syria should be seen as just as dangerous as, and even more pernicious than, IS. He concluded that the Internet is one of the main reasons why IS has been so successful, predicting that as communication technologies continue to evolve, so will terrorists’ use of social media and the nature of terrorism itself. Professor Conway followed with a presentation on the key challenges in researching online extremism and terrorism. She focused mainly on the importance of widening the groups we research (not just IS!), the platforms we research (not just Twitter!) and the mediums we research (not just text!), and additionally discussed the many ethical challenges that we face in this field.

The key point from the first keynote session was the need to widen the research undertaken in this field, and we think the presenters at TASM made a good start on this, with research on different languages, different groups, different platforms, females, and children. Starting with different languages, Professor Haldun Yalcinkaya and Bedi Celik presented research in which they adopted Berger and Morgan’s 2015 methodology on English-speaking Daesh supporters on Twitter and applied it to Turkish-speaking Daesh supporters on Twitter. They undertook this research while Twitter was carrying out major account suspensions, which dramatically reduced their dataset. Comparing their findings with Berger and Morgan’s study and a previous Turkish study, they found a significant decrease in follower and followed counts, noting that the average followed count was even lower than that of the average Twitter user. Other average values followed a similar trend, suggesting that their dataset had less influence on Twitter than in previous studies; this could be interpreted as evidence that Twitter’s suspensions were succeeding.

Next, the focus moved away from the Middle East as Dr Pius Eromonsele Akhimien presented his research on Boko Haram and their social media war narratives. His research focused on the linguistics of YouTube videos from 2014, when the Chibok girls were abducted, until 2016, when some of the girls were released. Dr Akhimien emphasised the use of language as a weapon of war. His research revealed that Boko Haram displayed a great deal of confidence in their language choices and reinforced this through strong statements. They additionally used taunts to emphasise their control – for example, “yes I have your girls, what can you do?” Lastly, they used threats, and followed through with these offline.

Continuing the focus away from the Middle East, Dr Lella Nouri, Professor Nuria Lorenzo-Dus and Dr Matteo Di Cristofaro presented their interdisciplinary research into the far-right groups Britain First (BF) and Reclaim Australia (RA). The research used corpus-assisted discourse analysis (CADS) to analyse, firstly, why these groups use social media and, secondly, the ways in which they do so. The datasets were collected from Twitter and Facebook using the social media analytics tool Blurrt. One key finding was that both groups clearly favoured Facebook over Twitter, which is not the case for other forms of extremism. Both groups also made salient use of othering, with Muslims and immigrants the primary targets. Further analysis of this othering found that RA tended to use a specific topic or incident to support their goals and promote their ideology, while BF tended to portray Muslims as paedophiles and groomers to the same ends.

The diversity continued as Dr Aunshul Rege examined the role on Twitter of females who have committed hijrah. The most interesting finding from Dr Rege’s research was the contradictory duality of the role of these women. Many of the women complained post-hijrah about the very issues that had pushed them into committing hijrah in the first place: loneliness, cultural alienation, language barriers, differential treatment, and restrictions on their freedom. They tweeted using the hashtag #nobodycaresaboutawidow and advised young women thinking of committing hijrah to bring Western home comforts with them, such as make-up.

Friday, 21 July 2017

Save your outrage: online cancer fakers may be suffering a different kind of illness



Peter Bath, University of Sheffield and Julie Ellis, University of Sheffield

Trust is very important in medicine. Increasing numbers of people are using the internet to manage their health by looking for facts about specific illnesses and the treatments available. And patients, their carers and the public in general need to trust that this information is accurate, reliable and up to date.

Alongside factual health websites, the internet offers discussion forums, personal blogs and social media for people to access anecdotal information, support and advice from other patients. Individuals share their own experiences, feelings and emotions about their illnesses online. They develop relationships and friendships, particularly with people who have been through illnesses themselves and can empathise with them.

Some health professionals have concerns about the quality of medical information on the internet. But others are advocating that patients should be more empowered and encourage people to use these online communities to share information and experiences.

Within these virtual communities, people don’t just have to trust that the medical information they encounter is factually correct. They are also placing trust in the other users they encounter online. This is the case whether they are sharing their own, often personal, information or reading about the personal experiences of others.


Darker side to sharing


While online sharing can be very beneficial to patients, there is also a potentially darker side. There have been widely-publicised cases of “patients” posting information about themselves that is, at best, factually incorrect and might be considered deliberately deceptive.

Blogger Belle Gibson built a huge following after writing about being diagnosed with a brain tumour at the age of 20 and the experience of having just months to live. She blogged about her illness, treatment, recovery and eventual relapse while developing and marketing a mobile phone app, a website and a book. Through all of this she advocated diet and lifestyle changes over conventional medicine, claiming this approach had been key to her survival.

But Gibson’s stories were later revealed to be part of a tangled web of deceit, which also involved her promising to donate money to charities but, allegedly, never delivering the payments.

In one sense, people’s trust was broken when they realised they had paid money under false pretences. In another sense, they may have followed Gibson’s supposed example of halting prescribed treatments and adopting a new diet and lifestyle when there was no real evidence this would work. But, at a deeper level, people may feel betrayed because they sympathised and indeed empathised with a person who was later revealed to be a fraud.

The truth was eventually publicised by online news outlets and Gibson was subject to complaints and abuse on social media. But there is something about the anonymity of the internet that facilitates this kind of deceptive behaviour in the first place. People are far less likely to be taken in by this sort of thing in the real world than they are online. And it destroys people’s trust in online resources across the board.


Trust in extreme circumstances


Despite this, the moral outrage generated online by this kind of extreme and relatively isolated incident may be misplaced. There is evidence to suggest that people who do this may actually be ill, but it’s a very different sort of illness.

Faking diseases or illnesses – often described as Munchausen’s syndrome – is not unique to the internet and was reported long before its advent. The Roman physician Galen is credited with being the first to identify occasions on which people lied about or induced symptoms in order to simulate illness. More recently, the term “Munchausen by internet” has been used to describe behaviour in which people use chat rooms, blogs and forums to post false information about themselves to gain sympathy, trust or to control others.

Whichever way we view people who post such false information, their behaviour raises the question why people with genuine illnesses still share such intimate details when the potential for dishonesty from others is so evident. Our new research project, “A Shared Space and a Space for Sharing”, led by the University of Sheffield, is trying to understand how trust works in online spaces among people in extreme circumstances, such as the terminally ill.

We need to know why people trust and share so much with others when they have never met them and when there is so much potential for deceit and abuse. It is also important to identify people who fake illness online if we are to ensure there is public trust in genuine online support platforms.


Peter Bath, Professor of Health Informatics, University of Sheffield and Julie Ellis, Research associate, University of Sheffield

This article was originally published on The Conversation. Read the original article.

Friday, 14 July 2017

Book review: The SAGE Handbook of Social Media Research Methods

Charlotte Saunders is a Research Analyst at NatCen Social Research and the newest member of the NSMNSS team!
She works on quantitative and secondary analysis projects across a range of policy areas. Previously, Charlotte spent three years working for Ipsos MORI on a variety of qualitative and quantitative projects. Most recently she spent several months with VSO in Tanzania managing teams of young volunteers working in secondary schools. Charlotte holds an MSc in Public Health from the London School of Hygiene and Tropical Medicine. Her research project there looked at inequalities in access to clean water and sanitation in South America. 

In the past decade social media has transformed many aspects of our lives. It has revolutionised the way we communicate and the widespread adoption of mobile devices means its impact on everyday life continues to grow. As social researchers, we know that social media opens up huge new reserves of naturally occurring data for us to play with. The large volume of data produced is a goldmine for quantitative researchers; the opportunities provided by big data are well documented. But there are also new prospects in qualitative research, with new sources of ‘thick data’ which help us to understand the stories, emotions and worldviews behind the numbers.

Despite all these possibilities, many of us don’t know how to take advantage of them. There are lots of things to consider; from the technical knowledge required to access and store the data to the potential ethical issues involved in using people’s data for research without first asking their permission. The SAGE Handbook of Social Media Research Methods promises to guide researchers through the whole process - from research design and ethical issues, data collection and storage through to analysis and interpretation.

The editors, Luke Sloan (Senior Lecturer at Cardiff University and Deputy Director of the Social Data Science Lab) and Anabel Quan-Haase (Associate Professor of Information and Media Studies and Sociology at the University of Western Ontario) have compiled chapters which cover the whole research process. Discussions of the limitations of naturally occurring data from social media, and some of the techniques that can be used to overcome them are practical and guide the researcher through the issues clearly. The chapters outlining the history, structure and demographics of some less common social media platforms give a good basic overview for those who rarely stray away from Facebook or Twitter.

Overall this is a helpful guide to research using social media. The ethics discussions outline the key issues that researchers need to consider. There are also clear step-by-step guides which walk researchers through some of the technical processes needed to engage with social media sites. Simon Hegelich’s chapter “R for Social Media Analysis” is a good example; simple and easy to follow, Hegelich takes the reader through a simple project analysing and visualising data from Twitter using the free software programming language R.

Most of the book is well written and easy to follow although some chapters are less accessible and require significant existing knowledge. These chapters are likely to be valuable for experienced researchers looking to transfer their knowledge to a social media setting, but students and junior researchers may well find themselves scouring the internet for definitions and context.

Social Media Research Methods fulfils its aim to allow researchers to “apply and tailor the various methodologies to their own research questions”.  The step-by-step guides are logical and easy to follow and the case studies demonstrate how methods can be used in real research. For those with a good existing understanding of the research methodologies and techniques in their field this is an invaluable text opening up the social media research world. Those with less experience will probably need to refer to other resources to get up to speed with some chapters, but even then this is a useful addition to any social research library.


The SAGE Handbook of Social Media Research Methods is available to purchase here.

Friday, 2 June 2017

Q&A Session with Authors of The SAGE Handbook of Social Media Research Methods

Last Friday there was a launch event in San Diego at ICA (International Communication Association) for the new SAGE Handbook of Social Media Research Methods.

The editors Luke Sloan and Anabel Quan-Haase kindly responded to the questions that you submitted. If you missed the event, you can view the Q&A session here: https://storify.com/SAGE_Methods/q-a-with-anabel-quan-haase-and-luke-sloan

Let us know your thoughts by tweeting us @NSMNSS!

Friday, 19 May 2017

The SAGE Handbook of Social Media Research Methods – Questions for Authors

Have you read the recently published SAGE Handbook of Social Media Research Methods? It offers a step-by-step guide to overcoming the challenges inherent in research projects that deal with ‘big and broad data’, from the formulation of research questions to the interpretation of findings. The Handbook includes chapters on specific social media platforms such as Twitter, Sina Weibo and Instagram, as well as a series of critical chapters.

There is a launch event taking place on Friday 26th May in the US, at the Communication and Technology reception at ICA (San Diego), sponsored by SAGE. The editors Luke Sloan and Anabel Quan-Haase are happy to answer any questions you may have about the book, even if you aren’t able to attend in person – their responses will be posted throughout the day on Twitter via @SAGE_Methods.

If you have any questions about particular chapters, or to do with social media research methods generally, please tweet us your questions @NSMNSS using #SMRM or email Keeva.Rooney@natcen.ac.uk or  Franziska.Marcheselli@natcen.ac.uk  by Wednesday 24th May. 

We will pass your questions on and you can look out for the responses during the event!


Friday, 12 May 2017

Anti-Islamic Content on Twitter

This blogpost was written by Carl Miller, Research Director, and Josh Smith, Researcher, at the Centre for the Analysis of Social Media (CASM) at Demos. @carljackmiller

This analysis was presented at the Mayor of London’s Policing and Crime Summit on Monday 24 April, 2017.

The Centre for the Analysis of Social Media at Demos has been conducting research to measure the volume of messages on Twitter algorithmically considered to be derogatory towards Muslims over a year, from March 2016 to March 2017. This is part of a broad effort to understand the scale, scope and nature of uses of social media that are possibly socially problematic and damaging.

Over a year, Demos’ researchers detected 143,920 Tweets sent from the UK considered to be derogatory and anti-Islamic – this is about 393 a day. These Tweets were sent from over 47,000 different users, and fell into a number of different categories – from directed insults to broader political statements.

A random sample of hateful Tweets was manually classified into three broad categories:
  • ‘Insult’ (just under half): Tweets used an anti-Islamic slur in a derogatory way, often directed at a specific individual.
  • ‘Muslims are terrorists’ (around one fifth): Derogatory statements generally associating Muslims and Islam with terrorism.
  • ‘Muslims are the enemy’ (just under two fifths): Statements claiming that Muslims, generally, are dedicated toward the cultural and social destruction of the West.
The researchers found that key events, especially terrorist attacks, drive large increases in the volume of messages on Twitter containing this kind of language.

The Brussels, Orlando, Nice, Normandy, Berlin and Quebec attacks all caused large increases. There was a period of heightened activity over Brexit, and sometimes online ‘Twitter storms’ (such as the use of derogatory slurs by Azealia Banks toward Zayn Malik) also drove sharp increases.

Tweets containing this language were sent from every region of the UK, but the most over-represented areas, compared to general Twitter activity, were London and the North-West.

Of the 143,920 Tweets containing this language and classified as being sent from within the UK, 69,674 (48%) contained sufficient information to be located within a broad area of the UK. To measure how many Tweets each region generally sends, a random baseline of 67 million Tweets was collected over 19 days in late February and early March. The volume of Tweets containing derogatory language towards Muslims was compared to this baseline to identify regions where the volume was higher or lower than expected on the basis of general Twitter activity.
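The baseline comparison described above can be sketched in a few lines: compare each region's share of the target (derogatory) Tweets against its share of general Twitter activity. This is a minimal illustration of the idea, not Demos' actual method or figures; the region names and counts below are made-up stand-ins.

```python
def over_representation(target_counts, baseline_counts):
    """Map each region to the ratio of its share of target tweets
    to its share of baseline (general-activity) tweets.
    A ratio above 1 means the region is over-represented."""
    target_total = sum(target_counts.values())
    baseline_total = sum(baseline_counts.values())
    ratios = {}
    for region, n in target_counts.items():
        target_share = n / target_total
        baseline_share = baseline_counts[region] / baseline_total
        ratios[region] = target_share / baseline_share
    return ratios

# Illustrative numbers only (not the study's data):
target = {"London": 27576, "North-West": 9000, "Other": 33098}
baseline = {"London": 20_000_000, "North-West": 7_000_000, "Other": 40_000_000}
print(over_representation(target, baseline))
```

A region such as London would show a ratio above 1 here because its share of the derogatory Tweets exceeds its share of everyday Twitter traffic.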

In London, North London sent markedly more tweets containing language considered derogatory towards Muslims than South London.

27,576 (39%) tweets were sent from Greater London. Of these, 14,953 Tweets (about half) could be located to a more specific region within London (called a ‘NUTS-3 region’; typically either a London Borough or a combination of a small number of London Boroughs).[1]
  • Brent, Redbridge and Waltham Forest sent the highest number of derogatory, anti-Islamic Tweets relative to their baseline average of general Twitter activity.
  • Westminster and Bromley sent the least number of derogatory, anti-Islamic Tweets relative to their baseline average of general Twitter activity.

Demos’ research identified six different online tribes. [2] These were:

Core political anti-Islam. The largest group, of about 64,000 users (including recipients of Tweets); a politically active group engaged with international politics.
  • Hashtags employed by this group suggest engagement in anti-Islam and right wing political conversations: (#maga #tcot #auspol #banIslam #stopIslam #rapefugees)
  • In aggregate, words in user descriptions emphasise nationality, right-wing political interest and hostility towards Islam (anti, Islam, Brexit, UKIP, proud, country)
Contested reactions to terrorist attacks. The second largest group, of about 18,000 users, including recipients of tweets.
  • Aggregate overview of user descriptions imply a relatively young group (sc, snapchat, ig, instagram, 17,18,19,20, 21)
  • User descriptions also imply a mix of political opinion (blacklivesmatter, whitelivesmatter, freepalestine)
  • Hashtags engage in conversations emerging in the aftermath of terrorist attacks (#prayforlondon, #munich, #prayforitaly, #prayforistabul, #prayformadinah, #orlando)
  • Likewise, hashtags are a mix of pro- and anti-Islamic (#britainfirst, #whitelivesmatter, #stopislam, #postrefracism, #humanity)
The counter-speechers. A group of 8,700 people; although of course the data collection, by design, only detected the part of the counter-speech conversation containing language that can be used in a way derogatory towards Muslims. It is therefore likely that it did not collect the majority of counter-speech activity.[3]

The shape of the cluster shows a smaller number of highly responded-to/retweeted comments.
  • Hashtags engage predominantly with anti-racist conversations (#racisttrump, postrefracism, #refugeeswelcome, #racism, #islamophobia)
  • In aggregate, user descriptions show a mix of political engagement and general identification with left-wing politics (politics, feminist, socialist, Labour).
  • Overall they also show more descriptions of employment than the other clusters (writer, author, journalist, artist).
The Football Fans. 7,530 users are in this cluster, including recipients of Tweets.
  • The bio descriptions of users within this cluster overwhelmingly contain football-related words (fan, football, fc, lfc, united, liverpool, arsenal, support, club, manchester, mufc, chelsea, manutd, westham)
  • No coherent use of hashtags. This cluster engaged in lots of different conversations.
India/Pakistan. Just under 5,000 users are in this cluster (including recipients).
  • Hashtags overwhelmingly engage in conversation to do with India-Pakistan relations or just Pakistan (#kashmir, #surgicalstrike, #pakistan, #actagainstpak).
  • In aggregate, words in user descriptions relate to Indian/nationalist identity and pro-Modi identification (proud, Indian, hindu, proud indian, nationalist, dharma, proud hindu, bhakt)
The Gamers. 2,813 users are in this cluster (including Tweet recipients).
  • There is no coherent use of hashtags.
  • Overall, aggregate comments in user descriptions either imply young age (16,17,18) or are related to gaming (player, cod [for ‘Call of Duty’], psn)
A small number of accounts overall are responsible for many of the tweets containing language generally considered to be derogatory towards Muslims.
  • 50% of Tweets classified as containing language considered anti-Islamic and derogatory are sent by only 6% of accounts
  • 25% of Tweets classified as containing language considered anti-Islamic and derogatory were sent by 1% of accounts.
Likewise, a small number of accounts were the recipients of the derogatory, anti-Islamic activity that was directed at a particular person.
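The concentration statistic above (half the Tweets coming from 6% of accounts) can be sketched as follows. This is an illustrative calculation, not the study's code: given per-account Tweet counts, find the smallest fraction of accounts, taking the most active first, that produces a given share of the total.

```python
def accounts_for_share(tweet_counts, share=0.5):
    """Smallest fraction of accounts (most active first) whose
    tweets together make up at least `share` of the total."""
    counts = sorted(tweet_counts, reverse=True)
    total = sum(counts)
    running = 0
    for i, c in enumerate(counts, start=1):
        running += c
        if running >= share * total:
            return i / len(counts)
    return 1.0
```

On the study's data this function would return roughly 0.06 for `share=0.5` and roughly 0.01 for `share=0.25`, matching the figures reported above.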

The full paper, outlining methodology and ethical notes, can be downloaded here. 



NOTES –
[1] An important caveat is that the volumes associated with each of these regions are obviously smaller than the total number of Tweets in the dataset overall
[2] A caveat here is that this network graph includes Tweets that are misclassified and also includes the recipients of abuse. It is also important to note that not everyone who shares Tweets does so with malicious intent; they can be doing so to highlight the abuse to their own followers.
[3] In other work on the subject we have found there are usually more posts about solidarity, support for Muslims than attacks on them.

Wednesday, 29 March 2017

Who uses Twitter?

Luke Sloan is a Senior Lecturer in Quantitative Methods and Deputy Director of the Social Data Science Lab at the School of Social Sciences, Cardiff University, UK. Luke has worked on a range of projects investigating the use of Twitter data for understanding social phenomena covering topics such as election prediction, tracking (mis)information propagation during food scares and ‘crime-sensing’. His research focuses on the development of demographic proxies for Twitter data to further understand who uses the platform and increase the utility of such data for the social sciences. He sits as an expert member on the Social Media Analytics Review and Information Group (SMARIG) which brings together academics and government agencies. @drlukesloan

Who uses Twitter?

It’s a simple question, but one that is tricky to answer. We all think we know the types of people who use Twitter – the urban elite, celebrities, professionals, young people… but providing an empirical account is challenging and without knowing who tweets we can’t even start a conversation about representativeness and bias. To understand how the social world manifests in the virtual we need to know who is present or underrepresented.

Much work has been done on using Twitter metadata to estimate proxy demographics for UK users such as gender (Sloan et al. 2013) and age, occupation and social class (Sloan et al. 2015), but these methods rely on people self-reporting a first name, an age or date of birth and an occupation to classify. The question has always been whether certain groups, such as older people and those from certain occupations, are less likely to choose to construct their virtual identity with reference to these characteristics or not.
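In the spirit of those metadata-based proxies, a gender proxy can be sketched as a lookup of the self-reported first name against name lists. The tiny name sets below are illustrative assumptions, not the lexicons used in Sloan et al.'s published work, and the point about self-reporting is visible in the code: users who don't disclose a recognisable first name simply fall out as unknown.

```python
# Illustrative name lists only; the published proxies use much larger lexicons.
MALE = {"david", "james", "mohammed"}
FEMALE = {"sarah", "emma", "fatima"}

def gender_proxy(profile_name):
    """Guess gender from the first token of a self-reported profile name.
    Returns 'male', 'female', or 'unknown' (no recognisable first name)."""
    if not profile_name or not profile_name.strip():
        return "unknown"
    first = profile_name.strip().split()[0].lower()
    if first in MALE:
        return "male"
    if first in FEMALE:
        return "female"
    return "unknown"
```

The "unknown" bucket is exactly where the representativeness question bites: if older users or certain occupations are less likely to self-report these characteristics, the proxy systematically misses them.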

Clearly it’s quite a leap forward to be able to use British Social Attitudes 2015, a random probability sample survey of over 4,000 respondents with weights calculated to account for non-response bias, to help us understand the Twitter population. The data allow us to compare Twitter usage by demographic groups benchmarked against the 2011 Census whilst evaluating previous attempts at demographic proxies.

So, how accurate is the picture of the demographic characteristics developed through proxies?

As it turns out we find some interesting discrepancies. According to the BSA data we find more men on Twitter than expected and we see that although most users are younger there are more older users on the platform than we previously thought. We also find that there are strong class effects regarding Twitter use, largely in line with previous proxy estimates but substantially out of line for certain groups. The full paper is open access and can be read here.

How does this aid our understanding of how the social world manifests online? To take an example, a recent study by Draper et al. found that, during the horsemeat food scare of 2013, Twitter was dominated by jokes and humour. The overall discourse suggested that this wasn’t perceived as a serious incident and that the issue wasn’t really a public concern, but we now know that Twitter is dominated by the higher NS-SEC groups – people with high incomes who are the least likely to come into contact with the budget adulterated products. Twitter thought it was funny because Twitter is dominated by people who were largely unaffected by the scare. This is an important lesson in how representation impacts upon what the data is telling us.

Of course, it’s no surprise that Twitter is dominated by the professional and managerial groups, but at least now we have some strong evidence to underwrite our expectations.



Read the full paper: Sloan, L. (2016) Who Tweets in the United Kingdom? Profiling the Twitter Population Using the British Social Attitudes Survey 2015, Social Media + Society 3:1, DOI: https://doi.org/10.1177/2056305117698981


Thursday, 23 February 2017

Programming as Social Science - new methods network

Phillip Brooker is a Research Associate at the University of Bath working in social media analytics, with a particular interest in the exploration of research methodologies to support the emerging field. His background is in sociology, drawing especially on ethnomethodology and conversation analysis, science and technology studies, computer-supported cooperative work and human-computer interaction. Phillip has previously contributed to the development of Chorus (www.chorusanalytics.co.uk), a Twitter data collection and visualisation suite. He currently works on CuRAtOR (Challenging online feaR And OtheRing), an interdisciplinary project focusing on how "cultures of fear" are propagated through online "othering".

Digital data and computational methods are increasingly becoming consolidated as essential elements of social science research and teaching. However, the algorithmic processes through which digital data are extracted, processed and visualised are often ‘black boxed’ and obscured from researchers who use those tools, which hinders our understanding of how they might be handled methodologically. Hence, there is an already-high and ever-increasing need for social scientists to engage with computational tools as a “critical technical practice” (Agre, 1997). In other words, since we are now pretty much completely reliant on software as part of our everyday research and teaching practices, it is all the more important that we are able to unpick and interrogate how these software packages operate, in order to better account for our data and research practices!

To this end, Jonathan Gray and I (both at the University of Bath) have set up a mailing list/network called “Programming as Social Science (PaSS)”, for researchers interested in software programming both as an object of study and as a tool that we can learn and use within social science research. Here, we’re capitalising on lots of good work that has already been done in fields such as Science and Technology Studies, New Media Studies, Social Media Analytics, Software Studies, Ethnomethodology, Human-Computer Interaction, Computer-Supported Cooperative Work, and so on. All of these fields (and many more we haven’t listed!) have contributions to make in regard to understanding how we might critically leverage programming skills as part of social science teaching and research. So the PaSS mailing list/network has been established to act as a (low-traffic) hub for discussing these kinds of ideas, as well as sharing resources, updates, announcements and initiatives around programming in the context of social research.

If you’d like to join in, you can sign up via the following link: www.jiscmail.ac.uk/PaSS. Please feel free to invite anyone and share widely; the computer geek in me is very much looking forward to chatting about programming as part of my work!

Thursday, 16 February 2017

Visualising Facebook

Daniel Miller is Professor of Anthropology at University College London. Recent books include Social Media in an English Village (UCL Press 2016). Miller. et. al. How the World Changed Social Media (UCL Press 2016). With J. Sinanan Webcam (Polity 2014) Ed. With H. Horst, Digital Anthropology (Bloomsbury 2012). With M. Madianou Migration and New Media (Routledge 2012) Consumption and its Consequences (Polity 2012), with S. Woodward Blue Jeans (California 2012) Tales from Facebook (Polity 2011). He recently completed a volume about media in the social lives of patients with a terminal diagnosis, forthcoming as, The Comfort of People (Polity 2017). @DannyAnth

This March will see the publication of a new book called Visualising Facebook, which I have written with Jolynna Sinanan. It will be available as a free download from UCL Press. One of the key arguments from the larger Why We Post project, of which this book is one out of eleven volumes, is that human communication has fundamentally changed. Where previously it consisted almost entirely of either oral or textual forms, today, thanks to social media, it is equally visual. Think literally of Snapchat. So, it is a pity that when you look at the journals and most of the books about social media, they often contain either no, or precious few, actual visual illustrations from social media itself. One of the joys of digital publication is that it is possible to reproduce hundreds of images. So, our book is stuffed to the gills with photographs and memes taken directly from Facebook, which is, after all, our evidence.
For example, as academics, we might suggest that the way women respond to becoming new mothers in Trinidad is entirely different from what you would find in England. In the book, we can reproduce examples from hundreds of cases, where it is apparent that when an English woman becomes a mother she, in effect, replaces herself on Facebook with images of her new infant. Indeed, these often become her own profile picture for quite some time. By contrast, one can see postings by new mothers in Trinidad, where they are clearly trying to show that they still look young and sexy or glamorous, precisely because they do not want people to feel that these attributes have been lost, merely because they are now new mothers.

In writing this book we examined over 20,000 images. These provide the evidence for many generalisations, such as that Trinidadians seem to care a good deal about what they are wearing when they post images of themselves on Facebook, while, by and large, English people do not. But this becomes much clearer when you can see the actual images themselves. Or we might suggest that English people are given to self-deprecating humour, while Trinidadians are not. Or that in England gender may create a highly repetitive association between males and generic beer, as against women with generic wine. In every case, you can now see exactly what we mean. We also have a long discussion about the importance of memes, why we call them ‘the moral police of the Internet’, and how they help to establish what people regard as good and bad values. This makes much more sense when you are examining typical memes with that question in your head.


To conclude, given the sheer proportion of social media posting that now consists of visual images, it would seem a real pity to look this gift horse in the mouth. Firstly, it has now become really quite simple to look at tens of thousands of such images in order to come to scholarly conclusions. But equally, it is now much easier to also include hundreds of such images in your publications to help readers have a much better sense of what exactly those conclusions mean and whether they agree with them.

Friday, 3 February 2017

Mine your Data – Why understanding online health communities matters

Originally posted on the NatCen blogsite on 10/11/16 
Aude Bicquelet is Research Director in the Health team. Prior to joining NatCen in 2016, she held a fellowship at the LSE (Department of Methodology) where she taught courses on Research Design, Mixed-Methods and Text Mining approaches. 
Aude specialises in the analysis of ‘Big Qualitative Data’ on health related issues and has worked with professional and regulatory health bodies such as the National Institute for Health and Care Excellence (NICE) and the Royal College of Physicians.  Methodological and substantive outputs of her research have been published in academic journals; she has also published a book on ‘Textual Analysis’ with Sage.
In addition to her interest in Health policies she is interested in Social and Political attitudes and has researched widely in the areas of political participation and élites’ attitudes towards the EU. 

A staggering 73% of adults in the UK turn to the internet when experiencing health problems. Whether it is to check symptoms, find out about available treatments or share experiences about living with a particular condition, the internet has become the first port of call with many turning to the web before they even consider going to see a doctor. While many of these conversations take place on health-related websites such as Patient or Netdoctor, people suffering from health conditions also share their experiences on social media – and health practitioners should take note.  
Earlier this week, at the ESRC Festival of Social Science, I presented findings from a recent study looking into how people use social media to discuss health issues. In this study, funded by the NCRM, we used text mining techniques to analyse comments about chronic pain posted under YouTube videos.  
We found that chronic pain sufferers use YouTube to describe their experiences and vent their frustration. We analysed over 700 YouTube comments, and found they can be sorted into one of five categories:
  • Sharing Experiences: commenters thank each other for sharing their experiences in the videos posted on the website, emphasising tolerance and empathy for chronic pain sufferers.
  • Expressing Frustration: chronic pain sufferers expressed their frustration in their own words. These illustrate how YouTube and other social media offer new avenues for communicating pain outside clinical contexts.
  • Coping with Pain: chronic pain sufferers used social media to share their daily practices to cope with chronic pain.
  • Alternative Therapy: commenters spoke openly about their use of alternative medicines, illegal drugs or alcohol to manage their pain. The often conflicting relationship with clinicians – who were perceived as over- or under-medicating – was also common in this category.
  • Risks and Concerns: commenters also discussed the risks associated with different types of medication – in particular, addiction and overdose – along with the increased risk of depression associated with some pain treatments.
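The sorting of comments into the five categories above can be sketched with simple keyword matching. This is a hedged illustration only: the keyword lists are assumptions of mine, and the study used text mining techniques, not necessarily this approach.

```python
# Illustrative keyword lists, not the study's actual features.
CATEGORIES = {
    "Sharing Experiences": {"thank", "share", "experience"},
    "Expressing Frustration": {"frustrated", "fed up", "nobody understands"},
    "Coping with Pain": {"cope", "coping", "day to day"},
    "Alternative Therapy": {"alternative", "cannabis", "alcohol"},
    "Risks and Concerns": {"addiction", "overdose", "depression"},
}

def categorise(comment):
    """Return the first category whose keywords appear in the
    comment text, or None if nothing matches."""
    text = comment.lower()
    for category, keywords in CATEGORIES.items():
        if any(k in text for k in keywords):
            return category
    return None
```

A real analysis would be considerably more robust (stemming, handling comments that span categories, and so on), but the sketch shows the basic shape of mapping free-text comments onto a coding frame.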
The insights gained from social media research provide important substantive information for health practitioners. People communicate online in a way they don’t during interviews with researchers or during doctors’ appointments. Online forums and social media are rife with information that’s difficult to obtain through traditional research techniques, where social desirability, fear of judgement or stigma, and wanting to be seen as ‘functioning well’ may influence what people are willing to say. From a purely practical perspective, they also provide freely available naturally occurring data, with (at times) access to hard-to-reach groups.
Of course, there is a great deal of uncertainty around how to harness the opportunities of analysing the wealth of health information posted online in a representative, robust and ethical way.
Despite their usefulness and efficiency, analyses of internet comments on health forums do raise a host of concerns. One is representativeness: the views of the cohort with the access, technical skills and inclination to post comments online are over-represented, while the views of others are excluded (the so-called ‘digital divide’). Another is consent: online commentators may not expect to become research subjects.
Nevertheless, the explosion of Big Data and the popularity of online communities might precipitate the need to integrate social media analysis and health research in the near future. For instance, it has been shown that when patients visit their doctors with inappropriate or misinterpreted information from the internet, it does little to enhance doctor–patient communication (see Ziebland 2004). But doctor–patient communication could be improved simply if health professionals themselves were better informed about the common fears, and sometimes the common ‘myths’, disseminated in online health communities.

Watch Aude’s presentation from NatCen’s event ‘What Social Media Can Tell us about Society’, live from Twitter’s London HQ. This event was part of the ESRC Festival of Social Science
If you’re interested in how social media research can help you, please get in touch: aude.bicquelet@natcen.ac.uk or new-business@natcen.ac.uk