Friday, 27 October 2017

Gender stereotyping used in the media for the recruitment of women to the British Army

This blog is written by Paige Salvage, a student at Cardiff University. She has been studying relevant sociological theories and methods for how we can understand the digital society and how the Internet shapes our everyday lives.
 
The recent development of government legislation allowing female soldiers into infantry positions has generated extensive sociological interest in the changing role of women in the Army. Stiehm (1988) suggested that women in the Army are subject to fixed stereotypes, “whores or lesbians”, implying that serving female soldiers are defined by both sex and sexuality. Assuming this is true, Goffman (1976:9) suggests that “gender displays […] reflect fundamental features of the social structure” in the Army. This study considered how images used across social media, in particular the British Army’s Instagram page (@britisharmy), depict women who are already serving. In order to interpret the findings it was important to understand how women are stereotyped into roles society deems typically ‘female’.

It is important to note that Stiehm’s (1988) stereotypical labelling of women in the Army is not only subjective but also ethically problematic. However, Goffman’s (1976) work on gender displays balanced any possible disparity, as he suggested that gender is a flexible and fluid notion. Using Braun and Clarke’s (2006) guide to thematic analysis, this study probed the following two inductive themes:

1. Masculine – those who are taking on a more masculine role or are taking part in a military activity that is stereotypically male

2. Feminine – those who take part in more stereotypically female activities
 
Data collection was limited to the British Army’s Instagram page (@britisharmy) after 8th July 2016 when women were officially allowed access to infantry roles. This period was considered significant as it was expected that more images of women in combat or infantry scenarios would be shared as a promotion of women into these roles.
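As a purely illustrative sketch of this kind of workflow (the post records and theme labels below are hypothetical, and the original coding was done manually following Braun and Clarke's thematic approach rather than by script), the date filtering and theme tallying might look like:

```python
from datetime import date
from collections import Counter

# Hypothetical records: each Instagram post has a date and a theme code
# assigned manually during thematic analysis ("masculine" or "feminine").
posts = [
    {"date": date(2016, 6, 30), "theme": "masculine"},  # before the cutoff
    {"date": date(2016, 9, 12), "theme": "masculine"},
    {"date": date(2017, 1, 5),  "theme": "feminine"},
    {"date": date(2017, 3, 18), "theme": "masculine"},
]

# Policy change: women officially allowed into infantry roles.
CUTOFF = date(2016, 7, 8)

# Keep only posts shared after the policy change, then tally the themes.
in_scope = [p for p in posts if p["date"] > CUTOFF]
counts = Counter(p["theme"] for p in in_scope)

print(counts)
```

With these toy records, the post from June 2016 is excluded and the tally comes out as two masculine-coded and one feminine-coded image; the real study's counts are reported in Figures 1 and 2.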


Findings

Masculine
 
Analysis of images on Instagram (@britisharmy) suggested that women are more likely to be depicted in masculine roles (Figure 1). This suggests that the British Army are attempting to break down the assumption that women in the military are the weaker sex. By providing the public with images of women taking part in arduous and dangerous activities, the British Army are promoting masculine roles for women.

However, this does not address the common stereotypes that depict women in the Army. Women with a more masculine physique in these images may be labelled “lesbian”, as Stiehm (1988) noted that this label was tied more to physical attributes than to the sexuality of the individual. In displaying masculine traits, these women become ‘othered’ by assuming a more male stereotype.
 

Feminine

There are significantly fewer images on Instagram depicting women in the Army as feminine (Figure 2), suggesting that the British Army are biased towards using images that portray women in active combat roles. This is contrary to the acknowledged social role of women as homemakers and carers. Indeed, feminine images of women on @britisharmy tend to depict women in healthcare or domestic (e.g. chef) roles, suggesting that the British Army concur with the idea that women need less physically demanding roles.

From this study, we can start to understand how masculine pictures of women are used as promotional aids to increase recruitment numbers, particularly into infantry roles. It appears that, following the 2016 policy change to allow females into infantry regiments, the British Army are now targeting women in their recruitment campaigns. However, it could not be concluded that female stereotypes were purposefully used to recruit certain types of women for these infantry roles.

 

Friday, 20 October 2017

Westminster Student Blog Series

We have been posting a series of short vlogs, produced by University of Westminster Postgraduate students. They are all based on their research of social media. As this is the final vlog, we would like to thank all the Westminster students for their entries.

Public Sphere in the case of the Women's March on London

Author: Tian




Associated with the Frankfurt School’s Institute for Social Research, Jürgen Habermas wrote The Structural Transformation of the Public Sphere (1962) to explore the status of public opinion in the practice of representative government in Western Europe.

The feminist critique points out that the ‘public sphere’ presupposed a sphere of educated, wealthy men, juxtaposed against the private sphere seen as the domain of women, and one that also excluded gays, lesbians, and ethnic minorities (Fraser, 1990). Critics have also suggested that a democratic and multicultural society should be based on plural public arenas (Fuchs, 2014). Habermas himself accepts that his early account in The Structural Transformation of the Public Sphere neglected proletarian, feminist and other public spheres (Fuchs, 2014).

A recent example is the success of the Women’s March. The US election proved a catalyst for a grassroots movement of women asserting the positive values that the politics of fear denies. The Women’s March was held to represent the rights of women and to show solidarity with threatened minorities such as Muslim and Mexican citizens and the LGBTQ community. Hundreds of marches took place in major cities around the world.

The video criticises the concept of the ‘public sphere’ from both a feminist perspective and in light of the emergence of social media, discussing how the rise of social networking sites has moved public discussion onto digital platforms.


Reference:

Fraser, N. (1990). Rethinking the Public Sphere: A Contribution to the Critique of Actually Existing Democracy. Social Text, (25/26), p.56.

Fuchs, C. (2014). Social Media: A Critical Introduction. London: Sage.

Monday, 16 October 2017

Social media research, ‘personal ethics’, and the Ethics Ecosystem

Gabrielle Samuel: Research Fellow Lancaster University / Research Associate, King’s College London.  Research interests: research ethics, ethical and social issues surrounding innovative biotechnologies; social media ethics; qualitative research.
Gemma Derrick: Lecturer, Department of Educational Research, Lancaster University. Research interests: Research evaluation, Qualitative research, Group/committee decision-making. 
Ethics review may seem overly bureaucratic to some, but in this blog we argue that a more researcher-committee collaborative process, rather than a gatekeeper ‘tick-box’ role, may help with navigating the ‘ethics ecosystem’ when using new research tools such as social media (SM) data.
The ethics ecosystem exists as an inter-related membership of academic bodies that, when fully functional, acts to reinforce a high level of ethical behaviour from researchers, and to guard against academic misconduct. Specifically, this ethics ecosystem can be described as all the individuals (researchers), organisations (research institutions/research ethics committees (RECs)) and external bodies (publishing houses, funding bodies, professional associations) which promote ethically responsible research behaviour in the academy. Ordinarily, the academy’s ethics ecosystem works well due to a shared understanding of what ethically responsible research behaviour is. However, this system breaks down when new ideas, methods or approaches are introduced to its members, and each player interprets and enforces these ideals of ethical behaviour differently. This forces each member to re-examine concepts previously thought to be set in ethical stone. Such is the case with SM research.
Currently this system is failing SM research
In our research we spoke to members at all levels of the ethics ecosystem: researchers using SM data, research ethics committee members, universities, funding bodies, publishing houses and journal editors. We found that members possessed inconsistent understandings of the ethics applicable to the use of SM in research. There were different interpretations of the established ethical notions of consent (should we ask for it? shouldn’t we? when and how should we?) and privacy (how, or even whether, SM users’ data should be protected, and to what degree?); some members viewed SM data as ‘fair game’, while others were more cautious; and only some shouldered responsibility to protect SM users’ perceived privacy. What was lacking was an overarching understanding, reinforced by a larger governance body, acting as a mechanism to fuel a wider, community-led understanding of ethical conduct (and misconduct) in SM research.
At the research level of the ecosystem, researchers were monitoring their own decisions about how best to act ethically. However, when researchers are left to their own devices, this over-reliance on subjective monitoring of behaviour risks the development of a form of “personal ethics”, different for each researcher within the ecosystem:
Interviewer: Are there any guidelines that you follow in your own research?
Researcher: It’s my guidelines. Everybody has their own definition of ethics…. 
This becomes dangerous when the acceptability of these decisions rests on how strongly researchers justify them, rather than on the checks and balances provided by a wider, community-led ethical understanding of SM research:
‘You’ve got to develop the sense of what’s right… then put that across and make your case’
These differing personal ethics converged at the institutional level of the ecosystem when researchers had to, or chose to, submit their research proposal to a REC for consideration. Committee members, as actors at this level of the ecosystem, spoke about their lack of experience in reviewing this type of research simply because so few proposals are submitted (owing to researchers’ differing interpretations of whether ethical review was required). As such, REC judgements of ethical conduct relied heavily on researchers’ justifications of their ethical decision-making within the application:
We…sometimes make different decisions even for projects that look pretty similar. It’s how they build up their case doing that particular project 
The same held true for other members of the ethics ecosystem, such as journal editors and, by extension, peer-reviewers.
To summarise, what does this wide disagreement around SM research mean for the ethics ecosystem? After all, there is nothing wrong with ethical norms being driven by researchers’ different subjective justifications of their personal ethics (that is, ethical pluralism). However, for SM research and similar new research tools, reliance on researchers’ own justifications of ethical behaviour can be dangerous: it risks leaving important ethical decisions in limbo, and allows ethically problematic research to fall between the cracks.
What is needed is more governance within the ethics ecosystem.  Only then can enough checks and balances exist to ensure best practice, promote a shared understanding of SM research ethics, and provide necessary audits to protect against scientific misconduct.
One step towards this is to require researchers to submit their work for ethics review, providing an extra layer of scrutiny. More importantly, this provides REC members with the tacit knowledge necessary to act as the larger arbitrator of ethical conduct for SM research.

Friday, 29 September 2017

Westminster Student Blog Series

We will be posting a series of short vlogs, produced by University of Westminster Postgraduate students. They are all based on their research of social media. We will be posting one a week for the next few weeks, so keep your eyes peeled!

The Internet as Playground and Factory

Author: Remigijus Marciulis



In recent years, the labour theory of value has been a field of intense interest and debate, particularly the use of Marxist concepts in the digital context. It is well documented that giant online companies like Facebook and Google have accumulated enormous amounts of capital by selling their users’ data to advertisers. The phenomenon of the ‘Social Factory’ is discussed by different scholars: the value we create goes beyond actual factory walls to include the online sphere. “The sociality is industrialised and industry is socialised” (Jarrett, 2016: 28). Trebor Scholz refers to the Internet as both a ‘playground’ and a ‘factory’. His argument is based on the fact that being online is part of having fun. Does uploading a video to YouTube really count as digital exploitation? On the other hand, Christian Fuchs argues that there is a direct connection between time spent on the Internet and capitalist exploitation, free labour and surplus-value. Kylie Jarrett tackles the subject through the metaphor of the Digital Housewife, applying Marxist concepts and feminist approaches to investigate the digital world. According to her, digital or immaterial labour is profoundly exploited by capitalism.
The interview with Dr Alessandro Gandini explores digital labour and ‘playbour’, and the use and appropriateness of Marxist concepts. To sum up, the subject of digital labour and exploitation is complex and diverse. It requires a more profound study that distinguishes “real” digital work from time spent online for leisure. Scholars agree that communicative action and activism are the key instruments in fighting digital capitalist inequalities.

Friday, 22 September 2017

Call for speakers: Answering social science questions with social media data

Thursday 8th March 2018, The Wellcome Collection, London, NW1 2BE

After several successful events, we’re pleased to say that the NSMNSS network (http://nsmnss.blogspot.co.uk/) and Social Research Association (www.the-sra.org.uk) are again teaming up to deliver a one-day conference on ‘Answering social science questions with social media data’.

As social media research matures as a discipline, and as methodological and ethical concerns are addressed, focus is increasingly shifting onto the role that it can and should play in the social sciences – what are the questions it can help us to answer?

We are looking for speakers who have completed a piece of social research using social media data to present their findings and discuss how this has made a difference:
• How has it impacted policy, best practice, or understanding?
• How has it answered a question that would have been unfeasible using conventional research methods alone?
This research could be in any substantive area, from health or crime to politics or travel, as long as it is ‘social’ research. It can also include any type of analysis – quantitative or qualitative analysis, big data or small – as long as it involves some form of data collection via a social media platform. We want to encourage a range of different methods and topics to help demonstrate the diversity of the methodology and the role it can play.

Are you interested in presenting?
If you have completed a piece of research using social media research methods, or have any suggestions of who we should contact, then please complete the submissions template and send to nsmnss@natcen.ac.uk by Monday 27th November. Let us know the name and topic of the research study, which social media platform was used, a brief description of methodology, and the findings and impact of this study.

This event is being set up by the SRA and NSMNSS network. We want to keep the event accessible and ticket prices reasonable, but need to cover the costs of the venue hire/refreshments, so we cannot pay presenters – however there will be 1 free place per presentation, and we will be able to cover reasonable ‘within UK’ public transport travel expenses.


The #NSMNSS & SRA teams







Friday, 15 September 2017

Westminster Student Blog Series

We will be posting a series of short vlogs, produced by University of Westminster Postgraduate students. They are all based on their research of social media. We will be posting one a week for the next few weeks, so keep your eyes peeled!

Digital Review: Public Sphere and the Exclusion of Women

Author: Karolina Kramplova


The creation of new forms of digital social media during the first decade of the 21st century has completely changed the way in which many people communicate and share information. When we think about social media as a space where the public can discuss current affairs and politics, it is interesting to consider it through the theory of the public sphere. Ever since Habermas established this concept, it has been criticised by scholars such as Nancy Fraser, who argues that the theory was built on a number of exclusions and discriminations. I focused on the exclusion of women from political life. Andy Ruddock, author of the book Youth and Media, also discusses the lack of representation of women in subculture studies, and argues that social media is not about democratisation and public debate but rather about people picking what they like. The activist Hannah Knight acknowledges the discrimination women face to this day. When it comes to the public sphere and social media, even though Knight argues there is a space for public debate, she says people are not listening to everyone. Social media empowers movements such as the Women’s March, but does it contribute towards democratisation, or do we just want to believe it does? Both the scholar Ruddock and the activist Knight have persuaded me that the concept of the public sphere is no longer relevant when it comes to social media.

Friday, 1 September 2017

Westminster Student Blog Series

We will be posting a series of short vlogs, produced by University of Westminster Postgraduate students. They are all based on their research of social media. We will be posting one a week for the next few weeks, so keep your eyes peeled!

Journalism, the Filter Bubble and the Public Sphere

Author: Mick Kelly



“The influence of social media platforms and technology companies is having a greater effect on American journalism than even the shift from print to digital.”
(Bell and Owen, 2017)

This is the conclusion of a study released in March 2017 by researchers from Columbia University’s Graduate School of Journalism who investigated a journalism industry reacting to controversies about fake news and algorithmic filter bubbles that occurred at the time of the US presidential election. The report noted the following key points:

• Technology companies have become media publishers
• Low-quality content that is sharable and of scale is viewed as more valuable by social media platforms than high-quality, time-intensive journalism
• Platforms choose algorithms over human editors to filter content, but the ‘nuances of journalism require editorial judgment, so platforms will need to reconsider their approach’.

The report states that news might currently reach a bigger audience than ever before via social media platforms such as Facebook, but readers have no way of knowing how data influences the stories they read or how ‘their online behaviour is being manipulated’. (Bell and Owen, 2017)

This video assignment reveals that the debate has existed since 2011 when Eli Pariser wrote The Filter Bubble, which explained how data profiling led to personalisation and the algorithmic filtering of news stories. The theme of this video is the impact of this robotic process on journalism within the public sphere, and includes an interview with Jim Grice, who is Head of News and Current Affairs at London Live.


REFERENCE
Bell, E. and Owen, T. (2017) The Platform Press: How Silicon Valley reengineered journalism. The Tow Centre for Digital Journalism at Columbia University’s Graduate School of Journalism. Available from:
https://www.cjr.org/tow_center_reports/platform-press-how-silicon-valley-reengineered-journalism.php [Accessed 30 March 2017]

Friday, 25 August 2017

Westminster Student Blog Series

We will be posting a series of short vlogs, produced by University of Westminster Postgraduate students. They are all based on their research of social media. We will be posting one a week for the next few weeks, so keep your eyes peeled!


The Act of Sharing on Social Media

Author: Erxiao Wang



Social media has been a defining theme of the Information Age we live in. It is of considerable importance to understand, utilise, and engage wisely with social media platforms within our contemporary networked digital environment. This video is a theoretically inspired social media artefact; it introduces some of the key arguments in University of Westminster Professor Graham Meikle’s book Social Media: Communication, Sharing and Visibility. The book covers theories such as media convergence, the business model of online sharing, mediated online visibility, and the exploitable data that is subject to online surveillance. Professor Meikle also analyses how the commercial Internet of Web 2.0 enabled user-generated content and brought together public and personal communication, and how always-on mobile connectivity has enabled greater mobility that allows social change. In order to bring these key theories into our everyday practice and conversations about social media, I had a chat with a good friend of mine, Junchi Deng, an accordion player at the Royal Academy of Music here in London who posts and vlogs regularly. We talked about the act of sharing on social media and the idea of promoting oneself on social networking sites, especially for a musician like himself. Our discussion centres on the everyday use of social media and our individual observations of these platforms.

Friday, 4 August 2017

Terrorism and Social Media Conference Part Two

This is the second part of a two-part series on the #TASM conference. You can read the first part here.

The 27th and 28th June saw some of the world’s leading experts in counter-terrorism and 145 delegates from 15 countries gather at Swansea University’s Bay Campus for the Cyberterrorism Project’s Terrorism and Social Media conference (#TASMConf). Over the two days, 59 speakers presented their research into terrorists’ use of social media and responses to this phenomenon. The keynote speakers were Sir John Scarlett (former head of MI6), Max Hill QC (the UK’s Independent Reviewer of Terrorism Legislation), Dr Erin Marie Saltman (Facebook’s Policy Manager for counter-terrorism and counter-extremism in Europe, the Middle East and Africa), Professor Philip Bobbitt, Professor Maura Conway and Professor Bruce Hoffman. The conference covered a diverse range of disciplines including law, criminology, psychology, security studies, linguistics, and many more. Amy-Louise Watkin and Joe Whittaker take us through what was discussed (blog originally posted here).

Both Dr Weeda Mehran, and Amy-Louise Watkin and Sean Looney, presented on children in terrorist organisations and their portrayal through videos and images. Dr Mehran analysed eight videos and found that children help to create a spectacle as they generate memorability, novelty, visibility and competitiveness, and display high levels of confidence while undertaking executions. Watkin and Looney, in their analysis of images in online jihadist magazines, found notable differences between IS and AQ in their use of children: IS focuses on displaying brutality through images of child soldiers, while AQ tries to create shame and guilt in its Western followers through images of children as victims of Western-backed warfare. They concluded that these differences need to be taken into account when creating counter-messages and foreign policy.

Joe Whittaker presented his research on online radicalisation. He began with a literature review of the field, concluding that the academic consensus was that the Internet is a facilitator, rather than a driver, of radicalisation. He then offered five reasons as to why there was good reason to doubt this consensus: the lack of empirical data, how old the data is compared to the growth of the Internet, the few dissenting voices in the field, the changing online threat since 2014, and the wealth of information that can be learned from other academic fields (such as Internet studies and psychology). He then offered three case studies of individuals radicalised in the previous three years to learn whether the academic consensus still holds; finding that although it does in two cases, there may be good reason to believe that social media could drastically change the nature of some individuals’ radicalisation.

On the topic of corporate social responsibility in counter-terrorism, Chelsea Daymon and Sergei Boeke discussed different aspects of private entities engaging in policing extremist content on the Internet. Daymon drew upon the different projects and initiatives conducted by industry leaders, such as Google’s Jigsaw projects and the shared database between Microsoft, Twitter, Facebook, and YouTube. She, however, warned against the excessive use of predictive technology for countering violent extremism, suggesting that it could raise practical and ethical problems in the future. Drawing from Lawrence Lessig’s models, Boeke outlined four distinct categories of regulation that can be applied to the Internet: legal, architectural, market-based, and altering social norms before offering different suggestions for how this can be used in the context of countering terrorism.

The final panel related to creating counter-narratives, which included Dr Paul Fitzpatrick, who discussed different models of radicalisation, and how it related to his work as Prevent Coordinator at Cardiff Metropolitan University. He began by critiquing a number of prevalent models including Moghaddam’s staircase, as well as all multi-stage, sequential models, observing that, having seen over one hundred cases first-hand, no-one had followed the stages in a linear fashion. He also highlighted the particular vulnerabilities of students coming to university, who have their traditional modes of thinking deliberately broken down, and are susceptible to many forms of extreme thinking. Sarah Carthy, who presented a meta-analysis of counter-narratives, followed Dr Fitzpatrick. She observed that specific narratives are particularly powerful because they are simple, present a singular version of a story, and are rational (but not necessarily reasonable). Importantly, Carthy noted that despite many assuming that counter-narratives can do little harm – the worst thing that can happen is that they are ignored – some were shown to have a detrimental effect on the target audience, raising important ethical considerations. The final member of the counter-narrative panel was Dr Haroro Ingram, who presented his strategic framework for countering terrorist propaganda. Ingram’s framework, which draws on findings from the field of behavioural economics, aims to disrupt the “linkages” between extremist groups’ “system of meaning”. Dr Ingram observes that the majority of IS propaganda leverages automatic, heuristic-based thinking, and encouraging more deliberative thinking when constructing a counter-narrative could yield positive results.

The last day of the conference saw keynote Max Hill QC argue that there is a strong place for counter-narratives to discredit extremist narratives, and he spoke of his experiences visiting British Muslims who have been affected by the recent UK terrorist attacks. He told of the powerful counter-narratives that these British Muslims hold and argued their importance in countering extremist propaganda both online and offline. Hill also argued against criminalising tech companies that ‘don’t do enough’, asking how we measure ‘enough’. His presentation was followed by Dr Erin Marie Saltman, who discussed Facebook’s advancing efforts in countering terrorism and extremism. She argued that both automated techniques and human intervention are required to tackle this and to minimise errors on a site that sees visits from 1.28 billion people daily. Saltman gave an overview of Facebook’s violent extremism policies and spoke of the progress the organisation has made in identifying actors who attempt to return with new accounts. Overall, Saltman made it crystal clear that Facebook are strongly dedicated to eradicating all forms of terrorism and violent extremism from their platform.

With the wealth of knowledge that was shared from the academics, practitioners and private sector companies that attended TASM, and the standard of research proposals that followed from the post-TASM research sandpit, it is clear that TASM was a success. The research presented made it very clear that online terrorism is a threat that affects society as a whole and the solutions will need to come from multiple directions, multiple disciplines, and multiple collaborations. 
You can find Max Hill QC’s TASM speech in full here and follow us on Twitter @CTP_Swansea to find out when we will be releasing videos of TASM presentations.


Tuesday, 1 August 2017

Terrorism and Social Media Conference Part One

This is the first part of a two-part series on the #TASM conference. Please look out for the next part of this series later this week!

The 27th and 28th June saw some of the world’s leading experts in counter-terrorism and 145 delegates from 15 countries gather at Swansea University’s Bay Campus for the Cyberterrorism Project’s Terrorism and Social Media conference (#TASMConf). Over the two days, 59 speakers presented their research into terrorists’ use of social media and responses to this phenomenon. The keynote speakers were Sir John Scarlett (former head of MI6), Max Hill QC (the UK’s Independent Reviewer of Terrorism Legislation), Dr Erin Marie Saltman (Facebook’s Policy Manager for counter-terrorism and counter-extremism in Europe, the Middle East and Africa), Professor Philip Bobbitt, Professor Maura Conway and Professor Bruce Hoffman. The conference covered a diverse range of disciplines including law, criminology, psychology, security studies, linguistics, and many more. Amy-Louise Watkin and Joe Whittaker take us through what was discussed (blog originally posted here).

Proceedings kicked off with keynotes Professor Bruce Hoffman and Professor Maura Conway. Professor Hoffman discussed the threat from the Islamic State (IS) and al-Qaeda (AQ). He discussed several issues, one of which was the quiet regrouping of AQ, stating that their presence in Syria should be seen as just as dangerous as, and even more pernicious than, IS. He concluded that the Internet is one of the main reasons why IS has been so successful, predicting that as communication technologies continue to evolve, so will terrorists’ use of social media and the nature of terrorism itself. Professor Conway followed with a presentation discussing the key challenges in researching online extremism and terrorism. She focused mainly on the importance of widening the groups we research (not just IS!), widening the platforms we research (not just Twitter!), widening the mediums we research (not just text!), and additionally discussed the many ethical challenges that we face in this field.

The key point from the first keynote session was to widen the research undertaken in this field, and we think that the presenters at TASM made a good start on this with research on different languages, different groups, different platforms, women, and children. Starting with different languages, Professor Haldun Yalcinkaya and Bedi Celik presented research in which they adopted Berger and Morgan’s 2015 methodology on English-speaking Daesh supporters on Twitter and applied it to Turkish-speaking Daesh supporters on Twitter. They undertook this research while Twitter was carrying out major account suspensions, which dramatically reduced their dataset. They compared their findings with Berger and Morgan’s study and a previous Turkish study, finding a significant decrease in follower and followed counts, and noting that the average followed count was even lower than that of the average Twitter user. Other average values followed a similar trend, suggesting that their dataset had less influence on Twitter than in previous findings, which could be interpreted as evidence that Twitter’s suspensions were succeeding.

Next, we saw a focus away from the Middle East as Dr Pius Eromonsele Akhimien presented his research on Boko Haram and their social media war narratives. His research focused on the linguistics of YouTube videos from 2014, when the Chibok girls were abducted, until 2016, when some of the girls were released. Dr Akhimien emphasised the use of language as a weapon of war. His research revealed that Boko Haram displayed a great deal of confidence in their language choices and reinforced this through the use of strong statements. They additionally used taunts to emphasise their control, for example, “yes I have your girls, what can you do?” Lastly, they used threats, and followed through with these offline.

Continuing the focus away from the Middle East, Dr Lella Nouri, Professor Nuria Lorenzo-Dus and Dr Matteo Di Cristofaro presented their interdisciplinary research into the far-right groups Britain First (BF) and Reclaim Australia (RA). This research used corpus-assisted discourse studies (CADS) to analyse firstly why these groups use social media and secondly how they use it. The datasets were collected from Twitter and Facebook using the social media analytics tool Blurrt. One key finding was that both groups clearly favoured Facebook over Twitter, a preference not observed in other forms of extremism. Both groups also made salient use of othering, with Muslims and immigrants found to be the primary targets. Further analysis of this othering found that RA tended to use a specific topic or incident to support their goals and promote their ideology, while BF tended to portray Muslims as paedophiles and groomers to the same ends.

The diversity continued as Dr Aunshul Rege examined the role of women who had committed hijrah, as seen on Twitter. The most interesting finding from Dr Rege's research was the contradictory duality of these women's roles. Many of them were complaining post-hijrah of the very issues that pushed them into committing hijrah in the first place: loneliness, cultural alienation, language barriers, differential treatment, and restrictions on their freedom. They tweeted using the hashtag #nobodycaresaboutawidow and advised young women who were considering hijrah to bring Western home comforts with them, such as make-up.

Friday, 21 July 2017

Save your outrage: online cancer fakers may be suffering a different kind of illness



Peter Bath, University of Sheffield and Julie Ellis, University of Sheffield

Trust is very important in medicine. Increasing numbers of people are using the internet to manage their health by looking for facts about specific illnesses and treatments available. And patients, their carers and the public in general need to trust that this information is accurate, reliable and up to date.

Alongside factual health websites, the internet offers discussion forums, personal blogs and social media for people to access anecdotal information, support and advice from other patients. Individuals share their own experiences, feelings and emotions about their illnesses online. They develop relationships and friendships, particularly with people who have been through illnesses themselves and can empathise with them.

Some health professionals have concerns about the quality of medical information on the internet. But others are advocating that patients should be more empowered and encourage people to use these online communities to share information and experiences.

Within these virtual communities, people don’t just have to trust that the medical information they encounter is factually correct. They are also placing trust in the other users they encounter online. This is the case whether they are sharing their own, often personal, information or reading about the personal experiences of others.


Darker side to sharing


While online sharing can be very beneficial to patients, there is also a potentially darker side. There have been widely-publicised cases of “patients” posting information about themselves that is, at best, factually incorrect and might be considered deliberately deceptive.

Blogger Belle Gibson built a huge following after writing about being diagnosed with a brain tumour at the age of 20 and the experience of having just months to live. She blogged about her illness, treatment, recovery and eventual relapse while developing and marketing a mobile phone app, a website and a book. Through all of this she advocated diet and lifestyle changes over conventional medicine, claiming this approach had been key to her survival.

But Gibson’s stories were later revealed to be part of a tangled web of deceit, which also involved her promising to donate money to charities but, allegedly, never delivering the payments.

In one sense, people’s trust was broken when they realised they had paid money under false pretences. In another sense, they may have followed Gibson’s supposed example of halting prescribed treatments and adopting a new diet and lifestyle when there was no real evidence this would work. But, at a deeper level, people may feel betrayed because they sympathised and indeed empathised with a person who was later revealed to be a fraud.

The truth was eventually publicised by online news outlets and Gibson was subject to complaints and abuse on social media. But there is something about the anonymity of the internet that facilitates this kind of deceptive behaviour in the first place. People are far less likely to be taken in by this sort of thing in the real world, but online they are. And it destroys people's trust in online resources across the board.


Trust in extreme circumstances


Despite this, the moral outrage generated online by this kind of extreme and relatively isolated incident may be misplaced. There is evidence to suggest that people who do this may actually be ill, but with a very different sort of illness.

Faking diseases or illnesses – often described as Munchausen’s syndrome – is not unique to the internet and was reported long before its advent. The Roman physician Galen is credited with being the first to identify occasions on which people lied about or induced symptoms in order to simulate illness. More recently, the term “Munchausen by internet” has been used to describe behaviour in which people use chat rooms, blogs and forums to post false information about themselves to gain sympathy, trust or to control others.

Whichever way we view people who post such false information, their behaviour raises the question why people with genuine illnesses still share such intimate details when the potential for dishonesty from others is so evident. Our new research project, “A Shared Space and a Space for Sharing”, led by the University of Sheffield, is trying to understand how trust works in online spaces among people in extreme circumstances, such as the terminally ill.

We need to know why people trust and share so much with others when they have never met them and when there is so much potential for deceit and abuse. It is also important to identify people who fake illness online if we are to ensure there is public trust in genuine online support platforms.


Peter Bath, Professor of Health Informatics, University of Sheffield and Julie Ellis, Research associate, University of Sheffield

This article was originally published on The Conversation. Read the original article.

Friday, 14 July 2017

Book review: The SAGE Handbook of Social Media Research Methods

Charlotte Saunders is a Research Analyst at NatCen Social Research and the newest member of the NSMNSS team!
She works on quantitative and secondary analysis projects across a range of policy areas. Previously, Charlotte spent three years working for Ipsos MORI on a variety of qualitative and quantitative projects. Most recently she spent several months with VSO in Tanzania managing teams of young volunteers working in secondary schools. Charlotte holds an MSc in Public Health from the London School of Hygiene and Tropical Medicine. Her research project there looked at inequalities in access to clean water and sanitation in South America. 

In the past decade social media has transformed many aspects of our lives. It has revolutionised the way we communicate and the widespread adoption of mobile devices means its impact on everyday life continues to grow. As social researchers, we know that social media opens up huge new reserves of naturally occurring data for us to play with. The large volume of data produced is a goldmine for quantitative researchers; the opportunities provided by big data are well documented. But there are also new prospects in qualitative research, with new sources of ‘thick data’ which help us to understand the stories, emotions and worldviews behind the numbers.

Despite all these possibilities, many of us don't know how to take advantage of them. There are lots of things to consider, from the technical knowledge required to access and store the data to the potential ethical issues involved in using people's data for research without first asking their permission. The SAGE Handbook of Social Media Research Methods promises to guide researchers through the whole process - from research design and ethical issues, through data collection and storage, to analysis and interpretation.

The editors, Luke Sloan (Senior Lecturer at Cardiff University and Deputy Director of the Social Data Science Lab) and Anabel Quan-Haase (Associate Professor of Information and Media Studies and Sociology at the University of Western Ontario) have compiled chapters which cover the whole research process. Discussions of the limitations of naturally occurring data from social media, and some of the techniques that can be used to overcome them are practical and guide the researcher through the issues clearly. The chapters outlining the history, structure and demographics of some less common social media platforms give a good basic overview for those who rarely stray away from Facebook or Twitter.

Overall this is a helpful guide to research using social media. The ethics discussions outline the key issues that researchers need to consider. There are also clear step-by-step guides which walk researchers through some of the technical processes needed to engage with social media sites. Simon Hegelich’s chapter “R for Social Media Analysis” is a good example; simple and easy to follow, Hegelich takes the reader through a simple project analysing and visualising data from Twitter using the free software programming language R.

Most of the book is well written and easy to follow although some chapters are less accessible and require significant existing knowledge. These chapters are likely to be valuable for experienced researchers looking to transfer their knowledge to a social media setting, but students and junior researchers may well find themselves scouring the internet for definitions and context.

Social Media Research Methods fulfils its aim to allow researchers to “apply and tailor the various methodologies to their own research questions”.  The step-by-step guides are logical and easy to follow and the case studies demonstrate how methods can be used in real research. For those with a good existing understanding of the research methodologies and techniques in their field this is an invaluable text opening up the social media research world. Those with less experience will probably need to refer to other resources to get up to speed with some chapters, but even then this is a useful addition to any social research library.


The SAGE Handbook of Social Media Research Methods is available to purchase here.

Friday, 2 June 2017

Q&A Session with Authors of The SAGE Handbook of Social Media Research Methods

Last Friday there was a launch event in San Diego at ICA (International Communication Association) for the new SAGE Handbook of Social Media Research Methods.

The editors Luke Sloan and Anabel Quan-Haase kindly responded to the questions that you submitted. If you missed the event, you can view the Q&A session here: https://storify.com/SAGE_Methods/q-a-with-anabel-quan-haase-and-luke-sloan

Let us know your thoughts by tweeting us @NSMNSS!

Friday, 19 May 2017

The SAGE Handbook of Social Media Research Methods – Questions for Authors

Have you read the recently published SAGE Handbook of Social Media Research Methods? It offers a step-by-step guide to overcoming the challenges inherent in research projects that deal with ‘big and broad data’, from the formulation of research questions to the interpretation of findings. The Handbook includes chapters on specific social media platforms such as Twitter, Sina Weibo and Instagram, as well as a series of critical chapters.

There is a launch event taking place on Friday 26th May in the US, at the Communication and Technology reception at ICA (San Diego), sponsored by SAGE. The editors Luke Sloan and Anabel Quan-Haase are happy to answer any questions you may have about the book, even if you aren’t able to attend in person – their responses will be posted throughout the day on Twitter via @SAGE_Methods.

If you have any questions about particular chapters, or to do with social media research methods generally, please tweet us your questions @NSMNSS using #SMRM or email Keeva.Rooney@natcen.ac.uk or  Franziska.Marcheselli@natcen.ac.uk  by Wednesday 24th May. 

We will pass your questions on and you can look out for the responses during the event!


Friday, 12 May 2017

Anti-Islamic Content on Twitter

This blogpost was written by Carl Miller, Research Director, and Josh Smith, Researcher, at the Centre for the Analysis of Social Media (CASM) at Demos. @carljackmiller

This analysis was presented at the Mayor of London’s Policing and Crime Summit on Monday 24 April, 2017.

The Centre for the Analysis of Social Media at Demos has been conducting research to measure the volume of messages on Twitter algorithmically considered to be derogatory towards Muslims over a year, from March 2016 to March 2017. This is part of a broad effort to understand the scale, scope and nature of uses of social media that are possibly socially problematic and damaging.

Over a year, Demos’ researchers detected 143,920 Tweets sent from the UK considered to be derogatory and anti-Islamic – this is about 393 a day. These Tweets were sent from over 47,000 different users, and fell into a number of different categories – from directed insults to broader political statements.

A random sample of hateful Tweets were manually classified into three broad categories:
  • ‘Insult’ (just under half): Tweets using an anti-Islamic slur in a derogatory way, often directed at a specific individual.
  • ‘Muslims are terrorists’ (around one fifth): Derogatory statements generally associating Muslims and Islam with terrorism.
  • ‘Muslims are the enemy’ (just under two fifths): Statements claiming that Muslims, generally, are dedicated toward the cultural and social destruction of the West.
The researchers found that key events, especially terrorist attacks, drive large increases in the volume of messages on Twitter containing this kind of language.

The Brussels, Orlando, Nice, Normandy, Berlin and Quebec attacks all caused large increases. There was a period of heightened activity over Brexit, and sometimes online ‘Twitter storms’ (such as the use of derogatory slurs by Azealia Banks toward Zayn Malik) also drove sharp increases.

Tweets containing this language were sent from every region of the UK, but the most over-represented areas, compared to general Twitter activity, were London and the North-West.

Of the 143,920 Tweets containing this language and classified as being sent from within the UK, 69,674 (48%) contained sufficient information to be located within a broad area of the UK. To measure how many Tweets each region generally sends, a random baseline of 67 million Tweets was collected over 19 days in late February and early March. The volume of Tweets containing derogatory language towards Muslims was compared to this baseline, identifying regions where the volume was higher or lower than would be expected on the basis of general Twitter activity.
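As a rough illustration of the baseline comparison described above, each region's share of derogatory Tweets can be divided by its share of the baseline sample; a ratio above 1 marks the region as over-represented. This is a minimal sketch with invented region names and counts, not the Demos code or data:

```python
# Sketch of the regional baseline comparison (illustrative only:
# region names and counts are invented, not the Demos dataset).
def representation_ratio(target_counts, baseline_counts):
    """Ratio of each region's share of target Tweets to its share of
    baseline Tweets. A ratio > 1 means the region is over-represented
    in the target (derogatory) set relative to general activity."""
    target_total = sum(target_counts.values())
    baseline_total = sum(baseline_counts.values())
    return {
        region: (target_counts.get(region, 0) / target_total)
                / (baseline_counts[region] / baseline_total)
        for region in baseline_counts
    }

# Hypothetical example: region B sends 10% of baseline Tweets but 20%
# of the derogatory Tweets, so it is over-represented by a factor of 2.
target = {"A": 80, "B": 20}
baseline = {"A": 900, "B": 100}
ratios = representation_ratio(target, baseline)
print(ratios["B"])  # 2.0
```

The same calculation applies at any geographic resolution, which is presumably why it carries over from UK regions to the NUTS-3 breakdown of London boroughs discussed below.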

In London, North London sent markedly more tweets containing language considered derogatory towards Muslims than South London.

27,576 (39%) tweets were sent from Greater London. Of these, 14,953 Tweets (about half) could be located to a more specific region within London (called a ‘NUTS-3 region’; typically either a London Borough or a combination of a small number of London Boroughs).[1]
  • Brent, Redbridge and Waltham Forest sent the highest number of derogatory, anti-Islamic Tweets relative to their baseline average of general Twitter activity.
  • Westminster and Bromley sent the least number of derogatory, anti-Islamic Tweets relative to their baseline average of general Twitter activity.

Demos’ research identified six different online tribes. [2] These were:

Core political anti-Islam. The largest group, of about 64,000 users including recipients of Tweets: a politically active group engaged with international politics.
  • Hashtags employed by this group suggest engagement in anti-Islam and right wing political conversations: (#maga #tcot #auspol #banIslam #stopIslam #rapefugees)
  • In aggregate, words in user descriptions emphasise nationality, right-wing political interest and hostility towards Islam (anti, Islam, Brexit, UKIP, proud, country)
Contested reactions to Terrorist attacks. The second largest group, of about 18,000 users, including recipients of tweets.
  • Aggregate overview of user descriptions imply a relatively young group (sc, snapchat, ig, instagram, 17,18,19,20, 21)
  • User descriptions also imply a mix of political opinion (blacklivesmatter, whitelivesmatter, freepalestine)
  • Hashtags engage in conversations emerging in the aftermath of terrorist attacks (#prayforlondon, #munich, #prayforitaly, #prayforistabul, #prayformadinah, #orlando)
  • Likewise, hashtags are a mix of pro- and anti-Islamic (#britainfirst, #whitelivesmatter, #stopislam, #postrefracism, #humanity)
The counter-speechers. A group of 8,700 people; although of course the data collection, by design, only detected the part of the counter-speech conversation containing language that can be used in a way derogatory towards Muslims. It is therefore likely that it did not collect the majority of counter-speech activity.[3]

The shape of the cluster shows a small number of comments that were heavily responded to and retweeted.
  • Hashtags engage predominantly with anti-racist conversations (#racisttrump, #postrefracism, #refugeeswelcome, #racism, #islamophobia)
  • In aggregate, user descriptions show mix of political engagement and general identification with left-wing politics (politics, feminist, socialist, Labour).
  • Overall they also show more descriptions of employment than the other clusters (writer, author, journalist, artist).
The Football Fans. 7,530 users are in this cluster, including recipients of Tweets.
  • The bio descriptions of users within this cluster overwhelmingly contain football-related words (fan, football, fc, lfc, united, liverpool, arsenal, support, club, manchester, mufc, chelsea, manutd, westham)
  • No coherent use of hashtags. This cluster engaged in lots of different conversations.
India/Pakistan. Just under 5,000 users are in this cluster (including recipients).
  • Hashtags overwhelmingly engage in conversation to do with India-Pakistan relations or just Pakistan (#kashmir, #surgicalstrike, #pakistan, #actagainstpak).
  • In aggregate, words in user descriptions relate to Indian/nationalist identity and pro-Modi identification (proud, Indian, hindu, proud indian, nationalist, dharma, proud hindu, bhakt)
The Gamers. 2,813 users are in this cluster (including Tweet recipients).
  • There is no coherent use of hashtags.
  • Overall, aggregate comments in user descriptions either imply young age (16,17,18) or are related to gaming (player, cod [for ‘Call of Duty’], psn)
A small number of accounts overall are responsible for many of the tweets containing language generally considered to be derogatory towards Muslims.
  • 50% of Tweets classified as containing anti-Islamic, derogatory language were sent by just 6% of accounts
  • 25% of Tweets classified as containing anti-Islamic, derogatory language were sent by just 1% of accounts
Likewise, a small number of accounts were the recipients of the derogatory, anti-Islamic activity that was directed at a particular person.

The full paper, outlining methodology and ethical notes, can be downloaded here. 



NOTES –
[1] An important caveat is that the volumes associated with each of these regions are obviously smaller than the total number of Tweets in the dataset overall
[2] A caveat here is that this network graph includes Tweets that are misclassified and also includes the recipients of abuse. It is also important to note that not everyone who shares Tweets does so with malicious intent; they can be doing so to highlight the abuse to their own followers.
[3] In other work on the subject we have found there are usually more posts expressing solidarity with and support for Muslims than attacks on them.