This is the second part of a two-part series on the #TASM conference. You can read the first part here.
On the 27th and 28th June, some of the world’s leading experts in counter-terrorism and 145 delegates from 15 countries descended on Swansea University’s Bay Campus for the Cyberterrorism Project’s Terrorism and Social Media conference (#TASMConf). Over the two days, 59 speakers presented their research into terrorists’ use of social media and responses to this phenomenon. The keynote speakers were Sir John Scarlett (former head of MI6), Max Hill QC (the UK’s Independent Reviewer of Terrorism Legislation), Dr Erin Marie Saltman (Facebook’s Policy Manager for counter-terrorism and counter-extremism in Europe, the Middle East and Africa), Professor Philip Bobbitt, Professor Maura Conway and Professor Bruce Hoffman. The conference brought together a diverse range of disciplines, including law, criminology, psychology, security studies, linguistics, and many more. Amy-Louise Watkin and Joe Whittaker take us through what was discussed (blog originally posted here).
Both Dr Weeda Mehran and the team of Amy-Louise Watkin and Sean Looney presented on children in terrorist
organisations and their portrayal through videos and images. Dr Mehran analysed
eight videos and found that children help to create a spectacle as they
generate memorability, novelty, visibility and competitiveness, and display
high levels of confidence while undertaking executions. On the other hand, Watkin
and Looney found in their analysis of images in online jihadist magazines that there
are notable differences between IS and AQ in their use of children, with IS
focusing on displaying brutality through images of child soldiers and AQ seeking
to evoke shame and guilt among its Western followers through images of children
as victims of Western-backed warfare. They concluded that these differences need
to be taken into account when creating counter-messages and foreign policy.
Joe Whittaker presented his
research on online radicalisation. He began with a literature review of the
field, concluding that the academic consensus was that the Internet is a
facilitator, rather than a driver, of radicalisation. He then offered five
reasons to doubt this consensus: the lack of
empirical data, how old the data is compared to the growth of the Internet, the
few dissenting voices in the field, the changing online threat since 2014, and
the wealth of information that can be learned from other academic fields (such
as Internet studies and psychology). He then offered three case studies of
individuals radicalised in the previous three years to learn whether the
academic consensus still holds, finding that although it does in two cases,
there may be good reason to believe that social media could drastically change
the nature of some individuals’ radicalisation.
On the topic of corporate social
responsibility in counter-terrorism, Chelsea Daymon and Sergei Boeke discussed different
aspects of private entities engaging in policing extremist content on the Internet.
Daymon drew upon the different projects and initiatives conducted by industry
leaders, such as Google’s Jigsaw projects and the shared database between
Microsoft, Twitter, Facebook, and YouTube. She, however, warned against the
excessive use of predictive technology for countering violent extremism,
suggesting that it could raise practical and ethical problems in the future. Drawing
from Lawrence Lessig’s models, Boeke outlined four distinct categories of
regulation that can be applied to the Internet (legal, architectural,
market-based, and the altering of social norms) before offering suggestions
for how each can be used in the context of countering terrorism.
The final panel related to creating
counter-narratives, which included Dr Paul Fitzpatrick, who discussed different
models of radicalisation, and how it related to his work as Prevent Coordinator
at Cardiff Metropolitan University. He began by critiquing a number of
prevalent models, including Moghaddam’s staircase and multi-stage,
sequential models more generally, observing that, of the more than one hundred
cases he had seen first-hand, not one had followed the stages in a linear
fashion. He also highlighted the
particular vulnerabilities of students coming to university, who have their
traditional modes of thinking deliberately broken down, and are susceptible to
many forms of extreme thinking. Sarah Carthy, who presented a meta-analysis of
counter-narratives, followed Dr Fitzpatrick. She observed that specific
narratives are particularly powerful because they are simple, present a
singular version of a story, and are rational (but not necessarily reasonable).
Importantly, Carthy noted that despite many assuming that counter-narratives
can do little harm – the worst thing that can happen is that they are ignored –
some were shown to have a detrimental effect on the target audience, raising
important ethical considerations. The final member of the counter-narrative
panel was Dr Haroro Ingram, who presented his strategic framework for
countering terrorist propaganda. Ingram’s framework, which draws on findings
from the field of behavioural economics, aims to disrupt the “linkages” between
extremist groups’ “system of meaning”. Dr Ingram observed that the majority of
IS propaganda leverages automatic, heuristic-based thinking, and suggested that
encouraging more deliberative thinking when constructing a counter-narrative
could yield positive results.
The last day of the conference saw keynote speaker Max Hill QC argue that
counter-narratives have a strong role to play in discrediting extremist
narratives. He spoke of his experiences visiting British Muslims who have been
affected by the recent UK terrorist attacks, told of the powerful
counter-narratives that these British Muslims hold, and argued for their
importance in countering extremist propaganda both online and offline. Hill
also argued against criminalising tech companies that ‘don’t do enough’,
asking how we would measure ‘enough’. His presentation was shortly followed
by Dr Erin Marie Saltman, who
discussed Facebook’s advancing efforts in countering terrorism and extremism.
She argued that both automated techniques and human intervention are required
to tackle this and minimise errors on the site that sees visits from 1.28
billion people daily. Saltman gave an overview of Facebook’s Violent Extremism
Policies and spoke of the progress the organisation has made regarding
identifying actors who attempt to create new accounts. Overall, Saltman made
it clear that Facebook is strongly dedicated to eradicating all forms
of terrorism and violent extremism from its platform.
With the wealth of knowledge that
was shared from the academics, practitioners and private sector companies that
attended TASM, and the standard of research proposals that followed from the
post-TASM research sandpit, it is clear that TASM was a success. The research
presented made it very clear that online terrorism is a threat that affects
society as a whole and the solutions will need to come from multiple
directions, multiple disciplines, and multiple collaborations.
You can find Max
Hill QC’s TASM speech in full here and follow
us on Twitter @CTP_Swansea to find out when we will be releasing videos of TASM
presentations.