Thursday 30 October 2014

Innovations in knowledge sharing: creating our book of blogs

Kandy Woodfield is the Learning and Enterprise Director at NatCen Social Research, and the co-founder of the NSMNSS network. You can reach Kandy on Twitter @jess1ecat.

Yesterday the NSMNSS network published its first ebook, a collection of over fifty blogs penned by researchers from around the world who are using social media in their social research. To the best of our knowledge this is the first book of blogs in the social sciences. It draws on insights ranging from those of experienced and well-known commentators on social media research to the thoughts of researchers new to the field.

Why did we choose to publish a book of blogs rather than a textbook or peer-reviewed article?

In my view there is space in the academic publishing world for both peer-reviewed works and self-published books. We chose to publish a book of blogs rather than a traditional academic tome because we wanted to create something quickly which reflected the concerns and voices of our members. Creating a digital text built on people’s experiences of using social media seemed an obvious choice. Many of our network members were already blogging about their use of social media for research; for those who weren’t, this was an opportunity to write something short and have their voices heard.

Unlike other fields of social research, social media research is not yet populated with established authors and leading writers, and the field’s constant state of flux means it is unlikely ever to settle in quite the same way as, say, ethnography or survey research. The tools, platforms and approaches to studying them are constantly changing. In this context, works which are published quickly to feed the plentiful discussions about the methods, ethics and practicalities of social media research seem an important counterpoint to more scholarly articles and texts.

How did we do it?

Step 1 – Create a call to action: We used social media channels to publicise the call for authors, posting tweets with links to the network blog which gave authors a clear brief on what we were looking for. In less than a fortnight we had over 40 authors signed up.

Step 2 – Decide on the editorial control you want to have: We let authors know that we were not peer reviewing content; if someone was prepared to contribute, we would accept that contribution unless it was off theme. In the end we used every submitted blog with one exception. This was an important principle for us: the network is member-led and we wanted this book to reflect the concerns of our members, not those of an editor or peer-review panel. The core team at NatCen undertook light-touch editing of formatting and spelling, but otherwise the contributions are unadulterated. We also organised the contributions into themes to make it easier for readers to navigate.

Step 3 – Manage your contributions: We used Google Drive to host an authors’ sign-up spreadsheet asking for contact information and an indication of each blog’s title and content. This saved time because we did not have to create a database ourselves, and it was invaluable when it came to contacting authors along the way. We also invited people to act as informal peer reviewers: some of our less experienced authors wanted feedback, and this was provided by other authors.

Step 4 – Keep a buzz going and keep in touch with authors: We found it important to keep the book of blogs uppermost in contributors’ minds. We did this through a combination of social media (using the #bookofblogs hashtag) and regular blog posts and email updates to authors.

Step 5 – Set milestones: We set not just an end date for contributions but several milestones along the way to reach 40% and 60% of contributions; this helped keep the momentum going.

Step 6 – Choose your publishing platform: There are a number of self-publishing platforms. We chose to use PressBooks, which has a very smooth and simple user interface similar to many blogging tools like WordPress. We did this because we wanted authors to upload their own contributions, saving administrative time. By and large this worked fine, although inevitably we ended up uploading some for authors and dealing with formatting issues!

Step 7 – Decide on format and distribution channels: You will need to consider whether to have just an e-book or both an e-book and a traditional book, and where to sell your book. We chose Amazon and the Kindle (Mobi) format for coverage and global reach, but you can publish in various formats and there is a range of channels for selling your book.

Step 8 – Stick with it: When you’re creating a co-authored text like this with so many contributors you need to stick with it, have a clear vision of what you are trying to create and believe that you will reach your launch ready to go. And we did; we hope you enjoy it.

Watch a short video here featuring a few of the authors from the Book of Blogs discussing what their pieces are about.
Join the conversation today and buy the e-book here!

Tuesday 21 October 2014

It started with a tweet...


Kandy Woodfield is the Learning and Enterprise Director at NatCen Social Research, and the co-founder of the NSMNSS network. You can reach Kandy on Twitter @jess1ecat.


It started with a tweet, a blog post and a nervous laugh. Three months later I found myself looking at a book of blogs. How did that happen?! Being involved in the NSMNSS network since its beginning has been an ongoing delight for me. It's full of researchers who aren't afraid to push the boundaries, question established thinking and break down a few silos. When I began my social research career, mobile phones were suitcase-sized and collecting your data meant lugging a tape recorder and tapes around with you. That world is gone: the smartphone most of us carry in our pockets now replaces most of the researcher's kitbag, and a single device is our street atlas, translator, digital recorder, video camera and so much more. Our research world today is a different place from 20 years ago; social media are commonplace and we don't bat an eyelid at running a virtual focus group or online survey. We navigate and manage our social relationships using a plethora of tools, apps and platforms, and the worlds we inhabit physically no longer limit our ability to make connections.

Social research as a craft, a profession, is all about making sense of the worlds and networks we and others live in; how strange would it be, then, if the methods and tools we use to navigate these new social worlds were not also changing and flexing? Our network set out to give researchers a space to reflect on how social media and new forms of data were challenging conventional research practice and how we engage with research participants and audiences. If we had found little to discuss and little change it would have been worrying. I am relieved to report the opposite: researchers have been eager to share their experiences, dissect their successes at using new methods and explore knotty questions about robustness, ethics and methods.

Our forthcoming book of blogs is our members' take on what that changing methodological world feels like to them; it's about where the boundaries are blurring between disciplines and methods, roles and realities. It is not a peer-reviewed collection and it's not meant to be used as a textbook. What we hope it offers is a series of challenging, interesting, topical perspectives on how social research is adapting, or not, in the face of huge technological and social change.

We are holding a launch event on Wednesday 29th October at NatCen Social Research; if you would like more details, please contact us.

I want to thank every single author, from the established bloggers to the new writers, who has shared their thoughts with us in this volume. I hope you enjoy the book as much as I have enjoyed curating it. Remember you can follow the network and join in the discussion @NSMNSS, #NSMNSS or at our blog: http://nsmnss.blogspot.co.uk/

Thursday 16 October 2014

Analytics, Social Media Management and Research Impact

Sebastian Stevens is an Associate Lecturer and Research Assistant at Plymouth University. He teaches research methods to social science students, specialising in quantitative methods. He is on Twitter @sebstevens99 and has a blog at www.everydaysocialresearch.com.

A key benefit that social media can bring to social science research lies in impact and engagement. Demonstrating how a research project will achieve impact and engage the public is a key requirement of most social science research bids today, with many funders no longer regarding the traditional conference paper and journal article as sufficient. Funders want to see not only how your research will contribute to the current body of knowledge, but also how it could influence other areas of academia as well as delivering public engagement and wider economic and societal benefits.

To promote your research to the widest possible audience, it is often necessary to use a number of social media platforms in order to access different populations. It is also now possible to measure this level of engagement through web analytics, with the two most common social media platforms (Facebook and Twitter) both providing free analytics tools for their users. Managing content on, and evaluating the impact of, a number of social media platforms can, however, become tiresome and laborious, an issue overcome by using a Social Media Management System (SMMS).
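
Even before adopting an SMMS, the free analytics exports mentioned above can be summarised with a few lines of code. The Python sketch below is illustrative only: it assumes a hypothetical CSV export named post_metrics.csv with columns post_id, impressions, likes, shares and replies, and simply ranks posts by engagement rate. Real exports from Facebook or Twitter will use their own file layouts and column names.

# Minimal sketch: rank posts by engagement rate from an exported analytics CSV.
# The file name and column names are assumptions for illustration; real
# platform exports will differ.
import csv

def engagement_rate(row):
    # Interactions per impression for a single post.
    interactions = int(row["likes"]) + int(row["shares"]) + int(row["replies"])
    impressions = int(row["impressions"])
    return interactions / impressions if impressions else 0.0

with open("post_metrics.csv", newline="") as f:
    rows = list(csv.DictReader(f))

for row in sorted(rows, key=engagement_rate, reverse=True)[:5]:
    print(f"{row['post_id']}: {engagement_rate(row):.1%} engagement")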

The benefits of using an SMMS are considerable: for a reasonable yearly subscription it takes the hassle out of managing multiple social media platforms for your research. There are many SMMSs on the market today; one that I am currently using on a project is Hootsuite. This particular SMMS offers a research team the following benefits:

1.    Scheduling – Researchers are busy people and have little time to manage multiple social media accounts. With an SMMS you can schedule posts to be sent to multiple social media platforms at the times of day known to deliver the largest impact (see the sketch after this list).

2.    Enhanced analytics – The standard analytics of the accounts included in the SMMS are available in one place, alongside extra features including Google Analytics and Klout scores.  

3.    Streams – These let you keep up to date with activity on your accounts, such as newsfeeds, retweets, mentions and hashtag usage, among many others.

4.    Multiple Authors – Multiple authors can be added to the system, spreading the responsibility across the team rather than leaving it with one member.

5.    RSS/Atom feeds – You can keep up with updates from other websites related to your research by adding their RSS/Atom feeds to the system.
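
To make the scheduling point concrete, here is a minimal sketch using the open-source Python library schedule (pip install schedule). The post_update function is a hypothetical placeholder: actually publishing a post requires each platform's API and credentials, which is precisely the plumbing an SMMS handles for you.

# Sketch of scheduling posts at fixed times of day with the "schedule" library.
# post_update is a stand-in for a real platform API call.
import time
import schedule

def post_update(text):
    print(f"Posting: {text}")  # placeholder for a real API call

# Times chosen purely for illustration.
schedule.every().day.at("08:30").do(post_update, "New blog post on research impact")
schedule.every().friday.at("16:00").do(post_update, "Weekly project round-up")

while True:
    schedule.run_pending()
    time.sleep(60)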

By adopting an SMMS, a research team gains a centralised, hassle-free dashboard in which to create and post content and to evaluate its impact. Each management system comes at a different price and includes different features; however, most will take the hassle out of managing your social media platforms and provide greater opportunities to evaluate your research impact.

Thursday 9 October 2014

Sentiment and Semantic Analysis

Michalis founded DigitalMR in 2010 following a corporate career in market research with Synovate and MEMRB that began in 1991. This post was first published on the DigitalMR blog. Explore the blog here: www.digital-mr.com/blog

It took a bit longer than anticipated to write Part 3 of this series of posts about the proliferation of content around social media research and social media marketing. In the previous two parts, we talked about Enterprise Feedback Management (December 2013) and Short, Event-Driven Intercept Surveys (February 2014). This post is about sentiment and semantic analysis: two interrelated terms in the “race” to reach the highest sentiment accuracy that a social media monitoring tool can achieve. From where we sit, this seems to be a race that DigitalMR is running on its own, competing against its own best score.
 
The best academic institution in this field, Stanford University, announced a few months ago that they had reached 80% sentiment accuracy; they have since raised it to 85%, but this has only been achieved in the English language, based on comments for one vertical, namely movies – a rather straightforward case of “I liked the movie” or “I did not like it and here is why…”. That is not to say there will be no people sitting on the fence with their opinion about a movie, but even neutral comments in this case will carry less ambiguity than those in other product categories or subjects. The DigitalMR team of data scientists has been consistently achieving over 85% sentiment accuracy in multiple languages and multiple product categories since September 2013; this is when a few brilliant scientists (mainly engineers and psychologists) cracked the code of multilingual sentiment accuracy!
Let’s dive into sentiment and semantics in order to take a closer look at why these two types of analysis are important and useful for next-generation market research.
 
Sentiment Analysis
 
The sentiment accuracy of most automated social media monitoring tools (we know of about 300 of them) is lower than 60%. This means that if you take 100 posts that are supposed to be positive about a brand, only 60 of them will actually be positive; the rest will be neutral, negative or irrelevant. This is almost like the flip of a coin, so why do companies subscribe to SaaS tools with such unacceptable data quality? Does anyone know? The caveat around sentiment accuracy is that the maximum achievable accuracy using an automated method is not 100% but rather 90%, or even less. This is because when humans are asked to annotate the sentiment of a set of comments, they will disagree at least 1 time in 10. DigitalMR has achieved 91% in the German language, but that accuracy was established by 3 specific DigitalMR curators; if we had 3 different people curate the comments we might arrive at a different figure. Sarcasm – and ambiguity more generally – is the main reason for this disagreement. Some studies of large numbers of tweets (such as the one described in the paper “Semi-Supervised Recognition of Sarcastic Sentences in Online Product Reviews”) have shown that fewer than 5% of the tweets reviewed were sarcastic. The question is: does it make sense to solve the problem of sarcasm in machine learning-based sentiment analysis? We think it does, and we find it exciting that no one else has solved it yet.
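
Accuracy figures like the ones quoted above are typically established by comparing machine-assigned labels with human-curated “gold” labels. A minimal sketch of that comparison, using invented labels (the actual evaluation sets and tools behind the figures in this post are not shown here):

# Toy sketch: check a sentiment tool's accuracy against human-curated labels.
# The two label lists are invented for illustration.
from collections import Counter

human   = ["positive", "negative", "neutral", "positive", "negative"]
machine = ["positive", "neutral",  "neutral", "positive", "negative"]

accuracy = sum(h == m for h, m in zip(human, machine)) / len(human)
print(f"Sentiment accuracy: {accuracy:.0%}")  # 80% in this toy example

# Where the disagreements happen (e.g. negative read as neutral) shows
# which classes the tool struggles with.
print(Counter((h, m) for h, m in zip(human, machine) if h != m))
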
Automated sentiment analysis allows us to create structure around large amounts of unstructured data without having to read each document or post one by one. We can analyse sentiment by brand, topic, sub-topic, attribute, topic within brand and so on; this is when social analytics becomes a very useful source of insights for brand performance. The WWW is the largest focus group in the world and it is always on. We just need a good way to turn qualitative information into robust, contextualised quantitative information.
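
Once posts carry sentiment labels, structuring them by brand, topic or sub-topic is largely a matter of grouping and counting. A small sketch with invented posts, standing in for the output of a listening tool:

# Sketch: turn labelled posts into a simple brand-level sentiment table.
# The posts are invented; in practice they would come from a listening tool.
from collections import defaultdict

posts = [
    {"brand": "BrandA", "topic": "price",   "sentiment": "negative"},
    {"brand": "BrandA", "topic": "service", "sentiment": "positive"},
    {"brand": "BrandB", "topic": "price",   "sentiment": "positive"},
    {"brand": "BrandB", "topic": "price",   "sentiment": "positive"},
]

table = defaultdict(lambda: defaultdict(int))
for p in posts:
    table[p["brand"]][p["sentiment"]] += 1

for brand, counts in table.items():
    share = counts["positive"] / sum(counts.values())
    print(f"{brand}: {dict(counts)} ({share:.0%} positive)")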
 
Semantic Analysis
 
Some describe semantic analysis as “keyword analysis”, which could also be referred to as “topic analysis”; as described in the previous section, we can even drill down to report on sub-topics and attributes.
 
Semantics is the study of meaning and of understanding language. As researchers we need to provide the context that goes along with the sentiment, because without the right context the intended meaning can easily be misunderstood. Ambiguity makes this type of analytics difficult: for example, when we say “apple”, do we mean the brand or the fruit? When we say “mine”, do we mean the possessive pronoun, the explosive device, or the place from which we extract useful raw materials?
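
As a toy illustration of resolving that kind of ambiguity, a rule-based pass can look at the words surrounding “apple”. Real semantic analysis relies on models trained on far more context rather than hand-written cue lists, but the principle is the same; everything in this sketch is invented for illustration.

# Toy word-sense disambiguation for "apple": brand or fruit, judged from
# surrounding words. The hand-written cue lists are for illustration only.
BRAND_CUES = {"iphone", "ipad", "mac", "ios", "store"}
FRUIT_CUES = {"juice", "pie", "tree", "orchard", "eat"}

def apple_sense(text):
    words = set(text.lower().split())
    brand_hits = len(words & BRAND_CUES)
    fruit_hits = len(words & FRUIT_CUES)
    if brand_hits == fruit_hits:
        return "ambiguous"
    return "brand" if brand_hits > fruit_hits else "fruit"

print(apple_sense("Queueing outside the Apple store for the new iPhone"))  # brand
print(apple_sense("Grandma's apple pie and fresh juice"))                  # fruit
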
Semantic analysis can help:
  • extract relevant and useful information from large bodies of unstructured data, i.e. text;
  • find an answer to a question without having to ask anyone;
  • discover the meaning of colloquial speech in online posts; and
  • uncover the specific meanings of words used in foreign languages mixed with our own.
What does highly accurate sentiment and semantic analysis of social media listening posts mean for market research? It means that a US$50 billion industry can finally divert some of its spending from asking questions of a sample, using long and boring questionnaires, to listening to the unsolicited opinions of the whole universe (census data) of their product category’s users.
 
This is big data analytics at its best, and once there is confidence that sentiment and semantics are accurate, the sky is the limit for social analytics. Think about detection and scoring of specific emotions, not just varying degrees of sentiment; think automated relevance ranking of posts in order to allocate them to vertical reports correctly; think rating purchase intent and thus identifying hot leads. After all, accuracy was the only reason why Google beat Yahoo and became the most used search engine in the world.

Thursday 2 October 2014

7 reasons you should read Qualitative Data Analysis with NVivo

Kath McNiff is a Technical Communicator at QSR. You can contact Kath on Twitter @KMcNiff. This post was originally published on the NVivo blog. You can read more by Kath and other NVivo bloggers by visiting http://blog.qsrinternational.com/

Somewhere on your computer there are articles to review and interviews to analyze. You also have survey results, a few videos and some social media conversations to contend with.

Where to begin?

Well, here’s one approach: Push a few buttons and bring everything into NVivo. Then dive head-first into your material and code the emerging themes. Become strangely addicted to coding and get caught up in a drag and drop frenzy. Then come up for air – only to be faced with 2000 random nodes and a supervisor/client demanding to know what it all means.

Or, you could do what successful NVivo users have been doing for the past six years – take a sip of your coffee and open Qualitative Data Analysis with NVivo.

This well-thumbed classic (published by SAGE) has been revised and updated by Pat Bazeley and co-author Kristi Jackson.

Here are 7 reasons why you should read it:

1. Pat and Kristi guide you through the research process and show you how NVivo can help at each stage. This means you learn to use NVivo and, at the same time, get an expert perspective on ‘doing qual’.
2. No matter what kind of source material you’re working with (text, audio, video, survey datasets or web pages)—this updated edition gives you sensible, actionable techniques for managing and analyzing the data.
3. The authors share practical coding strategies (gleaned from years of experience) and encourage you to develop good habits—keep a research journal, make models, track concepts with memos, don’t let your nodes go viral. Enjoy the ride.
4. The book is especially strong at helping you to think about (and set up) the ‘cases’ in your project—this might be the people you interviewed or the organizations you’re evaluating. Setting up these cases and their attributes helps you to unleash the power of NVivo’s queries. How are different sorts of cases expressing an idea? Why does this group say one thing and that group another? What are the factors influencing these contrasts? Hey wait a minute, I just evolved my theme into a theory. Memo that.
5. If you’re doing a literature review in NVivo, chapter 8 is a gold mine (especially if you use NCapture to gather material from the web, or if you use bibliographic software like EndNote).
6. Each chapter outlines possible approaches, gives concrete examples and then provides step-by-step instructions (including screenshots) for getting things done. All in a friendly and approachable style.
7. This book makes a great companion piece to Pat’s other new text, Qualitative Data Analysis: Practical Strategies. Read the ‘strategies’ book for a comprehensive look at the research process (in all its non-linear, challenging and exhilarating glory) and read this latest book to bring your project to life in NVivo.