
This is how social media data can help NGOs

October 20, 2017
(Photo: Loic Venance/AFP/Getty Images)

Stefaan Verhulst is the chief research and development officer, and Andrew Young is the knowledge director, at the Governance Laboratory (GovLab) at New York University. Together, they recently wrote a report on the role that social media data can play in the nonprofit sector. I asked them a series of questions about it.

HF — Your report suggests that social media data can help NGOs [nongovernmental organizations] and nonprofit organizations better understand the situations that they face. How is this data valuable?

SV & AY — When collected, analyzed and used responsibly, social media data can add value in five ways:

First, it can provide information on the situation on the ground, in real time. For example, Facebook’s Disaster Maps initiative seeks to provide organizations such as UNICEF, the Red Cross and the World Food Program with information on people’s location (at an aggregate level), how they move from one point to another, and whether they have used the platform’s Safety Check to mark themselves as safe following a disaster. This shows where people are, what evacuation routes they are following and how they are doing, helping humanitarian organizations and public sector entities in myriad ways.

Second, social media data can be combined with other widely dispersed data sets to create new knowledge and ensure that those responsible for solving problems have the most useful information at hand. The Yelp Dataset Challenge, for example, provides public access to the company’s crowdsourced review and ratings data through a prize-backed challenge. Yelp offers cash rewards to winning teams of university students who submit original research using the data in innovative ways.

Third, social media data can help model service delivery in a targeted, evidence-based manner. For instance, Waze, the most widely used crowdsourced traffic and navigation platform, partnered with cities and government agencies to share publicly available incident and road closure data through its free Connected Citizens Program.

Fourth, social media data can power prediction, helping institutions respond to problems before they occur. Researchers are, for example, trying to help government drug regulators identify adverse drug reactions (ADRs) through social media, rather than relying only on what surfaces during clinical trials (a brief illustrative sketch of this idea appears at the end of this answer).

Finally, social media data can help institutions monitor and evaluate the real-world impacts of policies or messaging. In 2014, Sport England analyzed over 10 million posts by women on sport and exercise to develop a strategy to address the country’s gender gap in sport. The analysis revealed that while women felt positive about sport, different groups of women experienced different barriers to exercise. These insights informed a messaging strategy based around the manifesto: “Women come in all shapes and sizes and all levels of ability. It doesn’t matter if you’re a bit rubbish or an expert. The point is you’re a woman and you’re doing something.”
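To make the prediction example concrete, here is a minimal sketch, in Python, of how a first-pass adverse drug reaction signal might be pulled from social media posts: flag any post that mentions a monitored drug together with a symptom term and route it to a human reviewer. The drug and symptom lists are hypothetical placeholders, and real pharmacovigilance pipelines are far more sophisticated than a keyword match.

```python
# Hypothetical first-pass ADR signal detection: flag posts that mention
# a monitored drug together with a symptom term.
DRUGS = {"drugalin", "examplazole"}                     # hypothetical drug names
SYMPTOMS = {"rash", "dizziness", "nausea", "headache"}  # illustrative symptom terms

def flag_possible_adr(post: str) -> bool:
    """Return True if a post mentions both a monitored drug and a symptom."""
    words = set(post.lower().split())
    return bool(words & DRUGS) and bool(words & SYMPTOMS)

posts = [
    "Started drugalin last week and now I have a terrible rash",
    "Loving the weather today!",
]
flagged = [p for p in posts if flag_possible_adr(p)]
print(flagged)  # posts worth routing to a human reviewer
```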

HF — You discuss “data collaboratives” — partnerships in which actors from different sectors exchange information for public benefit. What examples do we have of successful data collaboratives and how have they worked?

SV & AY — There are many types and flavors of data collaboratives, and there are inspiring examples in each category. These include:

Orange’s Data for Development Challenge in Africa: Orange Telecom hosted a challenge that provided researchers with anonymized, aggregated Call Detail Record (CDR) data to address development problems in areas such as transportation, health and agriculture. Winners included research on using mobile phone data for electrification planning, on how mobile phone access affects millet prices, and on how waterborne parasites might spread through human movement.

Yelp offers its globally crowdsourced user data on restaurants to students and researchers in its Yelp Dataset Challenge, which runs for four months and provides cash prizes and support for conference travel. The company challenges participants to discover insights and innovations in topics such as photo classification algorithms, natural language processing and sentiment analysis, change points and events, graph mining, urban planning and more.

The California Data Collaborative automates the collection, analysis and secure storage of data on metered water use from participating city and state government agencies. This information allows for the creation of a more accurate data set that details how much, when and where water was used by California residents.

Zillow, an online listing service for single-family residences, condominiums and co-op homes, has developed a tool by pooling information from the credit bureau TransUnion, the U.S. Census Bureau, the Freddie Mac Primary Mortgage Market Survey and the Bureau of Labor Statistics’ Employment Cost Index with its own data collected from the buyers, sellers and renters who use its website. The resulting “Zestimate” home price index, alongside historical values, rental, forecast and geographic affordability data, provides a more comprehensive picture of the housing market in North America.

The JP Morgan Chase Institute taps into JP Morgan’s proprietary data, experience and market access to create analyses and convene stakeholders. For their 2016 Online Platform Economy Report, the JP Morgan Chase Institute used anonymized account data from October 2012 to June 2016 from samples of more than 240,000 Chase customers who received income from 42 different platforms, such as TaskRabbit, Airbnb or Uber. This report detailed the burgeoning online economy to better inform policy and public response to the field.

And finally, at the GovLab, with funding from Data2X, we’re in the process of developing a Data Collaborative on Gender and Urban Mobility together with UNICEF, the ISI Foundation, the Universidad del Desarrollo/Telefónica R&D Center and DigitalGlobe, focusing on how data held by the private sector could provide greater insight into the mobility challenges experienced by women and girls in Santiago, Chile (and in other global megacities).

HF — Are there any examples of less successful data collaboratives, and if so, what can we learn from them?

SV & AY — The data collaboratives space is certainly still in its early days, so it’s not surprising that many initiatives have yet to create major, tangible impacts. The Google Flu Trends initiative is likely the most instructive “failure” that we’ve studied to date and indicates the limits of using private data for good. The project analyzed user search queries to help predict the number of flu cases throughout the year. After receiving some initial praise and attention thanks to the accuracy of the early predictions, Google Flu Trends failed to predict the H1N1 pandemic of 2009-10 that rocked the United States. Later inquiry found that Google Flu Trends was actually massively overestimating the number of flu-related doctor visits over a two-year period.

As a result of these shortcomings, Google Flu Trends was shuttered in 2015, but its data is still publicly available for download. Perhaps the fatal flaw for Google Flu Trends was that the project was not collaborative enough in its design. Google clearly saw an opportunity to put the data it collects and holds to positive public use. But by making that data and analysis accessible to the public with minimal engagement with public and civil sector actors who could help validate it, inaccuracies went unnoticed. In the report, we suggest that public value can be created when corporations make data more accessible through an application programming interface (API), or when researchers manually scrape private-sector data from the Web. Public value is often greater, however, when data is exchanged in a more collaborative fashion, with actors from different sectors directly engaged in the process.
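As an illustration of the API route mentioned above, here is a minimal sketch, in Python, of how a researcher might pull aggregated, anonymized records from a data collaborative. The endpoint URL and response format are hypothetical; a real collaborative would define its own schema, authentication and terms of use.

```python
# Minimal sketch of pulling aggregated records from a hypothetical
# data collaborative API that returns a JSON list of records.
import json
from urllib.request import urlopen

# Hypothetical endpoint; replace with the collaborative's documented URL.
ENDPOINT = "https://example.org/api/v1/aggregates?region=santiago"

def fetch_aggregates(url: str) -> list:
    """Download and parse aggregated records from the API."""
    with urlopen(url) as response:
        return json.loads(response.read().decode("utf-8"))

if __name__ == "__main__":
    records = fetch_aggregates(ENDPOINT)
    print(f"Fetched {len(records)} aggregated records")
```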

HF — You discuss how relatively few social media companies have a well-developed data stewardship function. What is data stewardship?

SV & AY — Despite the potential of social media data collaboratives, current initiatives usually have limited scale and life span. Access to data is often determined by personal connections rather than by any systematic or policy-driven initiative, which means that access arrangements often shift over time. In addition, there are no transparent procedures for handling requests for data. These problems can be addressed if companies have data stewards who set policies and priorities for data sharing.

Data stewards are already in place in some companies and organizations: Facebook is establishing a whole team around Data 4 Good, while MasterCard has a Data Philanthropy office. Their numbers are small but growing. We would like to see a profession and network of data stewards who could shepherd the process, demystify data sharing, and determine how to share data in a responsible manner.

HF — What means are useful when there isn’t good or representative social media data on a given population?

SV & AY — Social media data are often biased. They may reflect a particular demographic subset, potentially ignoring the so-called “data invisibles.” This means we need to be cautious in extrapolating general claims about the population at large from social media alone. The problem can be mitigated through, for example, “qualitative pullouts” of the sample to check for variations in behavior, and ultimately by comparing the validity of social media data against more representative data sets, including administrative or census data.

Social media data are also inherently noisy, comprising information unrelated to the question at hand, and often dirty, containing factual errors or duplicative records. As a result, extensive filtering and cleansing are needed before insights can be extracted.
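A minimal sketch of what that filtering and cleansing step can look like in practice: drop exact duplicates, strip URLs and extra whitespace, and keep only posts that mention the topic of interest. The keyword list and example posts below are illustrative placeholders, not a production pipeline.

```python
# Illustrative cleaning pass over raw social media posts: strip URLs,
# normalize whitespace and case, drop duplicates and off-topic posts.
import re

TOPIC_KEYWORDS = {"exercise", "running", "gym"}  # hypothetical topic filter

def clean(post: str) -> str:
    """Remove URLs and collapse whitespace."""
    post = re.sub(r"https?://\S+", "", post)
    return re.sub(r"\s+", " ", post).strip().lower()

def filter_posts(posts: list) -> list:
    seen, kept = set(), []
    for post in map(clean, posts):
        if post and post not in seen and set(post.split()) & TOPIC_KEYWORDS:
            seen.add(post)
            kept.append(post)
    return kept

raw = [
    "Loved my morning running session! https://example.com/photo",
    "Loved my morning running   session! https://example.com/photo",
    "Buy cheap watches now!!!",
]
print(filter_posts(raw))  # only the cleaned, on-topic post survives
```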

Additionally, the opaqueness of algorithms used to analyze data flowing through data collaboratives leads to an inability to reproduce or meaningfully scrutinize these analyses. As a result, biases and quality issues are more likely to go unnoticed.

Despite these representativeness and data quality challenges, it is important to point out that social media data may ultimately still provide wider samples than, for instance, traditional focus groups. UNICEF Brazil began its campaign against Zika infection expecting to target women aged 18 to 24, but was guided by social media insights to create content specifically geared to engage fathers, and to link Zika to other mosquito-borne illnesses to broaden its reach and generate greater audience trust. The key is to determine what data, and what level of data quality, is fit for purpose.

This article is one in a series supported by the MacArthur Foundation Research Network on Opening Governance that seeks to work collaboratively to increase our understanding of how to design more effective and legitimate democratic institutions using new technologies and new methods. Neither the MacArthur Foundation nor the Network is responsible for the article’s specific content. Other posts in the series can be found here.