News Item

May 2019

Mobilise citizens to rein in social media giants, study finds

The just-released “Digital Threats to Democracy” research report makes recommendations on dealing with the risks and threats caused by social media and digital platform monopolies.

Smart regulation and “human responses,” rather than a narrow focus on content moderation alone, are needed to counter the threat to democracy posed by digital media platforms like Facebook as they currently operate, a newly launched study has found.

The Law Foundation-backed research says the recent “Christchurch Call,” which seeks global agreement on preventing the promotion of terrorism on social media, is a positive initiative but falls short of dealing with the scale of the challenge.

[Image: “Digital Threats to Democracy” research report, The Workshop]

“Clearly there is value in starting with a specific goal, such as ending the spread of terrorism online,” says lead researcher Marianne Elliott. “But it is critical that the Prime Minister and her advisors look beyond immediate concerns about violent extremism and content moderation, to consider the wider context in which digital media is having a growing and increasingly negative impact on our democracy.”

As well as regulating social media platforms, Marianne’s study team calls for several far-reaching and as yet untested “human responses” to rein in the ill-effects of “platform monopolies,” the dominance of the social media market by a few players.

The team’s proposals include collective action to influence the major platforms, with groups such as technology workers and digital media users using their leverage to demand ethical product design. The study argues that fake news can be countered by investing more in public interest media and alternative platforms, leading to a more democratic internet.

It also points to evidence that online platforms enabling citizen participation in decision-making can improve public trust and lead to more citizen-oriented policies.

“It’s critical that this moment of global cooperation is used to address the wider, structural drivers of the biggest threats posed to democracy by digital media. These include the power that a handful of privately-owned platforms wield over so many aspects of our lives,” Marianne says.

“We must do this while maintaining and building upon the many opportunities digital media simultaneously offer to tackle some of the biggest challenges facing democracy, including inequity of access and declining engagement.”

In terms of democratic values, social media is a two-edged sword, Marianne says. Because it gives people direct access to each other, it can give historically excluded people a voice, as seen for example in the “Arab Spring” uprisings. But the impacts of fake news, filter bubbles, populism, polarisation, hate speech, trolls, shills and bots, all hosted on virtually unregulated platforms like Facebook, show that social media is also undermining active citizenship and disrupting democracy.

The study points to the increasing influence of “opaque” algorithmic engines over what we think and do, with little transparency about how they work or accountability for their impact.

It also describes the “attention economy”, the underlying business model of the major platforms, which prioritises content that grabs attention and avoids responsibility for its impact on our collective wellbeing and democracy.

Combined, says Marianne, “these problems pose serious threats, so it’s critical that our responses don’t further undermine our democratic institutions. The history of digital media has shown that good intentions can cause more harm if not informed by the diverse experiences of users and the research evidence.”

To avoid repeating these mistakes, the study recommends that a diverse range of civil society representatives be included in the multi-stakeholder talks initiated by the Prime Minister.

On the specific topic of content moderation, academic Dr Kathleen Kuehn says existing evidence suggests a combination of technical and human responses would offer the most promising workable solution.

“Advances in semi- or fully automated systems, including deep learning, show increased promise in identifying inappropriate content and drastically reducing the number of messages human moderators then need to review. But neither automated nor manual classification systems can ever be ‘neutral’ or free from human bias. Therefore, the combination of automated classification and deletion systems and human efforts remains the most effective content moderation strategy currently on offer.”
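To illustrate the kind of hybrid pipeline Dr Kuehn describes, here is a minimal sketch. It is not from the report: the function names, thresholds and stand-in classifier are hypothetical. The idea is that an automated model acts alone only on high-confidence cases, while everything in the uncertain middle band is escalated to human moderators.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable


class Decision(Enum):
    REMOVE = "remove"
    PUBLISH = "publish"
    HUMAN_REVIEW = "human_review"


@dataclass
class ModerationResult:
    decision: Decision
    score: float  # model's estimated probability that the content is harmful


def moderate(
    content: str,
    classify: Callable[[str], float],
    remove_threshold: float = 0.95,
    publish_threshold: float = 0.20,
) -> ModerationResult:
    """Route content through a hybrid automated/human pipeline.

    `classify` stands in for any automated model (e.g. a deep-learning
    classifier) that returns a probability in [0, 1] that the content is
    inappropriate. Only high-confidence cases are acted on automatically;
    the uncertain middle band is escalated to human moderators, shrinking
    the volume they must review.
    """
    score = classify(content)
    if score >= remove_threshold:
        return ModerationResult(Decision.REMOVE, score)
    if score <= publish_threshold:
        return ModerationResult(Decision.PUBLISH, score)
    return ModerationResult(Decision.HUMAN_REVIEW, score)


if __name__ == "__main__":
    # Stand-in classifier for demonstration only; a real system would use a
    # trained model whose outputs still reflect human labelling choices.
    print(moderate("example post", lambda text: 0.6))  # -> HUMAN_REVIEW
```

The thresholds make Dr Kuehn’s point concrete: however they are tuned, the band between them is a design choice made by people, so the system is never “neutral,” and human judgment remains part of the pipeline.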

The study recommends action in the following areas:

1.  Restore a genuinely multi-stakeholder approach to internet governance, including meaningful mechanisms for collective engagement by citizens/users;

2.  Refresh antitrust and competition regulation, taxation regimes and related enforcement mechanisms to align them across like-minded liberal democracies and restore competitive fairness;

3.  Recommit to publicly funded democratic infrastructure including public interest media and the online platforms that afford citizen participation and deliberation;

4.  Regulate for greater transparency and accountability from the platforms including algorithmic transparency and accountability for verifying the sources of political advertising;

5.  Revisit regulation of privacy and data protection to better protect indigenous rights to data sovereignty and redress the failures of a consent-based approach to data management; and

6.  Recalibrate policies and protections to address not only individual rights and privacy but also collective impact and wellbeing.

Marianne Elliott is co-director of research, policy and communication think-tank The Workshop. Her research team on the year-long Digital Threats to Democracy project included Dr Jess Berentson-Shaw (The Workshop), Dr Kathleen Kuehn (Victoria University of Wellington), Dr Leon Salter (Massey University) and Ella Brownlie (The Workshop).

Research funding was mainly provided by the New Zealand Law Foundation’s Information Law and Policy Project, with additional research funding from the Luminate Group.

The Luminate Group, which provided $24,000 in funding for the project, is a global philanthropic organisation with the goal of empowering people and institutions to work together to build just and fair societies.

The New Zealand Law Foundation provided $56,660 in funding for this project.

Full report: “Digital Threats to Democracy” (PDF, 246 pages)

Link to the Principal Investigator’s webpage