Tag Archives: Content moderation

Workshopping trust and speech at EDMO

It was a great pleasure to convene a workshop at the European Digital Media Observatory today featuring Claire Wardle (Brown), Craig Matasick (OECD), Daniela Stockmann (Hertie), Kalypso Nicolaidis (Oxford), Lisa Ginsborg (EUI), Emma Briant (Bard) and (briefly) Alicia Wanless (Carnegie Endowment for International Peace). The title was “Information flows and institutional reputation: leveraging social trust in times of crisis” and the far-ranging discussion touched on disinformation, trust vs. trustworthiness, different models of content moderation, institutional design, preemptive red-teaming of policies, algorithmic amplification, and the successes and limits of multi-stakeholder frameworks. A very productive meeting, with more to come in the near future on this front.

Future publishing on disinformation

My chapter abstract, entitled “Censorship Choices and the Legitimacy Challenge: Leveraging Institutional Trustworthiness in Crisis Situations,” has been accepted for publication in the volume Defending Democracy in the Digital Age, edited by Scott Shackelford (Indiana University) et al., to appear with Cambridge UP in 2024.

In other news, I am writing a book review of the very interesting grassroots study by Francesca Tripodi entitled The Propagandists’ Playbook: How Conservative Elites Manipulate Search and Threaten Democracy (Yale UP) for the Italian journal Etnografia e Ricerca Qualitativa.

Excess skepticism and the media trust deficit

An interesting presentation at the MISDOOM 2022 conference earlier this week: Sacha Altay (Oxford) on the effectiveness of interventions against misinformation [pre-print here].

Altay lays out some established facts in the academic literature that at times get lost in the policy debate. The main one is that explicit disinformation, i.e. unreliable news such as that generated on propaganda websites that run coordinated influence operations, represents a minuscule segment of everyday people’s media consumption; however, the public has been induced to be indiscriminately skeptical of all news, and therefore doubts the validity even of bona fide information.

Thus, it would appear that a policy intervention aimed at explaining the verification techniques professional journalists employ to vet reliable information should be more effective, all else being equal, than one that exposes the workings of purposeful disinformation. On the other hand, as Altay recognizes, misinformation is, at heart, a mere symptom of a deeper polarization, an attitude of political antagonism in search of content to validate it. But while such active seeking of misinformation may be fringe, spontaneous, and not particularly dangerous for democracy, generalized excess skepticism and the ensuing media trust deficit represent a much more serious win for the enemies of open public discourse.

Rightwing algorithms?

A long blog post on Olivier Ertzscheid’s personal website [in French] tackles the ideological orientation of the major social media platforms from a variety of points of view (the political leanings of software developers, of bosses, of companies, the politics of content moderation, political correctness, the revolving door with government and political parties, the intrinsic suitability of different ideologies to algorithmic amplification, and so forth).

The conclusions are quite provocative: although algorithms and social media platforms are both demonstrably biased and possessed of independent causal agency, amplifying, steering, and coarsening our public debate, in the end it is simply those with greater resources (material, social, cultural, and so forth) whose voices are amplified. Algorithms skew to the right because so does our society.

A global take on the mistrust moment

My piece on Ethan Zuckerman’s Mistrust: Why Losing Faith in Institutions Provides the Tools to Transform Them is forthcoming in the Italian Political Science Review.