It was a great pleasure to convene a workshop at the European Digital Media Observatory today featuring Claire Wardle (Brown), Craig Matasick (OECD), Daniela Stockmann (Hertie), Kalypso Nicolaidis (Oxford), Lisa Ginsborg (EUI), Emma Briant (Bard) and (briefly) Alicia Wanless (Carnegie Endowment for International Peace). The title was “Information flows and institutional reputation: leveraging social trust in times of crisis” and the far-ranging discussion touched on disinformation, trust vs. trustworthiness, different models of content moderation, institutional design, preemptive red-teaming of policies, algorithmic amplification, and the successes and limits of multi-stakeholder frameworks. A very productive meeting, with more to come in the near future on this front.
Future publishing on disinformation
The abstract for my chapter “Censorship Choices and the Legitimacy Challenge: Leveraging Institutional Trustworthiness in Crisis Situations” has been accepted for publication in the volume Defending Democracy in the Digital Age, edited by Scott Shackelford (of Indiana University) et al., to appear with Cambridge UP in 2024.
In other news, I am writing a book review of the very interesting grassroots study by Francesca Tripodi entitled The Propagandists’ Playbook: How Conservative Elites Manipulate Search and Threaten Democracy (Yale UP) for the Italian journal Etnografia e Ricerca Qualitativa.
Russian pre-electoral disinformation in Italy
An interesting blog post by the Institute for Strategic Dialogue discusses Russian propaganda in the run-up to the recent Italian general elections.
Basically, the study identifies 500 Twitter accounts belonging to super-sharers of Russian propaganda in Italian and tracks their sentiment with regard to party politics, the conflict in Ukraine, and health/pandemic-response policy during the electoral campaign. This is not, therefore, a network of coordinated inauthentic behavior, but rather bona fide consumption of Russian propaganda.
There are some interesting takeaways from the data, the main one being the catalyst function of coverage of the Covid-19 response: a significant proportion of users in the group began sharing content from Russian propaganda websites in the context of vaccine hesitancy and resistance to public health measures such as the “green pass”, and then stayed on for coverage of Ukraine and the Italian elections.
What remains unclear, however, is the extent of the influence in question. The examples given of Kremlin-friendly messages hardly suggest viewpoints without grassroots support in the country: it is fairly easy, for instance, to find the same arguments voiced by mainstream news outlets without any suspicion of collusion with Russia. Nor does the analysis of candidate valence support the conclusion of a successful misinformation campaign: the eventual winner of the election, Giorgia Meloni, comes in for similar amounts of opprobrium as the liberal establishment Partito Democratico, while the two major parties portrayed in a positive light, Matteo Salvini’s Lega and the 5 Star Movement, were punished at the polls. Perhaps the aspect of the group’s political views most attuned to the mood of the electorate was a generalized skepticism of the entire process: #iononvoto (#IDontVote) was a prominent hashtag among these users, and in the end more than a third of eligible voters abstained on September 25th (turnout was down 9% from the 2018 elections). But, again, antipolitical sentiment has deep roots in Italian political culture, well beyond what can be ascribed to Russian meddling.
In the end, faced with the evidence presented by the ISD study, one is left with some doubt regarding the direction of causation: were RT and the other Kremlin-friendly outlets steering the political beliefs of users and thus influencing Italian public discourse, or were they merely providing content in agreement with what these users already believed, in order to increase their readership?
Limits of trustbuilding as policy objective
Yesterday, I attended a virtual event hosted by CIGI and ISPI entitled “Digital Technologies: Building Global Trust”. Some interesting points raised by the panel: the focus on datafication as the central aspect of the digital transformation, and the consequent need to concentrate on the norms, institutions, and emerging professions surrounding the practice of data (re-)use [Stefaan Verhulst, GovLab]; the importance of underlying human connections and behaviors as necessary trust markers [Andrew Wyckoff, OECD]; the distinction between content, data, competition, and physical infrastructure as flashpoints for trust in the technology sphere [Heidi Tworek, UBC]. Also, I learned about the OECD AI Principles (2019), which I had not run across before.
While the breadth of different sectoral interests and use-cases considered by the panel was significant, the framework for analysis (actionable policy solutions to boost trust) ended up being rather limiting. For instance, communal distrust of dominant narratives was considered only from the perspective of deficits of inclusivity (on the part of the authorities) or of digital literacy (on the part of the distrusters). Technical policy fixes can be a reductive lens through which to view the problem of lack of trust: such an approach misses both the fundamental compulsion to trust that typically underlies the debate and the performative effects sought by public manifestations of distrust.
An interesting report on Medium (via /.) discusses the PRC’s new pervasive surveillance program, Sharp Eyes. The program, which complements several other mass surveillance initiatives by the Chinese government, such as SkyNet, is aimed especially at rural communities and small towns. With all the caveats related to the fragmentary nature of the information available to outside researchers, it appears that Sharp Eyes’ main characteristic is being community-driven: the feeds from CCTV cameras monitoring public spaces are made accessible to individuals in the community, whether at home on their TVs and monitors or through smartphone apps. Hence, local communities become responsible for monitoring themselves (and for denouncing deviants to the authorities).
This outsourcing of social control is clearly a labor-saving initiative, which itself ties into a long-running, classic theme in Chinese governance. It is not hard to see how such a scheme may encourage dynamics of social homogenization and regimentation, and be especially effective against stigmatized minorities. After all, the entire system of Chinese official surveillance is more or less formally linked to the controversial Social Credit System, a scoring of the population for ideological and financial conformity.
However, I wonder whether a community-driven surveillance program, in rendering society more transparent to itself, does not also potentially offer accountability tools to civil society vis-à-vis the government. After all, complete visibility of public space by all members of society can also mean exposure and documentation of specific public instances of abuse of authority, such as police brutality. Such cases could of course be blacked out of the feeds, but so heavy-handed a tactic would cut into the propaganda value of the transparency initiative and affect public trust in the system. Alternatively, offending material could be removed more seamlessly through deepfake interventions, but the resources necessary for such a level of tampering, including the additional layer of bureaucracy needed to curate live feeds, would seem ultimately self-defeating in terms of the cost-cutting rationale.
In any case, including the monitored public within the monitoring loop (and emphasizing the collective responsibility aspect of the practice over the atomizing, pervasive-suspicion one) promises to create novel practical and theoretical challenges for mass surveillance.