Russian pre-electoral disinformation in Italy

An interesting blog post by the Institute for Strategic Dialogue discusses Russian propaganda in the run-up to the recent Italian general elections.

Basically, the study identifies 500 Twitter accounts belonging to super-sharers of Italian-language Russian propaganda and plots their sentiments with regard to party politics, the conflict in Ukraine, and health/pandemic-response policy during the electoral campaign. This is not, therefore, a network of coordinated inauthentic behavior, but rather bona fide consumption of Russian propaganda.

There are some interesting takeaways from the data, the main one being the catalyst function of coverage of the Covid-19 response: a significant proportion of users in the group began sharing content from Russian propaganda websites in the context of vaccine hesitancy and resistance to public health measures such as the “green pass”, and then stayed on for Ukraine and Italian election news.

What remains unclear, however, is the extent of the influence in question. The examples given of Kremlin-friendly messages hardly suggest viewpoints without grassroots support in the country: it is fairly easy, for instance, to find the same arguments voiced by mainstream news outlets under no suspicion of collusion with Russia. Nor does the analysis of candidate valence support the conclusion of a successful misinformation campaign: the eventual winner of the election, Giorgia Meloni, comes in for similar amounts of opprobrium to the liberal-establishment Partito Democratico, while the two major parties portrayed in a positive light, Matteo Salvini’s Lega and the 5 Star Movement, were punished at the polls.

Perhaps the aspect of the group’s political views most attuned to the mood of the electorate was a generalized skepticism of the entire process: #iononvoto (#IDontVote) was a prominent hashtag among these users, and in the end more than a third of eligible voters did just that on September 25th (turnout was down 9% from the 2018 elections). But, again, antipolitical sentiment has deep roots in Italian political culture, well beyond what can be ascribed to Russian meddling.

In the end, faced with the evidence presented by the ISD study one is left with some doubt regarding the direction of causation: were RT and the other Kremlin-friendly outlets steering the political beliefs of users and thus influencing Italian public discourse, or were they merely providing content in agreement with what these users already believed, in order to increase their readership?

Limits of trustbuilding as policy objective

Yesterday, I attended a virtual event hosted by CIGI and ISPI entitled “Digital Technologies: Building Global Trust”. Some interesting points raised by the panel: the focus on datafication as the central aspect of the digital transformation, and the consequent need to concentrate on the norms, institutions, and emerging professions surrounding the practice of data (re-)use [Stefaan Verhulst, GovLab]; the importance of underlying human connections and behaviors as necessary trust markers [Andrew Wyckoff, OECD]; the distinction between content, data, competition, and physical infrastructure as flashpoints for trust in the technology sphere [Heidi Tworek, UBC]. Also, I learned about the OECD AI Principles (2019), which I had not run across before.

While the breadth of different sectoral interests and use-cases considered by the panel was significant, the framework for analysis (actionable policy solutions to boost trust) ended up being rather limiting. For instance, communal distrust of dominant narratives was considered only from the perspective of deficits of inclusivity (on the part of the authorities) or of digital literacy (on the part of the distrusters). Technical, policy fixes can be a reductive lens through which to see the problem of lack of trust: such an approach misses both the fundamental compulsion to trust that typically underlies the debate, and also the performative effects sought by public manifestations of distrust.

Sharp Eyes

An interesting report on Medium (via /.) discusses the PRC’s new pervasive surveillance program, Sharp Eyes. The program, which complements several other mass surveillance initiatives by the Chinese government, such as SkyNet, is aimed especially at rural communities and small towns. With all the caveats related to the fragmentary nature of the information available to outside researchers, it appears that Sharp Eyes’ main characteristic is being community-driven: the feeds from CCTV cameras monitoring public spaces are made accessible to individuals in the community, whether at home on their TVs and monitors or through smartphone apps. Hence, local communities become responsible for monitoring themselves (and providing denunciations of deviants to the authorities).

This outsourcing of social control is clearly a labor-saving initiative, which itself ties into a long-run, classic theme in Chinese governance. It is not hard to perceive how such a scheme may encourage dynamics of social homogenization and regimentation, and be especially effective against stigmatized minorities. After all, the entire system of Chinese official surveillance is more or less formally linked to the controversial Social Credit System, a scoring of the population for ideological and financial conformity.

However, I wonder whether a community-driven surveillance program, in rendering society more transparent to itself, does not also potentially offer accountability tools to civil society vis-à-vis the government. After all, complete visibility of public space by all members of society also can mean exposure and documentation of specific public instances of abuse of authority, such as police brutality. Such cases could of course be blacked out of the feeds, but such a heavy-handed tactic would cut into the propaganda value of the transparency initiative and affect public trust in the system. Alternatively, offending material could be removed more seamlessly through deep fake interventions, but the resources necessary for such a level of tampering, including the additional layer of bureaucracy needed to curate live feeds, would seem ultimately self-defeating in terms of the cost-cutting rationale.

In any case, including the monitored public within the monitoring loop (and emphasizing the collective responsibility aspect of the practice over the atomizing, pervasive-suspicion one) promises to create novel practical and theoretical challenges for mass surveillance.

Free speech and monetization

Yesterday, I attended an Electronic Frontier Foundation webinar in the ‘At Home with EFF’ series on Twitch: the title was ‘Online Censorship Beyond Trump and Parler’. Two panels brought together several veterans and heavyweights in the content moderation/trust & safety field, followed by a wrap-up session presenting EFF positions on the topics under discussion.

Several interesting points emerged with regard to the interplay of market concentration, free speech concerns, and the incentives inherent in the dominant social media business model. The panelists reflected on the long run, identifying recurrent patterns, such as the economic imperative driving infrastructure companies from being mere conduits of information to becoming active amplifiers, hence inevitably getting embroiled in moderation. While neutrality and non-interference may be the preferred ideological stance for tech companies, at least publicly, editorial decisions are made a necessity by the prevailing monetization model, the market for attention and engagement.

Perhaps the most interesting insight, however, emerged from the discussion of the intertwining of free speech online with the way in which such speech is (or is not) allowed to make itself financially sustainable. Specifically, the case was made for the importance of the myriad choke points up and down the stack where those who wish to silence speech can exert pressure: if cloud computing cannot be denied to a platform in the name of anti-discrimination, should credit card verification or merch, for instance, also be protected categories?

All in all, nothing shockingly novel; it is worth being reminded, however, that a wealth of experience in the field has already accrued over the years, so that single companies (and legislators, academics, the press, etc.) need not reinvent the wheel each time trust & safety or content moderation are on the agenda.

Victory lap for the EIP

Today, I followed the webcast featuring the initial wrap-up of the Election Integrity Partnership (I have discussed their work before). All the principal institutions composing the partnership (Stanford, University of Washington, Graphika, and the Atlantic Council) were represented on the panel. It was, in many respects, a victory lap, given the consensus view that foreign disinformation played a marginal role in the election compared to 2016, thanks in part to proactive engagement on the part of the large internet companies, facilitated by projects such as the EIP.

In describing the 2020 disinformation ecosystem, Alex Stamos (Stanford Internet Observatory) characterized it as mostly home-grown, non-covert, and reliant on major influencers, which in turn forced platforms to transform their content moderation activities into a sort of editorial policy (I have remarked on this trend before). Also, for all the focus on social media, TV was seen to play a very important role, especially for the purpose of building background narratives in the long run, through day-to-day coverage.

Camille François (Graphika) remarked on the importance of alternative platforms, and indeed the trend has been for an expansion of political discourse to all manner of fora previously insulated from it (on this, more soon).

Of the disinformation memes that made it into the mainstream conversation (Stamos mentioned the example of Sharpie-gate), certain characteristics stand out: they tended to appeal to rival technical authority, so as to project an expert-vs-expert dissonance; they were made sticky by official endorsement, which turned debunking into a partisan enterprise. However, their very predictability rendered the task of limiting their spread easier for the platforms. Kate Starbird (UW) summed it up: if the story in 2016 was coordinated inauthentic foreign action, in 2020 it was authentic, loosely-coordinated domestic action, and the major institutional players (at least on the tech infrastructure side) were ready to handle it.

It makes sense for the EIP to celebrate how the disinformation environment was stymied in 2020 (as Graham Brookie of the Atlantic Council put it, it was a win for defence in the ongoing arms race on foreign interference), but it is not hard to see how such an assessment masks a bigger crisis, which would have been evident had there been a different victor. Overall trust in the informational ecosystem has been dealt a further, massive blow by the election, and hyper-polarized post-truth politics are nowhere near over. Indeed, attempts currently underway to fashion a Dolchstoßlegende (a stab-in-the-back myth) are liable to have very significant systemic effects going forward. The very narrow focus on disinformation the EIP pursued may have paid off for now, but it is the larger picture of entrenched public distrust that guarantees that these problems will persist into the future.