
Workshopping trust and speech at EDMO

It was a great pleasure to convene a workshop at the European Digital Media Observatory today featuring Claire Wardle (Brown), Craig Matasick (OECD), Daniela Stockmann (Hertie), Kalypso Nicolaidis (Oxford), Lisa Ginsborg (EUI), Emma Briant (Bard) and (briefly) Alicia Wanless (Carnegie Endowment for International Peace). The title was “Information flows and institutional reputation: leveraging social trust in times of crisis” and the far-ranging discussion touched on disinformation, trust vs. trustworthiness, different models of content moderation, institutional design, preemptive red-teaming of policies, algorithmic amplification, and the successes and limits of multi-stakeholder frameworks. A very productive meeting, with more to come in the near future on this front.

Disinformation isn’t Destiny

As the war in Ukraine enters its sixth week, it may prove helpful to look back on an early assessment of the conflict's informational sphere: the snapshot taken by Maria Giovanna Sessa of the EU DisinfoLab on March 14th.

Sessa summed up her findings succinctly:

Strategy-wise, malign actors mainly produce entirely fabricated content, while the most recurrent tactic to disinform is the use of decontextualised photos and videos, followed by content manipulation (doctored images or false subtitles). As evidence of the high level of polarisation, the same narratives have been exploited to serve either pro-Ukrainian or pro-Russian messages.

This general picture, by most accounts, largely holds half a month later. The styles of disinformation campaigns have not morphed significantly, although (as Sessa predicted) there has been a shift toward weaponizing the refugee dimension of the crisis.

Most observers have been struck by the overall failure of the Russians to replicate their previous informational successes. The significant resources devoted from the very beginning of the conflict to fact-checking and debunking by a range of actors, public and private, in Western countries are part of the explanation for this outcome. More broadly, however, it may be that Russian tactics in this arena have lost the advantage of surprise, so that as the informational sphere becomes more central to strategic power competition, relative capabilities revert to the mean of the general balance of power.

More interesting cybersecurity journalism (finally)

A study (PDF) by a team led by Sean Aday at the George Washington University School of Media and Public Affairs (commissioned by the Hewlett Foundation) sheds light on the improving quality of coverage of cybersecurity incidents in mainstream US media. Since 2014, cyber stories in the news have been moving steadily away from the sensationalist hack-and-attack template of yore toward a more nuanced description of the context, the constraints of the cyber ecosystem, the various actors’ motivations, and the impact of incidents on the everyday lives of ordinary citizens.

The report shows how an understanding of the mainstream importance of cyber events has progressively percolated into newsrooms across the country over the past half-decade, leading to a broader recognition of the substantive issues at play in this field. An interesting incidental finding is that, over the same period, coverage of the cyber beat has focused critical attention not only on the ‘usual suspects’ (Russia, China, shadowy hacker groups) but also, increasingly, on big tech companies themselves: part of this growing sophistication is a foregrounding of the crucial role platform companies play as gatekeepers of our digital lives.

Limits of trustbuilding as policy objective

Yesterday, I attended a virtual event hosted by CIGI and ISPI entitled “Digital Technologies: Building Global Trust”. Some interesting points raised by the panel: the focus on datafication as the central aspect of the digital transformation, and the consequent need to concentrate on the norms, institutions, and emerging professions surrounding the practice of data (re-)use [Stefaan Verhulst, GovLab]; the importance of underlying human connections and behaviors as necessary trust markers [Andrew Wyckoff, OECD]; the distinction between content, data, competition, and physical infrastructure as flashpoints for trust in the technology sphere [Heidi Tworek, UBC]. Also, I learned about the OECD AI Principles (2019), which I had not run across before.

While the breadth of sectoral interests and use-cases considered by the panel was significant, the framework for analysis (actionable policy solutions to boost trust) ended up being rather limiting. For instance, communal distrust of dominant narratives was considered only from the perspective of deficits of inclusivity (on the part of the authorities) or of digital literacy (on the part of the distrusters). Technical policy fixes can be a reductive lens through which to view the problem of lack of trust: such an approach misses both the fundamental compulsion to trust that typically underlies the debate and the performative effects sought by public manifestations of distrust.

Panel on election disruption

Yesterday I attended an online panel organized by the Atlantic Council with government (Matt Masterson of CISA), think-tank (Alicia Wanless of the Carnegie Endowment for International Peace and Clara Tsao of the Atlantic Council’s DFRLab), and industry figures (Nathaniel Gleicher of Facebook and Yoel Roth of Twitter) on steps being taken to guarantee the integrity of the electoral process in the US this Fall. The general sense was that the current ecosystem is much less vulnerable to disinformation than it was during the last presidential cycle, four years ago, despite the unprecedented challenges of the current election. However, the most interesting panelist, Wanless, was also the least bullish about the process.