It was a great pleasure to convene a workshop at the European Digital Media Observatory today featuring Claire Wardle (Brown), Craig Matasick (OECD), Daniela Stockmann (Hertie), Kalypso Nicolaidis (Oxford), Lisa Ginsborg (EUI), Emma Briant (Bard), and (briefly) Alicia Wanless (Carnegie Endowment for International Peace). The title was “Information flows and institutional reputation: leveraging social trust in times of crisis”, and the far-ranging discussion touched on disinformation, trust vs. trustworthiness, different models of content moderation, institutional design, preemptive red-teaming of policies, algorithmic amplification, and the successes and limits of multi-stakeholder frameworks. A very productive meeting, with more to come on this front in the near future.
Excess skepticism and the media trust deficit
An interesting presentation at the MISDOOM 2022 conference earlier this week: Sacha Altay (Oxford) on the effectiveness of interventions against misinformation [pre-print here].
Altay lays out some established findings from the academic literature that at times get lost in the policy debate. The main one is that explicit disinformation, i.e. unreliable news such as the output of propaganda websites running coordinated influence operations, represents a minuscule share of everyday people’s media consumption; however, the public has been induced to be indiscriminately skeptical of all news, and therefore doubts the validity of even bona fide information.
Thus, it would appear that a policy intervention aimed at explaining the verification techniques professional journalists use to vet reliable information should be more effective, all else being equal, than one that exposes the workings of purposeful disinformation. On the other hand, as Altay recognizes, misinformation is at heart a mere symptom of deeper polarization, an attitude of political antagonism in search of content to validate it. But while such active seeking out of misinformation may be fringe, spontaneous, and not particularly dangerous for democracy, generalized excess skepticism and the ensuing media trust deficit are much more serious wins for the enemies of open public discourse.
A global take on the mistrust moment
My forthcoming piece for the Italian Political Science Review on Ethan Zuckerman’s Mistrust: Why Losing Faith in Institutions Provides the Tools to Transform Them.
Limits of trust-building as policy objective
Yesterday, I attended a virtual event hosted by CIGI and ISPI entitled “Digital Technologies: Building Global Trust”. Some interesting points raised by the panel: the focus on datafication as the central aspect of the digital transformation, and the consequent need to concentrate on the norms, institutions, and emerging professions surrounding the practice of data (re-)use [Stefaan Verhulst, GovLab]; the importance of underlying human connections and behaviors as necessary trust markers [Andrew Wyckoff, OECD]; the distinction between content, data, competition, and physical infrastructure as flashpoints for trust in the technology sphere [Heidi Tworek, UBC]. Also, I learned about the OECD AI Principles (2019), which I had not run across before.
While the breadth of sectoral interests and use cases considered by the panel was significant, the framework for analysis (actionable policy solutions to boost trust) ended up being rather limiting. For instance, communal distrust of dominant narratives was considered only from the perspective of deficits of inclusivity (on the part of the authorities) or of digital literacy (on the part of the distrusters). Technical policy fixes can be a reductive lens through which to see the problem of lack of trust: such an approach misses both the fundamental compulsion to trust that typically underlies the debate and the performative effects sought by public manifestations of distrust.
Bridle’s vision
Belatedly finished reading James Bridle’s book New Dark Age: Technology and the End of the Future (Verso, 2018). As the title suggests, the text is systemically pessimistic about the effect of new technologies on the sustainability of human wellbeing. Although the overall structure of the argument is at times clouded over by sudden narrative twists and the sheer variety of anecdotes, there are many hidden gems. I very much enjoyed the idea, borrowed from Timothy Morton, of a hyperobject:
a thing that surrounds us, envelops and entangles us, but that is literally too big to see in its entirety. Mostly, we perceive hyperobjects through their influence on other things […] Because they are so close and yet so hard to see, they defy our ability to describe them rationally, and to master or overcome them in any traditional sense. Climate change is a hyperobject, but so is nuclear radiation, evolution, and the internet.
One of the main characteristics of hyperobjects is that we only ever perceive their imprints on other things, and thus to model the hyperobject requires vast amounts of computation. It can only be appreciated at the network level, made sensible through vast distributed systems of sensors, exabytes of data and computation, performed in time as well as space. Scientific record keeping thus becomes a form of extrasensory perception: a networked, communal, time-travelling knowledge making. (73)
Bridle also has some thought-provoking ideas about possible responses to the dehumanizing forces of automation and algorithmic sorting. Particularly captivating was his description of Garry Kasparov’s reaction to his 1997 defeat at the hands of IBM’s Deep Blue: the grandmaster proposed ‘Advanced Chess’ tournaments, in which human-computer pairs compete against each other, since such a pairing is superior to both human and machine players on their own. This type of ‘centaur strategy’ is not simply a winning one: it may, Bridle suggests, hold ethical insights into pathways of human adaptation to an era of ubiquitous computation.