
Geopolitical splintering, decentralization, impartiality

Meta and Twitter have discovered and dismantled a network engaged in coordinated inauthentic behavior, spreading pro-US (and anti-China/Russia/Iran) narratives in Central Asia and the Middle East (Al Jazeera, Axios stories). Undoubtedly, this kind of intervention bolsters the platforms’ image as neutral purveyors of information and entertainment, determined to enforce the rules of the game no matter the ideological flavor of the transgression. In a way, paradoxically, such impartiality may even play well in Washington, where the companies would certainly welcome the support, given the current unfavorable political climate.

The type of universalism on display in this instance harkens back to an earlier era of the internet, the techno-libertarian heyday of the 1990s. Arguably, however, that early globalist vision of the World Wide Web has already been eviscerated at the infrastructural level by the growth of distinctive national versions of online life, a long-term process that the conflict in Ukraine has only made more visible. Hence, the impartiality and universality of Meta and Twitter can ultimately be seen as an internal claim by and for the West, since users in countries like Russia, China, or Iran cannot access these platforms in the first place. Of course, geopolitical splintering was one of the ills the web3 movement set out to counter. How much decentralization can resist the prevailing ideological headwinds, however, is increasingly unclear. Imperfect universalisms will have to suffice for the foreseeable future.

Disinformation isn’t Destiny

As the war in Ukraine enters its sixth week, it may prove helpful to look back on an early assessment of the informational sphere of the conflict, the snapshot taken by Maria Giovanna Sessa of the EU DisinfoLab on March 14th.

Sessa summed up her findings succinctly:

Strategy-wise, malign actors mainly produce entirely fabricated content, while the most recurrent tactic to disinform is the use of decontextualised photos and videos, followed by content manipulation (doctored image or false subtitles). As evidence of the high level of polarisation, the same narratives have been exploited to serve either pro-Ukrainian or pro-Russian messages.

This general picture, by most accounts, largely holds half a month later. The styles of disinformation campaigns have not morphed significantly, although (as Sessa predicted) there has been a shift toward weaponizing the refugee angle of the crisis.

Most observers have been struck overall by Russia’s failure to replicate its previous successes in information operations. The significant resources devoted from the very beginning of the conflict to fact-checking and debunking by a range of actors, public and private, in Western countries are part of the explanation for this outcome. More broadly, however, it may be that Russian tactics in this arena have lost the advantage of surprise, so that as the informational sphere becomes more central to strategic power competition, relative capabilities revert to the mean of the general balance of power.

A global take on the mistrust moment

My forthcoming piece on Ethan Zuckerman’s Mistrust: Why Losing Faith in Institutions Provides the Tools to Transform Them for the Italian Political Science Review.

Limits of trustbuilding as policy objective

Yesterday, I attended a virtual event hosted by CIGI and ISPI entitled “Digital Technologies: Building Global Trust”. Some interesting points raised by the panel: the focus on datafication as the central aspect of the digital transformation, and the consequent need to concentrate on the norms, institutions, and emerging professions surrounding the practice of data (re-)use [Stefaan Verhulst, GovLab]; the importance of underlying human connections and behaviors as necessary trust markers [Andrew Wyckoff, OECD]; the distinction between content, data, competition, and physical infrastructure as flashpoints for trust in the technology sphere [Heidi Tworek, UBC]. Also, I learned about the OECD AI Principles (2019), which I had not run across before.

While the breadth of different sectoral interests and use-cases considered by the panel was significant, the framework for analysis (actionable policy solutions to boost trust) ended up being rather limiting. For instance, communal distrust of dominant narratives was considered only from the perspective of deficits of inclusivity (on the part of the authorities) or of digital literacy (on the part of the distrusters). Technical policy fixes can be a reductive lens through which to view the problem of distrust: such an approach misses both the fundamental compulsion to trust that typically underlies the debate and the performative effects sought by public manifestations of distrust.

Bridle’s vision

Belatedly finished reading James Bridle’s book New Dark Age: Technology and the End of the Future (Verso, 2018). As the title suggests, the text is systemically pessimistic about the effect of new technologies on the sustainability of human wellbeing. Although the overall structure of the argument is at times clouded by sudden twists in narrative and the sheer variety of anecdotes, there are many hidden gems. I very much enjoyed the idea, borrowed from Timothy Morton, of a hyperobject:

a thing that surrounds us, envelops and entangles us, but that is literally too big to see in its entirety. Mostly, we perceive hyperobjects through their influence on other things […] Because they are so close and yet so hard to see, they defy our ability to describe them rationally, and to master or overcome them in any traditional sense. Climate change is a hyperobject, but so is nuclear radiation, evolution, and the internet.

One of the main characteristics of hyperobjects is that we only ever perceive their imprints on other things, and thus to model the hyperobject requires vast amounts of computation. It can only be appreciated at the network level, made sensible through vast distributed systems of sensors, exabytes of data and computation, performed in time as well as space. Scientific record keeping thus becomes a form of extrasensory perception: a networked, communal, time-travelling knowledge making. (73)

Bridle has some thought-provoking ideas about possible responses to the dehumanizing forces of automation and algorithmic sorting, as well. Particularly captivating was his description of Garry Kasparov’s reaction to his 1997 defeat at the hands of IBM’s Deep Blue: the grandmaster proposed ‘Advanced Chess’ tournaments, pitting pairs of human and computer players against one another, since such a pairing is superior to both human and machine players on their own. This type of ‘centaur strategy’ is not simply a winning one: it may, Bridle suggests, hold ethical insights into pathways of human adaptation to an era of ubiquitous computation.