
Geopolitical splintering, decentralization, impartiality

Meta and Twitter have discovered and dismantled a network of coordinated inauthentic behavior spreading pro-US (and anti-China/Russia/Iran) narratives in Central Asia and the Middle East (Al Jazeera, Axios stories). Undoubtedly, this kind of intervention bolsters the platforms’ image as neutral purveyors of information and entertainment, determined to enforce the rules of the game whatever the ideological flavor of the transgression. Paradoxically, such impartiality may even play well in Washington, where the companies would certainly welcome the support, given the current unfavorable political climate.

The type of universalism on display in this instance harkens back to an earlier era of the internet, the techno-libertarian heyday of the 1990s. Arguably, however, that early globalist vision of the World Wide Web has already been eviscerated at the infrastructural level, with the growth of distinctive national versions of online life, in a long-term process that has only been made more visible by the conflict in Ukraine. Hence, the impartiality and universality of Meta and Twitter can ultimately be seen as an internal claim by and for the West, since users in countries like Russia, China, or Iran are unable to access these platforms in the first place. Of course, geopolitical splintering was one of the ills the web3 movement set out to counter. How far decentralization can go in resisting the prevailing ideological headwinds, however, is increasingly unclear. Imperfect universalisms will have to suffice for the foreseeable future.

Spyware as diplomatic agenda item

Commercial spyware has become a mainstream news item: Politico this week ran a story on NSO Group in the context of President Biden’s official visit to Israel and Saudi Arabia. Both Middle Eastern countries have ties with this private company, the former as the seat of its headquarters, the latter as a customer of its services. The general context of the trip is broadly defensive for the US Administration, as it seeks help to stem the runaway growth in oil prices triggered by the Ukraine war, while emerging from under the shadow of its predecessor’s regional policies, from Jerusalem to Iran to the Abraham Accords. Given Biden’s objectively weak hand, raising the issue of NSO Group and the misuse of its spyware with two strategic partners is particularly complicated. At the same time, many domestic forces, from major companies damaged by Pegasus breaches (Apple, Meta…) to liberals in Congress (such as Oregon Senator Ron Wyden), are clamoring for an assertive stance. Naturally, the agencies of the US National Security State are also in the business of developing functionally similar spyware capabilities. Hence, the framing of the international policy problem follows the pattern of nonproliferation, with all the attendant rhetorical risks of special pleading and hypocrisy. The issue, however, is unlikely to fade away as an agenda item in the near future, a clear illustration of the risks posed to conventional diplomatic strategy when military-grade spyware is available on the open market.

Disinformation isn’t Destiny

As the war in Ukraine enters its sixth week, it may prove helpful to look back on an early assessment of the informational sphere of the conflict, the snapshot taken by Maria Giovanna Sessa of the EU Disinfo Lab on March 14th.

Sessa summed up her findings succinctly:

Strategy-wise, malign actors mainly produce entirely fabricated content, while the most recurrent tactic to disinform is the use of decontextualised photos and videos, followed by content manipulation (doctored image or false subtitles). As evidence of the high level of polarisation, the same narratives have been exploited to serve either pro-Ukrainian or pro-Russian messages.

This general picture, by most accounts, largely holds half a month later. The styles of disinformation campaigns have not morphed significantly, although (as Sessa predicted) there has been a shift toward weaponizing the refugee angle of the crisis.

Most observers have been struck overall by the failure of the Russians to replicate previous informational successes. The significant resources allotted from the very beginning of the conflict to fact-checking and debunking by a series of actors, public and private, in Western countries are part of the explanation for this outcome. More broadly, however, it may be that Russian tactics in this arena have lost the advantage of surprise, so that as the informational sphere becomes more central to strategic power competition, relative capabilities revert to the mean of the general balance of power.

Bridle’s vision

Belatedly finished reading James Bridle’s book New Dark Age: Technology and the End of the Future (Verso, 2018). As the title suggests, the text is systemically pessimistic about the effect of new technologies on the sustainability of human wellbeing. Although the overall structure of the argument is at times clouded over by sudden twists in narrative and the sheer variety of anecdotes, there are many hidden gems. I very much enjoyed the idea, borrowed from Timothy Morton, of a hyperobject:

a thing that surrounds us, envelops and entangles us, but that is literally too big to see in its entirety. Mostly, we perceive hyperobjects through their influence on other things […] Because they are so close and yet so hard to see, they defy our ability to describe them rationally, and to master or overcome them in any traditional sense. Climate change is a hyperobject, but so is nuclear radiation, evolution, and the internet.

One of the main characteristics of hyperobjects is that we only ever perceive their imprints on other things, and thus to model the hyperobject requires vast amounts of computation. It can only be appreciated at the network level, made sensible through vast distributed systems of sensors, exabytes of data and computation, performed in time as well as space. Scientific record keeping thus becomes a form of extrasensory perception: a networked, communal, time-travelling knowledge making. (73)

Bridle has some thought-provoking ideas about possible responses to the dehumanizing forces of automation and algorithmic sorting, as well. Particularly captivating was his description of Garry Kasparov’s reaction to his 1997 defeat at the hands of IBM’s Deep Blue: the grandmaster proposed ‘Advanced Chess’ tournaments, pitting pairs of human and computer players against each other, since such a pairing is superior to both human and machine players on their own. This type of ‘centaur strategy’ is not simply a winning one: it may, Bridle suggests, hold ethical insights on pathways of human adaptation to an era of ubiquitous computation.

FB as Great Game arbiter in Africa?

French-language news outlets, among others, have been reporting a Facebook takedown operation (here is the full report by Stanford University and Graphika) against three separate influence and disinformation networks, active in various sub-Saharan African countries since 2018. Two of these have been traced back to the well-known Russian troll farm Internet Research Agency; the third, however, appears to be linked to individuals in the French military (which is currently deployed in the Sahel). In some instances, and notably in the Central African Republic, the Russian and French operations competed directly with one another, attempting to doxx and discredit each other through fake fact-checking and news organization impersonations, as well as using AI to create fake online personalities posing as local residents.

The report did not present conclusive evidence attributing the French influence operation directly to the French government. It also argues that the French action was in many ways reactive to the Russian disinfo campaign. Nonetheless, as the authors claim,

[b]y creating fake accounts and fake “anti-fake-news” pages to combat the trolls, the French operators were perpetuating and implicitly justifying the problematic behavior they were trying to fight […] using “good fakes” to expose “bad fakes” is a high-risk strategy likely to backfire when a covert operation is detected […] More importantly, for the health of broader public discourse, the proliferation of fake accounts and manipulated evidence is only likely to deepen public suspicion of online discussion, increase polarization, and reduce the scope for evidence-based consensus.

What was not discussed, either in the report or in news coverage of it, is the emerging geopolitical equilibrium in which a private company can act as final arbiter in an influence struggle between two Great Powers in a third country. Influence campaigns by foreign State actors are in no way a 21st-century novelty: the ability of a company such as Facebook to insert itself into them most certainly is. Media focus on the disinformation-fighting activities of the major social media platforms in the case of the US elections (hence, on domestic ground) has had the effect of minimizing the strategic power these companies now wield in international affairs. The question is to what extent they will be allowed to operate in complete independence by the US government, or, put otherwise, to what extent foreign Powers will insert this dossier into their general relations with the US going forward.