
Disinformation isn’t Destiny

As the war in Ukraine enters its sixth week, it may prove helpful to look back on an early assessment of the conflict's informational sphere: the snapshot taken by Maria Giovanna Sessa of EU DisinfoLab on March 14th.

Sessa summed up her findings succinctly:

Strategy-wise, malign actors mainly produce entirely fabricated content, while the most recurrent tactic to disinform is the use of decontextualised photos and videos, followed by content manipulation (doctored image or false subtitles). As evidence of the high level of polarisation, the same narratives have been exploited to serve either pro-Ukrainian or pro-Russian messages.

This general picture, by almost all accounts, largely holds half a month later. The styles of the disinformation campaigns have not morphed significantly, although (as Sessa predicted) there has been a shift toward weaponizing the refugee angle of the crisis.

Most observers have been struck by the overall failure of the Russians to replicate previous informational successes. Part of the explanation lies in the significant resources that public and private actors in Western countries devoted to fact-checking and debunking from the very beginning of the conflict. More broadly, however, it may be that Russian tactics in this arena have lost the advantage of surprise: as the informational sphere becomes more central to strategic power competition, relative capabilities revert to the mean of the general balance of power.
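Much of this debunking work is mundane image forensics: checking whether a "new" photo from the front actually circulated years earlier in a different context. A minimal sketch of one common technique, perceptual hashing, is below; it assumes the third-party Pillow and ImageHash packages, and the file names are illustrative, not drawn from any real fact-checking archive.

```python
# Sketch: flagging recycled, decontextualised images by perceptual hash.
# Unlike cryptographic hashes, perceptual hashes of a crop, re-encode,
# or lightly edited copy stay close to the original's hash.

from PIL import Image
import imagehash

# A reference archive of previously circulated images (illustrative).
archive = {
    "shelling_2014.jpg": imagehash.phash(Image.open("shelling_2014.jpg")),
}

def find_recycled(candidate_path: str, max_distance: int = 8):
    """Return archive images whose hash is near the candidate's.

    A small Hamming distance suggests the 'new' photo is a reuse of
    an already known one, i.e. a decontextualisation candidate.
    """
    candidate = imagehash.phash(Image.open(candidate_path))
    return [name for name, h in archive.items()
            if candidate - h <= max_distance]
```

The threshold is a tunable trade-off: too low and re-encoded copies slip through, too high and unrelated images start to match.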

Lye machines

Josephine Wolff (Slate) reports on the recent hack of the water treatment plant in Oldsmar, FL. Unknown intruders remotely accessed the plant's controls and attempted to raise the lye content of the town's water supply to potentially lethal levels. The case is notable in that the human fail-safe (the plant operator on duty) successfully counterbalanced the machine vulnerability, catching the hack as it unfolded and overriding the automatic controls, so that no real-world adverse effects ultimately occurred.
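The attack reportedly consisted of little more than changing a chemical dosing setpoint (from roughly 100 to 11,100 parts per million). A minimal, hypothetical sketch of the kind of software guard that complements such a human fail-safe: reject implausible setpoints outright, and require operator confirmation for large step changes. All names and limits below are illustrative, not taken from the Oldsmar plant.

```python
# Sketch: a plausibility guard on dosing setpoint changes.
# Illustrative limits only; real plants derive these from process safety data.

SAFE_RANGE_PPM = (50.0, 450.0)  # plausible dosing range (assumed)
MAX_STEP_PPM = 25.0             # largest unattended change allowed (assumed)

def validate_setpoint(current_ppm: float, requested_ppm: float,
                      operator_confirmed: bool = False) -> float:
    """Return the setpoint to apply, or raise if the request is unsafe."""
    lo, hi = SAFE_RANGE_PPM
    if not lo <= requested_ppm <= hi:
        # Out-of-range requests are refused unconditionally.
        raise ValueError(f"setpoint {requested_ppm} ppm outside safe range")
    if abs(requested_ppm - current_ppm) > MAX_STEP_PPM and not operator_confirmed:
        # Large jumps need a human in the loop, echoing the Oldsmar save.
        raise PermissionError("large step change requires operator confirmation")
    return requested_ppm
```

A guard of this sort does not replace the operator; it buys the operator time.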

What moral can be drawn? It is reasonable to argue, as Wolff does, against full automation: human supervision still has a critical role to play in the resiliency of critical control systems through human-machine redundancy. What Wolff does not mention, however, is that this modus operandi may itself be read as a signature of sorts (although no attribution has appeared in the press so far): it speaks of amateurism, or of a proof-of-concept stunt; in any case, of an actor not planning to do serious damage. Otherwise, it is highly improbable that there would have been no parallel attempt at social engineering of, or other types of attacks against, on-site technicians. After all, as the old security engineering adage has it, rookies target technology, pros target people.

FB as Great Game arbiter in Africa?

French-language news outlets, among others, have been reporting on a Facebook takedown operation (here is the full report by Stanford University and Graphika) against three separate influence and disinformation networks active in various sub-Saharan African countries since 2018. Two of these have been traced back to the well-known Russian troll farm, the Internet Research Agency; the third, however, appears to be linked to individuals in the French military (which is currently deployed in the Sahel). In some instances, and notably in the Central African Republic, the Russian and French operations competed directly with one another, attempting to doxx and discredit each other through fake fact-checking and news organization impersonations, as well as using AI to create fake online personalities posing as local residents.

The report does not present conclusive evidence attributing the French influence operation directly to the French government. It also argues that the French action was in many ways a reaction to the Russian disinfo campaign. Nonetheless, as the authors claim,

[b]y creating fake accounts and fake “anti-fake-news” pages to combat the trolls, the French operators were perpetuating and implicitly justifying the problematic behavior they were trying to fight […] using “good fakes” to expose “bad fakes” is a high-risk strategy likely to backfire when a covert operation is detected […] More importantly, for the health of broader public discourse, the proliferation of fake accounts and manipulated evidence is only likely to deepen public suspicion of online discussion, increase polarization, and reduce the scope for evidence-based consensus.

What was not discussed, either in the report or in news coverage of it, is the emerging geopolitical equilibrium in which a private company can act as final arbiter in an influence struggle between two Great Powers in a third country. Influence campaigns by foreign State actors are in no way a 21st-century novelty; the ability of a company such as Facebook to insert itself into them most certainly is. Media focus on the disinformation-fighting activities of the major social media platforms in the case of the US elections (hence, on domestic ground) has had the effect of downplaying the strategic importance these companies now wield in international affairs. The question is to what extent the US government will allow them to operate in complete independence, or, put otherwise, to what extent foreign Powers will insert this dossier into their general relations with the US going forward.

Violence, content moderation, and IR

Interesting article by James Vincent in The Verge about the decision by Zoom, Facebook, and YouTube to shut down a university webinar over fears that it would disseminate opinions advocating violence “carried out by […] criminal or terrorist organizations”. The case sits squarely at the intersection of several recent trends.

On the one hand, de-platforming as a means of expressing outrage at the views of an invited speaker is a tactic that has often been used, especially on college campuses, since well before the pandemic and for in-person events. In this specific case, however, the pressure appears to have been brought to bear by external organizations and lobby groups, without a visible grassroots presence within the higher education institution in question, San Francisco State University. Moreover, such pressure was exerted through threats of legal liability aimed not at SFSU, but at the third-party commercial platforms enabling diffusion of the event, which was to be held as a remote-only webinar for epidemiological reasons. The university's decision to hold the event was therefore thwarted not by the pressure of an in-person crowd and the risk of public disturbances, but by the choice of a separate, independent actor imposing external limitations derived from its own Terms of Service when faced with potential litigation.

The host losing agency to the platform is not the only story these events tell, though. It is not coincidental that the case involves the Israeli-Palestinian struggle, and that the de-platformed individual was a member of the Popular Front for the Liberation of Palestine who participated in two plane hijackings in 1969-70. The transferral of an academic discussion to an online forum short-circuited the ability academic communities have traditionally enjoyed to re-frame discussions on all topics – even dangerous, taboo, or divisive ones – as being about analyzing and discussing, not about advocating and perpetrating. At the same time, post-9/11 norms and attitudes in the US have applied a criminal lens to actions and events that in their historical context represented moves in an ideological and geopolitical struggle. Such a transformation may represent a shift in the pursuit of the United States' national interest, but what is striking about this case is that a choice made at a geo-strategic, Great Power level produces unmediated consequences for the opinions and expressive rights of individual citizens and organizations.

This aspect in turn ties into the debate on the legitimacy grounds of platform content moderation policies. The aspiration may well be to couch such policies in universalist terms, and even to take international human rights law as a framework or a model; in practice, however, common moral prescriptions against violence scale poorly from the level of individuals in civil society to that of power politics and international relations, while the platforms' content moderation norms are embedded in a State-controlled legal setting which, far from being neutral, is decisively shaped by States' ideological and strategic preferences.

Surveillance acquiescence conundrum

Politico.eu recently ran an interview with Ciaran Martin, the outgoing chief of the UK's National Cyber Security Centre. In it, Martin raises the alarm against Chinese attempts at massive data harvesting in the West (specifically with regard to the development of AI). This issue naturally dovetails with the US debate on banning TikTok. Herein lies the problem. Both national security agencies and major social media companies have endeavored to normalize perceptions of industrial data collection and surveillance over the past decade or two: that public opinion might be desensitized to the threat posed by foreign actors with access to similar data troves is therefore not surprising. The real challenge in repurposing a Cold War mentality for competition with China in the cyber domain today, in other words, is not so much a lag in Western – especially European – ICT innovation (Martin is himself slipping into a pantouflage, or revolving-door, position with a tech venture capital firm): it is a lack of urgency, of political will in society at large, an apathy bred in part of acquiescence in surveillance capitalism.