Tag Archives: Attribution

Trust among thieves

An item that recently appeared on NBC News (via /.) graphically illustrates the pervasiveness of the problem of trust across organizations, cultures, and value systems. It also speaks to the routinization of ransomware extortion and other forms of cybercrime as none-too-glamorous career paths, engendering their own disgruntled and underpaid line workers.

Limits of trust-building as policy objective

Yesterday, I attended a virtual event hosted by CIGI and ISPI entitled “Digital Technologies: Building Global Trust”. Some interesting points raised by the panel: the focus on datafication as the central aspect of the digital transformation, and the consequent need to concentrate on the norms, institutions, and emerging professions surrounding the practice of data (re-)use [Stefaan Verhulst, GovLab]; the importance of underlying human connections and behaviors as necessary trust markers [Andrew Wyckoff, OECD]; the distinction between content, data, competition, and physical infrastructure as flashpoints for trust in the technology sphere [Heidi Tworek, UBC]. Also, I learned about the OECD AI Principles (2019), which I had not run across before.

While the breadth of sectoral interests and use-cases considered by the panel was significant, the framework for analysis (actionable policy solutions to boost trust) ended up being rather limiting. For instance, communal distrust of dominant narratives was considered only from the perspective of deficits of inclusivity (on the part of the authorities) or of digital literacy (on the part of the distrusters). Technical policy fixes can be a reductive lens through which to see the problem of lack of trust: such an approach misses both the fundamental compulsion to trust that typically underlies the debate and the performative effects sought by public manifestations of distrust.

Media manipulation convergence

Adam Satariano in the NYT reports on the latest instance of platform manipulation, this time by Chinese tech giant Huawei against unfavorable 5G legislation being considered in Belgium. There’s nothing particularly novel about the individual pieces of the process: paid expert endorsement, amplified on social media by coordinated fake profiles, with the resultant appearance of virality adduced by the company as a sign of support in public opinion at large. If anything, the operation appears to have been rather crudely executed, leading to a fairly easy discovery by Graphika: from a pure PR cost-benefit standpoint, the blowback from its unmasking did far more damage to Huawei’s image than any benefit that might have accrued to the company had it not been exposed. The main takeaway from the story, however, is that it adds yet another data point to the ongoing convergence between traditional government-sponsored influence operations and corporate astroturfing ventures. Their questionable effectiveness notwithstanding, these sorts of interventions are becoming default, mainstream tools in the arsenal of all PR shops, whatever their principals’ aims. The fact that they also tend to erode an already fragile base of public trust suggests that at the aggregate level this may be a negative-sum game.

FB as Great Game arbitrator in Africa?

French-language news outlets, among others, have been reporting on a Facebook takedown operation (here is the full report by Stanford University and Graphika) against three separate influence and disinformation networks, active in various sub-Saharan African countries since 2018. Two of these have been traced back to the well-known Russian troll farm Internet Research Agency; the third, however, appears to be linked to individuals in the French military (which is currently deployed in the Sahel). In some instances, notably in the Central African Republic, the Russian and French operations competed directly with one another, attempting to doxx and discredit each other through fake fact-checking pages and impersonations of news organizations, as well as by using AI to create fake online personas posing as local residents.

The report did not present conclusive evidence attributing the French influence operation directly to the French government, and it argues that the French action was in many ways reactive to the Russian disinfo campaign. Nonetheless, as the authors claim,

[b]y creating fake accounts and fake “anti-fake-news” pages to combat the trolls, the French operators were perpetuating and implicitly justifying the problematic behavior they were trying to fight […] using “good fakes” to expose “bad fakes” is a high-risk strategy likely to backfire when a covert operation is detected […] More importantly, for the health of broader public discourse, the proliferation of fake accounts and manipulated evidence is only likely to deepen public suspicion of online discussion, increase polarization, and reduce the scope for evidence-based consensus.

What was not discussed, either in the report or in news coverage of it, is the emerging geopolitical equilibrium in which a private company can act as final arbiter in an influence struggle between two Great Powers in a third country. Influence campaigns by foreign State actors are in no way a 21st-century novelty: the ability of a company such as Facebook to insert itself into them most certainly is. Media focus on the disinformation-fighting activities of the major social media platforms in the case of the US elections (hence, on domestic ground) has had the effect of obscuring the strategic power these companies now wield in international affairs. The question is to what extent the US government will allow them to operate in complete independence, or, put otherwise, to what extent foreign Powers will fold this dossier into their general relations with the US going forward.

Disinformation at the weakest link

There was an interesting article recently in Quartz about 2020 electoral disinformation in Spanish-language social media. While the major platforms have taken credit for the fact that the election did not feature a repeat of the coordinated foreign influence operations of 2016, arguably the victory lap came somewhat too soon, as the problem cases in the information sphere this cycle are only gradually coming to light. Post-electoral myth-building about a rigged process and a stolen victory, for one, while of little practical import for the present, has the potential to plant a deep, lasting sense of grievance in conservative political culture in the US over the long term. Similarly, the fact that less public attention, less civil-society scrutiny, less community-based new-media literacy, and less algorithmic refinement were available for Spanish-speaking electoral discourse meant that disinformation was allowed to expand much more broadly and deeply than in English. The mainstream liberal narrative that such a fact per se helps explain the disappointing electoral results of the Democratic Party with this demographic (especially in States like Florida or Texas) is itself fairly insensitive and stereotyped. The Latinx electorate in the US is not a monolith, and segments within it have distinct political preferences, which are no more the product of disinformation than anyone else’s. Yet, it seems clear that in this electoral campaign factually false political statements received less pushback, both from above and from below, when they were uttered in Spanish.

This example illustrates two general facts. On the one hand, because of the production and distribution dynamics of disinformation, its spread follows a path of least resistance: minority languages, like peripheral countries or minor social media platforms, while unlikely to be on the cutting edge of new disinformation, tend to be more permeable to stock disinformation that has already been rebutted elsewhere. On the other hand, disinformation does the most damage in spaces where it is unexpected, in fields that are considered separate and independent, subject to different rules of engagement. In this sense, fake news does not simply provide partisans with ‘factual’ reasons to feel how they already felt about their adversaries: it can genuinely catch the unsuspecting unawares. One of the reasons for disinformation’s massive impact on American public discourse is that, in a hyper-partisan era, political propaganda has reached all manner of everyday domains once completely divorced from politics, and in these contexts the weary habituation to such wrangling that would effectively tune it out has not yet set in. This dynamic has been reflected in social media platforms: the ‘repurposing’ of LinkedIn and NextDoor in connection with the BLM protests is telling.

So, if disinformation at its most effective is the insertion of narratives where they are least expected, and if its spread follows a path of least resistance, seeking out the weakest link (while its containment follows an actuarial logic, the most effort being placed where the highest return is expected), what does this portend for the possibility of a unitary public sphere?

There is reason to believe that these are long-run concerns, and that the Presidential campaign may have been the easy part. As Ellen Goodman and Karen Kornbluh note in their round-up of the platforms’ electoral performance,

That there was clearly authoritative local information about voting and elections made the platforms’ task easier. It becomes harder in other areas of civic integrity where authority is more contested.

Foreign counterexamples such as Taiwan’s reinforce the conundrum: cohesive societies can do well against disinformation, but in already polarized ones a focus on such a fight is itself perceived as a partisan stance.