
FB as Great Game arbitrator in Africa?

French-language news outlets, among others, have been reporting a Facebook takedown operation (here is the full report by Stanford University and Graphika) against three separate influence and disinformation networks, active in various sub-Saharan African countries since 2018. Two of these have been traced back to the well-known Russian troll farm Internet Research Agency; the third, however, appears to be linked to individuals in the French military (which is currently deployed in the Sahel). In some instances, and notably in the Central African Republic, the Russian and French operations competed directly with one another, attempting to doxx and discredit each other through fake fact-checking and news organization impersonations, as well as using AI to create fake online personalities posing as local residents.

The report did not present conclusive evidence attributing the French influence operation directly to the French government, and it argues that the French action was in many ways reactive to the Russian disinformation campaign. Nonetheless, as the authors note,

[b]y creating fake accounts and fake “anti-fake-news” pages to combat the trolls, the French operators were perpetuating and implicitly justifying the problematic behavior they were trying to fight […] using “good fakes” to expose “bad fakes” is a high-risk strategy likely to backfire when a covert operation is detected […] More importantly, for the health of broader public discourse, the proliferation of fake accounts and manipulated evidence is only likely to deepen public suspicion of online discussion, increase polarization, and reduce the scope for evidence-based consensus.

What was not discussed, either in the report or in news coverage of it, is the emerging geopolitical equilibrium in which a private company can act as final arbitrator in an influence struggle between two Great Powers in a third country. Influence campaigns by foreign State actors are in no way a 21st-century novelty: the ability of a company such as Facebook to insert itself into them most certainly is. Media focus on the disinformation-fighting activities of the major social media platforms in the case of the US elections (hence, on domestic ground) has had the effect of minimizing the strategic importance these companies now wield in international affairs. The question is to what extent the US government will allow them to operate in complete independence, or, put otherwise, to what extent foreign Powers will fold this dossier into their general relations with the US going forward.

Babies and bathwater

Just attended an EFF-run ‘Fireside Chat’ with US Senator Ron Wyden (D-OR) on Section 230. As one of the original drafters of the legislation, the Senator was eager to point out the core values it was meant to shield from legal challenge, permitting the full deployment of constitutionally-protected speech online without imposing an undue burden of liability on those hosting such speech.

The Electronic Frontier Foundation and other digital rights organizations find themselves in a complicated political position, for, having spoken out against the risks and abuses originating from Big Tech long before there was widespread public consciousness of any problem, they now have to push against a bipartisan current that has crystallized in opposition to Section 230. Even some generalist news outlets have seized on the matter, giving scant play to the values and legitimate interests the law was originally intended to safeguard.

It seems fairly clear that mainstream political discourse has been extremely superficial in considering key aspects of the problem: Section 230 has become a symbol rather than a mere tool of governance. It may also be the case that the wide bipartisan consensus on its ills is in fact illusory, a mere placeholder for incompatible views on how to move beyond the status quo, with the most likely outcome being paralysis of any reform effort. However, the risk that the imperative to do something will lead to the hasty passage of measures with lasting damaging effects is real.

In a way, the present situation is the poisoned fruit of a narrative that linked the unfettered expansion of the Big Tech giants over the past decade to the libertarian and countercultural ideals of the early internet: when the former came to be perceived as intolerable, the latter were seen at best as acceptable collateral damage. Most of the popular animus against Section 230 that politicians are attempting to channel stems from resentment at the power of social media platforms and digital gatekeepers. Therefore (and although there may well be a case for curbing the mindless amplification of certain types of speech online), perhaps antitrust action (in Congress or in the courts) is better suited to obtaining the results the public seeks. Comparative policymaking will also be extremely relevant, as the European Union pursues its own aggressive agenda on content moderation, permissible speech, and monopoly power.

Disinformation at the weakest link

There was an interesting article recently in Quartz about 2020 electoral disinformation in Spanish-language social media. While the major platforms have taken credit for the fact that the election did not feature a repeat of the coordinated foreign influence operations of 2016, arguably the victory lap came somewhat too soon, as the problem cases in the information sphere this cycle are only gradually coming to light. Post-electoral myth-building about a rigged process and a stolen victory, for one, while of little practical import for the present, has the potential to plant a deep, lasting sense of grievance in conservative political culture in the US over the long term. Similarly, the fact that less public attention, less civil-society scrutiny, less community-based new-media literacy, and less algorithmic refinement were available for Spanish-speaking electoral discourse meant that disinformation was allowed to expand much more broadly and deeply than in English. The mainstream liberal narrative that such a fact per se helps explain the disappointing electoral results of the Democratic Party with this demographic (especially in States like Florida or Texas) is itself fairly insensitive and stereotyped. The Latinx electorate in the US is not a monolith, and segments within it have distinct political preferences, which are no more the product of disinformation than anyone else’s. Yet, it seems clear that in this electoral campaign factually false political statements received less pushback, both from above and from below, when they were uttered in Spanish.

Two general facts are illustrated by this example. On the one hand, because of the production and distribution dynamics of disinformation, it is clear that its spread follows a path of least resistance: minority languages, like peripheral countries or minor social media platforms, while unlikely to be on the cutting edge of new disinformation, tend to be more permeable to stock disinformation that has already been rebutted elsewhere. On the other hand, disinformation has the ability to do the most damage in spaces where it is unexpected, in fields that are considered separate and independent, subject to different rules of engagement. In this sense, fake news does not simply provide partisans with ‘factual’ reasons to feel how they already felt about their adversaries: it can genuinely catch the unsuspecting unawares. One of the reasons for disinformation’s massive impact on American public discourse is that in a hyper-partisan era political propaganda has reached all manner of domains of everyday life once completely divorced from politics, and in these contexts a weary habituation to such wrangling, which would effectively tune it out, has not yet set in. This dynamic has been reflected in social media platforms: the ‘repurposing’ of LinkedIn and NextDoor in connection with the BLM protests is telling.

So, if disinformation at its most effective is the insertion of narratives where they are least expected, and if its spread follows a path of least resistance, seeking out the weakest link (while its containment follows an actuarial logic, the most effort being placed where the highest return is expected), what does this portend for the possibility of a unitary public sphere?

There is reason to believe that these are long-run concerns, and that the Presidential campaign may have been the easy part. As Ellen Goodman and Karen Kornbluh mention in their platform electoral performance round-up,

That there was clearly authoritative local information about voting and elections made the platforms’ task easier. It becomes harder in other areas of civic integrity where authority is more contested.

Foreign counterexamples such as that of Taiwan reinforce the conundrum: cohesive societies are capable of doing well against disinformation, but in already polarized ones a focus on such a fight is perceived as being a partisan stance itself.

Victory lap for the EIP

Today, I followed the webcast featuring the initial wrap-up of the Election Integrity Partnership (I have discussed their work before). All the principal institutions composing the partnership (Stanford, University of Washington, Graphika, and the Atlantic Council) were represented on the panel. It was, in many respects, a victory lap, given the consensus view that foreign disinformation played a marginal role in the election compared to 2016, due in part to proactive engagement on the part of the large internet companies, facilitated by projects such as the EIP.

In describing the 2020 disinformation ecosystem, Alex Stamos (Stanford Internet Observatory) characterized it as mostly home-grown, non-covert, and reliant on major influencers, which in turn forced platforms to transform their content moderation activities into a sort of editorial policy (I have remarked on this trend before). Also, for all the focus on social media, TV was seen to play a very important role, especially in building background narratives over the long run, day by day.

Camille François (Graphika) remarked on the importance of alternative platforms, and indeed the trend has been for an expansion of political discourse to all manner of fora previously insulated from it (on this, more soon).

Of the disinformation memes that made it into the mainstream conversation (Stamos mentioned the example of Sharpie-gate), certain characteristics stand out: they tended to appeal to rival technical authority, so as to project an expert-vs-expert dissonance; they were made sticky by official endorsement, which turned debunking into a partisan enterprise. However, their very predictability rendered the task of limiting their spread easier for the platforms. Kate Starbird (UW) summed it up: if the story in 2016 was coordinated inauthentic foreign action, in 2020 it was authentic, loosely-coordinated domestic action, and the major institutional players (at least on the tech infrastructure side) were ready to handle it.

It makes sense for the EIP to celebrate how disinformation was stymied in 2020 (as Graham Brookie of the Atlantic Council put it, a win for defence in the ongoing arms race over foreign interference), but it is not hard to see how such an assessment masks a bigger crisis, which would have been evident had there been a different victor. Overall trust in the informational ecosystem has been dealt a further, massive blow by the election, and hyper-polarized post-truth politics are nowhere near over. Indeed, attempts currently underway to fashion a Dolchstoßlegende are liable to have very significant systemic effects going forward. The very narrow focus on disinformation the EIP pursued may have paid off for now, but the larger picture of entrenched public distrust guarantees that these problems will persist into the future.

Violence, content moderation, and IR

Interesting article by James Vincent in The Verge about a decision by Zoom, Facebook, and YouTube to shut down a university webinar over fears of disseminating opinions advocating violence “carried out by […] criminal or terrorist organizations”. The case is strategically placed at the intersection of several recent trends.

On the one hand, de-platforming as a means of expressing outrage at the views of an invited speaker is a tactic that has often been used, especially on college campuses, even before the pandemic and for in-person events. However, it appears that the pressure in this specific case was brought to bear by external organizations and lobby groups, without a visible grassroots presence within the higher education institution in question, San Francisco State University. Moreover, such pressure was exerted by means of threats of legal liability not against SFSU, but rather against the third-party, commercial platforms enabling diffusion of the event, which was to be held as a remote-only webinar due to epidemiological concerns. Therefore, the university’s decision to organize the event was thwarted not by the pressure of an in-person crowd and the risk of public disturbances, but by the choice of a separate, independent actor, imposing external limitations derived from its own Terms of Service when faced with potential litigation.

The host losing agency to the platform is not the only story these events tell, though. It is not coincidental that the case involves the Israeli-Palestinian struggle, and that the de-platformed individual was a member of the Popular Front for the Liberation of Palestine who participated in two plane hijackings in 1969-70. The transferral of an academic discussion to an online forum short-circuited the ability academic communities have traditionally enjoyed to re-frame discussions on all topics – even dangerous, taboo, or divisive ones – as being about analyzing and discussing, not about advocating and perpetrating. At the same time, post-9/11 norms and attitudes in the US have applied a criminal lens to actions and events that in their historical context represented moves in an ideological and geopolitical struggle. Such a transformation may represent a shift in the pursuit of the United States’ national interest, but what is striking about this case is that a choice made at a geo-strategic, Great Power level produces unmediated consequences for the opinions and rights of expression of individual citizens and organizations.

This aspect in turn ties in to the debate on the legitimacy grounds of platform content moderation policies. The aspiration may well be to couch such policies in universalist terms, and even to take international human rights law as a framework or a model. In practice, however, common moral prescriptions against violence scale poorly from the level of individuals in civil society to that of power politics and international relations, while the content moderation norms of the platforms are immersed in a State-controlled legal setting which, far from being neutral, is decisively shaped by States’ ideological and strategic preferences.