Tag Archives: Manipulation

Perspectives on data activism: Aventine secessions and sabotage

Interesting article in the MIT Tech Review (via /.) detailing research performed at Northwestern University (paper on arXiv) on how the power of collective action could be leveraged to counter pervasive data collection strategies by internet companies. Three such methods are discussed: data strikes (refusal to use data-invasive services), data poisoning (providing false and misleading data), and conscious data contribution (directing one’s data to privacy-respecting competitors).

Conscious data contribution and data strikes are relatively straightforward Aventine secessions, but depend decisively on the availability of alternative services (or the acceptability of degraded performance for the mobilized users on less-than-perfect substitutes).

The effectiveness of data poisoning, on the other hand, turns on the type of surveillance one is trying to stifle (as I have argued in I labirinti). If material efficacy is at stake, it can be decisive (e.g., faulty info can make a terrorist manhunt fail); unsurprisingly, this type of strategic disinformation has served as a plot device in many works of fiction, with and without AIs. But if what’s at stake is the perception of efficacy, data poisoning is an effective counterstrategy only inasmuch as it destroys the legitimacy of the decisions made on the basis of the collected data (at what point, for instance, do advertisers stop working with Google because its database is irrevocably compromised?). In some cases of AI/ML adoption, in which the offloading of responsibility and the containment of costs are the foremost goals, there is already very broad tolerance for bias (i.e., faulty training data).
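As a toy illustration of the material-efficacy point, here is a minimal sketch (my own construction, not from the Northwestern paper; all names and parameters are hypothetical) of how injecting mislabeled points into a training set can cripple even a simple classifier:

```python
import random

random.seed(0)

def make_data(n, shift=2.0):
    """Two 1-D Gaussian clusters: class 0 centered at 0, class 1 at `shift`."""
    return [(random.gauss(label * shift, 1.0), label)
            for label in (0, 1) for _ in range(n)]

def nearest_centroid(train):
    """Fit per-class means; classify a point by its nearest class mean."""
    sums = {0: 0.0, 1: 0.0}
    counts = {0: 0, 1: 0}
    for x, y in train:
        sums[y] += x
        counts[y] += 1
    means = {y: sums[y] / counts[y] for y in (0, 1)}
    return lambda x: min(means, key=lambda y: abs(x - means[y]))

def accuracy(clf, test):
    return sum(clf(x) == y for x, y in test) / len(test)

train, test = make_data(500), make_data(500)

# Poisoning: inject far-out fake points mislabeled as class 1,
# dragging the learned class-1 centroid past the class-0 one.
poisoned = train + [(random.gauss(-6.0, 1.0), 1) for _ in range(200)]

clean_acc = accuracy(nearest_centroid(train), test)
dirty_acc = accuracy(nearest_centroid(poisoned), test)
print(f"clean accuracy: {clean_acc:.2f}, poisoned accuracy: {dirty_acc:.2f}")
```

On this synthetic setup the poisoned model performs far worse than chance on one class, which is the material-efficacy failure mode; whether such degradation also undermines the perceived legitimacy of the system is, as argued above, a separate political question.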

Hence, in general, the fix is not exclusively technical: political mobilization is needed to cash in on the contradictions these data-activism interventions bring to light.

Media manipulation convergence

Adam Satariano in the NYT reports on the latest instance of platform manipulation, this time by Chinese tech giant Huawei against unfavorable 5G legislation being considered in Belgium. There’s nothing particularly novel about the individual pieces of the process: paid expert endorsement, amplified on social media by coordinated fake profiles, with the resulting appearance of virality adduced by the company as a sign of support in public opinion at large. If anything, the operation appears to have been rather crudely executed, leading to fairly easy discovery by Graphika: from a pure PR cost-benefit standpoint, the blowback from its unmasking did much more damage to Huawei’s image than any benefit that might have accrued to the company had it not been exposed. The main take-away from the story, however, is that it adds yet another data point to the ongoing convergence between traditional government-sponsored influence operations and corporate astroturfing ventures. Their questionable effectiveness notwithstanding, these sorts of interventions are becoming default, mainstream tools in the arsenal of all PR shops, whatever their principals’ aims. The fact that they also tend to erode an already fragile base of public trust suggests that at the aggregate level this may be a negative-sum game.

Turning the page on disinformation?

With the inauguration of a new Administration, speculation is rife on the chances of moving on from the more toxic aspects of the political media ecosystem of the past half decade. An op-ed by Rob Faris and Joan Donovan of the Shorenstein Center (Harvard Kennedy School) spells out these aspirations concretely: with Biden in the White House, conservative media such as Fox News have the opportunity to distance themselves decisively from the more fringe disinformation beliefs of the conservative base, and return political discussion to a debate of ideas rather than the reinforcement of antagonistic social realities. In their own words,

The only way out of this hole is to rediscover a collective understanding of reality and to reinstall the mechanisms of accountability in media where they are missing, to ensure that accuracy and objectivity are rewarded and disinformation is not given the space to metastasize.

I think there is good reason not to be particularly sanguine about these goals. Faris and Donovan’s proposed solutions read more like a restatement of the intractability of the problem. For one thing, their discussion is very top-down, focusing on what the upper echelons of the Republican Party, the conservative-leaning media, and their financial backers can do. The trouble with US political disinformation, I would argue, is that at this point in the cycle it is largely demand-driven: there is a strong appetite for it in the (GOP) electorate at large, to the point that one could speak of a general social antagonism in search of arguments. Hence, focusing on the infrastructure of disinformation production will merely elicit creative responses, such as the flight to alternative social media platforms, which will be viable given the size, means, capabilities, and diversity of the public involved.

The alternative, however, is equally fraught. Focusing on the transformation of mass beliefs in order to discourage the demand for disinformation amounts, in essence, to a domestic ‘hearts and minds’ mission. The historical record for such attempts is hardly promising. The trouble, of course, is that political adversaries cannot simultaneously be treated as respectable dissenters in the common task of running the commonwealth and as fundamentally wrong in their factual beliefs: respecting and correcting struggle to coexist in the same interpersonal relationship.

One problem with such an approach is that describing the US media ecosystem as fragmented and siloed is incomplete:

Since its inception, conservative media in America has operated under different rules […] The outcome: a cleavage in the U.S. public sphere and a schism in the marketplace of ideas. The news media of the center and left, with all its flaws and faults, operates in a milieu in which fact checkers have influence and the standards and practices of objectivity and accuracy still hold sway.

In other words, conservatives have largely seceded from the traditional, 20th-century unified media sphere of print and broadcast outlets, toward a smaller, insular, homogeneous but culturally dominated one of their own. The rump ‘mainstream media’ has maintained its old ‘fourth estate’ ethos, but not its bipartisan audience; hence its loss of cross-party authoritativeness.

The accountability void created by this partisan segregation of US public opinion offers concrete inducements that ambitious populist politicians will find hard to resist. The belief that the system contains self-correcting mechanisms appears ill-founded. Yet it is unclear that the current administration has the stomach for the protracted effort necessary to change mass beliefs, or that it would be supported consistently in doing so by external power centers, especially in the business community.

Addiction vs. dependency

A long, powerful essay in The Baffler about the new antitrust actions against Big Tech in the US and the parallels being drawn with the tobacco trials of the 1990s. I agree with its core claim that equating the problem Big Tech poses with one of personal addiction (a position promoted inter alios by the documentary The Social Dilemma) minimizes the issue of economic dependency and the power it confers on the gatekeepers of key digital infrastructure. I have argued previously that this is at the heart of popular mistrust of the big platforms. However, the pursuit of the tech giants in court risks being hobbled by the lasting effect of neoliberal thought on antitrust architecture in US jurisprudence and regulation: concentrating on consumer prices in the short run risks missing the very real ways in which tech companies can exert systemic social power. In their quest to rein in Big Tech, US lawmakers and attorneys will be confronted with much deeper and more systemic political-economy issues. It is unclear they will be able to win this general philosophical argument against such powerful special interests.

FB as Great Game arbiter in Africa?

French-language news outlets, among others, have been reporting on a Facebook takedown operation (here is the full report by Stanford University and Graphika) against three separate influence and disinformation networks, active in various sub-Saharan African countries since 2018. Two of these have been traced back to the well-known Russian troll farm Internet Research Agency; the third, however, appears to be linked to individuals in the French military (which is currently deployed in the Sahel). In some instances, notably in the Central African Republic, the Russian and French operations competed directly with one another, attempting to doxx and discredit each other through fake fact-checking and news-organization impersonations, as well as using AI to create fake online personalities posing as local residents.

The report did not present conclusive evidence attributing the French influence operation directly to the French government. It also argues that the French action was in many ways reactive to the Russian disinfo campaign. Nonetheless, as the authors note,

[b]y creating fake accounts and fake “anti-fake-news” pages to combat the trolls, the French operators were perpetuating and implicitly justifying the problematic behavior they were trying to fight […] using “good fakes” to expose “bad fakes” is a high-risk strategy likely to backfire when a covert operation is detected […] More importantly, for the health of broader public discourse, the proliferation of fake accounts and manipulated evidence is only likely to deepen public suspicion of online discussion, increase polarization, and reduce the scope for evidence-based consensus.

What was not discussed, either in the report or in news coverage of it, is the emerging geopolitical equilibrium in which a private company can act as final arbiter in an influence struggle between two Great Powers in a third country. Influence campaigns by foreign State actors are in no way a 21st-century novelty: the ability of a company such as Facebook to insert itself into them most certainly is. Media focus on the disinformation-fighting activities of the major social media platforms in the case of the US elections (hence, on domestic ground) has had the effect of minimizing the strategic importance these companies now wield in international affairs. The question is to what extent they will be allowed to operate in complete independence by the US government, or, otherwise put, to what extent foreign Powers will insert this dossier into their general relations with the US going forward.