Disinformation isn’t Destiny

As the war in Ukraine enters its sixth week, it may prove helpful to look back on an early assessment of the conflict’s informational sphere: the snapshot taken by Maria Giovanna Sessa of the EU DisinfoLab on March 14th.

Sessa summed up her findings succinctly:

Strategy-wise, malign actors mainly produce entirely fabricated content, while the most recurrent tactic to disinform is the use of decontextualised photos and videos, followed by content manipulation (doctored image or false subtitles). As evidence of the high level of polarisation, the same narratives have been exploited to serve either pro-Ukrainian or pro-Russian messages.

This general picture, by almost all accounts, largely holds half a month later. The styles of disinformation campaigns have not morphed significantly, although (as Sessa predicted) there has been a shift toward weaponizing the refugee angle of the crisis.

Most observers have been struck overall by the failure of the Russians to replicate previous informational successes. The significant resources allotted from the very beginning of the conflict to fact-checking and debunking by a series of actors, public and private, in Western countries are part of the explanation for this outcome. More broadly, however, it may be that Russian tactics in this arena have lost the advantage of surprise, so that as the informational sphere becomes more central to strategic power competition, relative capabilities revert to the mean of the general balance of power.

A.utomated I.dentity?

An interesting, thoughtful article by Michelle Santiago Cortés in The Cut last week looks at affective relationships with algorithms and their role in shaping our identities.

Three parts of the analysis stood out to me in particular. The first revolves around our typical lack of knowledge of algorithms: Cortés’ story about

some YouTube alpha male […] out there uploading videos promising straight men advice on how to “hack” the Tinder algorithm to date like kings

is clearly only the tip of a gigantic societal iceberg, a cargo-cult-as-way-of-life involving pretty much everyone, in the remotest and most diverse walks of life. The ever-evolving nature of these algorithms compounds the obfuscation effect, making end-users’ strategic attempts, whether focused on exploitation or on resistance, appear generally puerile.

Second, Cortés’ encapsulation of the main tradeoff in the relationship was truly apt:

[w]e are, to varying degrees, okay with being surveilled as long as we get to feel seen.

The assertion of visibility and assurance of recognition are two of the key assets algorithmic systems offer their users, and their value can hardly be minimized as mere late-consumerist narcissism.

Finally, the comparison between algorithmic portraits of personality and astrology was extremely telling: closing the behavioral loop, from algorithmic interaction to the redefinition of one’s own identity on the basis of the algorithm’s inevitably distorting mirror, is still a matter of choice, or rather of a sensibility that can be honed and socialized regarding the most empowering and nurturing use of what is ultimately a hermeneutic tool. Of course, such a benign conclusion rests on the ambit in which these technologies are applied: music videos, entertainment, dating. As soon as our contemporary astrological devices are put in charge of directing outcomes in political economy and public policy, the moral calculus shifts rapidly.

Right-wing algorithms?

A long blog post on Olivier Ertzscheid’s personal website [in French] tackles the ideological orientation of the major social media platforms from a variety of points of view (the political leanings of software developers, of bosses, of companies, the politics of content moderation, political correctness, the revolving door with government and political parties, the intrinsic suitability of different ideologies to algorithmic amplification, and so forth).

The conclusions are quite provocative: although algorithms and social media platforms are demonstrably biased and possessed of independent causal agency, amplifying, steering, and coarsening our public debate, in the end it is simply those with greater resources (material, social, cultural, and so forth) whose voices are amplified. Algorithms skew to the right because so does our society.

A global take on the mistrust moment

My forthcoming piece on Ethan Zuckerman’s Mistrust: Why Losing Faith in Institutions Provides the Tools to Transform Them for the Italian Political Science Review.

Limits of trust-building as a policy objective

Yesterday, I attended a virtual event hosted by CIGI and ISPI entitled “Digital Technologies: Building Global Trust”. Some interesting points raised by the panel: the focus on datafication as the central aspect of the digital transformation, and the consequent need to concentrate on the norms, institutions, and emerging professions surrounding the practice of data (re-)use [Stefaan Verhulst, GovLab]; the importance of underlying human connections and behaviors as necessary trust markers [Andrew Wyckoff, OECD]; the distinction between content, data, competition, and physical infrastructure as flashpoints for trust in the technology sphere [Heidi Tworek, UBC]. Also, I learned about the OECD AI Principles (2019), which I had not run across before.

While the breadth of sectoral interests and use-cases considered by the panel was significant, the framework for analysis (actionable policy solutions to boost trust) ended up being rather limiting. For instance, communal distrust of dominant narratives was considered only from the perspective of deficits of inclusivity (on the part of the authorities) or of digital literacy (on the part of the distrusters). Technical policy fixes can be a reductive lens through which to see the problem of lack of trust: such an approach misses both the fundamental compulsion to trust that typically underlies the debate and the performative effects sought by public manifestations of distrust.