Tag Archives: Algorithms

A.utomated I.dentity?

An interesting, thoughtful article by Michelle Santiago Cortés in The Cut last week looks at affective relationships with algorithms and their role in shaping our identities.

Three parts of the analysis stood out to me in particular. The first revolves around our typical lack of knowledge of how algorithms work: Cortés’ story about

some YouTube alpha male […] out there uploading videos promising straight men advice on how to “hack” the Tinder algorithm to date like kings

is clearly only the tip of a gigantic societal iceberg, a cargo-cult-as-way-of-life involving pretty much everyone in the remotest, most diverse walks of life. The ever-evolving nature of these algorithms compounds the obfuscation effect, making end-users’ strategic attempts, whether exploitation- or resistance-focused, generally appear puerile.

Second, Cortés encapsulated the main tradeoff in the relationship with striking clarity:

[w]e are, to varying degrees, okay with being surveilled as long as we get to feel seen.

The assertion of visibility and the assurance of recognition are two of the key assets algorithmic systems offer their users, and their value can hardly be dismissed as mere late-consumerist narcissism.

Finally, the comparison between algorithmic portraits of personality and astrology was extremely telling: closing the behavioral loop, from algorithmic interaction to the redefinition of one’s own identity in the algorithm’s inevitably distorting mirror, is still a matter of choice. Or rather, it is a matter of sensibility, one that can be honed and socialized, about the most empowering and nurturing use of what is ultimately a hermeneutic tool. Of course, such a benign conclusion rests on the ambit of application of these technologies: music videos, entertainment, dating. As soon as our contemporary astrological devices are put in charge of directing outcomes in political economy and public policy, the moral calculus shifts rapidly.

Rightwing algorithms?

A long blog post on Olivier Ertzscheid’s personal website [in French] tackles the ideological orientation of the major social media platforms from a variety of points of view (the political leanings of software developers, bosses, and companies; the politics of content moderation; political correctness; the revolving door with government and political parties; the intrinsic suitability of different ideologies to algorithmic amplification; and so forth).

The conclusions are quite provocative: although algorithms and social media platforms are both demonstrably biased and possessed of independent causal agency, amplifying, steering, and coarsening our public debate, in the end it is simply those with greater resources (material, social, cultural, and so on) whose voices are amplified. Algorithms skew to the right because so does our society.

Digital Welfare Systems

An extremely interesting series of talks hosted by the Digital Freedom Fund: the automation of welfare-system decisions is where the neoliberal agenda and digitalization intersect in the most socially explosive fashion. All six events look good, but I am particularly looking forward to the discussion of the Dutch System Risk Indication (SyRI) scandal on Oct. 27th. More info and free registration on the DFF’s website.

Limits of trustbuilding as policy objective

Yesterday, I attended a virtual event hosted by CIGI and ISPI entitled “Digital Technologies: Building Global Trust”. Some interesting points raised by the panel: the focus on datafication as the central aspect of the digital transformation, and the consequent need to concentrate on the norms, institutions, and emerging professions surrounding the practice of data (re-)use [Stefaan Verhulst, GovLab]; the importance of underlying human connections and behaviors as necessary trust markers [Andrew Wyckoff, OECD]; the distinction between content, data, competition, and physical infrastructure as flashpoints for trust in the technology sphere [Heidi Tworek, UBC]. Also, I learned about the OECD AI Principles (2019), which I had not run across before.

While the breadth of different sectoral interests and use cases considered by the panel was significant, the framework for analysis (actionable policy solutions to boost trust) ended up being rather limiting. For instance, communal distrust of dominant narratives was considered only from the perspective of deficits of inclusivity (on the part of the authorities) or of digital literacy (on the part of the distrusters). Technical policy fixes can be a reductive lens through which to see the problem of lack of trust: such an approach misses both the fundamental compulsion to trust that typically underlies the debate and the performative effects sought by public manifestations of distrust.

Bridle’s vision

Belatedly finished reading James Bridle’s book New Dark Age: Technology and the End of the Future (Verso, 2018). As the title suggests, the text is systemically pessimistic about the effect of new technologies on the sustainability of human wellbeing. Although the overall structure of the argument is at times clouded by sudden twists in narrative and the sheer variety of anecdotes, there are many hidden gems. I very much enjoyed the idea, borrowed from Timothy Morton, of a hyperobject:

a thing that surrounds us, envelops and entangles us, but that is literally too big to see in its entirety. Mostly, we perceive hyperobjects through their influence on other things […] Because they are so close and yet so hard to see, they defy our ability to describe them rationally, and to master or overcome them in any traditional sense. Climate change is a hyperobject, but so is nuclear radiation, evolution, and the internet.

One of the main characteristics of hyperobjects is that we only ever perceive their imprints on other things, and thus to model the hyperobject requires vast amounts of computation. It can only be appreciated at the network level, made sensible through vast distributed systems of sensors, exabytes of data and computation, performed in time as well as space. Scientific record keeping thus becomes a form of extrasensory perception: a networked, communal, time-travelling knowledge making. (73)

Bridle has some thought-provoking ideas about possible responses to the dehumanizing forces of automation and algorithmic sorting, as well. Particularly captivating was his description of Garry Kasparov’s reaction to his 1997 defeat at the hands of IBM’s Deep Blue: the grandmaster proposed ‘Advanced Chess’ tournaments, pitting pairs of human and computer players against each other, since such a pairing is superior to both human and machine players on their own. This type of ‘centaur strategy’ is not simply a winning one: it may, Bridle suggests, hold ethical insights on pathways of human adaptation to an era of ubiquitous computation.