Bridle’s vision

Belatedly finished reading James Bridle’s book New Dark Age: Technology and the End of the Future (Verso, 2018). As the title suggests, the text is systemically pessimistic about the effect of new technologies on the sustainability of human wellbeing. Although the overall structure of the argument is at times clouded over by sudden twists in narrative and the sheer variety of anecdotes, there are many hidden gems. I very much enjoyed the idea, borrowed from Timothy Morton, of a hyperobject:

a thing that surrounds us, envelops and entangles us, but that is literally too big to see in its entirety. Mostly, we perceive hyperobjects through their influence on other things […] Because they are so close and yet so hard to see, they defy our ability to describe them rationally, and to master or overcome them in any traditional sense. Climate change is a hyperobject, but so is nuclear radiation, evolution, and the internet.

One of the main characteristics of hyperobjects is that we only ever perceive their imprints on other things, and thus to model the hyperobject requires vast amounts of computation. It can only be appreciated at the network level, made sensible through vast distributed systems of sensors, exabytes of data and computation, performed in time as well as space. Scientific record keeping thus becomes a form of extrasensory perception: a networked, communal, time-travelling knowledge making. (73)

Bridle has some thought-provoking ideas about possible responses to the dehumanizing forces of automation and algorithmic sorting, as well. Particularly captivating was his description of Garry Kasparov’s reaction to defeat at the hands of the AI Deep Blue in 1997: the grandmaster proposed ‘Advanced Chess’ tournaments, pitting human-computer pairs against one another, since such a pairing is superior to both human and machine players on their own. This type of ‘centaur strategy’ is not simply a winning one: it may, Bridle suggests, hold ethical insights on pathways of human adaptation to an era of ubiquitous computation.

Violence, content moderation, and IR

Interesting article by James Vincent in The Verge about a decision by Zoom, Facebook, and YouTube to shut down a university webinar over fears of disseminating opinions advocating violence “carried out by […] criminal or terrorist organizations”. The case is strategically placed at the intersection of several recent trends.

On the one hand, de-platforming as a way to express outrage at the views of an invited speaker is a tactic that has been used often, especially on college campuses, even before the beginning of the pandemic and for in-person events. In this specific case, however, the pressure appears to have been brought to bear by external organizations and lobby groups, without a visible grassroots presence within the higher education institution in question, San Francisco State University. Moreover, such pressure was exerted through threats of legal liability directed not at SFSU, but at the third-party, commercial platforms enabling diffusion of the event, which was to be held as a remote-only webinar due to epidemiological concerns. The university’s decision to organize the event was thus thwarted not by the pressure of an in-person crowd and the risk of public disturbances, but by the choice of a separate, independent actor, imposing external limitations derived from its own Terms of Service when faced with potential litigation.

The host losing agency to the platform is not the only story these events tell, though. It is not coincidental that the case involves the Israeli-Palestinian struggle, and that the de-platformed individual was a member of the Popular Front for the Liberation of Palestine who participated in two plane hijackings in 1969-70. The transfer of an academic discussion to an online forum short-circuited the ability academic communities have traditionally enjoyed to re-frame discussions on all topics – even dangerous, taboo, or divisive ones – as being about analyzing and discussing, not about advocating and perpetrating. At the same time, post-9/11 norms and attitudes in the US have applied a criminal lens to actions and events that in their historical context represented moves in an ideological and geopolitical struggle. Such a transformation may represent a shift in the pursuit of the United States’ national interest, but what is striking about this case is that a choice made at a geo-strategic, Great Power level produces unmediated consequences for the opinions and rights of expression of individual citizens and organizations.

This aspect in turn ties into the debate on the legitimacy grounds of platform content moderation policies. The aspiration may well be to couch such policies in universalist terms, and even to take international human rights law as a framework or a model. In practice, however, common moral prescriptions against violence scale poorly from the level of individuals in civil society to that of power politics and international relations, while the content moderation norms of the platforms are immersed in a State-controlled legal setting which, far from being neutral, is decisively shaped by states’ ideological and strategic preferences.