Violence, content moderation, and IR

Interesting article by James Vincent in The Verge about a decision by Zoom, Facebook, and YouTube to shut down a university webinar over fears of disseminating opinions advocating violence “carried out by […] criminal or terrorist organizations”. The case sits squarely at the intersection of several recent trends.

On the one hand, de-platforming to express outrage at the views of an invited speaker is a tactic that has often been used, especially on college campuses, well before the pandemic and for in-person events. In this case, however, the pressure appears to have been brought to bear by external organizations and lobby groups, without a visible grassroots presence within the higher education institution in question, San Francisco State University. Moreover, the pressure took the form of threats of legal liability aimed not at SFSU but at the third-party, commercial platforms enabling diffusion of the event, which was to be held as a remote-only webinar for epidemiological reasons. The event was thus thwarted not by the pressure of an in-person crowd and the risk of public disturbances, but by the choice of a separate, independent actor imposing external limitations derived from its own Terms of Service when faced with potential litigation.

The host losing agency to the platform is not the only story these events tell, though. It is not coincidental that the case involves the Israeli-Palestinian struggle, and that the de-platformed individual was a member of the Popular Front for the Liberation of Palestine who participated in two plane hijackings in 1969-70. The transferral of an academic discussion to an online forum short-circuited the ability academic communities have traditionally enjoyed to re-frame discussions on all topics –even dangerous, taboo, or divisive ones– as being about analyzing and discussing, not about advocating and perpetrating. At the same time, post-9/11 norms and attitudes in the US have applied a criminal lens to actions and events that in their historical context represented moves in an ideological and geopolitical struggle. Such a transformation may represent a shift in the pursuit of the United States’ national interest, but what is striking about this case is that a choice made at a geo-strategic, Great Power level produces unmediated consequences for the opinions and rights of expression of individual citizens and organizations.

This aspect in turn ties into the debate over the legitimacy of platform content moderation policies. The aspiration may well be to couch such policies in universalist terms, and even to take international human rights law as a framework or model; in practice, however, common moral prescriptions against violence scale poorly from the level of individuals in civil society to that of power politics and international relations, while the platforms’ content moderation norms are immersed in a State-controlled legal setting which, far from being neutral, is decisively shaped by States’ ideological and strategic preferences.

Data security and surveillance legitimacy

To no-one’s surprise, the Department of Homeland Security and Customs and Border Protection have fallen victim to successful hacks of the biometric data they mass-collect at the border. The usual neoliberal dance of private subcontracting of public functions further exacerbated the problem. According to the DHS Office of the Inspector General,

[t]his incident may damage the public’s trust in the Government’s ability to safeguard biometric data and may result in travelers’ reluctance to permit DHS to capture and use their biometrics at U.S. ports of entry.

No kidding. Considering the oft-documented invasiveness of data harvesting practices by the immigration-control complex and the serious real-world repercussions in terms of policies and ordinary people’s lives, the problem of data security should be front-and-center in public policy debates. The trade-off between the expected value to be gained from surveillance and the risk of unauthorized access to the accumulated information (which also implies the potential for the corruption of the database) must be considered explicitly: as it is, these leaks and hacks are externalities the public is obliged to absorb because the agencies have scant incentive to monitor their data troves properly.

Media manipulation

Earlier, I attended an online workshop organized by the Harvard Kennedy School’s Technology and Social Change Project on media manipulation in the context of the 2020 US presidential campaign. Very productive conversation on the tailoring of disinformation memes to the various minority communities. I also learned about the Taiwanese “humor over rumor” strategy…

Long-term erosion of trust

Interesting article on Slashdot about the historical roots of the weaponization of doubt and scientific disagreement by special interests.

It is notable that these phenomena began to operate at scale with the pervasive engagement of corporations in American politics in the 1970s and ’80s: this was the moment when business as a whole detached from automatic support for a particular political party (choosing its battles, and the champions to fight them –whether financing an insurgent movement, litigation, legislative lobbying, and so forth– on a case-by-case basis), and it was also the dawn of the end-of-ideologies era. Edward Walker discusses these themes well in Grassroots for Hire (2014).

As for the present predicament, one is reminded of an NYT op-ed from last year by William Davies, “Everything Is War and Nothing Is True” on public political discourse:

Social media has introduced games of strategy into public discourse, with deception and secrecy — information warfare — now normal parts of how arguments play out

or of a piece from around the same time by Zeynep Tufekci on the commercial side of things:

The internet is increasingly a low-trust society—one where an assumption of pervasive fraud is simply built into the way many things function.

There definitely seem to be systemic aspects to this problem.

Surveillance acquiescence conundrum

A recent interview with Ciaran Martin, the outgoing chief of the UK’s National Cyber Security Centre, raises the alarm against Chinese attempts at massive data harvesting in the West (specifically in regard to the development of AI). This issue naturally dovetails with the US debate over banning TikTok. Herein lies the problem. Over the past decade or two, both national security agencies and major social media companies have endeavored to normalize perceptions of industrial-scale data collection and surveillance: that public opinion might be desensitized to the threat posed by foreign actors with access to similar data troves is therefore not surprising. The real challenge in repurposing a Cold War mentality for competition with China in the cyber domain, in other words, is not so much a lag in Western –especially European– ICT innovation (Martin himself is slipping into a pantouflage position with a tech venture capital firm): it is a lack of urgency, of political will in the society at large, an apathy bred in part by acquiescence in surveillance capitalism.