
Victory lap for the EIP

Today, I followed the webcast featuring the initial wrap-up of the Election Integrity Partnership (I have discussed their work before). All the principal institutions composing the partnership (Stanford, University of Washington, Graphika, and the Atlantic Council) were represented on the panel. It was, in many respects, a victory lap, given the consensus view that foreign disinformation played a marginal role in the election compared to 2016, thanks in part to proactive engagement by the large internet companies, facilitated by projects such as the EIP.

In describing the 2020 disinformation ecosystem, Alex Stamos (Stanford Internet Observatory) characterized it as mostly home-grown, non-covert, and reliant on major influencers, which in turn forced platforms to transform their content moderation activities into a sort of editorial policy (I have remarked on this trend before). Also, for all the focus on social media, TV was seen to play a very important role, especially in building background narratives over the long run, day by day.

Camille François (Graphika) remarked on the importance of alternative platforms, and indeed the trend has been for an expansion of political discourse to all manner of fora previously insulated from it (on this, more soon).

The disinformation memes that made it into the mainstream conversation (Stamos mentioned the example of Sharpie-gate) shared certain characteristics: they tended to appeal to a rival technical authority, so as to project an expert-vs-expert dissonance, and they were made sticky by official endorsement, which turned debunking into a partisan enterprise. However, their very predictability made it easier for the platforms to limit their spread. Kate Starbird (UW) summed it up: if the story in 2016 was coordinated inauthentic foreign action, in 2020 it was authentic, loosely-coordinated domestic action, and the major institutional players (at least on the tech infrastructure side) were ready to handle it.

It makes sense for the EIP to celebrate how disinformation was contained in 2020 (as Graham Brookie of the Atlantic Council put it, it was a win for defence in the ongoing arms race over foreign interference), but it is not hard to see how such an assessment masks a bigger crisis, one that would have been evident had there been a different victor. Overall trust in the informational ecosystem has been dealt a further, massive blow by the election, and hyper-polarized post-truth politics are nowhere near over. Indeed, the attempts currently underway to fashion a Dolchstoßlegende (a stab-in-the-back myth) are liable to have very significant systemic effects going forward. The very narrow focus on disinformation the EIP pursued may have paid off for now, but it is the larger picture of entrenched public distrust that guarantees these problems will persist into the future.

Censorship about censorship

In further news on a story I posted about in late September, it has now surfaced that Zoom has cancelled academic events scheduled to discuss its previous cancellation. Beyond the political merits of the issue, from a business standpoint I suspect that the company’s position will quickly become untenable, and that if it persists in its current interpretation of its ToS its competitors will crowd it out of the academic market (already, one of the cancelled events was able to migrate to Google Meet). Telling universities to refrain from discussion is like telling a rivet factory to refrain from metalworking. The fact that in this situation it has become impossible to draw a frame around an issue, so as to modify the sociolinguistic norms that preside over it, has produced a surreal outcome: this is not an equilibrium, and it is destined to change.

The hustle and the algorithm

Various interesting new pieces on the experience of the algorithmically-directed gig economy. The proximate cause for interest is the upcoming vote in California on Prop. 22, a gig industry-sponsored ballot initiative to overturn some of the labor protections for gig workers enacted by the California legislature last year with AB 5.

Non-compliance with the regulations enacted by this statute has been widespread and brazen among the market leaders in the gig economy, who now hope to cancel the law outright through direct democracy (as special interests in California have often done in the past). Ride-sharing companies such as Uber and Lyft have threatened to leave the state altogether unless these regulations are dropped, thus putting pressure on their workforce to support the ballot initiative at the polls.

Of course, the exploitative potential in US labor law and relations long pre-dates the platforms and the gig economy. However, with respect to at least some of these firms, it is legitimate to ask whether any substantial value is being produced through technological innovation, or whether their market profitability relies essentially on the ability to squeeze more labor out of their workers.

In this sense, and in parallel with the (COVID-accelerated) transition out of a jobs-based model of employment, the gig economy co-opts the evocative potential of entrepreneurialism, especially in its actually-existing form as the self-exploitation dynamics of American immigrant culture. Also, it is hard to miss the gender and race subtexts of this appeal to entrepreneurialism. As one thoughtful article in Dissent puts it, many of the innovative platforms are really targeted to subprime markets:

[t]he platform economy is a stopgap to overcome exclusion, and a tool used to target people for predatory inclusion.

Hence the algorithm as a flashpoint in labor relations: it is where the idealized notion of individual striving and the hustle meets the systemic limits of an extractive economy, and its very opacity fuels mistrust in the platforms’ intentions.

Violence, content moderation, and IR

Interesting article by James Vincent in The Verge about a decision by Zoom, Facebook, and YouTube to shut down a university webinar over fears that it would disseminate opinions advocating violence “carried out by […] criminal or terrorist organizations”. The case sits strategically at the intersection of several recent trends.

On the one hand, de-platforming as a way to express outrage at the views of an invited speaker is a tactic that has often been used, especially on college campuses, even before the pandemic and for in-person events. However, it appears that the pressure in this specific case was brought to bear by external organizations and lobby groups, without a visible grassroots presence within the higher education institution in question, San Francisco State University. Moreover, such pressure was exerted through threats of legal liability directed not at SFSU, but at the third-party, commercial platforms enabling diffusion of the event, which was to be held as a remote-only webinar for epidemiological reasons. Therefore, the university’s decision to organize the event was thwarted not by the pressure of an in-person crowd and the risk of public disturbances, but by the choice of a separate, independent actor imposing external limitations derived from its own Terms of Service when faced with potential litigation.

The host losing agency to the platform is not the only story these events tell, though. It is not coincidental that the case involves the Israeli-Palestinian struggle, and that the de-platformed individual was a member of the Popular Front for the Liberation of Palestine who participated in two plane hijackings in 1969-70. The transferral of an academic discussion to an online forum short-circuited the ability academic communities have traditionally enjoyed to re-frame discussions on all topics, even dangerous, taboo, or divisive ones, as being about analyzing and discussing, not about advocating and perpetrating. At the same time, post-9/11 norms and attitudes in the US have applied a criminal lens to actions and events that in their historical context represented moves in an ideological and geopolitical struggle. Such a transformation may represent a shift in the pursuit of the United States’ national interest, but what is striking about this case is that a choice made at a geo-strategic, Great Power level produces unmediated consequences for the opinions and rights of expression of individual citizens and organizations.

This aspect in turn ties into the debate on the legitimacy grounds of platform content moderation policies. The aspiration may well be to couch such policies in universalist terms, and even to take international human rights law as a framework or a model; in practice, however, common moral prescriptions against violence scale poorly from the level of individuals in civil society to that of power politics and international relations, while the platforms’ content moderation norms are embedded in a State-controlled legal setting which, far from being neutral, is decisively shaped by States’ ideological and strategic preferences.

Inter-institutional trust deficit

Piece in Axios about tech companies’ contingency planning for election night and its aftermath. The last paragraph sums up the conundrum:

Every group tasked with assuring Americans that their votes get counted — unelected bureaucrats, tech companies and the media — already faces a trust deficit among many populations, particularly Trump supporters.

In this case it is not even clear whether a concurrence of opinion and a unified message would strengthen the credibility of these actors and of their point of view, or rather confirm sceptics even further in their conspiracy beliefs.