Tag Archives: Platforms

The hustle and the algorithm

Various interesting new pieces on the experience of the algorithmically-directed gig economy. The proximate cause for interest is the upcoming vote in California on Prop. 22, a gig industry-sponsored ballot initiative to overturn some of the labor protections for gig workers enacted by the California legislature last year with AB 5.

Non-compliance with the regulations enacted by this statute has been widespread and brazen among the market leaders in the gig economy, who now hope to overturn the law outright through direct democracy (as special interests in California have often done in the past). Ride-sharing companies such as Uber and Lyft have threatened to leave the state altogether unless these regulations are dropped, thereby pressuring their workforce to support the ballot initiative at the polls.

Of course, the exploitative potential in US labor law and relations long pre-dates the platforms and the gig economy. However, with respect to at least some of these firms, it is a legitimate question to ask whether there is any substantial value being produced via technological innovation, or whether their market profitability relies essentially on the ability to squeeze more labor out of their workers.

In this sense, and in parallel with the (COVID-accelerated) transition out of a jobs-based model of employment, the gig economy co-opts the evocative potential of entrepreneurialism, especially in its actually-existing form as the self-exploitation dynamics of American immigrant culture. Also, it is hard to miss the gender and race subtexts of this appeal to entrepreneurialism. As one thoughtful article in Dissent puts it, many of the innovative platforms are really targeted to subprime markets:

[t]he platform economy is a stopgap to overcome exclusion, and a tool used to target people for predatory inclusion.

Hence the algorithm as flashpoint in labor relations: it is where the idealized notion of individual striving and the hustle meets the systemic limits of an extractive economy; its very opacity fuels mistrust in the intentions of the platforms.

Violence, content moderation, and IR

Interesting article by James Vincent in The Verge about a decision by Zoom, Facebook, and YouTube to shut down a university webinar over fears of disseminating opinions advocating violence “carried out by […] criminal or terrorist organizations”. The case is strategically placed at the intersection of several recent trends.

On the one hand, de-platforming to express outrage at the views of an invited speaker is a tactic that has often been used, especially on college campuses, since well before the pandemic, when events were still held in person. In this specific case, however, the pressure appears to have been brought to bear by external organizations and lobby groups, without a visible grassroots presence within the higher-education institution in question, San Francisco State University. Moreover, such pressure was exerted through threats of legal liability aimed not at SFSU but at the third-party commercial platforms enabling diffusion of the event, which was to be held as a remote-only webinar for epidemiological reasons. The university's decision to hold the event was therefore thwarted not by the pressure of an in-person crowd and the risk of public disturbance, but by the choice of a separate, independent actor imposing external limitations derived from its own Terms of Service when faced with potential litigation.

The host losing agency to the platform is not the only story these events tell, though. It is no coincidence that the case involves the Israeli-Palestinian struggle, and that the de-platformed individual was a member of the Popular Front for the Liberation of Palestine who participated in two plane hijackings in 1969-70. The transfer of an academic discussion to an online forum short-circuited the ability academic communities have traditionally enjoyed to re-frame discussions on all topics (even dangerous, taboo, or divisive ones) as being about analyzing and discussing, not about advocating and perpetrating. At the same time, post-9/11 norms and attitudes in the US have applied a criminal lens to actions and events that in their historical context represented moves in an ideological and geopolitical struggle. Such a transformation may represent a shift in the pursuit of the United States’ national interest, but what is striking about this case is that a choice made at a geo-strategic, Great Power level produces unmediated consequences for the opinions and expressive rights of individual citizens and organizations.

This aspect in turn ties into the debate over the legitimacy of platform content-moderation policies. The aspiration may well be to couch such policies in universalist terms, and even to take international human rights law as a framework or model. In practice, however, common moral prescriptions against violence scale poorly from the level of individuals in civil society to that of power politics and international relations, while the platforms’ content-moderation norms are embedded in a State-controlled legal setting which, far from being neutral, is decisively shaped by States’ own ideological and strategic preferences.

Inter-institutional trust deficit

Piece in Axios about tech companies’ contingency planning for election night and its aftermath. The last paragraph sums up the conundrum:

Every group tasked with assuring Americans that their votes get counted — unelected bureaucrats, tech companies and the media — already faces a trust deficit among many populations, particularly Trump supporters.

In this case it is not even clear whether a concurrence of opinion and a unified message would strengthen the credibility of these actors and their point of view, or rather entrench sceptics even further in their conspiracy beliefs.

Ideological balance in banning

Interesting article in The Intercept about Facebook’s attempt to achieve ideological balance in its banning practices by juxtaposing its purge of QAnon-related accounts with one of Antifa ones. Whether such equivalence is at all warranted on its merits is largely beside the point: FB finds itself in exactly the same situation as the old-media publishers of yore, desperate for the public to retain the perception of its equidistance. Antifa was merely the most media-salient target available for this type of operation.

It is unclear to me whether a significant middle-ground public still cares about this type of equidistance in its editorial gatekeepers, so perhaps the more cynical suspicions, such as Natasha Lennard’s, that this is simply a move to curry favor with the current Administration in the middle of an election, are not off-track. What is more significant in the long term is that the content-moderation scrutiny FB now undergoes, chiefly because of its size, will only intensify going forward, forcing it to conduct ever more of these censoring operations. This restriction on debate will, in turn, progressively push more radical political discourse elsewhere online.

On the whole, I think this is a positive development: organizations that think of themselves as radically anti-establishment should own up to the fact that there is no reason they should count on being platformed by so integral a part of the contemporary establishment as FB. Public space for political mobilization is not confined to the internet, and the internet is not confined to giant social media platforms.

Evolving channels of communication for protests

I just read a story by Tanya Basu (in the MIT Technology Review) about the use of single-page websites (created through services such as Bio.fm and Carrd) to convey information about recent political mobilizations in the US. It is very interesting how the new generation of social-justice activists is weaning itself off exclusive reliance on the major social media platforms in its search for anonymity, simplicity, and accessibility. These ways of communicating information, as Basu underlines, bespeak an anti-influencer mentality: it’s the info that comes first, not the author.

It is too early to say whether the same issues of content moderation, pathological speech, and censorship will crop up on these platforms as well, but for the time being it is good to see some movement in this space.