The recent, decisive defeat of the unionization drive at Amazon’s fulfillment center in Bessemer, Alabama teaches several lessons, not all of them mutually compatible. First, specifically local conditions were in play, which calls into question the overall strategy of attempting to start the U.S. unionization of Amazon from the Deep South. The outcome, however, can equally be read as a sign that, in the current crisis economy, workers are prepared to put up with more or less any employer practices and working conditions in order not to jeopardize their employment status, especially in jobs paying efficiency wages. It can, alternatively, be seen as confirmation that giant tech companies, for all their claims to discontinuity and disruption, have mastered the traditional playbook of pugilistic industrial relations developed by old-economy businesses over the past fifty years. It can be interpreted as a statement that the progressive electoral coalition that swept the Democratic Party back into power at the federal level between November and January has not effected a sea change in public opinion with regard to labor rights and representation. It can further be considered, in conjunction with the easy passage of Prop. 22 in California last fall, as evidence that there is scant public belief that the ills of the soft underbelly of the tech economy can be righted by means of twentieth-century policy solutions.
Whatever the lessons learned, the unavoidable conclusion is that, in the United States at least, the power of Big Tech will not be reined in by organized labor alone (even though industrial militancy in the Amazon workforce continues, in less conventional and institutionalized ways). Nonetheless, recent media attention focused on Amazon workplace practices has created a series of PR embarrassments for the company: it remains to be seen whether these will ultimately cement a certain organizational reputation, and whether such a reputation can in turn have regulatory or, especially, financial implications down the line (as has recently been the case in other jurisdictions).
Yesterday I attended the online launch event for Edgelands, a pop-up institute being incubated at Harvard’s Berkman Klein Center. The Institute’s goal is to study how our social contract is being redrawn, especially in urban areas, as a consequence of technological changes such as pervasive surveillance and unforeseen crises such as the global pandemic. The Institute’s design is very distinctive: it is time-limited (five years), radically decentralized, and aims to bridge gaps between perspectives and methodologies as diverse as academic research, public policy, and art. It is also notable for its focus on rest-of-world urban dynamics outside the North Atlantic space (Beirut, Nairobi, and Medellín are among the pilot cities). Some of its initiatives, from what can be gleaned at the outset, appear a bit whimsical, but it will be interesting to follow the Institute’s development, as a fresh approach to these topics could prove extremely inspiring.
I managed to catch a screening of the new Shalini Kantayya documentary, Coded Bias, through EDRi. It tells the story of Joy Buolamwini’s discovery of systematic discrepancies in the performance of facial-recognition algorithms across races and genders. The tone was lively and accessible, with a good tempo, and the cast of characters did a good job showcasing a cross-section of female voices in the tech policy space. It was particularly good to see several authors who appear on my syllabus, such as Cathy O’Neil, Zeynep Tufekci, and Virginia Eubanks.
How is the influencer ecosystem evolving? Opposing forces are in play.
On the one hand, a NYT story describes symptoms of consolidation along the pathway from large organic online followings to brand ambassadorships. As influencing becomes a day job, stably inserted in the consumer-brand/advertising nexus, the informal, supposedly unmediated communication typical of social media quickly becomes unwieldy for business negotiations: at scale, professional intermediaries are necessary to manage transactions between the holders of social-media capital/cred and the business interests wishing to leverage it. A rather more disenchanted, normalized, workaday image of influencer life thereby emerges.
On the other hand, a Vulture profile of an influencer whose personal magnetism is matched only by her ability to offend (warning: NSFW) signals that normalization may ultimately be self-defeating. The intense and disturbing personal trajectory of Trisha Paytas suggests that the taming of internet celebrity for commercial purposes is a never-ending, Sisyphean endeavor, for the currency involved is authenticity, whose seal of approval lies outside market transactions. The biggest crowds on the internet are still drawn by titillation or outrage, although those who provide them may not thereby be suited to sell much of anything, except themselves.
Interesting article in the MIT Tech Review (via /.) detailing research performed at Northwestern University (paper on arXiv) on how to leverage the power of collective action to counter pervasive data collection by internet companies. Three such methods are discussed: data strikes (refusing to use data-invasive services), data poisoning (providing false and misleading data), and conscious data contribution (giving one’s data to privacy-respecting competitors).
Conscious data contribution and data strikes are relatively straightforward Aventine secessions, but depend decisively on the availability of alternative services (or the acceptability of degraded performance for the mobilized users on less-than-perfect substitutes).
The effectiveness of data poisoning, on the other hand, turns on the type of surveillance one is trying to stifle (as I have argued in I labirinti). If material efficacy is at stake, it can be decisive (e.g., faulty info can make a terrorist manhunt fail). Unsurprisingly, this type of strategic disinformation has driven the plots of many works of fiction, with and without AIs. But if what’s at stake is the perception of efficacy, data poisoning is only an effective counterstrategy inasmuch as it destroys the legitimacy of the decisions made on the basis of the collected data (at what point, for instance, do advertisers stop working with Google because its database is irrevocably compromised?). In some cases of AI/ML adoption, in which the offloading of responsibility and the containment of costs are the foremost goals, there already is very broad tolerance for bias (i.e., faulty training data).
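The threshold dynamic can be made concrete with a toy simulation (my own sketch, not taken from the Northwestern paper): a platform learns a trivial targeting rule from reported data, and label-flipping poisoning leaves the learned rule intact until the poisoned share crosses a majority, at which point the rule inverts entirely. Below that threshold, the system simply tolerates the bad data.

```python
import random

random.seed(42)

def learned_model(users):
    """Platform-side learner: for each feature value, predict the
    majority *reported* label (a stand-in for a targeting model)."""
    votes = {0: [0, 0], 1: [0, 0]}
    for feature, reported in users:
        votes[feature][reported] += 1
    return {f: (0 if votes[f][0] >= votes[f][1] else 1) for f in votes}

def simulate(n_users=10_000, poison_rate=0.0):
    """Each user's true label equals their feature; poisoners report
    the flipped label. Returns accuracy against true labels."""
    users = []
    for _ in range(n_users):
        feature = random.choice([0, 1])
        true_label = feature
        reported = 1 - true_label if random.random() < poison_rate else true_label
        users.append((feature, reported))
    model = learned_model(users)
    # True label == feature, so score the model against the feature itself.
    correct = sum(model[f] == f for f, _ in users)
    return correct / n_users

for rate in (0.0, 0.3, 0.6):
    print(f"poison rate {rate:.0%}: targeting accuracy {simulate(poison_rate=rate):.0%}")
```

At 30% poisoning the majority signal survives and accuracy stays perfect; past 50% the learned rule flips wholesale. Real systems are messier, but the sketch captures why sub-majority poisoning bites only at the level of legitimacy, not of raw model performance.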
Hence, in general, the fix is not exclusively technical: political mobilization is needed to cash in on the contradictions these data-activism interventions bring to light.