Vice reports on a Tokyo-based company, DeepScore, pitching software for the automatic recognition of ‘trustworthiness’, e.g. in loan applicants. Although their claimed false-negative rate of 30% may not sound particularly impressive, it must of course be weighed against well-known human biases in lending decisions. Perhaps more interesting is the instrumentalization cycle, which is all but assured to take place if DeepScore’s algorithm gains wide acceptance. On the one hand, the algorithm’s goal is to create a precise definition of a broad and vague human characteristic like trustworthiness—that is to say, to operationalize it. Then, if the algorithm succeeds on its training sample and is adopted by real-world decision-makers, the social power of the adopters reifies the research hypothesis: trustworthiness becomes what the algorithm says it is (because money talks). Thus, the behavioral redefinition of a folk-psychology concept comes to fruition. On the other hand, however, instrumentalization immediately kicks in, as users attempt to game the operationalized definition by presenting the algorithmically-approved symptoms without the underlying condition (sincerity). Hence, the signal loses strength, and the cycle completes. The fact that DeepScore’s trustworthiness algorithm is intended for credit markets in South-East Asia, where there are populations without access to traditional credit-scoring channels, merely clarifies the ‘predatory inclusion’ logic of such practices (v. supra).
Just followed the Medium book launch event for the print edition of Cory Doctorow’s latest, How to Destroy Surveillance Capitalism (free online version here). The pamphlet, from August 2020, was originally intended as a rebuttal of Shoshana Zuboff’s The Age of Surveillance Capitalism [v. supra]. The main claim is that the political consequences of surveillance capitalism were not, as Zuboff maintains, unintended, but rather are central and systemic to the functioning of the whole. Hence, proposed solutions cannot be limited to the technological or economic sphere, but must be political as well. Specifically, Doctorow identifies trust-busting as the main policy tool for reining in Big Tech.
With the hindsight afforded by the 2020 election cycle and its aftermath, two points Doctorow made in the presentation stand out most vividly. The first is the link between market power and the devaluing of expert opinion that is a necessary forerunner of disinformation. The argument is that “monopolies turn truth-seeking operations [such as parliamentary committee hearings, expert testimony in court, and so forth] into auctions” (where the deepest pockets buy the most favorable advice), thereby completely discrediting their information content for the general public. The second point is that almost all of the grievances currently voiced about Section 230 (the liability shield for online publishers of third-party materials) are at some level grievances about monopoly power.
I am following (just like everyone else) the developing GameStop story. Beyond the financial technicalities, what is interesting for present purposes is that the dynamics of internet virality seem to be finding a close parallel in stock valuation. The term “meme stock” is telling. In other words, at present the online coordination mechanisms, the capital, and the nihilistic boredom are all available to craft an alternative description of reality, which in turn is self-reinforcing (until it isn’t).
A long, powerful essay in The Baffler about the new antitrust actions against Big Tech in the US and the parallels being drawn with the tobacco trials of the 1990s. I agree with its core claim, that equating the problem Big Tech poses with a personal addiction one (a position promoted inter alios by the documentary The Social Dilemma) minimizes the issue of economic dependency and the power it confers on the gatekeepers of key digital infrastructure. I have argued previously that this is at the heart of popular mistrust of the big platforms. However, the pursuit of the tech giants in court risks being hobbled by the lasting effect of neoliberal thought on antitrust architecture in US jurisprudence and regulation. Concentrating on consumer prices in the short run risks missing the very real ways in which tech companies can exert systemic social power. In their quest to rein in Big Tech, US lawmakers and attorneys will be confronted with much deeper and more systemic political-economy issues. It is unclear whether they will be able to win this general philosophical argument against such powerful special interests.
Various interesting new pieces on the experience of the algorithmically-directed gig economy. The proximate cause for interest is the upcoming vote in California on Prop. 22, a gig industry-sponsored ballot initiative to overturn some of the labor protections for gig workers enacted by the California legislature last year with AB 5.
Non-compliance with the regulations enacted by this statute has been widespread and brazen among the market leaders in the gig economy, who now hope to cancel the law outright through direct democracy (as special interests in California have often done in the past). Ride-sharing companies such as Uber and Lyft have threatened to leave the state altogether unless these regulations are dropped, thus putting pressure on their workforce to support the ballot initiative at the polls.
Of course, the exploitative potential in US labor law and relations long pre-dates the platforms and the gig economy. However, with respect to at least some of these firms, it is legitimate to ask whether any substantial value is being produced through technological innovation, or whether their market profitability relies essentially on the ability to squeeze more labor out of their workers.
In this sense, and in parallel with the (COVID-accelerated) transition out of a jobs-based model of employment, the gig economy co-opts the evocative potential of entrepreneurialism, especially in its actually-existing form: the self-exploitation dynamics of American immigrant culture. Also, it is hard to miss the gender and race subtexts of this appeal to entrepreneurialism. As one thoughtful article in Dissent puts it, many of the innovative platforms are really targeted at subprime markets:
[t]he platform economy is a stopgap to overcome exclusion, and a tool used to target people for predatory inclusion.
Hence the algorithm as a flashpoint in labor relations: it is where the idealized notion of individual striving and the hustle meets the systemic limits of an extractive economy, and its very opacity fuels mistrust in the intentions of the platforms.