Behavioral redefinition

Vice reports on a Tokyo-based company, DeepScore, pitching software for the automatic recognition of ‘trustworthiness’, e.g. in loan applicants. Although their claimed false-negative rate of 30% may not sound particularly impressive, it must of course be compared to well-known human biases in lending decisions. Perhaps more interesting is the instrumentalization cycle, which is all but assured to take place if DeepScore’s algorithm gains wide acceptance. On the one hand, the algorithm’s goal is to give a precise definition to a broad and vague human characteristic like trustworthiness, that is, to operationalize it. Then, if the algorithm is successful on its training sample and becomes adopted by real-world decision-makers, the social power of the adopters reifies the research hypothesis: trustworthiness becomes what the algorithm says it is (because money talks). Thus, the behavioral redefinition of a folk-psychology concept comes to fruition. On the other hand, instrumentalization immediately kicks in, as users attempt to game the operationalized definition by presenting the algorithmically approved symptoms without the underlying condition (sincerity). Hence, the signal loses strength, and the cycle completes. The fact that DeepScore’s trustworthiness algorithm is intended for credit markets in South-East Asia, where many people lack access to traditional credit-scoring channels, merely clarifies the ‘predatory inclusion’ logic of such practices (v. supra).
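
The dynamic can be illustrated with a toy simulation (purely hypothetical numbers and distributions, in no way a model of DeepScore’s system): an operationalized score starts out as a noisy but informative proxy for a latent trait, and loses much of its signal once applicants begin to optimize the score itself.

```python
# Toy sketch of the instrumentalization cycle (illustrative assumptions only,
# not DeepScore's method): a proxy score is informative while it is a passive
# measurement, and degrades once applicants optimize the proxy directly.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

trust = rng.normal(0.0, 1.0, n)      # latent 'trustworthiness' (unobservable)
noise = rng.normal(0.0, 0.5, n)      # measurement noise in the proxy

# Phase 1: the operationalized score is an honest, if noisy, measurement.
score_before = trust + noise

# Phase 2: once the score gates credit, applicants invest effort in showing
# the approved symptoms; that effort is unrelated to the latent trait.
gaming = rng.exponential(1.0, n)
score_after = trust + noise + gaming

for label, score in [("before adoption", score_before),
                     ("after gaming", score_after)]:
    r = np.corrcoef(trust, score)[0, 1]
    print(f"correlation(trust, score) {label}: {r:.2f}")
```

The point is simply Goodhart’s law in miniature: once the proxy becomes the target, the correlation that justified its adoption decays.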

Trustworthiness of unfree code

Several reports are circulating (e.g., via /.) of a court case in New Jersey in which the defendant won the right to audit proprietary genetic testing software for errors or potential sources of bias. It being a murder trial, this is about as close to a life-or-death use-case as possible.

Given the stakes, it is understandable that a low-trust standard should prevail in forensic matters, rendering an audit indispensable (and the firm’s “complexity defence” is nothing short of untenable). What is surprising, rather, is how long it took to obtain this type of judicial precedent. The authoritativeness deficit of algorithms is a hotly debated topic in general; that in such a failure-critical area a business model based on proprietary secrecy has managed to survive is truly remarkable. It is safe to say that this challenge will hardly be the last. Ultimately, freely auditable software would seem to be the superior systemic answer for this type of application.

Lye machines

Josephine Wolff (Slate) reports on the recent hack of the water treatment plant in Oldsmar, FL. Unknown intruders remotely accessed the plant’s controls and attempted to increase the lye content of the town’s water supply to potentially lethal levels. The case is notable in that the human fail-safe (the plant operator on duty) successfully counterbalanced the machine vulnerability, catching the hack as it was taking place and overriding the automatic controls, so that no real-world adverse effects ultimately occurred.
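
As a purely hypothetical sketch (not a description of Oldsmar’s actual control software, and with made-up numbers), the kind of human-machine redundancy that saved the day can be thought of as a bounds check that refuses to apply an out-of-range setpoint without explicit operator confirmation:

```python
# Hypothetical sketch of human-machine redundancy in a dosing controller:
# setpoint changes outside an assumed safe envelope are not applied unless a
# human operator explicitly confirms them. Numbers are illustrative only.
SAFE_LYE_PPM = (0.0, 150.0)  # assumed safe sodium hydroxide dosing range

def request_setpoint(current_ppm, requested_ppm, operator_confirms):
    """Return the setpoint to apply: in-range requests pass through,
    out-of-range requests require explicit human confirmation."""
    lo, hi = SAFE_LYE_PPM
    if lo <= requested_ppm <= hi:
        return requested_ppm
    if operator_confirms(current_ppm, requested_ppm):
        return requested_ppm
    print(f"Rejected setpoint {requested_ppm} ppm; keeping {current_ppm} ppm")
    return current_ppm

# A remote intruder's extreme request is caught by the bounds check and,
# with the operator on duty refusing to confirm it, the old value is kept.
applied = request_setpoint(100.0, 11_100.0, operator_confirms=lambda cur, req: False)
```

The design point is simply that the automated layer fails safe and the human on duty stays in the loop for anomalous commands.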

What moral can be drawn? It is reasonable to argue, as Wolff does, against full automation: human supervision still has a vital role to play in the resiliency of critical control systems, precisely through this kind of human-machine redundancy. However, what Wolff does not mention is that this modus operandi may itself be interpreted as a signature of sorts (although no attribution has appeared in the press so far): it speaks of amateurism or of a proof-of-concept stunt; in any case, of an actor not planning to do any serious damage. Otherwise, it is highly improbable that there would have been no parallel attempt at social engineering of (or other types of attacks against) on-site technicians. After all, as the old security engineering nostrum goes: rookies target technology, pros target people.

Free speech and monetization

Yesterday, I attended an Electronic Frontier Foundation webinar in the ‘At Home with EFF’ series on Twitch: the title was ‘Online Censorship Beyond Trump and Parler’. Two panels hosted several veterans and heavyweights in the content moderation/trust & safety field, followed by a wrap-up session presenting EFF positions on the topics under discussion.

Several interesting points emerged with regard to the interplay of market concentration, free speech concerns, and the incentives inherent in the dominant social media business model. The panelists reflected on the long run, identifying recurrent patterns, such as the economic imperative driving infrastructure companies from being mere conduits of information to becoming active amplifiers, and hence inevitably getting embroiled in moderation. While neutrality and non-interference may be the preferred ideological stance for tech companies, at least publicly, editorial decisions are made necessary by the prevailing monetization model: the market for attention and engagement.

Perhaps the most interesting insight, however, emerged from the discussion of the intertwining of free speech online with the way in which such speech is (or is not) allowed to make itself financially sustainable. Specifically, the case was made for the importance of the myriad choke points up and down the stack where those who wish to silence speech can exert pressure: if cloud computing cannot be denied to a platform in the name of anti-discrimination, should credit card verification or merch, for instance, also be protected categories?

All in all, nothing shockingly novel; it is worth being reminded, however, that a wealth of experience in the field has already accrued over the years, so that single companies (and legislators, academics, the press, etc.) need not reinvent the wheel each time trust & safety or content moderation are on the agenda.

Research on politics