Vice reports on a Tokyo-based company, DeepScore, pitching software for the automatic recognition of ‘trustworthiness’, e.g. in loan applicants. Although their claimed false-negative rate of 30% may not sound particularly impressive, it must of course be weighed against well-known human biases in lending decisions. Perhaps more interesting is the instrumentalization cycle, which is all but assured to take place if DeepScore’s algorithm gains wide acceptance. On the one hand, the algorithm’s goal is to create a precise definition for a broad and vague human characteristic like trustworthiness, that is, to operationalize it. Then, if the algorithm is successful on its training sample and becomes adopted by real-world decision-makers, the social power of the adopters reifies the research hypothesis: trustworthiness becomes what the algorithm says it is (because money talks). Thus, the behavioral redefinition of a folk psychology concept comes to fruition. On the other hand, however, instrumentalization immediately kicks in, as users attempt to game the operationalized definition by presenting the algorithmically approved symptoms without the underlying condition (sincerity). Hence, the signal loses strength, and the cycle completes. The fact that DeepScore’s trustworthiness algorithm is intended for credit markets in South-East Asia, home to populations without access to traditional credit-scoring channels, merely clarifies the ‘predatory inclusion’ logic of such practices (v. supra).
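The cycle described above can be sketched in a toy simulation. Everything here is hypothetical, bearing no relation to DeepScore’s actual model: a score operationalizes a latent trait through an observable proxy, and once applicants start optimizing the proxy directly, the score’s correlation with the trait collapses.

```python
import random

random.seed(0)

def correlation(xs, ys):
    # Pearson correlation, computed by hand to stay dependency-free.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def simulate(gaming=0.0, n=10_000):
    """Score applicants on an observable proxy of a latent trait.

    gaming: fraction of the proxy behaviour that applicants fake
    outright, independent of the underlying trait (a made-up
    parameter for illustration only).
    """
    traits, scores = [], []
    for _ in range(n):
        trait = random.gauss(0, 1)            # latent 'trustworthiness'
        honest_signal = trait + random.gauss(0, 0.5)
        faked_signal = random.gauss(2, 0.5)   # gamed symptoms look great
        proxy = (1 - gaming) * honest_signal + gaming * faked_signal
        traits.append(trait)
        scores.append(proxy)
    return correlation(traits, scores)

before = simulate(gaming=0.0)   # before wide adoption
after = simulate(gaming=0.9)    # after the proxy is widely gamed
print(f"signal before gaming: r = {before:.2f}")
print(f"signal after gaming:  r = {after:.2f}")
```

With no gaming the proxy tracks the trait closely; once most of the observed behaviour is performance for the scorer, the correlation drops toward zero, which is the ‘signal loses strength’ step of the cycle in miniature.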
Several reports are circulating (e.g., via /.) of a court case in New Jersey in which the defendant won the right to audit proprietary genetic testing software for errors and potential sources of bias. As it is a murder trial, this is about as close to a life-or-death use case as possible.
Given the stakes, it is understandable that a low-trust standard should prevail in forensic matters, rendering an audit indispensable (and the firm’s “complexity defence” is nothing short of untenable). What is surprising, rather, is how long it took to obtain this type of judicial precedent. The authoritativeness deficit of algorithms is a hotly debated topic generally; that in such a failure-critical area a business model based on proprietary secrecy has managed to survive is truly remarkable. It is safe to say that this challenge will hardly be the last. Ultimately, freely auditable software would seem to be the superior systemic answer for this type of application.