Barlow as Rorschach test

An op-ed by Joshua Benton on the Nieman Lab website marking the first quarter-century of John Perry Barlow’s Declaration of the Independence of Cyberspace.

Unpacking the different facets of Barlow’s personality and worldview goes a long way toward mapping out early internet ideology: most everyone finds parts to admire as well as intimations of disasters to come. The protean nature of the Declaration’s author helps in the process. Was Barlow Dick Cheney’s friend or Ed Snowden’s? Was he a scion of Wyoming cattle-ranching royalty or a Grateful Dead lyricist? Was he part of the Davos digerati or a defender of civil rights and co-founder of the EFF? All of these, of course, and much else besides. Undeniably, Barlow had a striking way with words, matched only by a consistent ability to show up “where it’s at” in the prevailing cultural winds of the time (including a penchant for association with the rich and famous).

Benton does a good job of highlighting how distant the techno-utopian promises of the Declaration sound from the current zeitgeist regarding the social effects of information technology. But ultimately we see in Barlow a reflection of our own hopes and fears about digital societies: as I previously argued, there is no rigid and inescapable cause-and-effect relationship between the ideas of the ’90s and the oligopolies of today. Similarly, a course for future action and engagement can be set without espousing or denouncing the Declaration in its entirety.

Behavioral redefinition

Vice reports on a Tokyo-based company, DeepScore, pitching software for the automatic recognition of ‘trustworthiness’, e.g. in loan applicants. Although their claimed false-negative rate of 30% may not sound particularly impressive, it must of course be compared to the well-known human biases in lending decisions. Perhaps more interesting is the instrumentalization cycle, which is all but assured to take place if DeepScore’s algorithm gains wide acceptance. On the one hand, the algorithm’s goal is to give a precise definition to a broad and vague human characteristic such as trustworthiness, that is, to operationalize it. If the algorithm then succeeds on its training sample and is adopted by real-world decision-makers, the social power of the adopters reifies the research hypothesis: trustworthiness becomes whatever the algorithm says it is (because money talks). Thus the behavioral redefinition of a folk-psychology concept comes to fruition.

On the other hand, instrumentalization immediately kicks in, as users attempt to game the operationalized definition by presenting the algorithmically approved symptoms without the underlying condition (sincerity). Hence the signal loses strength, and the cycle completes. The fact that DeepScore’s trustworthiness algorithm is intended for credit markets in Southeast Asia, where large populations lack access to traditional credit-scoring channels, merely clarifies the ‘predatory inclusion’ logic of such practices (v. supra).
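To make the cycle concrete, here is a minimal, purely illustrative simulation in Python (the scorer, features, and numbers are all assumptions of mine; nothing below reflects DeepScore’s actual, proprietary model). While the observable symptoms still track the latent trait, scoring the symptoms works; once the score gates real decisions and applicants optimize the symptoms independently of the trait, the score’s correlation with what it was meant to measure drops sharply:

```python
# Hypothetical simulation of the operationalization/instrumentalization
# cycle; all quantities are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# The latent trait the algorithm tries to operationalize.
trust = rng.normal(size=n)

def observe(trait, gaming=0.0):
    """Observable 'symptoms' (e.g. interview features): trait plus noise,
    plus whatever gaming effort shifts the symptoms without the trait."""
    return trait + rng.normal(scale=0.8, size=trait.shape) + gaming

# Phase 1: symptoms still track the trait, so scoring them 'works'.
honest_score = observe(trust)
print("honest cohort: corr(score, trait) =",
      round(float(np.corrcoef(honest_score, trust)[0, 1]), 2))

# Phase 2: the score now gates loans, so applicants optimize the
# symptoms; the least trustworthy have the most to gain from gaming.
gaming_effort = 1.5 * np.clip(-trust, 0.0, None)
gamed_score = observe(trust, gaming=gaming_effort)
print("gamed cohort:  corr(score, trait) =",
      round(float(np.corrcoef(gamed_score, trust)[0, 1]), 2))
```

This is, of course, just Goodhart’s law in miniature: when a measure becomes a target, it ceases to be a good measure.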

Trustworthiness of unfree code

Several reports are circulating (e.g., via /.) of a court case in New Jersey in which the defendant won the right to audit proprietary genetic testing software for errors and potential sources of bias. It being a murder trial, this is about as close to a life-or-death use case as possible.

Given the stakes, it is understandable that a low-trust standard should prevail in forensic matters, rendering an audit indispensable (and the firm’s “complexity defence” is nothing short of untenable). What is surprising, rather, is how long it took to obtain this kind of judicial precedent. The authoritativeness deficit of algorithms is a burning topic generally; that a business model based on proprietary secrecy has managed to survive in such a failure-critical area is truly remarkable. It is safe to say that this challenge will hardly be the last. Ultimately, freely auditable software would seem to be the superior systemic answer for this type of application.

Lye machines

Josephine Wolff (Slate) reports on the recent hack of the water treatment plant in Oldsmar, FL. Unknown intruders remotely accessed the plant’s controls and attempted to raise the lye content of the town’s water supply to potentially lethal levels. The case is notable in that the human fail-safe (the plant operator on duty) successfully counterbalanced the machine vulnerability, catching the hack as it was taking place and overriding the automatic controls, so that no real-world adverse effects ultimately occurred.

What moral can be drawn? It is reasonable to argue, as Wolff does, against full automation: human supervision still has a critical role to play in the resilience of critical control systems, precisely through human-machine redundancy. What Wolff does not mention, however, is that this modus operandi may itself be read as a signature of sorts (although no attribution has appeared in the press so far): it speaks of amateurism or of a proof-of-concept stunt, in any case of an actor not planning to do serious damage. Otherwise, some parallel attempt at social engineering of (or other attacks against) on-site technicians would almost certainly have accompanied the intrusion. After all, as the old security-engineering nostrum goes: rookies target technology, pros target people.
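As a concrete illustration of what such redundancy can look like, here is a minimal sketch in Python (the limits, thresholds, and function names are hypothetical; real plants implement interlocks in PLC/SCADA logic, not application code). A machine-side sanity check rejects physically implausible setpoints outright and routes merely anomalous ones to an operator for confirmation:

```python
# Hypothetical sketch of a dosing-setpoint guard with a human-in-the-loop
# fallback; the limits are invented for illustration.

HARD_MAX_PPM = 500.0    # beyond any plausible operating need: reject
SOFT_MAX_PPM = 150.0    # unusual but conceivable: ask a human

def vet_setpoint(requested_ppm: float, current_ppm: float,
                 operator_confirms) -> float:
    """Return the setpoint to apply; default to the current one."""
    # Machine fail-safe: physically implausible requests never go through.
    if not 0.0 <= requested_ppm <= HARD_MAX_PPM:
        print(f"rejected: {requested_ppm} ppm outside hard limits")
        return current_ppm
    # Human fail-safe: anomalous requests need explicit operator approval.
    if requested_ppm > SOFT_MAX_PPM or requested_ppm > 2 * current_ppm:
        if not operator_confirms(requested_ppm):
            print(f"held: {requested_ppm} ppm awaiting operator approval")
            return current_ppm
    return requested_ppm

# The change reported at Oldsmar (roughly 100 ppm raised to 11,100 ppm)
# would fail the hard limit; a subtler attack would still face the
# operator-confirmation step.
print(vet_setpoint(11_100.0, 100.0, operator_confirms=lambda ppm: False))
```

The point is not the specific thresholds but the layering: the automatic check narrows the attack surface, while the human override covers the anomalies the check cannot anticipate.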