
Barlow as Rorschach test

An op-ed by Joshua Benton on the Nieman Lab website, marking the first quarter-century of John Perry Barlow’s Declaration of the Independence of Cyberspace.

Unpacking the different facets of Barlow’s personality and worldview goes a long way toward mapping out early internet ideology: most everyone finds parts to admire as well as intimations of disasters to come. The protean nature of the author of the Declaration helps in the process. Was Barlow Dick Cheney’s friend or Ed Snowden’s? Was he a scion of Wyoming cattle ranching royalty or a Grateful Dead lyricist? Was he part of the Davos digerati or a defender of civil rights and founder of the EFF? All of these, of course, and much besides. Undeniably, Barlow had a striking way with words, matched only by a consistent ability to show up “where it’s at” in the prevailing cultural winds of the time (including a penchant for association with the rich and famous).

Benton does a good job highlighting how far removed the techno-utopian promises of the Declaration sound from the current zeitgeist regarding the social effects of information technology. But ultimately we see in Barlow a reflection of our own hopes and fears about digital societies: as I previously argued, there is no rigid and inescapable cause-effect relationship between the ideas of the ’90s and the oligopolies of today. Similarly, a course for future action and engagement can be set without espousing or denouncing the Declaration in its entirety.

Lye machines

Josephine Wolff (Slate) reports on the recent hack of the water treatment plant in Oldsmar, FL. Unknown intruders remotely accessed the plant’s controls and attempted to raise the lye (sodium hydroxide) content of the town’s water supply to potentially lethal levels. The case is notable in that the human fail-safe (the plant operator on duty) successfully counterbalanced the machine vulnerability, catching the hack as it was taking place and overriding the automatic controls, so that no real-world harm ultimately occurred.

What moral can be drawn? It is reasonable to argue, as Wolff does, against full automation: human supervision still has a critical role to play in the resiliency of critical control systems, through human-machine redundancy. What Wolff does not mention, however, is that this modus operandi may itself be read as a signature of sorts (although no attribution has appeared in the press so far): it speaks of amateurism, or of a proof-of-concept stunt; in any case, of an actor not planning to do serious damage. Otherwise, it is highly improbable that there would have been no parallel attempt to social-engineer (or otherwise attack) the on-site technicians. After all, as the old security-engineering adage goes: rookies target technology, pros target people.
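To make the redundancy argument concrete, here is a minimal sketch in Python of a two-layer setpoint guard (all names, limits, and numbers are invented, and real SCADA safety logic is considerably more involved): a machine-side hard limit rejects outright what no sane process should ever request, while a human-side confirmation covers unusual-but-plausible values the machine cannot judge on its own.

# Hypothetical two-layer guard on a chemical dosing setpoint.
# All identifiers and thresholds are invented for illustration;
# this is not the Oldsmar plant's actual control logic.

SAFE_BAND_PPM = (50.0, 100.0)   # normal lye dosing band (illustrative)
HARD_LIMIT_PPM = 150.0          # machine fail-safe: never exceed, period

def apply_setpoint(current_ppm, requested_ppm, operator_confirms):
    """Return the dosing setpoint actually applied."""
    # Layer 1 (machine): physically dangerous requests never get through.
    if requested_ppm > HARD_LIMIT_PPM:
        raise ValueError(f"setpoint {requested_ppm} ppm exceeds hard limit")
    # Layer 2 (human): out-of-band but plausible requests need a sign-off.
    lo, hi = SAFE_BAND_PPM
    if not (lo <= requested_ppm <= hi) and not operator_confirms(requested_ppm):
        return current_ppm  # operator balks: keep the previous setpoint
    return requested_ppm

# apply_setpoint(100.0, 11_000.0, lambda p: False)  -> raises ValueError
# apply_setpoint(100.0, 120.0,    lambda p: False)  -> 100.0 (reverted)
# apply_setpoint(100.0, 90.0,     lambda p: False)  -> 90.0 (within band)

Note that a hard limit alone would catch an Oldsmar-scale change; the human layer earns its keep against subtler manipulation within plausible ranges, which is precisely where Wolff’s argument bites.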

Schools get into the phone-hacking business

A disturbing piece of reporting from Gizmodo (via /.) on the adoption by many US school districts of digital forensic tools to retrieve content from their students’ mobile devices. Of course, such technology was originally developed as a counter-terrorism tool, and then trickled down to regular domestic law enforcement. As we have remarked previously, schools have recently been on the bleeding edge of the social application of intrusive technology, with all the risks and conflicts it engenders; in this instance, however, we see a particularly egregious confluence of technological blowback (from war in the periphery to everyday life in the metropole) and criminal-justice takeover of mass education (of school-to-prison-pipeline fame).

Risk communication

I just read an interesting piece in the Harvard Business Review by three researchers at UC Berkeley’s Center for Long-Term Cybersecurity on how to communicate about risk. It is helpful as a pragmatic, concrete proposal on how to handle institutional communication about fundamentally uncertain outcomes in such a way as to bolster public trust and increase mass literacy about risk.

Data security and surveillance legitimacy

To no one’s surprise, the Department of Homeland Security and Customs and Border Protection have fallen victim to successful hacks of the biometric data they mass-collect at the border. The usual neoliberal dance of private subcontracting of public functions further exacerbated the problem. According to the DHS Office of the Inspector General,

[t]his incident may damage the public’s trust in the Government’s ability to safeguard biometric data and may result in travelers’ reluctance to permit DHS to capture and use their biometrics at U.S. ports of entry.

No kidding. Considering the oft-documented invasiveness of data harvesting by the immigration-control complex, and the serious real-world repercussions for policy and for ordinary people’s lives, the problem of data security should be front and center in public policy debates. The trade-off between the expected value to be gained from surveillance and the risk of unauthorized access to the accumulated information (which also implies the potential for corruption of the database) must be considered explicitly: as it is, these leaks and hacks are externalities the public is obliged to absorb, because the agencies have scant incentive to safeguard their data troves properly.
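A back-of-the-envelope model makes the externality explicit (a toy sketch; all quantities are invented for illustration): the public discounts the full probability-weighted harm of a breach, while the collecting agency discounts only the fraction of that harm it actually internalizes.

# Toy expected-value model of the surveillance trade-off.
# All numbers are invented for illustration.

benefit = 10.0        # expected enforcement value of the biometric trove
p_breach = 0.2        # annual probability of unauthorized access
breach_harm = 100.0   # total harm to the public if the data leaks
internalized = 0.05   # share of that harm the agency actually bears

public_net = benefit - p_breach * breach_harm                 # -10.0
agency_net = benefit - p_breach * internalized * breach_harm  #   9.0

Whenever the internalized share sits near zero, the agency’s calculus diverges from the public’s: collection still looks worthwhile to the collector even when its expected value to society is negative. That incentive gap, not any particular technical lapse, is what the OIG’s warning points to.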