A disturbing piece of reporting from Gizmodo (via /.) on the adoption by many US school districts of digital forensic tools to retrieve content from their students’ mobile devices. Of course, such technology was originally developed as a counter-terrorism tool, and then trickled down to regular domestic law enforcement. As we have remarked previously, schools have recently been on the bleeding edge of the social application of intrusive technology, with all the risks and conflicts it engenders; in this instance, however, we see a particularly egregious confluence of technological blowback (from war in the periphery to everyday life in the metropole) and criminal-justice takeover of mass education (of school-to-prison-pipeline fame).
I just read an interesting piece in the Harvard Business Review by three researchers at UC Berkeley’s Center for Long-Term Cybersecurity on how to communicate about risk. It is helpful as a pragmatic, concrete proposal on how to handle institutional communication about fundamentally uncertain outcomes in such a way as to bolster public trust and increase mass literacy about risk.
To no-one’s surprise, the Department of Homeland Security and Customs and Border Protection have become victims of successful hacks of the biometric data they mass-collect at the border. The usual neoliberal dance of private subcontracting of public functions further exacerbated the problem. According to the DHS Office of the Inspector General,
[t]his incident may damage the public’s trust in the Government’s ability to safeguard biometric data and may result in travelers’ reluctance to permit DHS to capture and use their biometrics at U.S. ports of entry.
No kidding. Given the oft-documented invasiveness of data harvesting by the immigration-control complex, and the serious real-world repercussions for policy and for ordinary people’s lives, data security should be front-and-center in public policy debates. The trade-off between the expected value of surveillance and the risk of unauthorized access to the accumulated information (which also implies the potential for corruption of the database) must be weighed explicitly. As it is, these leaks and hacks are externalities the public is obliged to absorb, because the agencies have scant incentive to secure their data troves properly.
Interesting study (via Schneier) on how disinformation can be used to attack the power grid. In essence, the attack games the profit-maximizing behavior of consumers (in this case, through fake notifications of discounts on electricity used during peak hours), nudging them in precisely the opposite direction of genuine market signals and thereby overloading the grid. The general opacity of electricity pricing to the consumer (much of which may be by design) is an important enabler of this hack.
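The mechanism is simple enough to sketch in a toy simulation. The numbers, hours, and compliance rate below are all made-up assumptions for illustration, not figures from the study: a fraction of consumers, believing a fake peak-hour discount, shift some off-peak usage into the advertised hours, and the aggregate peak blows past capacity.

```python
# Toy model of a disinformation attack on load scheduling.
# All figures (load, capacity, compliance rate) are illustrative assumptions.

BASELINE = {h: 80.0 for h in range(24)}            # flat off-peak load (MW)
BASELINE.update({18: 120.0, 19: 130.0, 20: 125.0})  # evening peak hours
CAPACITY = 150.0                                    # assumed grid capacity (MW)

def apply_fake_discount(demand, target_hours, compliance=0.3):
    """Shift load toward hours advertised as 'discounted' by the attacker.

    Consumers who believe the fake message move a `compliance` share of
    their off-peak usage into the target (peak) hours, split evenly.
    """
    shifted = dict(demand)
    movable = sum(load * compliance for hour, load in demand.items()
                  if hour not in target_hours)
    for hour in target_hours:
        shifted[hour] += movable / len(target_hours)
    for hour in demand:
        if hour not in target_hours:
            shifted[hour] -= demand[hour] * compliance
    return shifted

attacked = apply_fake_discount(BASELINE, target_hours={18, 19, 20})
peak = max(attacked.values())
print(f"peak load after attack: {peak:.0f} MW (capacity {CAPACITY:.0f} MW)")
print("overload!" if peak > CAPACITY else "within capacity")
```

Note that total consumption barely changes; the damage comes purely from concentrating demand in time, which is exactly why this kind of nudge is hard to spot in aggregate billing data.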
Just discovered (via the BKC newsletter) a cool publication, Logic. They do three themed issues a year on topics at the intersection of tech and society. Vol. 10: Security (from May this year) looks particularly close to the kind of things I am working on. There’s a long piece by Matt Goerzen and Gabriella Coleman on the intertwined histories of hacking and computer security, and a couple of in-depth interviews with Tawana Petty on facial recognition and with Alison Macrina on Tor. Good stuff: I need to get my hands on a hard copy.