Given the recent salience of news on surveillance and surveillance capitalism, rising interest in material, technical countermeasures is to be expected. Indeed, a cottage industry of surveillance-avoidance gear and gadgetry has sprung up. Reviews of these apparatuses tend to agree that the results are underwhelming. For one thing, they typically target a single surveillance vector at a time, addressing a narrowly tailored threat model rather than offering a comprehensive solution. Moreover, they can only be fine-tuned properly with access to the source code of the algorithm they are trying to beat, or at least the ability to test its response under controlled conditions before facing it in the wild. But of course, uncertainty about the outcomes of surveillance, or indeed about whether it is taking place to begin with, is the heart of the matter.
The creators of these countermeasures themselves, whatever their personal intellectual commitment to privacy and anonymity, hardly follow their own advice in eschewing the visibility the large internet platforms afford. Whether these systems try to beat machine-learning algorithms through data poisoning or adversarial attacks, they tend to be more of a political statement and proof of concept than a workable solution, especially in the long term. In general, even when effective, using these countermeasures is seen as extremely cumbersome and self-penalizing: they can be useful in limited situations for operating in ‘stealth mode’, but cannot be lived in permanently.
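To make the adversarial-attack idea concrete, here is a minimal sketch against a toy linear classifier. Everything in it (the weights, the perturbation budget, the flagging threshold) is invented for illustration; real attacks target deep models whose gradients must be estimated or extracted, which is precisely why access to the algorithm matters so much.

```python
import numpy as np

# Toy illustration of an adversarial attack on a learned classifier.
# The "model" is a fixed linear scorer (weights invented for the
# example); a positive score means the input gets flagged.
w = np.linspace(-1.0, 1.0, 64)          # stand-in for trained weights

def score(x: np.ndarray) -> float:
    return float(w @ x)

x = w / np.linalg.norm(w)               # an input the model confidently flags

# FGSM-style step: nudge every feature slightly against the gradient.
# A small, evenly spread perturbation flips the decision even though
# no single feature changes much.
eps = 0.2
x_adv = x - eps * np.sign(w)

print(score(x) > 0, score(x_adv) > 0)   # prints: True False
```

Note what the sketch presupposes: the attacker knows `w` exactly. Against a black-box system observed only in the wild, each probe risks detection, which is one reason these countermeasures work better as proofs of concept than as daily practice.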
If this is the technological state of play, are we destined for a future of much greater personal transparency, or is the notion of hiding undergoing an evolution? Certainly, the momentum behind the diffusion of surveillance techniques such as facial recognition appears massive worldwide. Furthermore, it is no longer merely a question of centralized state agencies: the technology is mature for individual consumers to enact private micro-surveillance. This sea change is certainly prompting shifts in acceptable social behavior. But as to the wider problem of obscurity in our social lives, the strategic response may well lie in a mixture of compartmentalization and hiding in plain sight. And of course systems of any kind are easier to beat when one can target the human agent at the other end.
To no-one’s surprise, the Department of Homeland Security and Customs and Border Protection have fallen victim to successful hacks of the biometric data they mass-collect at the border. The usual neoliberal dance of private subcontracting of public functions further exacerbated the problem. According to the DHS Office of the Inspector General,
[t]his incident may damage the public’s trust in the Government’s ability to safeguard biometric data and may result in travelers’ reluctance to permit DHS to capture and use their biometrics at U.S. ports of entry.
No kidding. Considering the oft-documented invasiveness of data harvesting practices by the immigration-control complex and the serious real-world repercussions in terms of policies and ordinary people’s lives, the problem of data security should be front-and-center in public policy debates. The trade-off between the expected value to be gained from surveillance and the risk of unauthorized access to the accumulated information (which also implies the potential for the corruption of the database) must be considered explicitly: as it is, these leaks and hacks are externalities the public is obliged to absorb because the agencies have scant incentive to monitor their data troves properly.
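The accounting I have in mind can be made explicit in a few lines. Every figure below is an invented placeholder, purely to show the structure of the trade-off; nothing comes from real agency data.

```python
# Back-of-envelope version of the surveillance-value vs. breach-risk
# trade-off. All numbers are illustrative placeholders.
benefit_per_year = 10.0       # expected value extracted from the database
p_breach_per_year = 0.05      # annual probability of a leak or hack
harm_if_breached = 500.0      # breach cost, largely borne by the public

expected_net = benefit_per_year - p_breach_per_year * harm_if_breached
print(expected_net)           # negative once breach risk is priced in
```

The point is not the numbers but the structure: so long as the harm term is an externality the agency does not pay, it drops out of the agency's own calculation, which is exactly the incentive problem.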
Back in the spring, digital contact tracing was heralded as the hi-tech path out of the pandemic. With the benefit of six months of hindsight, the limitations of the approach have become clear [see Schneier for a concise summing-up of its shortcomings].
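For readers who have not looked under the hood, the decentralized designs at issue work roughly as follows: each phone broadcasts short-lived random-looking identifiers derived from a secret key, and only on a positive diagnosis are the keys published so others can check for matches locally. The sketch below is loosely modeled on the DP-3T / Google-Apple approach; the key sizes, labels, and counts are illustrative, not the actual specifications.

```python
import hashlib
import hmac

# Simplified sketch of the rotating-identifier scheme behind
# decentralized exposure notification. Details are illustrative.

def daily_key(seed: bytes, day: int) -> bytes:
    """Derive a per-day key by hashing forward from a secret seed."""
    key = seed
    for _ in range(day):
        key = hashlib.sha256(key).digest()
    return key

def ephemeral_ids(day_key: bytes, n: int = 96) -> list:
    """16-byte beacons, one per ~15-minute window, unlinkable without the key."""
    return [
        hmac.new(day_key, i.to_bytes(4, "big"), hashlib.sha256).digest()[:16]
        for i in range(n)
    ]

ids = ephemeral_ids(daily_key(b"secret-seed", day=3))
print(len(ids), len(set(ids)))  # 96 distinct beacons for one day
```

Cryptographically this is rather elegant; the shortcomings Schneier catalogues lie elsewhere, in Bluetooth's unreliability as a proxy for epidemiologically meaningful contact and in the false positives and negatives that follow.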
While digital contact tracing’s notional benefits seem to belong squarely in the realm of security theater (i.e., showing the public that Something Is Being Done), its potential for justifying intrusive surveillance remains intact. Two recent news items illustrate this dynamic. A small liberal arts college in Michigan is forcing its students to download a contact-tracing app (and apparently one riddled with security vulnerabilities, at that) as a condition for being allowed on campus. Meanwhile, the delegates to the Republican National Convention are reportedly to wear “smart badges” (originally developed for tracking pallets) that record their movements through the convention venue in Charlotte. While higher education has long been a laboratory of choice for surveillance-technology experimentation, I would have expected the libertarian wing of the GOP to kick up more of a fuss over this kind of intrusion.
An interesting project (via Slashdot) to track and report the deployment of surveillance technology by property owners. This type of granular, individual surveillance relationship sometimes gets lost in the broader debates about surveillance, where we tend to focus on Nation-States and giant corporations, but it is far more pervasive and potentially insidious (as I discuss in I Labirinti). Unsurprisingly, it is showing up at a social and economic flashpoint in these pandemic times, the residential rental market. The Landlord Tech Watch mapping project is still in its infancy: whether doxxing is an effective counter-strategy to surveillance in this context remains to be seen.
Just discovered (via the BKC newsletter) a cool publication, Logic. They do three themed issues a year on topics at the intersection of tech and society. Vol. 10: Security (from May this year) looks particularly close to the kind of things I am working on. There’s a long piece by Matt Goerzen and Gabriella Coleman on the intertwined histories of hacking and computer security, and a couple of in-depth interviews with Tawana Petty on facial recognition and with Alison Macrina on Tor. Good stuff: I need to get my hands on a hard copy.