Brexit begins to deliver on race-to-the-bottom deregulation: according to reports from UK-based NGO Open Rights Group, the recent free-trade deal with Japan will allow GDPR-level protections on Britons’ data to be circumvented. Specifically, US-based companies will be able to route UK users’ data through Japan, thereby defeating regulatory protections UK law inherited from the EU. It is interesting to see strategies and loopholes traditionally used for internationally produced goods now being applied to user data.
I just read an interesting piece in the Harvard Business Review by three researchers at UC Berkeley’s Center for Long-Term Cybersecurity on how to communicate about risk. It is helpful as a pragmatic, concrete proposal on how to handle institutional communication about fundamentally uncertain outcomes in such a way as to bolster public trust and increase mass literacy about risk.
Given the recent salience of news on surveillance and surveillance capitalism, rising interest in material, technical countermeasures is to be expected. Indeed, a cottage industry of surveillance-avoidance gear and gadgetry has sprung up. Reviews of these apparatuses tend to agree that the results they achieve are not great. For one thing, they typically target a single surveillance vector at a time, addressing a narrowly tailored attack model rather than offering a comprehensive solution. Moreover, they can only be fine-tuned properly if their makers have access to the source code of the algorithm they are trying to beat, or can at least test its responses in controlled conditions before facing it in the wild. But of course, uncertainty about the outcomes of surveillance, or indeed about whether it is taking place at all, is the heart of the matter.
The creators of these countermeasures, whatever their personal intellectual commitment to privacy and anonymity, hardly follow their own advice: few of them eschew the visibility the large internet platforms afford. Whether these systems try to beat machine-learning algorithms through data poisoning or through adversarial attacks, they tend to be more political statement and proof of concept than workable solution, especially in the long term. In general, even when effective, these countermeasures are seen as extremely cumbersome and self-penalizing: they can be useful in limited situations for operating in ‘stealth mode’, but cannot be lived in permanently.
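To make the adversarial-attack idea concrete, here is a minimal sketch of a fast-gradient-sign-style perturbation against a toy logistic classifier. This is an illustration of the general technique, not any specific product's method; the model, its weights, and the perturbation budget are all invented for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy logistic "surveillance" classifier: p(flagged) = sigmoid(w.x + b).
# The weights are invented for illustration; a real attacker rarely knows
# them, which is exactly the limitation discussed above.
w = np.array([2.0, -1.0])
b = 0.0

def predict(x):
    return sigmoid(w @ x + b)

# Original input: confidently classified as "flagged" (p > 0.5).
x = np.array([1.0, 0.5])

# FGSM-style perturbation: step in the direction of the sign of the loss
# gradient w.r.t. the input. For logistic loss with true label y,
# dL/dx = (p - y) * w.
y = 1.0
grad = (predict(x) - y) * w
eps = 1.0  # perturbation budget (unrealistically large, for a clear flip)
x_adv = x + eps * np.sign(grad)

print(predict(x))      # ~0.82: classified as "flagged"
print(predict(x_adv))  # ~0.18: classification flipped
```

Note that computing `grad` requires knowing the model's weights, which is why white-box access (or at least the ability to probe the system repeatedly) is a precondition for tuning such attacks.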
If this is the technological state of play, are we destined for a future of much greater personal transparency, or is the notion of hiding undergoing an evolution? Certainly, the momentum behind the diffusion of surveillance techniques such as facial recognition appears massive worldwide. Furthermore, it is no longer merely a question of centralized state agencies: the technology is now mature enough for individual consumers to enact private micro-surveillance. This sea change is certainly prompting shifts in acceptable social behavior. But as to the wider problem of obscurity in our social lives, the strategic response may well lie in a mixture of compartmentalization and hiding in plain sight. And of course, systems of any kind are easier to beat when one can target the human agent at the other end.
I just read a story by Tanya Basu (in the MIT Technology Review) about the use of single-page websites (created through services such as Bio.fm and Carrd) to convey information about recent political mobilizations in the US. It’s very interesting how the new generation of social-justice activists is weaning itself from exclusive reliance on the major social media platforms in its search for anonymity, simplicity and accessibility. These ways of communicating information, as Basu underlines, bespeak an anti-influencer mentality: it’s the info that comes first, not the author.
It is too early to say whether the same issues of content moderation, pathological speech, and censorship will crop up on these platforms as well, but for the time being it is good to see some movement in this space.
Back in the spring, digital contact tracing was heralded as the high-tech path out of the pandemic. With the benefit of six months of hindsight, the limitations of the approach have become clear [see Schneier for a concise summing-up of its shortcomings].
While digital contact tracing’s notional benefits seem to belong squarely in the realm of security theater (i.e., showing the public that Something Is Being Done), its potential for justifying intrusive surveillance remains intact. Two recent news items illustrate this dynamic. A small liberal arts college in Michigan is forcing its students to download a contact-tracing app (and apparently a vulnerability-riddled one, at that) as a condition for being allowed on campus. Meanwhile, the delegates to the Republican National Convention are reportedly to wear “smart badges” (originally developed for tracking pallets) to record their movements through the convention venue in Charlotte. While higher education has long been a laboratory of choice for surveillance technology experimentation, I would have expected the libertarian wing of the GOP to kick up more of a fuss over this kind of intrusion.