Tag Archives: Privacy

Addiction vs. dependency

A long, powerful essay in The Baffler about the new antitrust actions against Big Tech in the US and the parallels being drawn with the tobacco trials of the 1990s. I agree with its core claim: equating the problem Big Tech poses with one of personal addiction (a position promoted, inter alia, by the documentary The Social Dilemma) minimizes the issue of economic dependency and the power it confers on the gatekeepers of key digital infrastructure. I have argued previously that this dependency is at the heart of popular mistrust of the big platforms. However, the pursuit of the tech giants in court risks being hobbled by the lasting effect of neoliberal thought on antitrust doctrine in US jurisprudence and regulation. Concentrating on short-run consumer prices risks missing the very real ways in which tech companies exert systemic social power. In their quest to rein in Big Tech, US lawmakers and attorneys will be confronted with much deeper and more systemic political-economy issues. It is unclear whether they will be able to win this general philosophical argument against such powerful special interests.

Schools get into the phone-hacking business

A disturbing piece of reporting from Gizmodo (via /.) on the adoption by many US school districts of digital forensic tools to retrieve content from their students’ mobile devices. Of course, such technology was originally developed for counter-terrorism, and then trickled down to regular domestic law enforcement. As we have remarked previously, schools have recently been on the bleeding edge of the social application of intrusive technology, with all the risks and conflicts that entails; in this instance, however, we see a particularly egregious confluence of technological blowback (from war in the periphery to everyday life in the metropole) and the criminal-justice takeover of mass education (of school-to-prison-pipeline fame).

Jurisdictional shopping for data

Brexit begins to deliver on race-to-the-bottom deregulation: according to reports from the UK-based NGO Open Rights Group, the recent free-trade deal with Japan will allow GDPR-level protections on Britons’ data to be circumvented. Specifically, US-based companies will be able to route UK users’ data through Japan, thereby defeating the regulatory protections UK law inherited from the EU. It is interesting to see jurisdictional strategies and loopholes long exploited for internationally traded goods now being extended to user data.

Risk communication

I just read an interesting piece in the Harvard Business Review by three researchers at UC Berkeley’s Center for Long-Term Cybersecurity on how to communicate about risk. It is helpful as a pragmatic, concrete proposal for handling institutional communication about fundamentally uncertain outcomes in a way that bolsters public trust and raises mass literacy about risk.

Hiding

Given the recent salience of news on surveillance and surveillance capitalism, rising interest in material, technical countermeasures is to be expected. Indeed, a cottage industry of surveillance-avoidance gear and gadgetry has sprung up. Reviews of these apparatuses tend to agree that the results they achieve are modest. For one thing, each is typically aimed at a single surveillance vector, presupposing a narrowly tailored threat model rather than offering a comprehensive solution. Moreover, such devices can only be fine-tuned properly when their designers have access to the source code of the algorithm they are trying to beat, or can at least probe its responses under controlled conditions before facing it in the wild. But of course, uncertainty about the outcomes of surveillance, or indeed about whether it is taking place at all, is the heart of the matter.

The creators of these countermeasures themselves, whatever their personal intellectual commitment to privacy and anonymity, hardly follow their own advice about eschewing the visibility the large internet platforms afford. Whether these systems try to beat machine-learning algorithms through data poisoning or adversarial attacks, they tend to be more political statement and proof of concept than workable solution, especially in the long term. In general, even when effective, these countermeasures are extremely cumbersome and self-penalizing to use: they can serve in limited situations for operating in ‘stealth mode’, but cannot be lived in permanently.
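To make concrete what ‘beating’ a recognition model actually requires, here is a minimal sketch of a standard white-box adversarial attack, the fast gradient sign method (FGSM), written in Python with PyTorch. This is an illustration of the general technique, not the method of any particular anti-surveillance product; the `face_model` name, the inputs, and the `epsilon` value are hypothetical placeholders.

```python
# A minimal sketch of a white-box adversarial perturbation (FGSM).
# It assumes full access to the target model and its gradients --
# precisely the insider access the paragraph above notes is rarely
# available against real-world surveillance systems.
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Return x nudged one epsilon-step in the direction that most
    increases the model's classification loss (Goodfellow et al.)."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Shift each input value by +/- epsilon along the sign of the
    # loss gradient, then keep pixels in the valid [0, 1] range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage, where face_model stands in for a recognition net:
# x_adv = fgsm_perturb(face_model, images, labels)
```

The sketch makes the dependency explicit: the perturbation is computed from the model’s own loss gradients, so without white-box access (or at least repeated controlled queries) a countermeasure is reduced to guessing, which is why the gadgets reviewed above fare so poorly in the wild.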

If this is the technological state of play, are we destined for a future of much greater personal transparency, or is the notion of hiding undergoing an evolution? Certainly, the momentum behind the diffusion of surveillance techniques such as facial recognition appears massive worldwide. Furthermore, it is no longer merely a question of centralized state agencies: the technology is now mature enough for individual consumers to conduct private micro-surveillance. This sea change is certainly prompting shifts in what counts as acceptable social behavior. But as to the wider problem of obscurity in our social lives, the strategic response may well lie in a mixture of compartmentalization and hiding in plain sight. And of course, systems of any kind are easier to beat when one can target the human agent at the other end.