Censorship about censorship

In further news on a story I posted about in late September, it has now surfaced that Zoom has cancelled academic events scheduled to discuss its previous cancellation. Beyond the political merits of the issue, from a business standpoint I suspect the company’s position will quickly become untenable: if it persists in its current interpretation of its ToS, its competitors will crowd it out of the academic market (already, one of the cancelled events was able to migrate to Google Meet). Telling universities to refrain from discussion is like telling a rivet factory to refrain from metalworking. The fact that, in this situation, it has become impossible to draw a frame around an issue so as to modify the sociolinguistic norms that preside over it has produced a surreal outcome; this is not an equilibrium, and it is destined to change.

Risk communication

I just read an interesting piece in the Harvard Business Review by three researchers at UC Berkeley’s Center for Long-Term Cybersecurity on how to communicate about risk. It offers a pragmatic, concrete proposal for handling institutional communication about fundamentally uncertain outcomes in a way that bolsters public trust and builds broad literacy about risk.

Hiding

Given the recent salience of news on surveillance and surveillance capitalism, rising interest in material, technical countermeasures is to be expected. Indeed, a cottage industry of surveillance-avoidance gear and gadgetry has sprung up. Reviews of these apparatuses tend to agree that the results they achieve are modest. For one thing, they typically target one surveillance vector at a time, requiring a specifically tailored attack model rather than offering a comprehensive solution. Moreover, they can only be fine-tuned properly if their creators have access to the source code of the algorithm they are trying to beat, or can at least test its responses under controlled conditions before facing it in the wild. But of course, uncertainty about the outcomes of surveillance, or indeed about whether it is taking place at all, lies at the heart of the matter.

The creators of these countermeasures themselves, whatever their personal intellectual commitment to privacy and anonymity, hardly follow their own advice when it comes to eschewing the visibility the large internet platforms afford. Whether these systems try to beat machine-learning algorithms through data poisoning or through adversarial attacks, they tend to be more political statement and proof of concept than workable solution, especially in the long term. In general, even when effective, these countermeasures are extremely cumbersome and self-penalizing to use: they can be useful in limited situations for operating in ‘stealth mode’, but they cannot be lived in permanently.
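To make the ‘adversarial attack’ idea concrete, here is a minimal, illustrative sketch of a gradient-sign perturbation against an image classifier. It is not the method of any particular tool discussed above; a PyTorch-style setup is assumed, and the model, image, and label objects are hypothetical placeholders.

```python
# Illustrative sketch only: a fast-gradient-sign (FGSM-style) perturbation that
# nudges an image so a white-box classifier is more likely to mislabel it.
# `model`, `image`, and `true_label` are hypothetical placeholders.
import torch
import torch.nn.functional as F

def adversarial_perturb(model, image, true_label, epsilon=0.03):
    """Return a copy of `image` shifted a small step in the direction that
    increases the classifier's loss on the correct label."""
    image = image.clone().detach().requires_grad_(True)
    logits = model(image.unsqueeze(0))                 # forward pass on a single image
    loss = F.cross_entropy(logits, true_label.unsqueeze(0))
    loss.backward()                                    # gradient with respect to the input pixels
    perturbed = image + epsilon * image.grad.sign()    # small signed step per pixel
    return perturbed.clamp(0.0, 1.0).detach()          # keep pixel values in a valid range
```

Note that even this toy example needs the model’s gradients, i.e. white-box access to the system being evaded, which is precisely the limitation noted above; against a deployed surveillance pipeline whose internals are unknown, such perturbations transfer only unreliably.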

If this is the technological state of play, are we destined for a future of much greater personal transparency, or is the notion of hiding undergoing an evolution? Certainly, the momentum behind the diffusion of surveillance techniques such as facial recognition appears massive worldwide. Furthermore, it is no longer merely a question of centralized state agencies: the technology is mature enough for individual consumers to practice private micro-surveillance. This sea change is certainly prompting shifts in what counts as acceptable social behavior. But as to the wider problem of obscurity in our social lives, the strategic response may well lie in a mixture of compartmentalization and hiding in plain sight. And of course systems of any kind are easier to beat when one can target the human agent at the other end.

Long-run trust dynamics

Long, thoughtful essay by David Brooks in The Atlantic on the evolution of mistrust in the American body politic. The angle taken is that of the long span of the history of mentalities: Brooks couches his analysis of the current crisis in the recurrence of moral convulsions, which once every sixty years or so fundamentally reshape the terms of American social discourse. What we are witnessing today, therefore, is the final crisis of a regime of liberal individualism whose heyday was in the globalizing 1990s, but whose origin may be traced to the previous moral convulsion, the anti-authoritarian revolt against the status quo of the late 1960s.

The most interesting part of Brooks’ analysis is the linking of data on the decline in generic societal trust with the social-psychological dimension, namely the precipitous (and frankly shocking) decline in reported psychological well-being in America, especially among children and young adults. Where his argument is less persuasive is in the prognosis of a more security-oriented paradigm for the future, based on egalitarianism and communitarian tribalism. It is not clear to me that the country possesses either the means or the will to roll back the atomizing tendencies of globalization.

Official compliance with curbs on surveillance

A new court case has been brought against the City and County of San Francisco over the San Francisco Police Department’s use of surveillance cameras, in violation of a 2019 city ordinance, to monitor protests in early June following the killing of George Floyd. The Electronic Frontier Foundation and the American Civil Liberties Union of Northern California are representing the three plaintiffs in the suit, community activists who participated in the demonstrations, alleging a chilling effect on freedom of speech and assembly. The surveillance apparatus belonged to a third party, the Union Square Business Improvement District, which voluntarily granted law enforcement access to it upon request.

Use of surveillance and facial recognition technology is widespread among California law enforcement, but the policies governing it are often opaque and unacknowledged. Police departments have been able to evade legislative and regulatory curbs on their surveillance activities through third-party arrangements. This potential for non-compliance strengthens the case for approaches such as the one taken in Portland, OR, where facial recognition technology is banned for all actors, not just the public sector.