Hiding

Given the recent salience of news on surveillance and surveillance capitalism, it is to be expected that there would be rising interest in material, technical countermeasures. Indeed, a cottage industry of surveillance-avoidance gear and gadgetry has sprung up. The reviews of these apparatuses tend to agree that the results they achieve are not great. For one thing, they are typically targeted at one type of surveillance vector at a time, each requiring a specifically tailored attack model rather than offering a comprehensive solution. Moreover, they can really only be fine-tuned properly if their designers have access to the source code of the algorithm they are trying to beat, or at least can test its response in controlled conditions before facing it in the wild. But of course, uncertainty about the outcomes of surveillance, or indeed about whether it is taking place to begin with, is the heart of the matter.

The creators of these countermeasures themselves, whatever their personal intellectual commitment to privacy and anonymity, hardly follow their own advice in eschewing the visibility the large internet platforms afford. Whether these systems try to beat machine-learning algorithms through data poisoning or adversarial attacks, they tend to be more of a political statement and proof of concept than a workable solution, especially in the long term. In general, even when effective, using these countermeasures is seen as extremely cumbersome and self-penalizing: they can be useful in limited situations for operating in ‘stealth mode’, but cannot be lived in permanently.
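
To make the access problem concrete, here is a minimal sketch of the white-box adversarial-attack idea, using PyTorch and a stock ResNet purely as a stand-in for a recognition system; the function name, parameters, and model choice are illustrative assumptions of mine, not anyone's actual product or pipeline. The point is that the perturbation is computed from the target model's own gradients, which is precisely the kind of access a countermeasure rarely has against a deployed surveillance system; black-box variants exist, but they depend on extensive query access, which reintroduces the uncertainty described above.

```python
# Illustrative sketch only: a hypothetical white-box FGSM-style perturbation
# against a stock classifier, standing in for a recognition model whose
# internals we can inspect. A real surveillance system exposes neither
# weights nor gradients, which is why such countermeasures are hard to tune.
import torch
import torch.nn.functional as F
from torchvision import models

# Pretrained ResNet-18 as a placeholder target.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_perturb(image: torch.Tensor, label: torch.Tensor, epsilon: float = 0.03) -> torch.Tensor:
    """Return a copy of `image` ([1, 3, H, W], values in [0, 1]) nudged to raise the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # One signed-gradient step: computing image.grad requires white-box access.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Example usage with a random stand-in image and class index 0.
example = torch.rand(1, 3, 224, 224)
perturbed = fgsm_perturb(example, torch.tensor([0]))
```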

If this is the technological state of play, are we destined for a future of much greater personal transparency, or is the notion of hiding undergoing an evolution? Certainly, the momentum behind the diffusion of surveillance techniques such as facial recognition appears massive worldwide. Furthermore, it is no longer merely a question of centralized state agencies: the technology is mature enough for individual consumers to conduct private micro-surveillance. This sea change is already prompting shifts in acceptable social behavior. But as to the wider problem of obscurity in our social lives, the strategic response may well lie in a mixture of compartmentalization and hiding in plain sight. And of course systems of any kind are easier to beat when one can target the human agent at the other end.

Long-run trust dynamics

Long, thoughtful essay by David Brooks in The Atlantic on the evolution of mistrust in the American body politic. The angle taken is that of the long span of the history of mentalities: Brooks couches his analysis of the current crisis in the recurrence of moral convulsions, which once every sixty years or so fundamentally reshape the terms of American social discourse. What we are witnessing today, therefore, is the final crisis of a regime of liberal individualism whose heyday was in the globalizing 1990s, but whose origin may be traced to the previous moral convulsion, the anti-authoritarian revolt against the status quo of the late 1960s.

The most interesting part of Brooks’ analysis is the linking of data on the decline in generic societal trust with the social-psychological dimension, namely the precipitous –and frankly shocking– decline in reported psychological well-being in America, especially among children and young adults. Where his argument is less persuasive is in the prognosis of a more security-oriented paradigm for the future, based on egalitarianism and communitarian tribalism. It is not clear to me that the country possesses either the means or the will to roll back the atomizing tendencies of globalization.

Official compliance with curbs on surveillance

A new court case has been brought against the City and County of San Francisco over the San Francisco Police Department's use of surveillance cameras, in violation of a 2019 city ordinance, to monitor protests in early June following the killing of George Floyd. The Electronic Frontier Foundation and the American Civil Liberties Union of Northern California are representing the three plaintiffs in the suit, community activists who participated in the demonstrations and allege a chilling effect on freedom of speech and assembly. The surveillance apparatus belonged to a third party, the Union Square Business Improvement District, which granted its use to law enforcement voluntarily, following a request.

Use of surveillance and facial recognition technology is widespread among California law enforcement agencies, but such practices are often opaque and unacknowledged. Police departments have been able to evade legislative and regulatory curbs on their surveillance activities through third-party arrangements. Such potential for non-compliance strengthens the case for approaches such as that taken in Portland, OR, where facial recognition technology is banned for all, not simply for the public sector.

Robocalling campaigns

Politico's Morning Tech reports that the Trump campaign has launched a poll on its website to gauge sentiment as to Twitter's anti-conservative bias. There is nothing particularly scientific or informative about the poll. In fact, MT speculates that the main purpose of the stunt is to get respondents to agree, in passing, to have their phone numbers robocalled by the campaign (this kind of data-collection-and-authorization has been done before). Robocalling is one of those annoying-but-effective psychological prods, like canned laughter. Participants in the Twitter poll can safely be considered strong fans of the President, but even they might not consent to being robocalled if asked directly, hence this circuitous route. It is remarkable, though, how outrage is commodified as data harvesting, or –seen the other way– how subjection to invasive marketing is the price of interaction with curated forms of political venting.

Enforcement as deflection

Technology policy is often characterized as an area in which governments play catch-up, both cognitively and resource-wise, with the private sector. In these two recent cases, otherwise quite far apart both spatially and thematically, law enforcement can be seen to flip this script by attempting to pin responsibility for social externalities on those it can reliably target: the victims and the small fry. Whether it is criminalizing those who pay to be rid of ransomware or rounding up the café owners who failed to participate in the State's mass surveillance initiatives, the authorities signal the seriousness of their intentions with regard to combating social ills by targeting bystanders rather than the actual perpetrators. Politically, this is a myopic strategy, and I would not be surprised if it generated a significant amount of pushback.

Research on politics