Interesting article in The Intercept last week about the U.S. Treasury Department purchasing app data harvested by private brokers such as Babel Street, in order to sidestep the need for court warrants when searching personal data.
Nothing shockingly new, but the article ends with a key quote from Jack Poulson, the founder of Tech Inquiry, a research and advocacy group:
“Babel Street’s support for the IRS increasing its surveillance of small businesses and the self employed — after the IRS has already largely given up on auditing the ultrawealthy — is an example of the U.S. surveillance industry being used to help shift the tax burden to the working class”.
I managed to catch a screening of the new Shalini Kantayya documentary, Coded Bias, through EDRi. It tells the story of Joy Buolamwini’s discovery of systematic discrepancies in the performance of algorithms across races and genders. The tone was lively and accessible, with a good tempo, and the cast of characters presented did a good job showcasing a cross-section of female voices in the tech policy space. It was particularly good to see several authors that appear on my syllabus, such as Cathy O’Neil, Zeynep Tufekci, and Virginia Eubanks.
Interesting article in the MIT Tech Review (via /.) detailing research performed at Northwestern University (paper on arXiv) on how the power of collective action might be leveraged to counter pervasive data collection strategies by internet companies. Three such methods are discussed: data strikes (refusal to use data-invasive services), data poisoning (providing false and misleading data), and conscious data contribution (to privacy-respecting competitors).
Conscious data contribution and data strikes are relatively straightforward Aventine secessions, but depend decisively on the availability of alternative services (or the acceptability of degraded performance for the mobilized users on less-than-perfect substitutes).
The effectiveness of data poisoning, on the other hand, turns on the type of surveillance one is trying to stifle (as I have argued in I labirinti). If material efficacy is at stake, it can be decisive (e.g., faulty info can make a terrorist manhunt fail). Unsurprisingly, this type of strategic disinformation has featured in the plot of many works of fiction, both featuring and not featuring AIs. But if what’s at stake is the perception of efficacy, data poisoning is only an effective counterstrategy inasmuch as it destroys the legitimacy of the decisions made on the basis of the collected data (at what point, for instance, do advertisers stop working with Google because its database is irrevocably compromised?). In some cases of AI/ML adoption, in which the offloading of responsibility and the containment of costs are the foremost goals, there already is very broad tolerance for bias (i.e., faulty training data).
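To make the data-poisoning idea concrete, here is a toy sketch (my own illustration, not the method of the Northwestern paper): imagine a profiler inferring a majority trait from collected binary signals, while mobilized users flip a fraction of the signals they emit. All names and numbers here are invented for the example.

```python
import random

def poison(signals, fraction, rng):
    """Flip the given fraction of binary signals to simulate data poisoning."""
    signals = list(signals)
    k = int(len(signals) * fraction)
    for i in rng.sample(range(len(signals)), k):
        signals[i] = 1 - signals[i]
    return signals

def inferred_majority(signals):
    """The profiler's inference: do a majority of users exhibit trait 1?"""
    return int(sum(signals) > len(signals) / 2)

rng = random.Random(0)
true_signals = [1] * 80 + [0] * 20   # 80% of users genuinely exhibit the trait
poisoned = poison(true_signals, 0.5, rng)

# The clean data yields a confident inference; with half the signals flipped,
# the expected count of 1s is pulled toward 50/50 and the inference degrades.
assert inferred_majority(true_signals) == 1
```

The sketch also shows the political point made above: the numbers in the database change, but whether anyone stops trusting the inference built on them is a separate, social question.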
Hence in general the fix is not exclusively technical: political mobilization must be activated to cash in on the contradictions these data activism interventions bring to light.
A disturbing piece of reporting from Gizmodo (via /.) on the adoption by many US school districts of digital forensic tools to retrieve content from their students’ mobile devices. Of course, such technology was originally developed as a counter-terrorism tool, and then trickled down to regular domestic law enforcement. As we have remarked previously, schools have recently been on the bleeding edge of the social application of intrusive technology, with all the risks and conflicts it engenders; in this instance, however, we see a particularly egregious confluence of technological blowback (from war in the periphery to everyday life in the metropole) and criminal-justice takeover of mass education (of school-to-prison-pipeline fame).
Given the recent salience of news on surveillance and surveillance capitalism, it is to be expected that there would be rising interest in material, technical countermeasures. Indeed, a cottage industry of surveillance-avoidance gear and gadgetry has sprung up. The reviews of these apparatuses tend to agree that the results they achieve are not great. For one thing, they are typically targeted at one type of surveillance vector at a time, thus requiring a specifically tailored attack model rather than being comprehensive solutions. Moreover, they can really only be fine-tuned properly if they have access to the source code of the algorithm they are trying to beat, or at least can test its response in controlled conditions before facing it in the wild. But of course, uncertainty about the outcomes of surveillance, or indeed about whether it is taking place to begin with, is the heart of the matter.
The creators of these countermeasures themselves, whatever their personal intellectual commitment to privacy and anonymity, hardly follow their own advice in eschewing the visibility the large internet platforms afford. Whether these systems try to beat machine-learning algorithms through data poisoning or adversarial attacks, they tend to be more of a political statement and proof of concept than a workable solution, especially in the long term. In general, even when effective, using these countermeasures is seen as extremely cumbersome and self-penalizing: they can be useful in limited situations for operating in ‘stealth mode’, but cannot be lived in permanently.
If this is the technological state of play, are we destined for a future of much greater personal transparency, or is the notion of hiding undergoing an evolution? Certainly, the momentum behind the diffusion of surveillance techniques such as facial recognition appears massive worldwide. Furthermore, it is no longer merely a question of centralized state agencies: the technology is mature for individual consumers to enact private micro-surveillance. This sea change is certainly prompting shifts in acceptable social behavior. But as to the wider problem of obscurity in our social lives, the strategic response may well lie in a mixture of compartmentalization and hiding in plain sight. And of course systems of any kind are easier to beat when one can target the human agent at the other end.