Tag Archives: Privacy

Perspectives on data activism: Aventine secessions and sabotage

Interesting article in the MIT Tech Review (via /.) detailing research performed at Northwestern University (paper on ArXiv) on how the power of collective action could be leveraged to counter pervasive data-collection strategies by internet companies. Three such methods are discussed: data strikes (refusal to use data-invasive services), data poisoning (providing false and misleading data), and conscious data contribution (to privacy-respecting competitors).

Conscious data contribution and data strikes are relatively straightforward Aventine secessions, but they depend decisively on the availability of alternative services (or on mobilized users’ willingness to accept degraded performance from less-than-perfect substitutes).

The effectiveness of data poisoning, on the other hand, turns on the type of surveillance one is trying to stifle (as I have argued in I labirinti). If material efficacy is at stake, it can be decisive (e.g., faulty information can make a terrorist manhunt fail). Unsurprisingly, this type of strategic disinformation has figured in the plots of many works of fiction, with and without AIs. But if what’s at stake is the perception of efficacy, data poisoning is only an effective counterstrategy inasmuch as it destroys the legitimacy of the decisions made on the basis of the collected data (at what point, for instance, do advertisers stop working with Google because its database is irrevocably compromised?). In some cases of AI/ML adoption, in which the offloading of responsibility and the containment of costs are the foremost goals, there is already very broad tolerance for bias (i.e., for faulty training data).
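As a purely illustrative aside (my own sketch, not a method from the Northwestern paper): the mechanics of the crudest kind of data poisoning can be seen by flipping a share of training labels and watching a simple classifier degrade. The dataset, model, and flip fractions below are arbitrary assumptions chosen for illustration; the point is only that enough coordinated false data erodes the accuracy, and hence the perceived usefulness, of whatever model is trained on it.

# Toy sketch of label-flipping as data poisoning (illustrative only; not from the cited paper).
# Requires numpy and scikit-learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary-classification data standing in for user-contributed data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_poisoning(flip_fraction):
    """Train on data in which a given fraction of labels has been flipped."""
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # flip labels 0 <-> 1
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)  # accuracy on clean test labels

for frac in (0.0, 0.1, 0.3, 0.45):
    print(f"poisoned fraction {frac:.2f} -> test accuracy {accuracy_with_poisoning(frac):.3f}")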

Hence, in general, the fix is not exclusively technical: political mobilization is needed to cash in on the contradictions these data-activism interventions bring to light.

Sharp Eyes

An interesting report in Medium (via /.) discusses the PRC’s new pervasive surveillance program, Sharp Eyes. The program, which complements several other mass surveillance initiatives by the Chinese government, such as SkyNet, is aimed especially at rural communities and small towns. With all the caveats related to the fragmentary nature of the information available to outside researchers, it appears that Sharp Eyes’ main characteristic is that it is community-driven: the feeds from CCTV cameras monitoring public spaces are made accessible to individuals in the community, whether at home on their TVs and monitors or through smartphone apps. Hence, local communities become responsible for monitoring themselves (and for denouncing deviants to the authorities).

This outsourcing of social control is clearly a labor-saving initiative, which itself ties into a long-running, classic theme in Chinese governance. It is not hard to see how such a scheme may encourage dynamics of social homogenization and regimentation, and be especially effective against stigmatized minorities. After all, the entire system of Chinese official surveillance is more or less formally linked to the controversial Social Credit System, a scoring of the population for ideological and financial conformity.

However, I wonder whether a community-driven surveillance program, in rendering society more transparent to itself, does not also potentially offer accountability tools to civil society vis-à-vis the government. After all, complete visibility of public space by all members of society can also mean the exposure and documentation of specific public instances of abuse of authority, such as police brutality. Such cases could of course be blacked out of the feeds, but so heavy-handed a tactic would cut into the propaganda value of the transparency initiative and erode public trust in the system. Alternatively, offending material could be removed more seamlessly through deep-fake interventions, but the resources necessary for that level of tampering, including the additional layer of bureaucracy needed to curate live feeds, would seem ultimately self-defeating in terms of the cost-cutting rationale.

In any case, including the monitored public within the monitoring loop (and emphasizing the collective responsibility aspect of the practice over the atomizing, pervasive-suspicion one) promises to create novel practical and theoretical challenges for mass surveillance.

Barlow as Rorschach test

An op-ed by Joshua Benton on the Nieman Lab website marks the first quarter-century of John Perry Barlow’s Declaration of the Independence of Cyberspace.

Unpacking the different facets of Barlow’s personality and worldview goes a long way toward mapping out early internet ideology: most everyone finds parts to admire as well as intimations of disasters to come. The protean nature of the author of the Declaration helps in the process. Was Barlow Dick Cheney’s friend or Ed Snowden’s? Was he a scion of Wyoming cattle ranching royalty or a Grateful Dead lyricist? Was he part of the Davos digerati or a defender of civil rights and founder of the EFF? All of these, of course, and much besides. Undeniably, Barlow had a striking way with words, matched only by a consistent ability to show up “where it’s at” in the prevailing cultural winds of the time (including a penchant for association with the rich and famous).

Benton does a good job highlighting how far removed the techno-utopian promises of the Declaration sound from the current zeitgeist regarding the social effects of information technology. But ultimately we see in Barlow a reflection of our own hopes and fears about digital societies: as I previously argued, there is no rigid and inescapable cause-effect relationship between the ideas of the ’90s and the oligopolies of today. Similarly, a course for future action and engagement can be set without espousing or denouncing the Declaration in its entirety.

Addiction vs. dependency

A long, powerful essay in The Baffler about the new antitrust actions against Big Tech in the US and the parallels being drawn with the tobacco trials of the 1990s. I agree with its core claim: that framing the problem Big Tech poses as one of personal addiction (a position promoted, inter alia, by the documentary The Social Dilemma) minimizes the issue of economic dependency and the power it confers on the gatekeepers of key digital infrastructure. I have argued previously that this is at the heart of popular mistrust of the big platforms. However, the pursuit of the tech giants in court risks being hobbled by the lasting effect of neoliberal thought on antitrust architecture in US jurisprudence and regulation. Concentrating on consumer prices in the short run risks missing the very real ways in which tech companies can exert systemic social power. In their quest to rein in Big Tech, US lawmakers and attorneys will be confronted with much deeper and more systemic political-economy issues. It is unclear whether they will be able to win this general philosophical argument against such powerful special interests.

Schools get into the phone-hacking business

A disturbing piece of reporting from Gizmodo (via /.) on the adoption by many US school districts of digital forensic tools to retrieve content from their students’ mobile devices. Of course, such technology was originally developed as a counter-terrorism tool, and then trickled down to regular domestic law enforcement. As we have remarked previously, schools have recently been on the bleeding edge of the social application of intrusive technology, with all the risks and conflicts it engenders; in this instance, however, we see a particularly egregious confluence of technological blowback (from war in the periphery to everyday life in the metropole) and criminal-justice takeover of mass education (of school-to-prison-pipeline fame).