Yesterday I attended the online launch event for Edgelands, a pop-up institute being incubated at Harvard’s Berkman Klein Center. The Institute’s goal is to study how our social contract is being redrawn, especially in urban areas, as a consequence of technological changes such as pervasive surveillance and of unforeseen crises such as the global pandemic. The Institute’s design is very distinctive: it is time-limited (5 years), radically decentralized, and aims to bridge gaps between perspectives and methodologies as diverse as academic research, public policy, and art. It is also notable for its focus on urban dynamics outside the North-Atlantic space (Beirut, Nairobi, and Medellín are among the pilot cities). Some of its initiatives, from what can be gleaned at the outset, appear a bit whimsical, but it will be interesting to follow the Institute’s development, as a fresh approach to these topics could prove extremely inspiring.
Interesting article in the MIT Tech Review (via /.) detailing research performed at Northwestern University (paper on ArXiv) on how the power of collective action could be leveraged to counter pervasive data collection by internet companies. Three such methods are discussed: data strikes (refusing to use data-invasive services), data poisoning (providing false and misleading data), and conscious data contribution (directing one’s data to privacy-respecting competitors).
Conscious data contribution and data strikes are relatively straightforward Aventine secessions, but depend decisively on the availability of alternative services (or the acceptability of degraded performance for the mobilized users on less-than-perfect substitutes).
The effectiveness of data poisoning, on the other hand, turns on the type of surveillance one is trying to stifle (as I have argued in I labirinti). If material efficacy is at stake, it can be decisive (e.g., faulty info can make a terrorist manhunt fail). Unsurprisingly, this type of strategic disinformation has featured in the plot of many works of fiction, both featuring and not featuring AIs. But if what’s at stake is the perception of efficacy, data poisoning is only an effective counterstrategy inasmuch as it destroys the legitimacy of the decisions made on the basis of the collected data (at what point, for instance, do advertisers stop working with Google because its database is irrevocably compromised?). In some cases of AI/ML adoption, in which the offloading of responsibility and the containment of costs are the foremost goals, there already is very broad tolerance for bias (i.e., faulty training data).
Hence, in general, the fix is not exclusively technical: political mobilization is needed to cash in on the contradictions these data-activism interventions bring to light.
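The data-poisoning dynamic discussed above can be sketched in a toy model (the scenario and all numbers are hypothetical, not drawn from the paper): a platform profiles a user group by averaging the interest scores it collects, and a coordinated minority submitting inverted scores drags the inferred profile toward noise.

```python
import random

# Toy sketch (hypothetical scenario and numbers): a platform profiles a
# user group by averaging the interest scores it collects, then sells
# targeting based on that average.

def inferred_interest(scores):
    """Average interest score the platform infers for the group."""
    return sum(scores) / len(scores)

random.seed(0)
honest = [random.uniform(0.7, 1.0) for _ in range(100)]  # genuine signals

# A data-poisoning campaign: 40% of users report inverted scores (1 - s),
# each individually plausible, but collectively corrupting the aggregate.
poisoned = [1 - s if i < 40 else s for i, s in enumerate(honest)]

print(f"clean profile:    {inferred_interest(honest):.2f}")
print(f"poisoned profile: {inferred_interest(poisoned):.2f}")
```

The poisoned aggregate drifts toward 0.5, i.e., no usable signal; whether that loss of signal matters, per the argument above, depends on whether anyone downstream still needs the data to be accurate.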
An interesting report in Medium (via /.) discusses the PRC’s new pervasive surveillance program, Sharp Eyes. The program, which complements several other mass surveillance initiatives by the Chinese government, such as SkyNet, is aimed especially at rural communities and small towns. With all the caveats related to the fragmentary nature of the information available to outside researchers, it appears that Sharp Eyes’ main characteristic is being community-driven: the feeds from CCTV cameras monitoring public spaces are made accessible to individuals in the community, whether at home on their TVs and monitors or through smartphone apps. Hence, local communities become responsible for monitoring themselves (and for denouncing deviants to the authorities).
This outsourcing of social control is clearly a labor-saving initiative, which itself ties into a long-running, classic theme in Chinese governance. It is not hard to see how such a scheme may encourage dynamics of social homogenization and regimentation, and prove especially effective against stigmatized minorities. After all, the entire system of official Chinese surveillance is more or less formally linked to the controversial Social Credit System, a scoring of the population for ideological and financial conformity.
However, I wonder whether a community-driven surveillance program, in rendering society more transparent to itself, does not also potentially offer accountability tools to civil society vis-à-vis the government. After all, complete visibility of public space by all members of society also can mean exposure and documentation of specific public instances of abuse of authority, such as police brutality. Such cases could of course be blacked out of the feeds, but such a heavy-handed tactic would cut into the propaganda value of the transparency initiative and affect public trust in the system. Alternatively, offending material could be removed more seamlessly through deep fake interventions, but the resources necessary for such a level of tampering, including the additional layer of bureaucracy needed to curate live feeds, would seem ultimately self-defeating in terms of the cost-cutting rationale.
In any case, including the monitored public within the monitoring loop (and emphasizing the collective responsibility aspect of the practice over the atomizing, pervasive-suspicion one) promises to create novel practical and theoretical challenges for mass surveillance.
There were several items in the news recently about Facebook’s dealings with governments around the world. In keeping with the company’s status as a major MNC, these dealings can be seen to amount to the equivalent of a foreign policy, whose complexities and challenges are becoming ever more apparent.
The first data point has to do with the haemorrhage of FB users in Hong Kong. It is interesting to note how this scenario differs from the US one: in both societies we witness massive political polarization, spilling over into confrontation on social media, with duelling requests for adversarial content moderation, banning, and so forth. Hence gatekeepers such as FB are increasingly, forcefully asked to play a referee role. Yet, while in the US it is still conceivably possible to aim for an ‘institutional’ middle ground, in HK the squeeze comes from both sides of the political divide: the pro-China contingent is tempted to secede to mainland-owned social media platforms, while the opponents of the regime are wary of Facebook’s data-collection practices and of the company’s porousness to official requests for potentially incriminating information. The brinkmanship this situation requires may prove beyond the company’s reach.
The second data point derives from Facebook’s recent spat with the Australian authorities over the enactment of a new law on news media royalties. Specifically, it concerns the impact of the short-lived FB news ban on small South Pacific countries whose telecommunications depend on Australia. Several chickens come home to roost here: the lack of national control over cellular and data networks emerges as a key curtailment of sovereignty in today’s world, as do the pernicious, unintended consequences of the absence of net neutrality (citizens of these islands overwhelmingly accessed news through FB because their data plans allowed uncapped surfing on the platform while imposing onerous extra charges for general internet navigation). In this case the company was able to leverage some of its built-in, systemic advantages to obtain a favorable settlement for the time being, at the cost of alerting the general public to its vulnerability.
The third data point is a ProPublica exposé of actions taken by the social media platform against the YPG, a Syrian Kurdish military organization. The geoblocking of the YPG’s page inside Turkey is not the first time the organization (famously, the defenders of Kobane against ISIS) has been sold out: the Trump administration did so in 2018. What is particularly interesting is that FB has a formal method for deciding whether a group belongs on a ‘terrorist’ list, independent of similar blacklisting by the US and other states and supranational bodies. Such certification, however, is subject to the same self-interested, short-termist manipulation seen in other instances of the genre: while the YPG was not so labelled, the ban was approved as being in the best interests of the company, in the face of a potential suspension of its activities throughout Turkey.
These multiple fronts of Facebook’s diplomatic engagement all point to similar conclusions: as a key component of the geopolitical status quo’s establishment, FB is increasingly subject to multiple pressures, which threaten not only its stated company culture and philosophy of libertarian cosmopolitanism but also its long-term profitability. In this phase of its corporate growth cycle, much like MNCs of comparable scale in other industries, the tools for its continued success begin to shift from pure technological and business savvy to lobbying and international dealmaking.
A disturbing piece of reporting from Gizmodo (via /.) on the adoption by many US school districts of digital forensic tools to retrieve content from their students’ mobile devices. Of course, such technology was originally developed as a counter-terrorism tool, and then trickled down to regular domestic law enforcement. As we have remarked previously, schools have recently been on the bleeding edge of the social application of intrusive technology, with all the risks and conflicts it engenders; in this instance, however, we see a particularly egregious confluence of technological blowback (from war in the periphery to everyday life in the metropole) and criminal-justice takeover of mass education (of school-to-prison-pipeline fame).