Tag Archives: Readings

Tropes of the Techlash

A review by Paul Dicken published online a week ago in The New Atlantis is representative of a certain kind of argument in contemporary social critiques of high tech. The piece discusses Ben Shneiderman's book Human-Centered AI, which came out earlier this year from Oxford UP, and it mainly reads as an exposé of a benighted scientism that at best is hopelessly naïve about its potential to effect meaningful emancipatory social change and at worst is disingenuous about the extractive and exploitative agendas that underwrite its deployment.

One would not wish to deny that Shneiderman makes for a good target: computer scientists as a sociological class are hardly more self-reflexive or engagé than any other similarly defined professional group, and popularizing AI-and-management texts seldom present incisive or counterintuitive social commentary. Nonetheless, it is hard to miss a certain symmetry between the attacks on the political self-awareness of the author in question (how could he have missed the damning social implications?) and the paeans to progress through techno-solutionism that characterized public debate on Web 2.0 before the techlash.

The very fact that Dicken reaches back to Charles Babbage as a precursor of contemporary AI research and its dark side should suggest that the entwinement of technological advancement with political economy might be a long-run phenomenon. What is different is that in the present conjuncture, would-be social critics seem to harbor absolutely no faith that the political and social ills upstream from technological development can be righted, and no plan for righting them. New technology changes affordances, and this shift makes certain social dynamics more visible. But in the absence of specifically political work, such visibility is ephemeral and irrelevant. Hence, the exposé of political cluelessness risks becoming the master trope of the techlash: essentially, a declaration of social impotence.

VPN in the Subcontinent

A timely piece in Rest of World covers the Modi administration's tightening of regulations on VPN use in India. In this instance, the enforcement of visibility is carried out at the business level, through a regulatory requirement that VPN providers retain customer data. The new policy may precipitate the exit of certain foreign companies, such as the Switzerland-based Proton VPN, from the Indian market.

A blanket requirement of data collection on VPN use at the source, as in the Indian case, strongly suggests that the underlying goals of the policy are the unfettered operation of mass surveillance and a chilling effect on stigmatized activity online, since more targeted and discriminating technical solutions exist for dealing with specific forms of malfeasance and lawbreaking behind VPNs. In this case (as in the resort to prolonged internet shutdowns), Indian digital policies can be seen to inhabit a troubling hybrid zone, in which a democratic government acts in ways more readily associated with authoritarian regimes. More generally, the hardening of India's IT policymaking, including a muscular assertion of its data sovereignty, cannot easily be disentangled from geopolitical considerations (specifically, concern over Chinese influence), but it has also undeniably benefited the government's domestic agenda and its electoral and interest coalition.

Jailbreaking North of the 38th Parallel

A recent article in Wired (via /.) describes North Korean experiences with jailbreaking smartphones for access to forbidden foreign content. It would appear that the North Korean government’s system for surveilling online activity is much more invasive than its Chinese counterpart, but less technically sophisticated.

Disinformation isn’t Destiny

As the war in Ukraine enters its sixth week, it may prove helpful to look back on an early assessment of the informational sphere of the conflict: the snapshot taken by Maria Giovanna Sessa of EU DisinfoLab on March 14th.

Sessa summed up her findings succinctly:

Strategy-wise, malign actors mainly produce entirely fabricated content, while the most recurrent tactic to disinform is the use of decontexualised photos and videos, followed by content manipulation (doctored image or false subtitles). As evidence of the high level of polarisation, the same narratives have been exploited to serve either pro-Ukrainian or pro-Russian messages.

This general picture, by almost all accounts, largely holds half a month later. The styles of disinformation campaigns have not morphed significantly, although (as Sessa predicted) there has been a shift toward weaponizing the refugee angle of the crisis.

Most observers have been struck overall by the failure of the Russians to replicate their previous successes in the information sphere. The significant resources allotted from the very beginning of the conflict to fact-checking and debunking by a series of actors, public and private, in Western countries are part of the explanation for this outcome. More broadly, however, it may be the case that Russian tactics in this arena have lost the advantage of surprise, so that as the informational sphere becomes more central to strategic power competition, relative capabilities revert to the mean of the general balance of power.

A.utomated I.dentity?

An interesting, thoughtful article by Michelle Santiago Cortés in The Cut last week looks at affective relationships with algorithms and their role in shaping our identities.

Three parts of the analysis stood out to me in particular. The first revolves around how little we typically know about the algorithms we interact with: Cortés' story about

some YouTube alpha male […] out there uploading videos promising straight men advice on how to “hack” the Tinder algorithm to date like kings

is clearly only the tip of a gigantic societal iceberg, a cargo-cult-as-way-of-life involving pretty much everyone in the remotest, most diverse walks of life. The ever-evolving nature of these algorithms compounds the obfuscation, making end-users' strategic attempts, whether aimed at exploitation or at resistance, generally appear puerile.

Second, the clarity with which Cortés encapsulated the main tradeoff in the relationship was truly apt:

[w]e are, to varying degrees, okay with being surveilled as long as we get to feel seen.

The assertion of visibility and assurance of recognition are two of the key assets algorithmic systems offer their users, and their value can hardly be minimized as mere late-consumerist narcissism.

Finally, the comparison between algorithmic portraits of personality and astrology was extremely telling: closing the behavioral loop from algorithmic interaction to the redefinition of one's own identity on the basis of the algorithm's inevitably distorting mirror remains a matter of choice, or rather of a sensibility that can be honed and socialized, about the most empowering and nurturing use of what is ultimately a hermeneutic tool. Of course, such a benign conclusion rests on the limited ambit in which such technologies are applied: music videos, entertainment, dating. As soon as our contemporary astrological devices are put in charge of directing outcomes in the field of political economy and public policy, the moral calculus shifts rapidly.