Societal trust and the pace of AI research

An open letter from the Future of Life Institute exhorts the leading AI labs to enact a six-month moratorium on further experiments with artificial intelligence. The caliber of some of the early signatories guarantees that significant public conversation will ensue. Beyond the predictable hype, it is worth considering this intervention in the AI ethics and politics debate both on its merits and for what it portends more broadly for the field.

First off, the technicalities. The text identifies the scarcity of compute power as the key chokepoint in AI development to be exploited in the interests of the moratorium. Truly, we are at the antipodes of the decentralized mode of innovation that drove, for instance, the original development of the commercial and personal web in the 1990s. However, it remains to be seen whether the compute power barrier has winnowed the field down into enough of an oligopoly for the proposed moratorium to have any chance of application. A closely related point is verifiability: even if there were few enough players to enable a coordination regime to emerge and there were virtually universal buy-in, it would still be necessary to enact some form of verification in order to police the system and ensure nobody is cheating. By comparison, the nuclear non-proliferation regime enjoys vast buy-in and plentiful dedicated enforcement resources (both at the nation-state and at the international-organization level) and yet is far from perfect or fool-proof.

Moving to broader strategic issues, it bears considering whether the proposed moratorium, which would necessarily have to be global in scope, is in any way feasible in the current geopolitical climate. After all, one of the classic formulations of technological determinism relies on Great Power competition in military and dual-use applications. It would not be outlandish to suggest that we already are in a phase of strategic confrontation, between the United States and China among others, where the speed of tech change has become a dependent variable.

Perhaps, however, it is best to consider the second-order effects of the letter as the crux of the matter. The moratorium is extremely unlikely to come about, and would be highly unwieldy to manage if it did (the tell, perhaps, is the mismatch between the apocalyptic tone in which generative AI is described and the very short time requested to prepare for its onslaught). Nonetheless, such a proposal shifts the debate. It centers AI as the future technology to be grappled with socially, presents it as largely inevitable, and lays the responsibility for dealing with its ills at the foot of society as a whole.

Most strikingly, though, this intervention in public discourse rests on very tenuous legitimacy grounds for the various actors concerned, beginning with the drafters and signatories of the letter. Is the public supposed to endorse their analysis and support their prescriptions on the basis of their technical expertise? Or their impartiality? Or their track record of civic-mindedness? Or their expression of preferences held by large numbers of people? All these justifications are problematic in their own way. In a low-trust environment, the authoritativeness of a public statement conducted in this fashion is bound to become itself a target of controversy.

Future publishing on disinformation

My chapter abstract entitled “Censorship Choices and the Legitimacy Challenge: Leveraging Institutional Trustworthiness in Crisis Situations” has been accepted for publication in the volume Defending Democracy in the Digital Age, edited by Scott Shackelford (of Indiana University) et al., to appear with Cambridge UP in 2024.

In other news, I am writing a book review of the very interesting grassroots study by Francesca Tripodi entitled The Propagandists’ Playbook: How Conservative Elites Manipulate Search and Threaten Democracy (Yale UP) for the Italian journal Etnografia e Ricerca Qualitativa.

Independent technology research

This looks like a very worthwhile coalition, advocating for open access to aggregate social media data for research purposes (not necessarily only for academics and journalists), while emphasizing a duty of independence in research and upholding standards and oversight. The coalition is US-focused, but its efforts dovetail with draft legislation currently making its way through the European institutions that seeks to guarantee similar rights of access. Inasmuch as large platforms lay a claim to embodying the contemporary public sphere, such calls for openness will only persist and intensify.

Excess skepticism and the media trust deficit

An interesting presentation at the MISDOOM 2022 conference earlier this week: Sacha Altay (Oxford) on the effectiveness of interventions against misinformation [pre-print here].

Altay lays out some established facts in the academic literature that at times get lost in the policy debate. The main one is that explicit disinformation, i.e. unreliable news such as that generated on propaganda websites that run coordinated influence operations, represents a minuscule segment of everyday people’s media consumption; however, the public has been induced to be indiscriminately skeptical of all news, and therefore doubts the validity even of bona fide information.

Thus, it would appear that a policy intervention aimed at explaining the verification techniques employed by professional journalists to vet reliable information should be more effective, all else being equal, than one that exposes the workings of purposeful disinformation. On the other hand, as Altay recognizes, misinformation is, at heart, a mere symptom of a deeper polarization, an attitude of political antagonism in search of content to validate it. But while such active seeking of misinformation may be fringe, spontaneous, and not particularly dangerous for democracy, generalized excess skepticism and the ensuing media trust deficit are much more serious wins for the enemies of open public discourse.

Conclaves and contention

Michael Den Tandt suggested recently in a CIGI blog post that the international moment calls for a high-level mobilization of the best minds in a multilateral venture aimed at guaranteeing a stable information regime at the global level. The historical parallel Den Tandt mentions is the Bretton Woods conference, which set up the postwar international economic order at the end of the Second World War. It is interesting to observe, in passing, that contemporary discussions of bold internationalist projects tend to use economic regime creation as an archetype, rather than the arguably more universalist political settlements that bracketed them (one rarely hears calls for a new Dumbarton Oaks…).

There are, prima facie, two fundamental problems with the proposal. The first is that the call for a new round of regime building misreads the historical moment: Bretton Woods was made possible by the imminence of Allied victory in WWII, while our own time is seeing the rise, not (yet) the dénouement, of fundamental antagonisms and Great Power rivalries. The type of lasting settlement Den Tandt envisions will be up to the victor in these struggles. The second problem concerns the balance of power not among States but between the public and the private sector. The framework of international economic development set up by the Bretton Woods system (which John Gerard Ruggie described as "embedded liberalism") was decisively smashed by the currency crises of the early 1970s. Many see that passage as the birth of neoliberalism, a new paradigm of public policy that has become hegemonic in the past half century. Under the current dispensation, the State, even the internationally hegemonic State, does not possess the ability to guide macroeconomic development, or to steer technological innovation decisively. Even if the political will for an international consensus at the highest level could be found, it is highly improbable that it could be imposed on a recalcitrant global market.

The dysfunctions in the information sphere that Den Tandt decries are undeniable; such problems, however, will not be resolved by simply conjuring into existence a global regulatory regime for which the historical preconditions do not currently exist.