Tag Archives: Public interest

Market concentration woes

Just followed the Medium book launch event for the print edition of Cory Doctorow’s latest, How to Destroy Surveillance Capitalism (free online version here). The pamphlet, from August 2020, was originally intended as a rebuttal of Shoshana Zuboff’s The Age of Surveillance Capitalism [v. supra]. The main claim is that the political consequences of surveillance capitalism were not, as Zuboff maintains, unintended, but rather are central and systemic to the functioning of the whole. Hence, proposed solutions cannot be limited to the technological or economic sphere, but must be political as well. Specifically, Doctorow identifies trust-busting as the main policy tool for reining in Big Tech.

With the hindsight of the 2020 election cycle and its aftermath, two points Doctorow made in the presentation stand out most vividly. The first is the link between market power and the devaluing of expert opinion that is a necessary forerunner of disinformation. The argument is that “monopolies turn truth-seeking operations [such as parliamentary committee hearings, expert testimony in court, and so forth] into auctions” (where the deepest pockets buy the most favorable advice), thereby completely discrediting their information content for the general public. The second point is that almost all of the grievances currently voiced about Section 230 (the liability shield for online publishers of third-party materials) are at some level grievances about monopoly power.

Addiction vs. dependency

A long, powerful essay in The Baffler about the new antitrust actions against Big Tech in the US and the parallels being drawn with the tobacco trials of the 1990s. I agree with its core claim, that equating the problem Big Tech poses with one of personal addiction (a position promoted, inter alia, by the documentary The Social Dilemma) minimizes the issue of economic dependency and the power it confers on the gatekeepers of key digital infrastructure. I have argued previously that this is at the heart of popular mistrust of the big platforms. However, the pursuit of the tech giants in court risks being hobbled by the lasting effect of neoliberal thought on antitrust architecture in US jurisprudence and regulation. Concentrating on consumer prices in the short run risks missing the very real ways in which tech companies can exert systemic social power. In their quest to rein in Big Tech, US lawmakers and attorneys will be confronted with much deeper and more systemic political economy issues. It is unclear whether they will be able to win this general philosophical argument against such powerful special interests.

Babies and bathwater

Just attended an EFF-run ‘Fireside Chat’ with US Senator Ron Wyden (D-OR) on Section 230. As one of the original drafters of the legislation, the Senator was eager to point out the core values it was meant to shield from legal challenge, permitting the full deployment of constitutionally-protected speech online without imposing an undue burden of liability on those hosting such speech.

The Electronic Frontier Foundation and other digital rights organizations find themselves in a complicated political position, for, having spoken out against the risks and abuses originating from Big Tech long before there was widespread public consciousness of any problem, they now have to push against a bipartisan current that has crystallized in opposition to Section 230. Even some generalist news outlets have seized on the matter, giving scant play to the values and legitimate interests the law was originally intended to safeguard.

It seems fairly clear that mainstream political discourse has been extremely superficial in considering key aspects of the problem: Section 230 has become a symbol rather than a mere tool of governance. It may also be the case that the wide bipartisan consensus on its ills is in fact illusory, simply a placeholder for incompatible views on how to move beyond the status quo, with the most likely outcome being paralysis of any reform effort. However, there is a real risk that the imperative to do something will lead to the passage of hasty measures that cause lasting damage.

In a way, the present situation is the poisoned fruit of a narrative that linked the unfettered expansion of the Big Tech giants over the past decade to the libertarian and countercultural ideals of the early internet: when the former came to be perceived as intolerable, the latter were seen at best as acceptable collateral damage. Most of the popular animus against Section 230 that politicians are attempting to channel stems from resentment at the power of social media platforms and digital gatekeepers. Therefore (and although there may well be a case for the need to curb mindless amplification of certain types of speech online), perhaps antitrust action (in Congress or in the courts) is better suited to delivering the results the public seeks. Comparative policymaking will also be extremely relevant, as the European Union pursues its own aggressive agenda on content moderation, permissible speech, and monopoly power.

Victory lap for the EIP

Today, I followed the webcast featuring the initial wrap-up of the Election Integrity Partnership (I have discussed their work before). All the principal institutions composing the partnership (Stanford, University of Washington, Graphika, and the Atlantic Council) were represented on the panel. It was, in many respects, a victory lap, given the consensus view that foreign disinformation played a marginal role in the election compared to 2016, thanks in part to proactive engagement on the part of the large internet companies, facilitated by projects such as the EIP.

In describing the 2020 disinformation ecosystem, Alex Stamos (Stanford Internet Observatory) characterized it as mostly home-grown, non-covert, and reliant on major influencers, which in turn forced platforms to transform their content moderation activities into a sort of editorial policy (I have remarked on this trend before). Also, for all the focus on social media, TV was seen to play a very important role, especially in building background narratives over the long run, day after day.

Camille François (Graphika) remarked on the importance of alternative platforms, and indeed the trend has been for an expansion of political discourse to all manner of fora previously insulated from it (on this, more soon).

Of the disinformation memes that made it into the mainstream conversation (Stamos mentioned the example of Sharpie-gate), certain characteristics stand out: they tended to appeal to rival technical authority, so as to project an expert-vs-expert dissonance; they were made sticky by official endorsement, which turned debunking into a partisan enterprise. However, their very predictability rendered the task of limiting their spread easier for the platforms. Kate Starbird (UW) summed it up: if the story in 2016 was coordinated inauthentic foreign action, in 2020 it was authentic, loosely-coordinated domestic action, and the major institutional players (at least on the tech infrastructure side) were ready to handle it.

It makes sense for the EIP to celebrate how disinformation efforts were stymied in 2020 (as Graham Brookie of the Atlantic Council put it, it was a win for defence in the ongoing arms race on foreign interference), but it is not hard to see how such an assessment masks a bigger crisis, which would have been evident had there been a different victor. Overall trust in the informational ecosystem has been dealt a further, massive blow by the election, and hyper-polarized post-truth politics are nowhere near over. Indeed, attempts currently underway to fashion a Dolchstoßlegende are liable to have very significant systemic effects going forward. The very narrow focus on disinformation the EIP pursued may have paid off for now, but it is the larger picture of entrenched public distrust that guarantees that these problems will persist into the future.

Election Integrity Partnership

Yesterday, I attended the webinar for the public launch of the Election Integrity Partnership between Stanford, the University of Washington, Graphika, and the Atlantic Council. Quite serious and professional public-interest work being done, and clearly timely.

The definition of electoral disinformation the Partnership adopted as their operational target seemed quite well-tailored and manageably factual (e.g. information such as voting hours and poll locations, the presence or absence of massive queues, mis-documented instances of fraud…).

What struck me as remarkable, however, is that there would be plausible use cases in which this type of factual information would be sought first and foremost, or with greater assurance and trust, in random posts on social media platforms: this really speaks to a gigantic lack of authoritativeness or communication ability on the part of election officials.