My forthcoming piece on Ethan Zuckerman’s Mistrust: Why Losing Faith in Institutions Provides the Tools to Transform Them for the Italian Political Science Review.
Belatedly finished reading James Bridle’s book New Dark Age: Technology and the End of the Future (Verso, 2018). As the title suggests, the text is systemically pessimistic about the effect of new technologies on the sustainability of human wellbeing. Although the overall structure of the argument is at times clouded by sudden twists in narrative and the sheer variety of anecdotes, there are many hidden gems. I very much enjoyed the idea, borrowed from Timothy Morton, of a hyperobject:
a thing that surrounds us, envelops and entangles us, but that is literally too big to see in its entirety. Mostly, we perceive hyperobjects through their influence on other things […] Because they are so close and yet so hard to see, they defy our ability to describe them rationally, and to master or overcome them in any traditional sense. Climate change is a hyperobject, but so is nuclear radiation, evolution, and the internet.
One of the main characteristics of hyperobjects is that we only ever perceive their imprints on other things, and thus to model the hyperobject requires vast amounts of computation. It can only be appreciated at the network level, made sensible through vast distributed systems of sensors, exabytes of data and computation, performed in time as well as space. Scientific record keeping thus becomes a form of extrasensory perception: a networked, communal, time-travelling knowledge making. (73)
Bridle has some thought-provoking ideas about possible responses to the dehumanizing forces of automation and algorithmic sorting, as well. Particularly captivating was his description of Garry Kasparov’s reaction to defeat at the hands of IBM’s Deep Blue in 1997: the grandmaster proposed ‘Advanced Chess’ tournaments, pitting pairs of human and computer players against each other, since such a pairing is superior to both human and machine players on their own. This type of ‘centaur strategy’ is not simply a winning one: it may, Bridle suggests, hold ethical insights on pathways of human adaptation to an era of ubiquitous computation.
How is the influencer ecosystem evolving? Opposing forces are in play.
On the one hand, a NYT story describes symptoms of consolidation in the large-organic-online-following-to-brand-ambassadorship pathway. As influencing becomes a day job, stably inserted in the consumer-brand/advertising nexus, the informal, supposedly unmediated communication typical of social media quickly becomes unwieldy for business negotiations: at scale, professional intermediaries are necessary to manage transactions between the holders of social media capital/cred and the business interests wishing to leverage it. A rather more disenchanted and normalized workaday image of influencer life thereby emerges.
On the other hand, a Vulture profile of an influencer whose personal magnetism is matched only by her ability to offend (warning: NSFW) signals that normalization may ultimately be self-defeating. The intense and disturbing personal trajectory of Trisha Paytas suggests that the taming of internet celebrity for commercial purposes is by definition a Sisyphean endeavor, for the currency involved is authenticity, whose seal of approval lies outside market transactions. The biggest crowds on the internet are still drawn by titillation and outrage, although their purveyors may not thereby be suited to sell much of anything, except themselves.
Yesterday, I attended an Electronic Frontier Foundation webinar in the ‘At Home with EFF’ series on Twitch: the title was ‘Online Censorship Beyond Trump and Parler’. Two panels hosted several veterans and heavyweights in the content moderation/trust & safety field, followed by a wrap-up session presenting EFF positions on the topics under discussion.
Several interesting points emerged with regard to the interplay of market concentration, free speech concerns, and the incentives inherent in the dominant social media business model. The panelists reflected on the long run, identifying recurrent patterns, such as the economic imperative driving infrastructure companies from being mere conduits of information to becoming active amplifiers, hence inevitably getting embroiled in moderation. While neutrality and non-interference may be the preferred ideological stance for tech companies, at least publicly, editorial decisions are made a necessity by the prevailing monetization model, the market for attention and engagement.
Perhaps the most interesting insight, however, emerged from the discussion of the intertwining of free speech online with the way in which such speech is (or is not) allowed to make itself financially sustainable. Specifically, the case was made for the importance of the myriad choke points up and down the stack where those who wish to silence speech can exert pressure: if cloud computing cannot be denied to a platform in the name of anti-discrimination, should credit card verification or merch, for instance, also be protected categories?
All in all, nothing shockingly novel; it is worth being reminded, however, that a wealth of experience in the field has already accrued over the years, so that single companies (and legislators, academics, the press, etc.) need not reinvent the wheel each time trust & safety or content moderation are on the agenda.
Just attended an EFF-run ‘Fireside Chat’ with US Senator Ron Wyden (D-OR) on Section 230. As one of the original drafters of the legislation, the Senator was eager to point out the core values it was meant to shield from legal challenge, permitting the full deployment of constitutionally protected speech online without imposing an undue burden of liability on those hosting such speech.
The Electronic Frontier Foundation and other digital rights organizations find themselves in a complicated political position, for, having spoken out against the risks and abuses originating from Big Tech long before there was widespread public consciousness of any problem, they now have to push against a bipartisan current that has crystallized in opposition to Section 230. Even some generalist news outlets have seized on the matter, giving scant play to the values and legitimate interests the law was originally intended to safeguard.
It seems fairly clear that mainstream political discourse has been extremely superficial in considering key aspects of the problem: Section 230 has become a symbol rather than a mere tool of governance. It may also be the case that the wide bipartisan consensus on its ills is in fact illusory, simply serving as a placeholder for incompatible views on how to move beyond the status quo, with the most likely outcome being paralysis of any reform effort. However, the risk that the imperative to do something will lead to the passage of hasty measures with lasting damage is real.
In a way, the present situation is the poisoned fruit of a narrative that linked the unfettered expansion of the Big Tech giants over the past decade to the libertarian and countercultural ideals of the early internet: when the former came to be perceived as intolerable, the latter were seen at best as acceptable collateral damage. Most of the popular animus against Section 230 that politicians are attempting to channel stems from resentment at the power of social media platforms and digital gatekeepers. Therefore (and although there may well be a case for the need to curb mindless amplification of certain types of speech online), perhaps antitrust action (in Congress or in the courts) is more suitable for obtaining the results the public seeks. Comparative policymaking will also be extremely relevant, as the European Union pursues its own aggressive agenda on content moderation, permissible speech, and monopoly power.