My forthcoming piece on Ethan Zuckerman’s Mistrust: Why Losing Faith in Institutions Provides the Tools to Transform Them for the Italian Political Science Review.
Tag Archives: Content moderation
Limits of trustbuilding as policy objective
Yesterday, I attended a virtual event hosted by CIGI and ISPI entitled “Digital Technologies: Building Global Trust”. Some interesting points raised by the panel: the focus on datafication as the central aspect of the digital transformation, and the consequent need to concentrate on the norms, institutions, and emerging professions surrounding the practice of data (re-)use [Stefaan Verhulst, GovLab]; the importance of underlying human connections and behaviors as necessary trust markers [Andrew Wyckoff, OECD]; the distinction between content, data, competition, and physical infrastructure as flashpoints for trust in the technology sphere [Heidi Tworek, UBC]. Also, I learned about the OECD AI Principles (2019), which I had not run across before.
While the breadth of different sectoral interests and use-cases considered by the panel was significant, the framework for analysis (actionable policy solutions to boost trust) ended up being rather limiting. For instance, communal distrust of dominant narratives was considered only from the perspective of deficits of inclusivity (on the part of the authorities) or of digital literacy (on the part of the distrusters). Technical policy fixes can be a reductive lens through which to view the problem of lack of trust: such an approach misses both the fundamental compulsion to trust that typically underlies the debate and the performative effects sought by public manifestations of distrust.
Bridle’s vision
Belatedly finished reading James Bridle’s book New Dark Age: Technology and the End of the Future (Verso, 2018). As the title suggests, the text is systemically pessimistic about the effect of new technologies on the sustainability of human wellbeing. Although the overall structure of the argument is at times clouded over by sudden twists in narrative and the sheer variety of anecdotes, there are many hidden gems. I very much enjoyed the idea, borrowed from Timothy Morton, of a hyperobject:
a thing that surrounds us, envelops and entangles us, but that is literally too big to see in its entirety. Mostly, we perceive hyperobjects through their influence on other things […] Because they are so close and yet so hard to see, they defy our ability to describe them rationally, and to master or overcome them in any traditional sense. Climate change is a hyperobject, but so is nuclear radiation, evolution, and the internet.
One of the main characteristics of hyperobjects is that we only ever perceive their imprints on other things, and thus to model the hyperobject requires vast amounts of computation. It can only be appreciated at the network level, made sensible through vast distributed systems of sensors, exabytes of data and computation, performed in time as well as space. Scientific record keeping thus becomes a form of extrasensory perception: a networked, communal, time-travelling knowledge making. (73)
Bridle has some thought-provoking ideas about possible responses to the dehumanizing forces of automation and algorithmic sorting, as well. Particularly captivating was his description of Garry Kasparov’s reaction to defeat at the hands of IBM’s Deep Blue in 1997: the grandmaster proposed ‘Advanced Chess’ tournaments, pitting pairs of human and computer players against one another, since such a pairing is superior to both human and machine players on their own. This type of ‘centaur strategy’ is not simply a winning one: it may, Bridle suggests, hold ethical insights on pathways of human adaptation to an era of ubiquitous computation.
FB foreign policy
There were several items in the news recently about Facebook’s dealings with governments around the world. In keeping with the company’s status as a major MNC, these dealings can be seen to amount to the equivalent of a foreign policy, whose complexities and challenges are becoming ever more apparent.
The first data point has to do with the haemorrhage of FB users in Hong Kong. It is interesting to note how this scenario differs from the US one: in both societies we witness massive political polarization, spilling out into confrontation on social media, with duelling requests for adversarial content moderation, banning, and so forth. Hence, gatekeepers such as FB are increasingly and forcefully called upon to play a referee role. Yet, while in the US it is still conceivably possible to aim for an ‘institutional’ middle ground, in HK the squeeze is on both sides of the political divide: the pro-China contingent is tempted to secede to mainland-owned social media platforms, while the opponents of the regime are wary of Facebook’s data-collecting practices and of the company’s porousness to official requests for potentially incriminating information. The type of brinkmanship required in this situation may prove beyond the company’s reach.
The second data point derives from Facebook’s recent spat with Australian authorities over the enactment of a new law on news media royalties. Specifically, it concerns the impact of the short-lived FB news ban on small countries in the South Pacific whose telecommunications depend on Australia. Several chickens come home to roost here: the lack of national control over cellular and data networks emerges as a key curtailment of sovereignty in today’s world, but so do the pernicious, unintended consequences of the absence of net neutrality (citizens of these islands overwhelmingly accessed news through FB because their data plans allowed non-capped surfing on the platform while imposing onerous extra charges for general internet navigation). In this case the company was able to leverage some of its built-in, systemic advantages to obtain a favorable settlement for the time being, at the cost of alerting the general public to this vulnerability.
The third data point is an exposé by ProPublica of actions taken by the social media platform against the YPG, a Syrian Kurdish military organization. The geoblocking of the YPG’s page inside Turkey is not the first time the organization (famously, the defender of Kobane against ISIS) has been sold out: the Trump administration did so in 2018. What is particularly interesting is that FB maintains a formal method for evaluating whether groups should be included on a ‘terrorist’ list, independent of similar blacklisting by the US and other states and supranational bodies. Such certification, however, is subject to the same self-interested, short-term manipulation seen in other instances of the genre: although the YPG was not so labelled, the ban was approved as being in the best interests of the company, in the face of a potential suspension of its activities throughout Turkey.
These multiple fronts of Facebook’s diplomatic engagement all point to similar conclusions: as a key component of the geopolitical establishment, FB is increasingly subject to multiple pressures that threaten not only its stated company culture and philosophy of libertarian cosmopolitanism, but also its long-term profitability. In this phase of its corporate growth cycle, much like MNCs of comparable scale in other industries, the tools for its continued success begin to shift from pure technological and business savvy to lobbying and international dealmaking.
Free speech and monetization
Yesterday, I attended an Electronic Frontier Foundation webinar in the ‘At Home with EFF’ series on Twitch: the title was ‘Online Censorship Beyond Trump and Parler’. Two panels featured several veterans and heavyweights of the content moderation/trust & safety field, followed by a wrap-up session presenting EFF positions on the topics under discussion.
Several interesting points emerged with regard to the interplay of market concentration, free speech concerns, and the incentives inherent in the dominant social media business model. The panelists took the long view, identifying recurrent patterns, such as the economic imperative that drives infrastructure companies from being mere conduits of information to becoming active amplifiers, hence inevitably embroiling them in moderation. While neutrality and non-interference may be the preferred ideological stance for tech companies, at least publicly, editorial decisions are made a necessity by the prevailing monetization model: the market for attention and engagement.
Perhaps the most interesting insight, however, emerged from the discussion of the intertwining of free speech online with the ways in which such speech is (or is not) allowed to make itself financially sustainable. Specifically, the case was made for the importance of the myriad choke points up and down the stack where those who wish to silence speech can exert pressure: if cloud computing cannot be denied to a platform in the name of anti-discrimination, should credit card verification or merchandising, for instance, also be protected categories?
All in all, nothing shockingly novel; it is worth being reminded, however, that a wealth of experience in the field has already accrued over the years, so that single companies (and legislators, academics, the press, etc.) need not reinvent the wheel each time trust & safety or content moderation are on the agenda.