Category Archives: Trustworthiness

I managed to catch a screening of the new Shalini Kantayya documentary, Coded Bias, through EDRi. It tells the story of Joy Buolamwini’s discovery of systematic discrepancies in the performance of algorithms across races and genders. The tone was lively and accessible, with a good tempo, and the cast of characters did a good job of showcasing a cross-section of female voices in the tech policy space. It was particularly good to see several authors who appear on my syllabus, such as Cathy O’Neil, Zeynep Tufekci, and Virginia Eubanks.
Trust as course material
My course at the University of Bologna with the new material on trust and mistrust in digital societies is now live! Check out the full syllabus here.
Free speech and monetization
Yesterday, I attended an Electronic Frontier Foundation webinar in the ‘At Home with EFF’ series on Twitch: the title was ‘Online Censorship Beyond Trump and Parler’. Two panels hosted several veterans and heavyweights in the content moderation/trust & safety field, followed by a wrap-up session presenting EFF positions on the topics under discussion.
Several interesting points emerged with regard to the interplay of market concentration, free speech concerns, and the incentives inherent in the dominant social media business model. The panelists reflected on the long run, identifying recurrent patterns, such as the economic imperative driving infrastructure companies from being mere conduits of information to becoming active amplifiers, which inevitably embroils them in moderation. While neutrality and non-interference may be the preferred public stance of tech companies, the prevailing monetization model, a market for attention and engagement, makes editorial decisions a necessity.
Perhaps the most interesting insight, however, emerged from the discussion of the intertwining of free speech online with the way in which such speech is (or is not) allowed to make itself financially sustainable. Specifically, the case was made for the importance of the myriad choke points up and down the stack where those who wish to silence speech can exert pressure: if cloud computing cannot be denied to a platform in the name of anti-discrimination, should credit card verification or merch, for instance, also be protected categories?
All in all, nothing shockingly novel; it is worth being reminded, however, that a wealth of experience in the field has already accrued over the years, so that individual companies (and legislators, academics, the press, etc.) need not reinvent the wheel each time trust & safety or content moderation is on the agenda.
Media manipulation convergence
Adam Satariano in the NYT reports on the latest instance of platform manipulation, this time by Chinese tech giant Huawei against unfavorable 5G legislation being considered in Belgium. There’s nothing particularly novel about the individual pieces of the process: paid expert endorsement, amplified on social media by coordinated fake profiles, with the resultant appearance of virality adduced by the company as a sign of support in public opinion at large. If anything, it appears to have been rather crudely executed, leading to a fairly easy discovery by Graphika: from a pure PR cost-benefit standpoint, the blowback from the unmasking of this operation did much more damage to Huawei’s image than any benefit that might have accrued to the company had it not been exposed. The main takeaway from the story, however, is that it adds yet another data point to the ongoing convergence between traditional government-sponsored influence operations and corporate astroturfing ventures. Their questionable effectiveness notwithstanding, these sorts of interventions are becoming default, mainstream tools in the arsenal of all PR shops, whatever their principals’ aims. The fact that they also tend to erode an already fragile base of public trust suggests that at the aggregate level this may be a negative-sum game.
Market concentration woes
I just followed the Medium book launch event for the print edition of Cory Doctorow’s latest, How to Destroy Surveillance Capitalism (free online version here). The pamphlet, from August 2020, was originally intended as a rebuttal of Shoshana Zuboff’s The Age of Surveillance Capitalism [see above]. The main claim is that the political consequences of surveillance capitalism were not, as Zuboff maintains, unintended, but rather are central and systemic to the functioning of the whole. Hence, proposed solutions cannot be limited to the technological or economic sphere, but must be political as well. Specifically, Doctorow identifies trust-busting as the main policy tool for reining in Big Tech.
With the hindsight of the 2020 election cycle and its aftermath, two points Doctorow made in the presentation stand out most vividly. The first is the link between market power and the devaluing of expert opinion that is a necessary forerunner of disinformation. The argument is that “monopolies turn truth-seeking operations [such as parliamentary committee hearings, expert testimony in court, and so forth] into auctions” (where the deepest pockets buy the most favorable advice), thereby completely discrediting their information content for the general public. The second point is that almost all of the grievances currently voiced about Section 230 (the liability shield for online publishers of third-party materials) are at some level grievances about monopoly power.