Yesterday I attended the online launch event for Edgelands, a pop-up institute being incubated at Harvard’s Berkman Klein Center. The Institute’s goal is to study how our social contract is being redrawn, especially in urban areas, as a consequence of technological changes such as pervasive surveillance and of unforeseen crises such as the global pandemic. The Institute’s design is very distinctive: it is time-limited (5 years), radically decentralized, and aims to bridge gaps between perspectives and methodologies as diverse as academic research, public policy, and art. It is also notable for its focus on urban dynamics outside the North-Atlantic space (Beirut, Nairobi, and Medellín are among the pilot cities). Some of its initiatives, from what can be gleaned at the outset, appear a bit whimsical, but it will be interesting to follow the Institute’s development, as a fresh approach to these topics could prove extremely inspiring.
Coded Bias
I managed to catch a screening of the new Shalini Kantayya documentary, Coded Bias, through EDRi. It tells the story of Joy Buolamwini‘s discovery of systematic discrepancies in the performance of algorithms across races and genders. The tone was lively and accessible, with a good tempo, and the cast of characters did a good job of showcasing a cross-section of female voices in the tech policy space. It was particularly good to see several authors who appear on my syllabus, such as Cathy O’Neil, Zeynep Tufekci, and Virginia Eubanks.
Trust as course material
My course at the University of Bologna, featuring new material on trust and mistrust in digital societies, is now live! Check out the full syllabus here.
Free speech and monetization
Yesterday, I attended an Electronic Frontier Foundation webinar in the ‘At Home with EFF’ series on Twitch: the title was ‘Online Censorship Beyond Trump and Parler’. Two panels hosted several veterans and heavyweights in the content moderation/trust & safety field, followed by a wrap-up session presenting EFF positions on the topics under discussion.
Several interesting points emerged with regard to the interplay of market concentration, free speech concerns, and the incentives inherent in the dominant social media business model. The panelists reflected on the long run, identifying recurrent patterns, such as the economic imperative driving infrastructure companies from being mere conduits of information to becoming active amplifiers, hence inevitably getting embroiled in moderation. While neutrality and non-interference may be the preferred ideological stance for tech companies, at least publicly, editorial decisions are made a necessity by the prevailing monetization model, the market for attention and engagement.
Perhaps the most interesting insight, however, emerged from the discussion of the intertwining of free speech online with the way in which such speech is (or is not) allowed to make itself financially sustainable. Specifically, the case was made for the importance of the myriad choke points up and down the stack where those who wish to silence speech can exert pressure: if cloud computing cannot be denied to a platform in the name of anti-discrimination, should credit card verification or merch, for instance, also be protected categories?
All in all, nothing shockingly novel; it is worth being reminded, however, that a wealth of experience in the field has already accrued over the years, so that single companies (and legislators, academics, the press, etc.) need not reinvent the wheel each time trust & safety or content moderation are on the agenda.
Market concentration woes
Just followed the Medium book launch event for the print edition of Cory Doctorow’s latest, How to Destroy Surveillance Capitalism (free online version here). The pamphlet, from August 2020, was originally intended as a rebuttal of Shoshana Zuboff’s The Age of Surveillance Capitalism [v. supra]. The main claim is that the political consequences of surveillance capitalism were not, as Zuboff maintains, unintended, but rather are central and systemic to the functioning of the whole. Hence, proposed solutions cannot be limited to the technological or economic sphere, but must be political as well. Specifically, Doctorow identifies trust-busting as the main policy tool for reining in Big Tech.
With the hindsight of the 2020 election cycle and its aftermath, two points Doctorow made in the presentation stand out most vividly. The first is the link between market power and the devaluing of expert opinion that is a necessary forerunner of disinformation. The argument is that “monopolies turn truth-seeking operations [such as parliamentary committee hearings, expert testimony in court, and so forth] into auctions” (where the deepest pockets buy the most favorable advice), thereby completely discrediting their information content in the eyes of the general public. The second point is that almost all of the grievances currently voiced about Section 230 (the provision shielding online platforms from liability for third-party content) are at some level grievances about monopoly power.