A study (PDF) by a team led by Sean Aday at the George Washington University School of Media and Public Affairs (commissioned by the Hewlett Foundation) sheds light on the improving quality of the coverage of cybersecurity incidents in mainstream US media. Ever since 2014, cyber stories in the news have been moving steadily away from the sensationalist hack-and-attack template of yore toward a more nuanced description of the context, the constraints of the cyber ecosystem, the various actors’ motivations, and the impact of incidents on the everyday lives of ordinary citizens.
The report shows how an understanding of the mainstream importance of cyber events has progressively percolated into newsrooms across the country over the past half-decade, leading to a broader recognition of the substantive issues at play in this field. An interesting incidental finding is that, over the course of this same period of time, coverage of the cyber beat has focused critical attention not only on the ‘usual suspects’ (Russia, China, shadowy hacker groups) but also, increasingly, on big tech companies themselves: an aspect of this growing sophistication of coverage is a foregrounding of the crucial role platform companies play as gatekeepers of our digital lives.
An extremely interesting series of talks hosted by the Digital Freedom Fund: the automation of welfare-system decisions is where the neoliberal agenda and digitalization intersect in the most socially explosive fashion. All six events look good, but I am particularly looking forward to the discussion of the Dutch System Risk Indication (SyRI) scandal on Oct. 27th. More info and free registration on the DFF’s website.
An item that recently appeared on NBC News (via /.) graphically illustrates the pervasiveness of the problem of trust across organizations, cultures, and value systems. It also speaks to the routinization of ransomware extortion and other forms of cybercrime as none-too-glamorous career paths, engendering their own disgruntled and underpaid line workers.
Yesterday, I attended a virtual event hosted by CIGI and ISPI entitled “Digital Technologies: Building Global Trust”. Some interesting points raised by the panel: the focus on datafication as the central aspect of the digital transformation, and the consequent need to concentrate on the norms, institutions, and emerging professions surrounding the practice of data (re-)use [Stefaan Verhulst, GovLab]; the importance of underlying human connections and behaviors as necessary trust markers [Andrew Wyckoff, OECD]; the distinction between content, data, competition, and physical infrastructure as flashpoints for trust in the technology sphere [Heidi Tworek, UBC]. Also, I learned about the OECD AI Principles (2019), which I had not run across before.
While the breadth of sectoral interests and use-cases considered by the panel was significant, the framework for analysis (actionable policy solutions to boost trust) ended up being rather limiting. For instance, communal distrust of dominant narratives was considered only from the perspective of deficits of inclusivity (on the part of the authorities) or of digital literacy (on the part of the distrusters). Technical and policy fixes can be a reductive lens through which to view the problem of lack of trust: such an approach misses both the fundamental compulsion to trust that typically underlies the debate and the performative effects sought by public manifestations of distrust.