Tag Archives: Cybersecurity

Public red-teaming and trust

DEF CON is one of the most important hacker conferences worldwide, held yearly in Las Vegas. This coming August, it will host a large simulation, in which thousands of security experts from the private sector and academia will be invited to compete against each other to uncover flaws and bias in the generative large language models (LLMs) produced by leading firms such as OpenAI, Google, Anthropic, Hugging Face, and Stability. While in traditional red-team events the targets are bugs in the code, hardware, or human infrastructure, participants at DEF CON have additionally been instructed to seek exploits through adversarial prompt engineering, so as to induce the LLMs to return troubling, dangerous, or unlawful content.

This initiative definitely goes in the right direction in terms of building trust through verification, and bespeaks significant confidence on the part of the companies, as it can safely be expected that the media outlets in attendance will be primed to amplify any failure or embarassing shortcoming in the models’ output. There are limits, however, to how beneficial such an exercise can be. For one thing, the target constituency is limited to the extremely digitally literate (and by extension to the government agencies and private businesses the firms aspire to add to their customer list): the simulation’s outcome cannot be expected to move the needle on the broad, non-specialist perception of AI models and their risks in the public at large. Also, the stress test will be performed on customized versions of the LLMs, made available by the companies specifically for this event. The Volkswagen emissions scandal is only the most visible instance of how one may exploit such a benchmarking system. What is properly needed is the possibility of an unannounced audit of LLMs on the ground in their actual real-world applications, on the model of the Michelin Guide’s evaluation process for chefs and restaurants.

In spite of these limitations, the organization of the DEF CON simulation if nothing else proves that the leading AI developers have understood that wide-scale adoption of their technology will require a protracted engagement with public opinion in order to address doubts and respond to deeply entrenched misgivings.

Spyware as diplomatic agenda item

Commercial spyware has become a mainstream news item: Politico this week profiled a story about NSO Group in the context of President Biden’s official visit to Israel and Saudi Arabia. Both Middle Eastern countries have ties with this private company, the former as the seat of its headquarters, the second as a customer of its services. The general context of the trip is broadly defensive for the US Administration, as it seeks help to stem the runaway growth in oil prices triggered by the Ukraine war, while emerging from under the shadow of its predecessor’s regional policies, from Jerusalem to Iran to the Abraham Accords. Given Biden’s objectively weak hand, raising the issue of NSO Group and the misuse of their spyware with two strategic partners is particularly complicated. At the same time, many domestic forces, from major companies damaged by Pegasus breaches (Apple, Meta…) to liberals in Congress (such as Oregon Senator Ron Wyden), are clamoring for an assertive stance. Naturally, the agencies of the US National Security State are also in the business of developing functionally similar spyware capabilities. Hence, the couching of the international policy problem follows the pattern of nonproliferation, with all the attendant rhetorical risks of special pleading and hypocrisy. The issue, however, is unlikely to fade away as an agenda item in the near future, a clear illustration of the risks to conventional diplomatic strategy of a situation in which military-grade cryptanalysis is made available on the open market.

More interesting cybersecurity journalism (finally)

A study (PDF) by a team led by Sean Aday at the George Washington University School of Media and Public Affairs (commissioned by the Hewlett Foundation) sheds light on the improving quality of the coverage of cybersecurity incidents in mainstream US media. Ever since 2014, cyber stories in the news have been moving steadily away from the sensationalist hack-and-attack template of yore toward a more nuanced description of the context, the constraints of the cyber ecosystem, the various actors’ motivations, and the impactof incidents on the everyday lives of ordinary citizens.

The report shows how an understanding of the mainstream importance of cyber events has progressively percolated into newsrooms across the country over the past half-decade, leading to a broader recognition of the substantive issues at play in this field. An interesting incidental finding is that, over the course of this same period of time, coverage of the cyber beat has focused critical attention not only on the ‘usual suspects’ (Russia, China, shadowy hacker groups) but also, increasingly, on big tech companies themselves: an aspect of this growing sophistication of coverage is a foregrounding of the crucial role platform companies play as gatekeepers of our digital lives.

Limits of trustbuilding as policy objective

Yesterday, I attended a virtual event hosted by CIGI and ISPI entitled “Digital Technologies: Building Global Trust”. Some interesting points raised by the panel: the focus on datafication as the central aspect of the digital transformation, and the consequent need to concentrate on the norms, institutions, and emerging professions surrounding the practice of data (re-)use [Stefaan Verhulst, GovLab]; the importance of underlying human connections and behaviors as necessary trust markers [Andrew Wyckoff, OECD]; the distinction between content, data, competition, and physical infrastructure as flashpoints for trust in the technology sphere [Heidi Tworek, UBC]. Also, I learned about the OECD AI Principles (2019), which I had not run across before.

While the breadth of different sectoral interests and use-cases considered by the panel was significant, the framework for analysis (actionable policy solutions to boost trust) ended up being rather limiting. For instance, communal distrust of dominant narratives was considered only from the perspective of deficits of inclusivity (on the part of the authorities) or of digital literacy (on the part of the distrusters). Technical, policy fixes can be a reductive lens through which to see the problem of lack of trust: such an approach misses both the fundamental compulsion to trust that typically underlies the debate, and also the performative effects sought by public manifestations of distrust.

Bridle’s vision

Belatedly finished reading James Bridle’s book New Dark Age: Technology and the End of the Future (Verso, 2018). As the title suggests, the text is systemically pessimist about the effect of new technologies on the sustainability of human wellbeing. Although the overall structure of the argument is at times clouded over by sudden twists in narrative and the sheer variety of anecdotes, there are many hidden gems. I very much enjoyed the idea, borrowed from Timothy Morton, of a hyperobject:

a thing that surrounds us, envelops and entangles us, but that is literally too big to see in its entirety. Mostly, we perceive hyperobjects through their influence on other things […] Because they are so close and yet so hard to see, they defy our ability to describe them rationally, and to master or overcome them in any traditional sense. Climate change is a hyperobject, but so is nuclear radiation, evolution, and the internet.

One of the main characteristics of hyperobjects is that we only ever perceive their imprints on other things, and thus to model the hyperobject requires vast amounts of computation. It can only be appreciated at the network level, made sensible through vast distributed systems of sensors, exabytes of data and computation, performed in time as well as space. Scientific record keeping thus becomes a form of extrasensory perception: a networked, communal, time-travelling knowledge making. (73)

Bridle has some thought-provoking ideas about possible responses to the dehumanizing forces of automation and algorithmic sorting, as well. Particularly captivating was his description of Gary Kasparov’s reaction to defeat at the hands of AI Deep Blue in 1997: the grandmaster proposed ‘Advanced Chess’ tournaments, pitting pairs of human and computer players, since such a pairing is superior to both human and machine players on their own. This type of ‘centaur strategy’ is not simply a winning one: it may, Bridle suggests, hold ethical insights on patways of human adaptation to an era of ubiquitous computation.