Tag Archives: PR

Public red-teaming and trust

DEF CON is one of the most important hacker conferences worldwide, held yearly in Las Vegas. This coming August, it will host a large simulation, in which thousands of security experts from the private sector and academia will be invited to compete against each other to uncover flaws and bias in the generative large language models (LLMs) produced by leading firms such as OpenAI, Google, Anthropic, Hugging Face, and Stability. While in traditional red-team events the targets are bugs in the code, hardware, or human infrastructure, participants at DEF CON have additionally been instructed to seek exploits through adversarial prompt engineering, so as to induce the LLMs to return troubling, dangerous, or unlawful content.

This initiative definitely goes in the right direction in terms of building trust through verification, and bespeaks significant confidence on the part of the companies, as it can safely be expected that the media outlets in attendance will be primed to amplify any failure or embarassing shortcoming in the models’ output. There are limits, however, to how beneficial such an exercise can be. For one thing, the target constituency is limited to the extremely digitally literate (and by extension to the government agencies and private businesses the firms aspire to add to their customer list): the simulation’s outcome cannot be expected to move the needle on the broad, non-specialist perception of AI models and their risks in the public at large. Also, the stress test will be performed on customized versions of the LLMs, made available by the companies specifically for this event. The Volkswagen emissions scandal is only the most visible instance of how one may exploit such a benchmarking system. What is properly needed is the possibility of an unannounced audit of LLMs on the ground in their actual real-world applications, on the model of the Michelin Guide’s evaluation process for chefs and restaurants.

In spite of these limitations, the organization of the DEF CON simulation if nothing else proves that the leading AI developers have understood that wide-scale adoption of their technology will require a protracted engagement with public opinion in order to address doubts and respond to deeply entrenched misgivings.

Rightwing algorithms?

A long blog post on Olivier Ertzscheid’s personal website [in French] tackles the ideological orientation of the major social media platforms from a variety of points of view (the political leanings of software developers, of bosses, of companies, the politics of content moderation, political correctness, the revolving door with government and political parties, the intrinsic suitability of different ideologies to algorithmic amplification, and so forth).

The conclusions are quite provocative: although algorithms and social media platforms are both demonstrably biased and possessed of independent causal agency, amplifying, steering, and coarsening our public debate, in the end it is simply those with greater resources, material, social, cultural, etc., whose voices are amplified. Algorithms skew to the right because so does our society.

A global take on the mistrust moment

My forthcoming piece on Ethan Zuckerman’s Mistrust: Why Losing Faith in Institutions Provides the Tools to Transform Them for the Italian Political Science Review.

Routinization of influence, exacerbation of outrageousness

How is the influencer ecosystem evolving? Opposing forces are in play.

On the one hand, a NYT story describes symptoms of consolidation in the large-organic-online-following-to-brand-ambassadorship pathway. As influencing becomes a day job that is inserted in a stable fashion in the consumer-brand/advertising nexus, the type of informal, supposedly unmediated communication over social media becomes quickly unwieldy for business negotiations: at scale, professional intermediaries are necessary to manage transactions between the holders of social media capital/cred and the business interests wishing to leverage it. A rather more disenchanted and normalized workaday image of influencer life thereby emerges.

On the other hand, a Vulture profile of an influencer whose personal magnetism is matched only by her ability to offend (warning: NSFW) signals that normalization may ultimately be self-defeating. The intense and disturbing personal trajectory of Trisha Paytas suggests that the taming of internet celebrity for commercial purposes is by definition a neverending Sisyphean endeavor, for the currency involved is authenticity, whose seal of approval lies outside market transactions. The biggest crowds on the internet are still drawn by titillation of outrage, although their enactors may not thereby be suited to sell much of anything, except themselves.

Media manipulation convergence

Adam Satariano in the NYT reports on the latest instance of platform manipulation, this time by Chinese tech giant Huawei against unfavorable 5G legislation being considered in Belgium. There’s nothing particularly novel about the single pieces of the process: paid expert endorsement, amplified on social media by coordinated fake profiles, with the resultant appearance of virality adduced by the company as a sign of support in public opinion at large. If anything, it appears to have been rather crudely executed, leading to a fairly easy discovery by Graphika: from a pure PR cost-benefit standpoint, the blowback from the unmasking of this operation did much more damage to Huawei’s image than any benefit that might have accrued to the company had it not been exposed. However, the main take-away from the story is the adding of yet another data point to the process of convergence between traditional government-sponsored influence operations and corporate astroturfing ventures. Their questionable effectiveness notwithstanding, these sorts of interventions are becoming default, mainstream tools in the arsenal of all PR shops, whatever their principals’ aims. The fact that they also tend to erode an already fragile base of public trust suggests that at the aggregate level this may be a negative-sum game.