Tag Archives: Cyberattacks

Spyware as diplomatic agenda item

Commercial spyware has become a mainstream news item: Politico this week ran a story about NSO Group in the context of President Biden’s official visit to Israel and Saudi Arabia. Both Middle Eastern countries have ties to this private company, the former as the seat of its headquarters, the latter as a customer of its services. The general context of the trip is broadly defensive for the US Administration, as it seeks help in stemming the runaway growth in oil prices triggered by the Ukraine war, while emerging from under the shadow of its predecessor’s regional policies, from Jerusalem to Iran to the Abraham Accords. Given Biden’s objectively weak hand, raising the issue of NSO Group and the misuse of its spyware with two strategic partners is particularly complicated. At the same time, many domestic forces, from major companies damaged by Pegasus breaches (Apple, Meta…) to liberals in Congress (such as Oregon Senator Ron Wyden), are clamoring for an assertive stance. Naturally, the agencies of the US National Security State are also in the business of developing functionally similar spyware capabilities. Hence, the framing of the international policy problem follows the pattern of nonproliferation, with all the attendant rhetorical risks of special pleading and hypocrisy. The issue, however, is unlikely to fade from the agenda in the near future, a clear illustration of the risks posed to conventional diplomatic strategy by a situation in which military-grade surveillance tools are available on the open market.

More interesting cybersecurity journalism (finally)

A study (PDF) by a team led by Sean Aday at the George Washington University School of Media and Public Affairs (commissioned by the Hewlett Foundation) sheds light on the improving quality of the coverage of cybersecurity incidents in mainstream US media. Since 2014, cyber stories in the news have moved steadily away from the sensationalist hack-and-attack template of yore toward a more nuanced description of the context, the constraints of the cyber ecosystem, the various actors’ motivations, and the impact of incidents on the everyday lives of ordinary citizens.

The report shows how an understanding of the mainstream importance of cyber events has progressively percolated into newsrooms across the country over the past half-decade, leading to a broader recognition of the substantive issues at play in this field. An interesting incidental finding is that, over the course of this same period of time, coverage of the cyber beat has focused critical attention not only on the ‘usual suspects’ (Russia, China, shadowy hacker groups) but also, increasingly, on big tech companies themselves: an aspect of this growing sophistication of coverage is a foregrounding of the crucial role platform companies play as gatekeepers of our digital lives.

Trust among thieves

An item that recently appeared on NBC News (via /.) graphically illustrates the pervasiveness of the problem of trust across organizations, cultures, and value systems. It also speaks to the routinization of ransomware extortion and other forms of cybercrime as none-too-glamorous career paths, engendering their own disgruntled and underpaid line workers.

Lye machines

Josephine Wolff (Slate) reports on the recent hack of the water processing plant in Oldsmar, FL. Unknown intruders remotely accessed the plant’s controls and attempted to increase the lye content of the town’s water supply to potentially lethal levels. The case is notable in that the human fail-safe (the plant operator on duty) successfully counterbalanced the machine vulnerability, catching the hack as it was taking place and overriding the automatic controls, so no real-world adverse effects ultimately occurred.

What moral can be drawn? It is reasonable to argue, as Wolff does, against full automation: human supervision still has a critical role to play in the resilience of critical control systems through human-machine redundancy. However, what Wolff does not mention is that this modus operandi may itself be read as a signature of sorts (although no attribution has appeared in the press so far): it speaks of amateurism or of a proof-of-concept stunt, and in any case of an actor not planning to do serious damage. Otherwise, it is highly improbable that there would have been no parallel attempt at social engineering of, or other types of attacks against, the on-site technicians. After all, as the old security engineering adage has it: rookies target technology, pros target people.

Victory lap for the EIP

Today, I followed the webcast featuring the initial wrap-up of the Electoral Integrity Partnership (I have discussed their work before). All the principal institutions composing the partnership (Stanford, University of Washington, Graphika, and the Atlantic Council) were represented on the panel. It was, in many respects, a victory lap, given the consensus view that foreign disinformation played a marginal role in the election compared to 2016, thanks in part to proactive engagement on the part of the large internet companies, facilitated by projects such as the EIP.

In describing the 2020 disinformation ecosystem, Alex Stamos (Stanford Internet Observatory) characterized it as mostly home-grown, non-covert, and reliant on major influencers, which in turn forced platforms to transform their content moderation activities into a sort of editorial policy (I have remarked on this trend before). Also, for all the focus on social media, TV was seen to play a very important role, especially in building background narratives over the long run, day by day.

Camille François (Graphika) remarked on the importance of alternative platforms, and indeed the trend has been for an expansion of political discourse to all manner of fora previously insulated from it (on this, more soon).

Of the disinformation memes that made it into the mainstream conversation (Stamos mentioned the example of Sharpie-gate), certain characteristics stand out: they tended to appeal to rival technical authority, so as to project an expert-vs-expert dissonance; they were made sticky by official endorsement, which turned debunking into a partisan enterprise. However, their very predictability rendered the task of limiting their spread easier for the platforms. Kate Starbird (UW) summed it up: if the story in 2016 was coordinated inauthentic foreign action, in 2020 it was authentic, loosely-coordinated domestic action, and the major institutional players (at least on the tech infrastructure side) were ready to handle it.

It makes sense for the EIP to celebrate how the disinformation environment was stymied in 2020 (as Graham Brookie of the Atlantic Council put it, it was a win for defence in the ongoing arms race on foreign interference), but it is not hard to see how such an assessment masks a bigger crisis, which would have been evident had there been a different victor. Overall trust in the informational ecosystem has been dealt a further, massive blow by the election, and hyper-polarized post-truth politics are nowhere near over. Indeed, attempts currently underway to fashion a Dolchstoßlegende are liable to have very significant systemic effects going forward. The very narrow focus on disinformation the EIP pursued may have paid off for now, but it is the larger picture of entrenched public distrust that guarantees that these problems will persist into the future.