Limits of trustbuilding as policy objective

Yesterday, I attended a virtual event hosted by CIGI and ISPI entitled “Digital Technologies: Building Global Trust”. Some interesting points raised by the panel: the focus on datafication as the central aspect of the digital transformation, and the consequent need to concentrate on the norms, institutions, and emerging professions surrounding the practice of data (re-)use [Stefaan Verhulst, GovLab]; the importance of underlying human connections and behaviors as necessary trust markers [Andrew Wyckoff, OECD]; the distinction between content, data, competition, and physical infrastructure as flashpoints for trust in the technology sphere [Heidi Tworek, UBC]. Also, I learned about the OECD AI Principles (2019), which I had not run across before.

While the breadth of different sectoral interests and use-cases considered by the panel was significant, the framework for analysis (actionable policy solutions to boost trust) ended up being rather limiting. For instance, communal distrust of dominant narratives was considered only from the perspective of deficits of inclusivity (on the part of the authorities) or of digital literacy (on the part of the distrusters). Technical policy fixes can be a reductive lens through which to see the problem of lack of trust: such an approach misses both the fundamental compulsion to trust that typically underlies the debate and the performative effects sought by public manifestations of distrust.

Bridle’s vision

Belatedly finished reading James Bridle’s book New Dark Age: Technology and the End of the Future (Verso, 2018). As the title suggests, the text is systemically pessimistic about the effect of new technologies on the sustainability of human wellbeing. Although the overall structure of the argument is at times clouded by sudden twists in narrative and the sheer variety of anecdotes, there are many hidden gems. I very much enjoyed the idea, borrowed from Timothy Morton, of a hyperobject:

a thing that surrounds us, envelops and entangles us, but that is literally too big to see in its entirety. Mostly, we perceive hyperobjects through their influence on other things […] Because they are so close and yet so hard to see, they defy our ability to describe them rationally, and to master or overcome them in any traditional sense. Climate change is a hyperobject, but so is nuclear radiation, evolution, and the internet.

One of the main characteristics of hyperobjects is that we only ever perceive their imprints on other things, and thus to model the hyperobject requires vast amounts of computation. It can only be appreciated at the network level, made sensible through vast distributed systems of sensors, exabytes of data and computation, performed in time as well as space. Scientific record keeping thus becomes a form of extrasensory perception: a networked, communal, time-travelling knowledge making. (73)

Bridle has some thought-provoking ideas about possible responses to the dehumanizing forces of automation and algorithmic sorting as well. Particularly captivating was his description of Garry Kasparov’s reaction to his 1997 defeat at the hands of IBM’s Deep Blue: the grandmaster proposed ‘Advanced Chess’ tournaments, pitting pairs of human and computer players against each other, since such a pairing is superior to both human and machine players on their own. This type of ‘centaur strategy’ is not simply a winning one: it may, Bridle suggests, hold ethical insights on pathways of human adaptation to an era of ubiquitous computation.

Lye machines

Josephine Wolff (Slate) reports on the recent hack of the water processing plant in Oldsmar, FL. Unknown intruders remotely accessed the plant’s controls and attempted to increase the lye content of the town’s water supply to potentially lethal levels. The case is notable in that the human fail-safe (the plant operator on duty) successfully counterbalanced the machine vulnerability, catching the hack as it was taking place and overriding the automatic controls, so no real-world adverse effects ultimately occurred.

What moral can be drawn? It is reasonable to argue, as Wolff does, against full automation: human supervision still has a critical role to play in the resiliency of critical control systems through human-machine redundancy. However, what Wolff does not mention is that this modus operandi may itself be interpreted as a signature of sorts (although no attribution has appeared in the press so far): it speaks of amateurism or of a proof-of-concept stunt; in any case, of an actor not planning to do any serious damage. Otherwise, it is highly improbable that there would have been no parallel attempt at social engineering of (or other types of attacks against) on-site technicians. After all, as the old security engineering nostrum states, rookies target technology, pros target people.
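The redundancy point can be made concrete with a toy example. Below is a minimal Python sketch of a human-in-the-loop guard on a chemical dosing setpoint: out-of-range commands are never applied automatically and must be confirmed by an operator. The safe range, function names, and confirmation hook are all hypothetical illustrations, not details of the actual Oldsmar SCADA configuration.

```python
# Minimal sketch of human-machine redundancy in a control loop.
# All names and thresholds are hypothetical, for illustration only.

SAFE_LYE_PPM = (0, 200)  # assumed safe dosing range (illustrative)


def apply_setpoint(requested_ppm: float, operator_confirms) -> float:
    """Accept a lye dosing setpoint only if it is within the safe
    range, or if a human operator explicitly confirms the override."""
    lo, hi = SAFE_LYE_PPM
    if lo <= requested_ppm <= hi:
        return requested_ppm
    # Machine-side fail-safe: anomalous commands are held for a
    # human decision rather than executed automatically.
    if operator_confirms(requested_ppm):
        return requested_ppm
    raise ValueError(f"setpoint {requested_ppm} ppm rejected: out of safe range")


if __name__ == "__main__":
    # The Oldsmar intrusion reportedly tried to push dosing far above
    # normal levels; here the (simulated) on-duty operator refuses.
    deny = lambda ppm: False
    try:
        apply_setpoint(11_100, deny)
    except ValueError as err:
        print("override blocked:", err)
```

The design choice mirrors Wolff’s argument: the automatic bounds check and the human sign-off fail independently, so a compromise of the remote-control channel alone is not sufficient to change the physical process.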

Schools get into the phone-hacking business

A disturbing piece of reporting from Gizmodo (via /.) on the adoption by many US school districts of digital forensic tools to retrieve content from their students’ mobile devices. Of course, such technology was originally developed as a counter-terrorism tool, and then trickled down to regular domestic law enforcement. As we have remarked previously, schools have recently been on the bleeding edge of the social application of intrusive technology, with all the risks and conflicts it engenders; in this instance, however, we see a particularly egregious confluence of technological blowback (from war in the periphery to everyday life in the metropole) and criminal-justice takeover of mass education (of school-to-prison-pipeline fame).

Disinformation at the weakest link

There was an interesting article recently in Quartz about 2020 electoral disinformation in Spanish-language social media. While the major platforms have taken credit for the fact that the election did not feature a repeat of the coordinated foreign influence operations of 2016, arguably the victory lap came somewhat too soon, as the problem cases in the information sphere this cycle are only gradually coming to light. Post-electoral myth-building about a rigged process and a stolen victory, for one, while of little practical import for the present, has the potential to plant a deep, lasting sense of grievance in conservative political culture in the US over the long term. Similarly, the fact that less public attention, less civil-society scrutiny, less community-based new-media literacy, and less algorithmic refinement were available for Spanish-speaking electoral discourse meant that disinformation was allowed to expand much more broadly and deeply than in English. The mainstream liberal narrative that such a fact per se helps explain the disappointing electoral results of the Democratic Party with this demographic (especially in states like Florida or Texas) is itself fairly insensitive and stereotyped. The Latinx electorate in the US is not a monolith, and segments within it have distinct political preferences, which are no more the product of disinformation than anyone else’s. Yet it seems clear that in this electoral campaign factually false political statements received less pushback, both from above and from below, when they were uttered in Spanish.

Two general facts are illustrated by this example. On the one hand, because of the production and distribution dynamics of disinformation, its spread follows a path of least resistance: minority languages, like peripheral countries or minor social media platforms, are unlikely to be on the cutting edge of new disinformation, but tend to be more permeable to stock disinformation that has already been rebutted elsewhere. On the other hand, disinformation has the ability to do the most damage in spaces where it is unexpected, in fields that are considered separate and independent, subject to different rules of engagement. In this sense, fake news does not simply provide partisans with ‘factual’ reasons to feel how they already felt about their adversaries: it can genuinely catch the unsuspecting unawares. One of the reasons for disinformation’s massive impact on American public discourse is that, in a hyper-partisan era, political propaganda has reached all manner of domains of everyday life once completely divorced from politics, where the weary habituation that would otherwise tune such wrangling out has not yet set in. This dynamic has been reflected in social media platforms: the ‘repurposing’ of LinkedIn and NextDoor in connection with the BLM protests is telling.

So, if disinformation at its most effective is the insertion of narratives where they are least expected, and if its spread follows a path of least resistance, seeking out the weakest link (while its containment follows an actuarial logic, the most effort being placed where the highest return is expected), what does this portend for the possibility of a unitary public sphere?

There is reason to believe that these are long-run concerns, and that the Presidential campaign may have been the easy part. As Ellen Goodman and Karen Kornbluh mention in their platform electoral performance round-up,

That there was clearly authoritative local information about voting and elections made the platforms’ task easier. It becomes harder in other areas of civic integrity where authority is more contested.

Foreign counterexamples such as Taiwan’s reinforce the conundrum: cohesive societies are capable of doing well against disinformation, but in already polarized ones, a focus on such a fight is itself perceived as a partisan stance.