My forthcoming piece on Ethan Zuckerman’s Mistrust: Why Losing Faith in Institutions Provides the Tools to Transform Them for the Italian Political Science Review.
Yesterday, I attended a virtual event hosted by CIGI and ISPI entitled “Digital Technologies: Building Global Trust”. Some interesting points raised by the panel: the focus on datafication as the central aspect of the digital transformation, and the consequent need to concentrate on the norms, institutions, and emerging professions surrounding the practice of data (re-)use [Stefaan Verhulst, GovLab]; the importance of underlying human connections and behaviors as necessary trust markers [Andrew Wyckoff, OECD]; the distinction between content, data, competition, and physical infrastructure as flashpoints for trust in the technology sphere [Heidi Tworek, UBC]. Also, I learned about the OECD AI Principles (2019), which I had not run across before.
While the breadth of sectoral interests and use cases considered by the panel was significant, the framework for analysis (actionable policy solutions to boost trust) ended up being rather limiting. For instance, communal distrust of dominant narratives was considered only from the perspective of deficits of inclusivity (on the part of the authorities) or of digital literacy (on the part of the distrusters). Technical policy fixes can be a reductive lens through which to view the problem of lack of trust: such an approach misses both the fundamental compulsion to trust that typically underlies the debate and the performative effects sought by public manifestations of distrust.
Apparently, Walmart needs its workforce to have cellphones.
An announcement by the company stating that about 750k employees in the US will be given $500 Samsung phones for free by year's end was reported widely this week. Walmart's US workforce in 2017 was 1.5m, so if there have not been dramatic changes in this figure, the policy would cover about half of the chain's employees. Since Walmart is one of the largest employers in the country, this initiative is bound to be one of the largest of its kind, if not the largest. Also, given the significant proportion of low-wage jobs in the company's workforce, the benefit is a considerable one.
The company indicated it has taken this step in order to transition away from the dedicated handheld devices its associates previously used in its stores. The result is a hybrid arrangement. Employees get the device, case, and protection plan for free (though they are presumably still on the hook for their own voice and data plan), but they
will only be able to use work features on the new Me@Walmart app while on the clock.
Although it would be fair to assume that the use of cellphones for personal matters is already discouraged (or prohibited altogether) for Walmart employees on the clock, this policy nudges them toward an equilibrium (use of an employer-gifted device as one’s primary cell phone) where such behavior becomes technically impossible.
As for non-work-related use, Walmart
promised that it would not have access to any employee’s personal data and [they] can “use the smartphone as their own personal device if they want, with all the features and privacy they’re used to”
which has a bit of an ominous ring for those who consider the overall landscape of surveillance capitalism, the key role cellphones play within it, and the importance of habituation to the smooth functioning of the system.
Of course, the big story driving the program is Walmart's convergence toward Amazon-style micro-management-by-app of its employees' physical movements through its warehouses. However, it is interesting to note a few concurrent dynamics. On the one hand, in the span of one generation cellphones have traversed the whole arc of the technology/political economy cycle, from luxury fashion items for conspicuous consumption to basic infrastructure indispensable for working-class jobs. In situations of economic crisis, falling purchasing power, and widening wealth differentials, it can prove economically advantageous for employers to provide them outright as a fringe benefit. On the other hand, the intertwining of the private and the public, of work-life and down-time, in contemporary America has decisively affected how people access the Internet. This fact was made apparent to white-collar employees working remotely during the pandemic, but as always the most extreme and direct consequences will be experienced by those most directly exposed to market forces and least able to bargain over employment conditions.
Belatedly finished reading James Bridle’s book New Dark Age: Technology and the End of the Future (Verso, 2018). As the title suggests, the text is systemically pessimistic about the effect of new technologies on the sustainability of human wellbeing. Although the overall structure of the argument is at times clouded over by sudden twists in narrative and the sheer variety of anecdotes, there are many hidden gems. I very much enjoyed the idea, borrowed from Timothy Morton, of a hyperobject:
a thing that surrounds us, envelops and entangles us, but that is literally too big to see in its entirety. Mostly, we perceive hyperobjects through their influence on other things […] Because they are so close and yet so hard to see, they defy our ability to describe them rationally, and to master or overcome them in any traditional sense. Climate change is a hyperobject, but so is nuclear radiation, evolution, and the internet.
One of the main characteristics of hyperobjects is that we only ever perceive their imprints on other things, and thus to model the hyperobject requires vast amounts of computation. It can only be appreciated at the network level, made sensible through vast distributed systems of sensors, exabytes of data and computation, performed in time as well as space. Scientific record keeping thus becomes a form of extrasensory perception: a networked, communal, time-travelling knowledge making. (73)
Bridle has some thought-provoking ideas about possible responses to the dehumanizing forces of automation and algorithmic sorting as well. Particularly captivating was his description of Garry Kasparov’s reaction to his 1997 defeat at the hands of IBM’s chess computer Deep Blue: the grandmaster proposed ‘Advanced Chess’ tournaments, pitting pairs of human and computer players against one another, on the premise that such a pairing is superior to either a human or a machine playing alone. This type of ‘centaur strategy’ is not simply a winning one: it may, Bridle suggests, hold ethical insights into pathways of human adaptation to an era of ubiquitous computation.
Yesterday I attended the online launch event for Edgelands, a pop-up institute being incubated at Harvard’s Berkman Klein Center. The Institute’s goal is to study how our social contract is being redrawn, especially in urban areas, as a consequence of technological changes such as pervasive surveillance and unforeseen crises such as the global pandemic. The Institute’s design is very distinctive: it is time-limited (5 years), radically decentralized, and aims to bridge gaps between perspectives and methodologies as diverse as academic research, public policy, and art. It is also notable for its focus on urban dynamics outside the North-Atlantic space (Beirut, Nairobi, and Medellín are among the pilot cities). Some of its initiatives, from what can be gleaned at the outset, appear a bit whimsical, but it will be interesting to follow the Institute’s development, as a fresh approach to these topics could prove extremely inspiring.