Tag Archives: Surveillance

The technology cycle and Walmart

Apparently, Walmart needs its workforce to have cellphones.

An announcement by the company that about 750k of its US employees will receive $500 Samsung phones for free by year’s end was widely reported this week. Walmart’s US workforce in 2017 was 1.5m, so if that figure has not changed dramatically, the policy would cover about half of the chain’s employees. Since Walmart is one of the largest employers in the country, this initiative is bound to be one of the largest of its kind, if not the largest. Also, given the significant proportion of low-wage jobs in the company’s workforce, the benefit is a considerable one.

The company indicated it has taken this step in order to transition away from the dedicated handheld devices its associates previously used in its stores. The result is a hybrid arrangement. Employees get the device, case, and protection plan for free (they presumably are still on the hook for their own voice & data plan), but they

will only be able to use work features on the new Me@Walmart while on the clock.

Although it would be fair to assume that the use of cellphones for personal matters is already discouraged (or prohibited altogether) for Walmart employees on the clock, this policy nudges them toward an equilibrium (use of an employer-gifted device as one’s primary cell phone) where such behavior becomes technically impossible.

As for non-work-related use, Walmart

promised that it would not have access to any employee’s personal data and [they] can “use the smartphone as their own personal device if they want, with all the features and privacy they’re used to”

which has a bit of an ominous ring, for those who consider the overall landscape of surveillance capitalism, cellphones’ key role within it, and the importance of habituation for the smooth functioning of the system.

Of course, Walmart’s convergence toward an Amazon-style micro-management-by-app of its employees’ physical movement through its warehouses is the big story driving the program. However, it is interesting to note a few concurrent dynamics. On the one hand, in the span of one generation cellphones have traced the whole arc of the technology/political economy cycle, from luxury fashion items for conspicuous consumption to basic infrastructure indispensable for working-class jobs. In situations of economic crisis, falling purchasing power, and widening wealth differentials, it can prove economically advantageous for employers to provide them outright as a fringe benefit. On the other hand, the intertwining of the private and the public, work-life and down-time, in contemporary America has decisively affected how people access the Internet. This fact was made apparent to white-collar employees working remotely during the pandemic, but as always the most extreme and direct consequences will be experienced by those most directly exposed to market forces and least able to bargain over employment conditions.

Bridle’s vision

Belatedly finished reading James Bridle’s book New Dark Age: Technology and the End of the Future (Verso, 2018). As the title suggests, the text is systemically pessimistic about the effect of new technologies on the sustainability of human wellbeing. Although the overall structure of the argument is at times clouded over by sudden twists in narrative and the sheer variety of anecdotes, there are many hidden gems. I very much enjoyed the idea, borrowed from Timothy Morton, of a hyperobject:

a thing that surrounds us, envelops and entangles us, but that is literally too big to see in its entirety. Mostly, we perceive hyperobjects through their influence on other things […] Because they are so close and yet so hard to see, they defy our ability to describe them rationally, and to master or overcome them in any traditional sense. Climate change is a hyperobject, but so is nuclear radiation, evolution, and the internet.

One of the main characteristics of hyperobjects is that we only ever perceive their imprints on other things, and thus to model the hyperobject requires vast amounts of computation. It can only be appreciated at the network level, made sensible through vast distributed systems of sensors, exabytes of data and computation, performed in time as well as space. Scientific record keeping thus becomes a form of extrasensory perception: a networked, communal, time-travelling knowledge making. (73)

Bridle has some thought-provoking ideas about possible responses to the dehumanizing forces of automation and algorithmic sorting, as well. Particularly captivating was his description of Garry Kasparov’s reaction to defeat at the hands of IBM’s Deep Blue in 1997: the grandmaster proposed ‘Advanced Chess’ tournaments, pitting human–computer pairs against one another, since such a pairing is superior to both human and machine players on their own. This type of ‘centaur strategy’ is not simply a winning one: it may, Bridle suggests, hold ethical insights on pathways of human adaptation to an era of ubiquitous computation.

Edgelands Institute launches

Yesterday I attended the online launch event for Edgelands, a pop-up institute that is being incubated at Harvard’s Berkman Klein Center. The Institute’s goal is to study how our social contract is being redrawn, especially in urban areas, as a consequence of technological changes such as pervasive surveillance and unforeseen crises such as the global pandemic. The design of the EI is very distinctive: it is time-limited (5 years), radically decentralized, and aiming to bridge gaps between perspectives and methodologies as diverse as academic research, public policy, and art. It is also notable for its focus on rest-of-world urban dynamics outside the North-Atlantic space (Beirut, Nairobi, and Medellín are among the pilot cities). Some of its initiatives, from what can be gleaned at the outset, appear a bit whimsical, but it will be interesting to follow the Institute’s development, as a fresh approach to these topics could prove extremely inspiring.

Perspectives on data activism: Aventine secessions and sabotage

Interesting article in the MIT Tech Review (via /.) detailing research performed at Northwestern University (paper on arXiv) on how the power of collective action could be leveraged to counter pervasive data collection by internet companies. Three such methods are discussed: data strikes (refusal to use data-invasive services), data poisoning (providing false and misleading data), and conscious data contribution (to privacy-respecting competitors).

Conscious data contribution and data strikes are relatively straightforward Aventine secessions, but depend decisively on the availability of alternative services (or the acceptability of degraded performance for the mobilized users on less-than-perfect substitutes).

The effectiveness of data poisoning, on the other hand, turns on the type of surveillance one is trying to stifle (as I have argued in I labirinti). If material efficacy is at stake, it can be decisive (e.g., faulty info can make a terrorist manhunt fail). Unsurprisingly, this type of strategic disinformation has featured in the plot of many works of fiction, with and without AIs. But if what’s at stake is the perception of efficacy, data poisoning is only an effective counterstrategy inasmuch as it destroys the legitimacy of the decisions made on the basis of the collected data (at what point, for instance, do advertisers stop working with Google because its database is irrevocably compromised?). In some cases of AI/ML adoption, in which the offloading of responsibility and the containment of costs are the foremost goals, there already is very broad tolerance for bias (i.e., faulty training data).
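The dilution mechanism underlying data poisoning can be illustrated with a minimal sketch. Everything here is invented for illustration (a toy profiler, made-up interest categories): the point is simply that injecting uniform noise into the event stream leaves a profiler’s top guess intact but erodes the confidence behind it, which is where the legitimacy of data-driven decisions lives.

```python
import random

def infer_interest(events):
    """Toy profiler: return the user's dominant interest category
    and the share of events supporting it (a crude confidence)."""
    counts = {}
    for e in events:
        counts[e] = counts.get(e, 0) + 1
    top = max(counts, key=counts.get)
    return top, counts[top] / len(events)

random.seed(0)  # deterministic for the example

# Genuine browsing history: a clear, exploitable signal.
genuine = ["sports"] * 80 + ["news"] * 20

# Poisoning: flood the stream with uniformly random decoy events.
categories = ["sports", "news", "cars", "food", "travel"]
noise = [random.choice(categories) for _ in range(300)]

cat, conf = infer_interest(genuine)           # sharp profile
cat_p, conf_p = infer_interest(genuine + noise)  # diluted profile
print(cat, round(conf, 2))
print(cat_p, round(conf_p, 2))
```

The profiler still names the same category after poisoning, but its confidence drops sharply; whether that drop matters depends, as argued above, on whether anyone downstream cares about the quality of the inference.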

Hence in general the fix is not exclusively technical: political mobilization must be activated to cash in on the contradictions these data activism interventions bring to light.

Sharp Eyes

An interesting report in Medium (via /.) discusses the PRC’s new pervasive surveillance program, Sharp Eyes. The program, which complements several other mass surveillance initiatives by the Chinese government, such as SkyNet, is aimed especially at rural communities and small towns. With all the caveats related to the fragmentary nature of the information available to outside researchers, it appears that Sharp Eyes’ main characteristic is being community-driven: the feeds from CCTV cameras monitoring public spaces are made accessible to individuals in the community, whether at home from their TVs and monitors or through smartphone apps. Hence, local communities become responsible for monitoring themselves (and providing denunciations of deviants to the authorities).

This outsourcing of social control is clearly a labor-saving initiative, which itself ties into a long-run, classic theme in Chinese governance. It is not hard to perceive how such a scheme may encourage dynamics of social homogenization and regimentation, and be especially effective against stigmatized minorities. After all, the entire system of Chinese official surveillance is more or less formally linked to the controversial Social Credit System, a scoring of the population for ideological and financial conformity.

However, I wonder whether a community-driven surveillance program, in rendering society more transparent to itself, does not also potentially offer accountability tools to civil society vis-à-vis the government. After all, complete visibility of public space by all members of society can also mean exposure and documentation of specific public instances of abuse of authority, such as police brutality. Such cases could of course be blacked out of the feeds, but such a heavy-handed tactic would cut into the propaganda value of the transparency initiative and affect public trust in the system. Alternatively, offending material could be removed more seamlessly through deepfake interventions, but the resources necessary for such a level of tampering, including the additional layer of bureaucracy needed to curate live feeds, would seem ultimately self-defeating in terms of the cost-cutting rationale.

In any case, including the monitored public within the monitoring loop (and emphasizing the collective responsibility aspect of the practice over the atomizing, pervasive-suspicion one) promises to create novel practical and theoretical challenges for mass surveillance.