Bridle’s vision

Belatedly finished reading James Bridle’s book New Dark Age: Technology and the End of the Future (Verso, 2018). As the title suggests, the text is deeply pessimistic about the effect of new technologies on the sustainability of human wellbeing. Although the overall structure of the argument is at times clouded over by sudden twists in narrative and the sheer variety of anecdotes, there are many hidden gems. I very much enjoyed the idea, borrowed from Timothy Morton, of a hyperobject:

a thing that surrounds us, envelops and entangles us, but that is literally too big to see in its entirety. Mostly, we perceive hyperobjects through their influence on other things […] Because they are so close and yet so hard to see, they defy our ability to describe them rationally, and to master or overcome them in any traditional sense. Climate change is a hyperobject, but so is nuclear radiation, evolution, and the internet.

One of the main characteristics of hyperobjects is that we only ever perceive their imprints on other things, and thus to model the hyperobject requires vast amounts of computation. It can only be appreciated at the network level, made sensible through vast distributed systems of sensors, exabytes of data and computation, performed in time as well as space. Scientific record keeping thus becomes a form of extrasensory perception: a networked, communal, time-travelling knowledge making. (73)

Bridle has some thought-provoking ideas about possible responses to the dehumanizing forces of automation and algorithmic sorting, as well. Particularly captivating was his description of Garry Kasparov’s reaction to defeat at the hands of IBM’s Deep Blue in 1997: the grandmaster proposed ‘Advanced Chess’ tournaments, pitting pairs of human and computer players against each other, since such a pairing is superior to both human and machine players on their own. This type of ‘centaur strategy’ is not simply a winning one: it may, Bridle suggests, hold ethical insights on pathways of human adaptation to an era of ubiquitous computation.

Coded Bias

I managed to catch a screening of the new Shalini Kantayya documentary, Coded Bias, through EDRi. It tells the story of Joy Buolamwini’s discovery of systematic discrepancies in the performance of algorithms across races and genders. The tone was lively and accessible, with a good tempo, and the cast of characters presented did a good job showcasing a cross-section of female voices in the tech policy space. It was particularly good to see several authors that appear on my syllabus, such as Cathy O’Neil, Zeynep Tufekci, and Virginia Eubanks.

Sharp Eyes

An interesting report in Medium (via /.) discusses the PRC’s new pervasive surveillance program, Sharp Eyes. The program, which complements several other mass surveillance initiatives by the Chinese government, such as SkyNet, is aimed especially at rural communities and small towns. With all the caveats related to the fragmentary nature of the information available to outside researchers, it appears that Sharp Eyes’ main characteristic is being community-driven: the feeds from CCTV cameras monitoring public spaces are made accessible to individuals in the community, whether at home from their TVs and monitors or through smartphone apps. Hence, local communities become responsible for monitoring themselves (and providing denunciations of deviants to the authorities).

This outsourcing of social control is clearly a labor-saving initiative, which ties into a long-running, classic theme in Chinese governance. It is not hard to see how such a scheme may encourage dynamics of social homogenization and regimentation, and be especially effective against stigmatized minorities. After all, the entire system of Chinese official surveillance is more or less formally linked to the controversial Social Credit System, a scoring of the population for ideological and financial conformity.

However, I wonder whether a community-driven surveillance program, in rendering society more transparent to itself, does not also potentially offer accountability tools to civil society vis-à-vis the government. After all, complete visibility of public space by all members of society also can mean exposure and documentation of specific public instances of abuse of authority, such as police brutality. Such cases could of course be blacked out of the feeds, but such a heavy-handed tactic would cut into the propaganda value of the transparency initiative and affect public trust in the system. Alternatively, offending material could be removed more seamlessly through deep fake interventions, but the resources necessary for such a level of tampering, including the additional layer of bureaucracy needed to curate live feeds, would seem ultimately self-defeating in terms of the cost-cutting rationale.

In any case, including the monitored public within the monitoring loop (and emphasizing the collective responsibility aspect of the practice over the atomizing, pervasive-suspicion one) promises to create novel practical and theoretical challenges for mass surveillance.

FB foreign policy

There were several items in the news recently about Facebook’s dealings with governments around the world. In keeping with the company’s status as a major MNC, these dealings can be seen to amount to the equivalent of a foreign policy, whose complexities and challenges are becoming ever more apparent.

The first data point has to do with the haemorrhage of FB users in Hong Kong. It is interesting to note how this scenario differs from the US one: in both societies we witness massive political polarization, spilling out into confrontation on social media, with duelling requests for adversarial content moderation, banning, and so forth. Hence, gatekeepers such as FB are increasingly, forcefully requested to play a referee role. Yet, while in the US it is still (conceivably) possible to aim for an ‘institutional’ middle ground, in HK the squeeze is on both sides of the political divide: the pro-China contingent is tempted to secede to mainland-owned social media platforms, while the opponents of the regime are wary of Facebook’s data-collecting practices and the company’s porousness to official requests for potentially incriminating information. The type of brinkmanship required in this situation may prove beyond the company’s reach.

The second data point derives from Facebook’s recent spat with Australian authorities over the enactment of a new law on news media royalties. Specifically, it deals with the impact of the short-lived FB news ban on small countries in the South Pacific whose telecommunications depend on Australia. Several chickens come home to roost here: the lack of national control over cellular and data networks emerges as a key curtailment of sovereignty in today’s world, as do the pernicious, unintended consequences of an absence of net neutrality (citizens of these islands overwhelmingly accessed news through FB because their data plans allowed non-capped surfing on the platform, while imposing onerous extra charges for general internet navigation). In this case the company was able to leverage some of its built-in, systemic advantages to obtain a favorable settlement for the time being, at the cost of alerting the general public to its vulnerability.

The third data point is an exposé by ProPublica of actions taken by the social media platform against the YPG, a Syrian Kurdish military organization. The geoblocking of the YPG page inside Turkey is not the first time the organization (famously, the defenders of Kobane against ISIS) has been sold out: previous instances include the Trump administration’s abandonment in 2018. What is particularly interesting is that FB maintains a formal internal method for evaluating whether groups should be included on a ‘terrorist’ list, independent of similar blacklisting by the US and other States and supranational bodies. Such certification, however, is subject to the same self-interested, short-term manipulation seen in other instances of the genre: although the YPG was not so labelled, the ban was approved as being in the best interests of the company, faced with a potential suspension of its activities throughout Turkey.

These multiple fronts of Facebook’s diplomatic engagement all point to similar conclusions: as a key component of the geopolitical establishment, FB is increasingly subject to multiple pressures, bearing not only on its stated company culture and philosophy of libertarian cosmopolitanism, but also on its long-term profitability. In this phase of its corporate growth cycle, much like MNCs of comparable scale in other industries, the tools for its continued success begin to shift from pure technological and business savvy to lobbying and international dealmaking.

Disinformation at the weakest link

There was an interesting article recently in Quartz about 2020 electoral disinformation in Spanish-language social media. While the major platforms have taken credit for the fact that the election did not feature a repeat of the coordinated foreign influence operations of 2016, arguably the victory lap came somewhat too soon, as the problem cases in the information sphere this cycle are only gradually coming to light. Post-electoral myth-building about a rigged process and a stolen victory, for one, while of little practical import for the present, has the potential to plant a deep, lasting sense of grievance in conservative political culture in the US over the long term. Similarly, the fact that less public attention, less civil-society scrutiny, less community-based new-media literacy, and less algorithmic refinement were available for Spanish-speaking electoral discourse meant that disinformation was allowed to expand much more broadly and deeply than in English. The mainstream liberal narrative that such a fact per se helps explain the disappointing electoral results of the Democratic Party with this demographic (especially in States like Florida or Texas) is itself fairly insensitive and stereotyped. The Latinx electorate in the US is not a monolith, and segments within it have distinct political preferences, which are no more the product of disinformation than anyone else’s. Yet, it seems clear that in this electoral campaign factually false political statements received less pushback, both from above and from below, when they were uttered in Spanish.

Two general facts are illustrated by this example. On the one hand, because of the production and distribution dynamics of disinformation, it is clear that its spread follows a path of least resistance: minority languages, like peripheral countries or minor social media platforms, while unlikely to be on the cutting edge of new disinformation, tend to be more permeable to stock disinformation that has already been rebutted elsewhere. On the other hand, disinformation has the ability to do the most damage in spaces where it is unexpected, in fields considered separate and independent, subject to different rules of engagement. In this sense, fake news does not simply provide partisans with ‘factual’ reasons to feel how they already felt about their adversaries: it can genuinely catch the unsuspecting unawares. One of the reasons for disinformation’s massive impact on American public discourse is that in a hyper-partisan era political propaganda has reached all manner of domains of everyday life once completely divorced from politics, and in these contexts a weary habituation to such wrangling, which would effectively tune it out, has not yet set in. This dynamic has been reflected in social media platforms: the ‘repurposing’ of LinkedIn and NextDoor in connection with the BLM protests is telling.

So, if disinformation at its most effective is the insertion of narratives where they are least expected, and if its spread follows a path of least resistance, seeking out the weakest link (while its containment follows an actuarial logic, the most effort being placed where the highest return is expected), what does this portend for the possibility of a unitary public sphere?

There is reason to believe that these are long-run concerns, and that the Presidential campaign may have been the easy part. As Ellen Goodman and Karen Kornbluh mention in their platform electoral performance round-up,

The fact that there was clearly authoritative local information about voting and elections made the platforms’ task easier. It becomes harder in other areas of civic integrity where authority is more contested.

Foreign counterexamples such as that of Taiwan reinforce the conundrum: cohesive societies are capable of doing well against disinformation, but in already polarized ones a focus on such a fight is perceived as being a partisan stance itself.