Tag Archives: Doxxing

Trust among thieves

An item that recently appeared on NBC News (via /.) graphically illustrates the pervasiveness of the problem of trust across organizations, cultures, and value systems. It also speaks to the routinization of ransomware extortion and other forms of cybercrime as none-too-glamorous career paths, engendering their own disgruntled and underpaid line workers.

Sharp Eyes

An interesting report in Medium (via /.) discusses the PRC’s new pervasive surveillance program, Sharp Eyes. The program, which complements several other mass surveillance initiatives by the Chinese government, such as SkyNet, is aimed especially at rural communities and small towns. With all the caveats related to the fragmentary nature of the information available to outside researchers, it appears that Sharp Eyes’ main characteristic is being community-driven: the feeds from CCTV cameras monitoring public spaces are made accessible to individuals in the community, whether at home on their TVs and monitors or through smartphone apps. Hence, local communities become responsible for monitoring themselves (and for denouncing deviants to the authorities).

This outsourcing of social control is clearly a labor-saving initiative, which itself ties into a long-running, classic theme in Chinese governance. It is not hard to see how such a scheme could encourage dynamics of social homogenization and regimentation, and be especially effective against stigmatized minorities. After all, the entire system of Chinese official surveillance is more or less formally linked to the controversial Social Credit System, a scoring of the population for ideological and financial conformity.

However, I wonder whether a community-driven surveillance program, in rendering society more transparent to itself, does not also potentially offer accountability tools to civil society vis-à-vis the government. After all, complete visibility of public space by all members of society also can mean exposure and documentation of specific public instances of abuse of authority, such as police brutality. Such cases could of course be blacked out of the feeds, but such a heavy-handed tactic would cut into the propaganda value of the transparency initiative and affect public trust in the system. Alternatively, offending material could be removed more seamlessly through deep fake interventions, but the resources necessary for such a level of tampering, including the additional layer of bureaucracy needed to curate live feeds, would seem ultimately self-defeating in terms of the cost-cutting rationale.

In any case, including the monitored public within the monitoring loop (and emphasizing the collective responsibility aspect of the practice over the atomizing, pervasive-suspicion one) promises to create novel practical and theoretical challenges for mass surveillance.

FB as Great Game arbiter in Africa?

French-language news outlets, among others, have been reporting on a Facebook takedown operation (here is the full report by the Stanford Internet Observatory and Graphika) against three separate influence and disinformation networks active in various sub-Saharan African countries since 2018. Two of these have been traced back to the well-known Russian troll farm, the Internet Research Agency; the third, however, appears to be linked to individuals in the French military (which is currently deployed in the Sahel). In some instances, notably in the Central African Republic, the Russian and French operations competed directly with one another, attempting to doxx and discredit each other through fake fact-checking pages and impersonations of news organizations, as well as by using AI to create fake online personas posing as local residents.

The report did not present conclusive evidence attributing the French influence operation directly to the French government. It also argues that the French action was in many ways reactive to the Russian disinfo campaign. Nonetheless, as the authors claim,

[b]y creating fake accounts and fake “anti-fake-news” pages to combat the trolls, the French operators were perpetuating and implicitly justifying the problematic behavior they were trying to fight […] using “good fakes” to expose “bad fakes” is a high-risk strategy likely to backfire when a covert operation is detected […] More importantly, for the health of broader public discourse, the proliferation of fake accounts and manipulated evidence is only likely to deepen public suspicion of online discussion, increase polarization, and reduce the scope for evidence-based consensus.

What was not discussed, either in the report or in news coverage of it, is the emerging geopolitical equilibrium in which a private company can act as final arbiter in an influence struggle between two Great Powers in a third country. Influence campaigns by foreign State actors are in no way a 21st-century novelty: the ability of a company such as Facebook to insert itself into them most certainly is. Media focus on the disinformation-fighting activities of the major social media platforms in the case of the US elections (hence, on domestic ground) has had the effect of minimizing the strategic importance these companies now wield in international affairs. The question is to what extent they will be allowed to operate in complete independence by the US government, or, put otherwise, to what extent foreign Powers will insert this dossier into their overall relations with the US going forward.

Cyberwarfare articles

A couple of scholarly articles on cyberwarfare read today. The first, a long piece by James Shires in the Texas National Security Review, speaks to a long-term thread of interest for me, namely the (imperfect) mapping of real-world alliances onto operations in the cyber domain: the UAE, Qatar, and Saudi Arabia, although strategic partners of the US in the Gulf region, nonetheless targeted hack-and-leak operations (HLOs) at the US.

Shires underscores the patina of authenticity that leaks carry, and does a good job of showing how HLOs connect with Bruce Schneier’s concept of “organizational doxxing”. In describing these HLOs as “simulations of scandal”, he leverages theoretical understandings of the phenomenon such as Jean Baudrillard’s. Standards of truth emerge as a major object of manipulation, but the key stake is whether the public will focus on the hack or on the leak as the essence of the story.

The second article, by Kristen Eichensehr at justsecurity.org, reflects on the technical and legal process of attributing cyberattacks. It argues in favor of creating a norm of customary international law obliging States to provide evidence when they attribute acts of cyberwarfare to a State or non-State actor. How to guarantee the credibility of the evidence and of the entity providing it (whether a centralized international body, a government agency, a think-tank, an academic institution, or a private company) remains somewhat vague under her proposal.

Surveillance and anti-surveillance in residential housing

An interesting project (via Slashdot) to track and report the deployment of surveillance technology by property owners. This type of granular, individual surveillance relationship sometimes gets lost in broader debates about surveillance, where we tend to focus on Nation-States and giant corporations, but it is far more pervasive and potentially insidious (as I discuss in I Labirinti). Unsurprisingly, it is showing up at a social and economic flashpoint of these pandemic times: the residential rental market. The Landlord Tech Watch mapping project is still in its infancy; whether doxxing is an effective counter-strategy to surveillance in this context remains to be seen.