Tag Archives: Privacy

Political economy entanglements of cryptocurrency

A few interesting news items in the past twenty-four hours illustrate the far-reaching impact of blockchain technology and its growing entanglement with structural political and economic realities. Kosovo has moved to ban cryptocurrency mining within its borders in the face of a countrywide energy crisis. Meanwhile, The Guardian reports that the ongoing political unrest in Kazakhstan has led to a crisis for global bitcoin mining, as the government shuts down the nation’s internet backbone in an attempt to thwart protesters’ communications. Finally, Casey Newton’s Platformer blog is running a piece on Signal’s imminent foray into cryptocurrency integration: Newton’s take is that the move needlessly provokes US authorities and may finally coalesce sufficient political will to outlaw end-to-end encryption outright, a step for which many voices worldwide have long been advocating.

Whatever the outcome of these specific dossiers, the data points are fast accumulating to support the claim that blockchain has broken through to mainstream status: going forward, it will be seen as a key variable shaping our future, alongside such old twentieth-century factors as the right to free expression or the price of oil.

Rightwing algorithms?

A long blog post on Olivier Ertzscheid’s personal website [in French] tackles the ideological orientation of the major social media platforms from a variety of points of view (the political leanings of software developers, of bosses, of companies, the politics of content moderation, political correctness, the revolving door with government and political parties, the intrinsic suitability of different ideologies to algorithmic amplification, and so forth).

The conclusions are quite provocative: although algorithms and social media platforms are both demonstrably biased and possessed of independent causal agency, amplifying, steering, and coarsening our public debate, in the end it is simply those with greater resources (material, social, cultural, and so forth) whose voices are amplified. Algorithms skew to the right because so does our society.

The taxman and the 4th Amendment

An interesting article in The Intercept last week covers the U.S. Treasury Department’s purchase of app data harvested by private brokers, such as Babel Street, to sidestep the court warrants that would otherwise be required for searches of personal data.

Nothing shockingly new, but the article ends with a key quote from Jack Poulson, the founder of Tech Inquiry, a research and advocacy group:

“Babel Street’s support for the IRS increasing its surveillance of small businesses and the self employed — after the IRS has already largely given up on auditing the ultrawealthy — is an example of the U.S. surveillance industry being used to help shift the tax burden to the working class”.

Digital Welfare Systems

An extremely interesting series of talks hosted by the Digital Freedom Fund: the automation of welfare system decisions is where the neoliberal agenda and digitalization intersect in the most socially explosive fashion. All six events look good, but I am particularly looking forward to the discussion of the Dutch System Risk Indication (SyRI) scandal on Oct. 27th. More info and free registration on the DFF’s website.

Limits of trustbuilding as policy objective

Yesterday, I attended a virtual event hosted by CIGI and ISPI entitled “Digital Technologies: Building Global Trust”. Some interesting points raised by the panel: the focus on datafication as the central aspect of the digital transformation, and the consequent need to concentrate on the norms, institutions, and emerging professions surrounding the practice of data (re-)use [Stefaan Verhulst, GovLab]; the importance of underlying human connections and behaviors as necessary trust markers [Andrew Wyckoff, OECD]; the distinction between content, data, competition, and physical infrastructure as flashpoints for trust in the technology sphere [Heidi Tworek, UBC]. Also, I learned about the OECD AI Principles (2019), which I had not run across before.

While the breadth of different sectoral interests and use-cases considered by the panel was significant, the framework for analysis (actionable policy solutions to boost trust) ended up being rather limiting. For instance, communal distrust of dominant narratives was considered only from the perspective of deficits of inclusivity (on the part of the authorities) or of digital literacy (on the part of the distrusters). Technical policy fixes can be a reductive lens through which to see the problem of lack of trust: such an approach misses both the fundamental compulsion to trust that typically underlies the debate and the performative effects sought by public manifestations of distrust.