Interesting article on Slashdot about the historical roots of the weaponization of doubt and scientific disagreement by special interests.
It is notable that these phenomena first appear at scale with the pervasive political engagement of corporations in American politics in the 1970s and ’80s: this is the moment when business as a whole detaches from automatic support for a particular political party, choosing its battles and the champions for them (financing an insurgent movement, litigation, legislative lobbying, and so forth) on a case-by-case basis, and also the dawn of the end-of-ideologies era. These themes are well discussed by Edward Walker in Grassroots for Hire (2014).
As for the present predicament, one is reminded of an NYT op-ed from last year by William Davies, “Everything Is War and Nothing Is True” on public political discourse:
Social media has introduced games of strategy into public discourse, with deception and secrecy — information warfare — now normal parts of how arguments play out.

One is also reminded of a similarly dated piece by Z. Tufekci on the commercial side of things:
The internet is increasingly a low-trust society—one where an assumption of pervasive fraud is simply built into the way many things function.
There definitely seem to be systemic aspects to this problem.
Interesting study (via Schneier) on how to use disinformation to attack the power grid. In essence, one is trying to game the profit-maximizing behavior of consumers (in this case, through fake information on discounts in electricity used during peak times), nudging them in precisely the opposite direction of market signals, hence overloading the grid. The general obscurity of electricity pricing for the consumer (much of which may be by design) is an important enabler of this hack.
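The mechanism can be illustrated with a toy model (all numbers here are hypothetical, not drawn from the study): consumers who receive the true peak-price signal curtail their usage, while those who receive a fake "peak-time discount" shift extra load into the peak hour. Past some fraction of deceived consumers, aggregate demand exceeds feeder capacity.

```python
# Toy model of a disinformation attack on demand response.
# All parameters are illustrative assumptions, not figures from the study.

GRID_CAPACITY = 1150.0   # hypothetical feeder capacity at peak (kW)
N_CONSUMERS = 1000       # households on the feeder
BASELINE_KW = 1.0        # average per-household peak-hour draw (kW)

def peak_load(fraction_deceived: float, shift: float = 0.5) -> float:
    """Total peak-hour load when a fraction of consumers believe a fake
    'discount' message and add load at peak, while the rest respond to
    the true high price by curtailing the same fraction of their use."""
    n_deceived = int(fraction_deceived * N_CONSUMERS)
    n_honest = N_CONSUMERS - n_deceived
    deceived_load = n_deceived * BASELINE_KW * (1 + shift)  # shift INTO peak
    honest_load = n_honest * BASELINE_KW * (1 - shift)      # curtail at peak
    return deceived_load + honest_load

if __name__ == "__main__":
    for f in (0.0, 0.3, 0.7):
        load = peak_load(f)
        status = "OVERLOAD" if load > GRID_CAPACITY else "ok"
        print(f"deceived fraction {f:.0%}: {load:7.1f} kW  [{status}]")
```

Under these assumed parameters, deceiving roughly 30% of consumers still leaves the feeder within capacity, while 70% pushes it over: the attack works by turning the very elasticity that demand-response programs rely on against the grid.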
Good article about the politicization of the US Department of Homeland Security’s intelligence briefings with regard to alleged Russian disinformation activities during the presidential campaign. Beyond the merits of the specific case, it is interesting that within the federal system a chief purpose of DHS’s intelligence gathering is to provide broader context for local law enforcement; however, given the competitive nature of the US intelligence ecosystem, perceived politicization of one agency leads to a loss of authoritativeness relative to other parts of the intelligence community. This would be a self-correcting mechanism. If, on the other hand, such briefings were intended not primarily as a guide for action but as an instrument for steering public debate, a sort of public diplomacy, their perceived internal authoritativeness would not matter so much: they would still provide official cover for decisions taken along sympathetic ideological lines. A single tool cannot fulfill both these tasks well, and shifts in public perception are extremely hard to reverse.
Yesterday I attended an online panel organized by the Atlantic Council with government (Matt Masterson of CISA), think-tank (Alicia Wanless of the Carnegie Endowment for International Peace and Clara Tsao of the AC’s DFRLab) and industry figures (Nathaniel Gleicher of FB and Yoel Roth of Twitter) on steps being taken to guarantee the integrity of the electoral process in the US this fall. The general sense was that the current ecosystem is much less vulnerable to disinformation than it was in the last presidential cycle four years ago, despite the unprecedented challenges of the current election. However, the most interesting panelist, Wanless, was also the least bullish about the process.
Good review article in the NYRB (via Schneier) on digital disinformation as a nation-State resource for Great Power competition. One of the three books reviewed is Philip Howard’s latest, Lie Machines (Yale UP), which I am thinking of adopting for my course next year.