Tag Archives: Social media

A.utomated I.dentity?

An interesting, thoughtful article by Michelle Santiago Cortés in The Cut last week looks at affective relationships with algorithms and their role in shaping our identities.

Three parts of the analysis specifically stood out to me. The first revolves around our typical lack of knowledge of algorithms: Cortés’ story about

some YouTube alpha male […] out there uploading videos promising straight men advice on how to “hack” the Tinder algorithm to date like kings

is clearly only the tip of a gigantic societal iceberg: a cargo-cult-as-way-of-life involving pretty much everyone, in the remotest and most diverse walks of life. The ever-evolving nature of these algorithms compounds the obfuscation effect, making end-users’ strategic attempts, whether exploitation- or resistance-focused, generally appear puerile.

Second, the clarity with which Cortés encapsulated the main tradeoff in the relationship was truly apt:

[w]e are, to varying degrees, okay with being surveilled as long as we get to feel seen.

The assertion of visibility and assurance of recognition are two of the key assets algorithmic systems offer their users, and their value can hardly be minimized as mere late-consumerist narcissism.

Finally, the comparison between algorithmic portraits of personality and astrology was extremely telling. Closing the behavioral loop, from algorithmic interaction to the redefinition of one’s own identity in the algorithm’s inevitably distorting mirror, is still a matter of choice, or rather a sensibility that can be honed and socialized: a sense of the most empowering and nurturing use of what is ultimately a hermeneutic tool. Of course, such a benign conclusion depends on the ambit of application of these technologies: music videos, entertainment, dating. As soon as our contemporary astrological devices are put in charge of directing outcomes in political economy and public policy, the moral calculus shifts rapidly.

Rightwing algorithms?

A long blog post on Olivier Ertzscheid’s personal website [in French] tackles the ideological orientation of the major social media platforms from a variety of points of view (the political leanings of software developers, of bosses, of companies, the politics of content moderation, political correctness, the revolving door with government and political parties, the intrinsic suitability of different ideologies to algorithmic amplification, and so forth).

The conclusions are quite provocative: although algorithms and social media platforms are both demonstrably biased and possessed of independent causal agency, amplifying, steering, and coarsening our public debate, in the end it is simply those with greater resources (material, social, cultural, and so forth) whose voices are amplified. Algorithms skew to the right because so does our society.

A global take on the mistrust moment

My forthcoming piece on Ethan Zuckerman’s Mistrust: Why Losing Faith in Institutions Provides the Tools to Transform Them for the Italian Political Science Review.

Limits of trust-building as policy objective

Yesterday, I attended a virtual event hosted by CIGI and ISPI entitled “Digital Technologies: Building Global Trust”. Some interesting points raised by the panel: the focus on datafication as the central aspect of the digital transformation, and the consequent need to concentrate on the norms, institutions, and emerging professions surrounding the practice of data (re-)use [Stefaan Verhulst, GovLab]; the importance of underlying human connections and behaviors as necessary trust markers [Andrew Wyckoff, OECD]; the distinction between content, data, competition, and physical infrastructure as flashpoints for trust in the technology sphere [Heidi Tworek, UBC]. Also, I learned about the OECD AI Principles (2019), which I had not run across before.

While the breadth of sectoral interests and use-cases considered by the panel was significant, the framework for analysis (actionable policy solutions to boost trust) ended up being rather limiting. For instance, communal distrust of dominant narratives was considered only from the perspective of deficits of inclusivity (on the part of the authorities) or of digital literacy (on the part of the distrusters). Technical policy fixes can be a reductive lens through which to view the problem of lack of trust: such an approach misses both the fundamental compulsion to trust that typically underlies the debate and the performative effects sought by public manifestations of distrust.

Routinization of influence, exacerbation of outrageousness

How is the influencer ecosystem evolving? Opposing forces are in play.

On the one hand, a NYT story describes symptoms of consolidation in the pathway from large organic online followings to brand ambassadorships. As influencing becomes a day job, stably inserted into the consumer-brand/advertising nexus, the informal, supposedly unmediated communication typical of social media quickly becomes unwieldy for business negotiations: at scale, professional intermediaries are necessary to manage transactions between the holders of social media capital/cred and the business interests wishing to leverage it. A rather more disenchanted, normalized, workaday image of influencer life thereby emerges.

On the other hand, a Vulture profile of an influencer whose personal magnetism is matched only by her ability to offend (warning: NSFW) signals that normalization may ultimately be self-defeating. The intense and disturbing personal trajectory of Trisha Paytas suggests that taming internet celebrity for commercial purposes is by definition a Sisyphean endeavor, for the currency involved is authenticity, whose seal of approval lies outside market transactions. The biggest crowds on the internet are still drawn by the titillation of outrage, although its enactors may not thereby be suited to sell much of anything, except themselves.