
What we talk about when we talk about algorithms*

By Mike Ridley

Words create reality; what we say and write makes ideas concrete. Such is the case with how we talk about algorithms, which are simply processes that transform inputs into outputs. Despite this simplicity, algorithms fuel powerful artificial intelligence and complex machine learning systems. While algorithms are “neither good nor bad, nor neutral”, how we refer to them signals our emotional and intellectual reactions to them.
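To make that definition concrete, here is a hypothetical toy example of my own (not drawn from any of the works discussed here): a few lines of Python that take an input and deterministically produce an output are already an algorithm in this sense.

```python
def word_frequencies(text):
    """Transform an input string into an output mapping of word -> count.

    A minimal algorithm: a defined process from inputs to outputs.
    """
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

print(word_frequencies("to be or not to be"))
# -> {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```

Everything from a word counter like this to a social-media recommender fits the same input-to-output description; the difference lies in scale and consequence, not in kind.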

The explosion of research about algorithms has resulted in specific descriptive phrases about their nature and characteristics. Let’s take a brief tour through some of them to see how they reflect our hopes and concerns.


Before 1997, algorithmic bias was a statistical concept, in which bias is traded off against variance. However, in that year, Batya Friedman and Helen Nissenbaum used “algorithmic bias” to indicate what it generally means now: algorithms that exhibit unfairness, algorithms that can’t be trusted.
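The older, statistical sense is worth pausing on. In that usage, an estimator is “biased” if it is systematically off-target, and this is weighed against its variance, how much it scatters from run to run. A hypothetical sketch (my own illustration, not from Friedman and Nissenbaum): the sample maximum of draws from a uniform distribution is a biased estimator, since it always falls short of the true maximum, but its variance is small.

```python
import random

# Toy illustration of statistical bias vs. variance (hypothetical example).
# Estimate the maximum of a Uniform(0, 1) distribution using the sample
# maximum of 10 draws. This estimator is biased (it always underestimates)
# but has low variance.
random.seed(0)
true_max = 1.0
estimates = []
for _ in range(1000):
    sample = [random.uniform(0, true_max) for _ in range(10)]
    estimates.append(max(sample))

mean_est = sum(estimates) / len(estimates)
bias = mean_est - true_max  # negative: a systematic underestimate
variance = sum((e - mean_est) ** 2 for e in estimates) / len(estimates)
print(f"bias={bias:.3f} variance={variance:.4f}")
```

In this older vocabulary, “bias” is a neutral, measurable property of a procedure; the post-1997 usage attaches it to questions of fairness and trust instead.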

Because of concerns about bias and unfairness, many people exhibit “algorithmic aversion” (Berkeley Dietvorst), refusing to accept algorithmic decisions and placing their trust only in human decision-makers. Conversely, other people prefer the supposed objectivity and expertise of algorithms, which is referred to as “algorithmic appreciation” (Natali Helberger). Many see this broad acceptance of algorithms as an uncritical perspective leading to “algorithmic overdependence” (Sachin Banker & Salil Khetani), where people accept algorithmic recommendations even when they are demonstrably inaccurate or wrong. This can lead to “algorithmic obedience” (Cathy O’Neil).

Concerns about “algorithmic inequity” (Sara Wachter-Boettcher) and “algorithmic inequality” (Virginia Eubanks) arising from the application of “algorithmic bias” became more intense and urgent with Safiya Noble’s description of “algorithmic oppression” and Ruha Benjamin’s “algorithmic racism”. Mimi Onuoha amplifies this harm in a visceral way by calling out “algorithmic violence”. A political and neocolonial perspective, with specific reference to Africa, views this racism, oppression and violence through the lens of “algorithmic colonialism” and “algorithmic colonization” (Abeba Birhane).

An insightful counterfactual example is “algorithmic privilege” (Julia Angwin). This phrase emphasizes the disproportionate rewards that dominant socioeconomic groups accrue because of “algorithmic bias”. By drawing attention to the unmerited advantages algorithms bestow, it underscores their potential to harm.

Increasingly, because of our engagement with algorithms through social media and recommender systems such as Facebook, Netflix and Amazon, we have identities defined by algorithms. These are our “algorithmic identities” (John Cheney-Lippold) and “algorithmic selves” (Frank Pasquale): reflections of our true selves and, at the same time, constructions of what we want to be and want to be seen as. These identities and selves are fueled by the avaricious accumulation of our personal data, which is shared and used by others, often unbeknownst to us and certainly without compensation. Hence the rise of “algorithmic sovereignty” (Urbano Reviglio & Claudio Agosti), a techno-political attempt to reclaim personal ownership of our data and control over when it is or is not used. A less stringent form of this is “algorithmic accountability” (Nicholas Diakopoulos), where regulatory and legal practices provide guardrails around decision-making systems. From a more activist perspective, Joy Buolamwini calls this “algorithmic justice”.


Researchers in the area of critical algorithm studies have generated a raft of evocative phrases, all arising from “algorithmic culture” (Andrew Goffey). This is the notion that algorithms are embedded in the everyday and affect broad social, political and economic experiences. It’s easy to view algorithms primarily in a negative light. However, the “algorithmic imaginary” (Taina Bucher) and the “algorithmic imagination” (Ed Finn) portray algorithms as a place of shared power. We are not just used by algorithms; we can in turn use them (or resist them) to our benefit. Tanya Kant calls our ability to do so the “algorithmic capital” we can choose to “spend” at appropriate moments. These researchers see various strategies to counter and oppose “algorithmic determinism” (Amy Webb).

Let’s end this tour of what we talk about when we talk about algorithms with my favourite: “algorithmic literacy” (Newcomb Greenleaf). On one hand, this describes the skills and understanding we need to effectively use and create algorithms. It is a literacy we possess. At the same time, however, it confers literacy on algorithms themselves; it suggests that they possess qualities capable of literate expression.

Algorithms. Can’t live with them and can’t live without them.

Note: A bibliography with all the references made here is available on Zotero

*With apologies to Raymond Carver and Leah Price.

Michael Ridley is Librarian Emeritus at the University of Guelph, where he was formerly the Chief Librarian and CIO. He is also a PhD candidate at the Faculty of Information and Media Studies (FIMS), Western University.
