Tanja Ahlin

Unboxing AI - but how?

Updated: Apr 13, 2023



Technologies don't exist in a vacuum, but are social phenomena: the way they are designed, produced and used is contingent on their specific social, political, economic and historical contexts. As Nick Seaver (2018, 379) writes, “technical systems [are] sociotechnical systems.” It is therefore impossible to disentangle technologies from human beliefs, opinions, assumptions and even emotions. In the words of one of the earliest anthropologists of technology, Bryan Pfaffenberger (1988, 241), technology is not an independent entity that simply impacts on society or culture; rather,


Any technology is a set of social behaviors and a system of meanings. To restate the point: when we examine the “impact” of technology on society, we are talking about the impact of one kind of social behavior on another.

What does this mean for the understanding and research of AI systems?


AI mechanisms such as various algorithms are often treated as “black boxes” - people have trouble understanding how they work and why they produce the outputs that they do. This can lead to inaccurate impressions of AI, which may in turn foster too much or too little trust in these systems, as Sartori and Theodorou (2022) write. Further, organizations and governments have been exploiting this black-box nature of AI systems to deny legal liability (Sartori and Theodorou 2022, 4).


For example, algorithms are increasingly used by governments to support a “digital welfare state” – but this is “developed often without consultation, and operated secretively and without adequate oversight” (The Guardian 2020). In the Netherlands, the implementation of algorithms in the welfare system led to a violation of human rights, as families were mistakenly accused of social benefits fraud. Similarly, in Australia, welfare recipients were saddled with immense government debt through an automated debt collection system gone wrong. Much of the debate that followed concerned the legality of the automated system and how it came to be introduced in the first place. As Ben Eltham writes,


from the point of view of debt recipients and outside observers, the algorithm was a “magical, imaginary process.”

Given that technologies do not exist independently of people, sociology and anthropology can help to open those black boxes through their specific methods. Ethnography - the ultimate qualitative methodology grounded in extended participant observation, complemented with in-depth interviews, text and discourse analysis, visual and other methods - is particularly useful for studying unpredictable innovative technologies such as various kinds of AI, because it allows for serendipity and surprise, for which quantitative methods leave little space.


For instance, when their initial research methods did not yield great results in their study of the predictive policing system in Delhi, India, Vidushi Marda and Shivangi Narayan (2021) turned to ethnography. This approach allowed them to


shift [their] attention from only studying the use and outcomes of the system to include the institution and actors involved (Marda and Narayan 2021, 188).

Through ethnography, they discovered the underlying assumptions built into the technology that predicted where in the city crime was most likely to happen: the functioning of the algorithm was grounded in the institutional belief that "poor immigrants and people who lived in poorer areas were de facto 'criminals'" (Marda and Narayan 2021, 188). Further, the automatic association between crime and specific places was not based on some sort of address database on the city map, but rather made "in accordance with the plotter's subjective knowledge of Delhi," as they write. Ethnography, Marda and Narayan conclude, is important to bridge the gap between the potential of AI systems and the everyday experiences of their use. This gap, they maintain, often arises from the AI black box, as

the personnel meant to use and benefit from technology do not understand it, or do not have the infrastructural support to use it.


Prediction mapping for police (image source)


To address the asymmetric power relations embedded in AI systems, Marda and Narayan argue, AI researchers, technologists and activists need to expand their methodological tools to include qualitative methods, which are a hallmark of anthropology and sociology.


Yet at the same time, it is important not to see these disciplines as simply having to “deal with residual aspects” (Sartori and Theodorou 2022, 4) of other disciplines, such as computer science or engineering, which are often deemed to be at the core of AI research. Indeed, there is a danger of seeing ethnography as a means of simply providing

the analog alternative to algorithms' digital; the small-scale to their large-scale; the cultural to their technical; the thick description to their thin; the human to their computer (Seaver 2018, 380).

As Seaver (2018) argues, only filling in "the analog slot" with anthropological insights on "the social" - while foregrounding the digital as the centre of inquiry - is not sufficient because it re-creates these very dichotomies. Rather, he suggests,


we can refuse [the human-algorithm dichotomy] from the start and look for alternatives; we can try to bring that human-versus-algorithm frame into focus and see how it is maintained and reproduced (Seaver 2018, 381).

This means focusing attention on the practical implementation of algorithms in everyday practices of translation, maintenance, repair, confusion and improvised solutions. It means finding the people who work within the systems of technologies, whose labor fuels the algorithms but who mostly remain invisible. Ethnography, then, is a method that can shed light on how people and machines collaborate, what they jointly create and under what conditions, and how their collaborations could be made better.



Sources


Marda, Vidushi, and Shivangi Narayan. "On the importance of ethnographic methods in AI research." Nature Machine Intelligence 3, no. 3 (2021): 187-189.


Pfaffenberger, Bryan. "Fetishized objects and humanized nature: Toward an anthropology of technology." Man, New Series, 23, no. 2 (1988): 236-252.


Sartori, Laura, and Andreas Theodorou. "A sociotechnical perspective for the future of AI: narratives, inequalities, and human control." Ethics and Information Technology 24, no. 1 (2022): 4.


Seaver, Nick. "What should an anthropology of algorithms do?" Cultural Anthropology 33, no. 3 (2018): 375-385.
