It never quite stops being creepy when, after hanging out with friends, you open Instagram to find yourself staring at an ad for something plucked straight out of your conversation. How did they know? Are they listening? Of course, by now we’re all aware of the way companies track and collect user data in order to serve up targeted ads or “helpful” suggestions that attempt to anticipate—and direct—our wants, needs, and actions. Shoshana Zuboff termed this practice “surveillance capitalism”, and it has spread far beyond Silicon Valley to become virtually ubiquitous, violating user privacy to create algorithmic prediction tools designed to exploit, control, and manipulate our emotions and behaviours. Data-driven technologies, often grouped together under the murky moniker of artificial intelligence (AI), promise to increase efficiency and productivity by helping companies and governments navigate complexity, automate processes, and derive insights from the past to make predictions about the future. Yet whatever utility AI may provide, it also poses many risks, to individuals and to society at large, by implementing algorithmic systems that are often flawed, biased, and opaque, and that diminish human agency. These risks are not abstract or purely speculative. Over the last few years, we’ve come to understand how algorithmically mediated social platforms are optimised to spread fake news and foster political polarisation. We’ve read news stories of racist predictive policing software and biased recruiting tools that discriminate against applicants based on gender, ethnicity, or the way they speak. As AI tools rapidly make their way into healthcare, education, social services, and virtually every industry imaginable, governments race to introduce regulation and tech companies half-heartedly set up ethics committees to mitigate potential harms. Yet the question of how to responsibly design and implement AI tools that safeguard the wellbeing of individuals still looms large.
Public opinion on how AI and data-driven technologies impact privacy, agency, and people’s willingness to trust—the tools themselves, but also governments, corporations, and fellow citizens—is the subject of a recent report from PATH-AI, a collaborative, multidisciplinary research project between The Alan Turing Institute, the University of Edinburgh, and the RIKEN research institute in Japan. The project examines how the interrelated values of privacy, agency, and trust operate in the UK and Japan, highlighting how different cultural contexts produce divergent understandings of, and sentiments about, AI. Somerset House, in collaboration with PATH-AI, invited an international group of artists consisting of Nouf Aljowaysir, Chris Zhongtian Yuan, and Juan Covelli to respond to the report’s findings during a six-month remote residency programme. The resulting new commissions offer three very different meditations on the human-AI relationship, each situated in the unique cultural experience of the artist. The new works give form to the general public unease about AI reflected in the report’s findings: concerns over power asymmetries between users, governments, and corporations; feelings of disempowerment and distrust stemming from the lack of visibility into black-box tools and systems; and apprehension that the rapidly increasing complexity of AI systems will make it difficult for governments to introduce meaningful regulatory oversight. The artworks are by turns personal, philosophical, political, and cathartic. They help us think through what it means to live with AI, and to consider how it works and whose interests it serves.
Nouf Aljowaysir, a Saudi-born artist who spent her teenage years in America, offers a poetic exploration of the cultural erasure embedded in, and enacted through, AI tools. In the short film Ana Min Wein (Where Am I From?), the artist draws on her own family history and migration experience, taking viewers on a “genealogical journey” through her ancestral homelands via a dialogue with an AI narrator. Aljowaysir shows the AI old family photos, footage of desert landscapes and Arabian cities, women wearing abayas, men wearing traditional white thawb robes and pink and white shemaghs. Filtered through the gaze of the machine, these cornerstones of Arab culture are consistently misidentified in ways that might be comical if they didn’t reinforce common Western stereotypes. Ana Min Wein exposes how the reductive systems of categorisation and classification used to organise data—an essential part of training machine learning algorithms—fail to capture the complexity and dynamism of personal and cultural identity. AI systems trained on datasets that disproportionately favour and reflect Western normativity simply do not comprehend the Arab way of life—they were never taught to do so. ‘Why do I live in a world that doesn’t see me? Why do the forces that kept me hiding who I am persist today?’ Aljowaysir asks in the film. The work highlights how a Western, colonialist hegemony is disseminated through supposedly ‘neutral’ and ‘objective’ algorithmic tools which, in practice, reflect and reify the ideological perspectives and values of their creators. As these tools become the basis for a global technological infrastructure, they risk forcing other cultures to conform to Western norms in order to be seen and understood. By teaching her family’s story to the failed AI, Aljowaysir attempts to reclaim her ancestors’ collective memory from erasure.
Memory is also a central theme in Chris Zhongtian Yuan’s Cloudy Song. The atmospheric, mostly black-and-white film similarly unfolds as a dialogue between a human and an AI. The human, named Minmin, suffers from amnesia and has procured an AI care robot to help them recover their memory. Together they wander a liminal space that refuses to fully reveal itself—down dark corridors and through empty rooms, grasping endlessly at ghostly fragments of skyscrapers from an unknown city, reaching for shadowy figures that constantly seem to slip through their fingers. The AI asks Minmin questions and mines their subconscious, their dreams, everything they’ve ever seen or done, as well as the collective memories of Minmin’s friends and family. From this, the AI serves up playlists of music that Minmin once loved, old stories, and video clips in an attempt to help them reconstruct their faded recollections. Garbled refrains from popular Hong Kong pop songs, footage from a 90s punk show, a miniature replica of a childhood bedroom—these things bring Minmin closer to remembering, but on the whole the experience proves frustrating. The AI fails to provide the answers Minmin is looking for. Instead, their conversations wander into more philosophical territory. Minmin and the AI discuss questions of power and control, trust and agency, and spiritual belief systems. They talk about who made the AI and why, as well as what will become of each of them when they die. There is an ambivalence in their relationship—at times it seems antagonistic, at others co-dependent. Even though the AI doesn’t work as seamlessly as advertised, Minmin is nevertheless reliant on it. Yuan’s hauntingly poetic film underscores the co-extensive bond between human and AI, intra-acting agents that shape, and are shaped by, each other in turn.
Meanwhile, Juan Covelli’s Los Caídos (The Fallen) assembles many of our most pressing anxieties about AI into a high-octane narrative of social uprising that feels ripped straight from the headlines. Inspired by events in his native Colombia during the pandemic, the piece tells the story of how a population in lockdown becomes radicalised by algorithmically manipulated news sources that feed it a steady diet of fake news and misinformation. In Los Caídos, the Corporate Government uses algorithmic systems to manipulate and control the public, stoking fear and confusion at a time when people are already isolated from one another. But after months without adequate food and healthcare, the people rebel. They take to the streets in protest and mount a revolution. Built in a game engine, the work features riotous battle scenes modelled on footage from Colombia’s 2021 National Strike protests and simulated by an AI. Sound clips from the real-life protests are interspersed with the film’s charged techno score. In contrast to the works of Yuan and Aljowaysir, Covelli’s take on AI is unapologetically critical. It centres on how algorithmic tools serve systems of power by creating ever more precise and far-reaching modes of surveillance and social engineering. The mood feels classically dystopian, but Covelli’s tale has a hopeful conclusion: a popular front of activists, students, indigenous peoples, and environmentalists from all over Latin America comes together to overthrow the autocratic regime. The streets turn from sites of conflict to sites of conviviality, erupting in ecstatic dancing to ring in liberation. The work is a rallying cry for forms of solidarity and resistance that centre marginalised peoples in the face of algorithmic systems designed to keep us divided and atomised.
We are at a crucial juncture in our collective journey with AI. There is a growing awareness of how data-driven technologies perpetuate and reinforce existing systems of power and inequality in ways that are often hard to see (because they are hidden from view) and hard to challenge (because they wear the cloak of ‘mathematical objectivity’). Reframing AI as ‘an ideology, not a technology’, as Covelli does in Los Caídos, helps us understand these tools as encoded systems of belief that are always subjective, the product of specific cultural contexts and worldviews. This perspective helps us imagine how AI could be oriented differently—away from corporate and biopolitical interests and towards other values, other belief systems, and perhaps other forms of intelligence beyond the human. Studies like the PATH-AI report underscore the urgency and complexity of this task, particularly when it comes to navigating competing interests and reconciling global technological infrastructure with local value systems. The artworks in this series help us vividly imagine what is at stake. They suggest a more situated, contextually aware approach, one that eschews totalising, top-down, universal design and places communities at the centre of the conversation. What kind of AI would we get if the public were to participate in designing these systems and deciding how they should be used and governed? What kind of AI would we get if we were to conceive of these tools as public utilities rather than proprietary technologies that serve corporate interests? Perhaps then we would finally have AI tools that we feel we can actually trust.
Written by
Julia Kaganskiy
Related Works
Part of the PATH-AI commissions.
Ana Min Wein (Where Am I From?)
- Nouf Aljowaysir
Los Caídos (The Fallen)
- Juan Covelli
Cloudy Song
- Chris Zhongtian Yuan