Kyle McDonald

Rapid Response Fellow, 2020

Kyle McDonald is an artist working with code, based in Los Angeles. He crafts interactive installations, sneaky interventions, playful websites, workshops, and toolkits for other artists working with code. He explores the possibilities of new technologies: to understand how they affect society, to misuse them, and to build alternative futures, aiming to share a laugh, spark curiosity, create confusion, and share spaces with magical vibes. He works with machine learning, computer vision, and social and surveillance tech, spanning commercial and arts spaces. He was previously an adjunct professor at NYU's ITP, a member of F.A.T. Lab, community manager for openFrameworks, and artist in residence at the STUDIO for Creative Inquiry at CMU and at YCAM in Japan. His work has been commissioned and shown around the world, including at the V&A, NTT ICC, Ars Electronica, Sonar, TodaysArt, and Eyebeam.

What do you plan to do during Phase 1 of Rapid Response?

I will be researching the intersection of face analysis and policing. American police have been running face recognition on surveillance cameras for nearly 20 years. While face detection and recognition have drawn many investigations, advances in machine learning are now being weaponized to estimate everything from age and facial expression to race and gender. This analysis happens behind the scenes in billboards, social media, and policing systems, often trained on publicly accessible data.

How does your work relate to the theme of the open call?

While any single prediction may or may not be accurate, this tech is typically used to justify biased decisions and reinforce racist systems. I believe that research, critique, and education are an essential starting point for systemic change. A deeper understanding of this tech will give us the tools we need to dismantle it, whether in public spaces monitored by surveillance cameras, or through personal webcams and mobile cameras monitored by big tech companies.

What does the future look like to you?

I see a near future where the automation of decisions based on face analysis becomes so socially stigmatized that it falls into near-complete disuse. As local governments ban face recognition, I want to see face analysis treated similarly before it even has a chance to be as heavily misused.

What is your grounding ethos?

I hope to help unpack the interdependence of social and digital systems, and share those perspectives in playful ways. Sometimes this means finding a hint of deep conceptual beauty in a complex technical system. Other times this means pushing back against different kinds of power. I try to follow threads and let my work unfold in a process-oriented way. My early work was guided mostly by this curiosity and exploration. Recently, I’ve been trying to follow a deeper sense of personal responsibility.