Speech, Signal, Symptom: Machine Listening and the Limits of Technosolutionism in American Mental Health Care

My current book project ethnographically investigates initiatives to embed voice analysis technologies, or “vocal biomarker AI,” into American mental health care research and practice, and, in doing so, to transform how people who interact with the mental health care system are listened to.

I show that the effort to render mental distress and psychosocial disabilities machine-audible is part of a much broader historical project in the United States, an artifact of the intertwining of security and insurance regimes with the psy-disciplines, institutions of confinement, computer science, and communication engineering. The book approaches vocal biomarker AI systems as sites where dominant ideologies of listening, language, disability, race, and gender are distilled, reconfigured, and, at times, contested by the very individuals involved in constructing these technologies, from clinical social workers and data annotators to human research subjects.

Research from this project was funded by the National Science Foundation, the Wenner-Gren Foundation, and the Society for Psychological Anthropology. It has been published in Science, Technology, & Human Values, Somatosphere, the AI Now Institute’s A New Lexicon of AI series, and the edited volume, Technocreep and the Politics of Things Not Seen (May 2025, Duke University Press).

Sounds Suspect: The Paranoid Past of Voice Stress Analysis

Drawing on archival and historical anthropological research, my second project examines the world of voice-based lie detection in the United States and abroad, offering a transnational and carceral pre-history of clinical algorithmic listening. I presented preliminary findings from this project at the 2021 SIGCIS conference and the 2021 Society for Social Studies of Science Annual Meeting.

Public Scholarship

I am committed to public scholarship, especially in collaboration with computer scientists, healthcare workers, survivor/patient/user communities, and artists. To read more of my thoughts on collaborating with tech workers in particular, see my interview with visual artist and computer scientist Jonathan Zong. The interview was part of an online series about digital psychiatry that I co-edited with Dörte Bemme and Natassia Brenman. To close the series, we co-produced a podcast episode on digital exclusions and digital mental health care, directed by River Ujhadbor and Dörte Bemme.

I served as a subject matter expert for an Access Now report, written by Xiaowei Wang and Shazeda Ahmed, on the risks of emerging biometric technologies.

In 2020, together with more than 20 other academic and technology workers in the Coalition for Critical Technology, I co-wrote and facilitated an open letter condemning the development of computational physiognomic models for “predicting criminality.” If you would like to use the letter for teaching purposes, you may download this accessible PDF. I was also a participant in the 2019 Alex Trebek Forum for Dialogue on Artificial Intelligence and Mental Health Care at the University of Ottawa, and co-organized “#AICantFixThis: MIT, Imperialism, and the Future of AI,” a teach-in at MIT.