Because it is more straightforward to train AI to optimize for our engagement in an app or what we click on (behaviors that are easy to quantify), many of the algorithms we interact with every day have been trained to treat our emotional cues as means to those ends.
This isn't always bad. When we engage with something, it's often a sign that we enjoy it. But our engagement can also be driven by an unhealthy inclination to outrage and moral condemnation, by emotional insecurities, feelings of envy and greed, or compulsive or addictive tendencies.
AI should be able to tell the difference between healthy and unhealthy drivers of engagement, between positive and negative emotional cues, and it should use these distinctions to better optimize for human emotional well-being. It should do so not with overly reductive methods, like surfacing funny videos and eliminating negative news, but with strategies that honor the complexity of human emotion.
AI that learns from the full array of emotional cues in expression, language, and action can help us build healthier social networks and train digital assistants to respond with nuance to a user’s present state of mind. It can inform technologies that let animators bring relatable characters to life, apps that work to improve mental health, and communication platforms that enhance our empathy for others.
But algorithms that interpret our emotional behaviors can also use this information in ways that aren't conducive to our well-being. They could surface and reinforce unhealthy temptations when we are most vulnerable to them, help create more convincing deepfakes, or exacerbate harmful stereotypes.
To steer the development of empathic AI that benefits human well-being, we at The Hume Initiative asked prominent AI researchers, ethicists, social scientists, and legal professionals to weigh in on a wide range of potential use cases, ethical guidelines, and recommendations, a list that will evolve and grow as new applications arise. We hope that these recommendations will serve as guideposts for the creation of truly beneficial empathic AI technologies.
The Ethics Committee
The Hume Initiative’s Ethical Guidelines were developed by leaders in AI research, ethics, social science, and cyber law.
Alan Cowen
CEO & Chief Scientist, Hume AI
Taniya Mishra
CEO, SureStart; former Director of AI Research, Affectiva
Desmond Ong
Professor, NUS; Director, Computational Affective and Social Cognition Lab
Danielle Krettek-Cobb
Founder, Google Empathy Lab
Ben Bland
Chair, IEEE P7014 Empathic AI Working Group
Kristen Mathews
Partner, Morrison & Foerster, Global Privacy + Data Security
Elizabeth Adams
CEO, EMA Advisory Services; Fellow, Stanford Institute for Human-Centered AI
Dacher Keltner
Professor of Psychology, UC Berkeley; Director, Greater Good Science Center
Karthik Dinakar
Research Scientist, MIT Media Lab & Harvard Medical School
Kate McCall-Kiley
Co-founder, xD; Research Affiliate, MIT; Head of User Insight & Design, US Census Bureau
Edward Dove
Health Law and Regulation, Law School, University of Edinburgh