To steer the development of empathic AI that benefits human well-being, we at The Hume Initiative asked prominent AI ethicists, safety engineers, social scientists, and legal professionals to weigh in on a wide range of potential use cases, ethical guidelines, and recommendations. This list will evolve and grow as new applications arise. We hope that these recommendations will serve as guideposts for the creation of truly beneficial empathic AI technologies.
The present guidelines focus on what we call "empathic AI": any technology that is explicitly designed to respond to cues of emotion.
Our Ethics Committee began by developing six core principles for weighing the ethics of empathic AI applications: Beneficence, Emotional Primacy, Scientific Legitimacy, Inclusivity, Transparency, and Consent.
Algorithms that respond to cues of emotion open avenues for measuring well-being. With this capability comes the responsibility to incorporate measures of well-being into analyses of the costs and benefits of empathic technology. We offer recommendations for doing so, informed by the science of happiness.
For specific applications that are aligned with our Guiding Principles and whose risks are properly mitigated, we at The Hume Initiative have determined that the benefits substantially outweigh the potential costs. This list of supported use cases will grow as new applications arise.
We encourage developers to contact us if they are contemplating an application not addressed by our Supported Use Cases. However, in the interest of avoiding the worst outcomes, here we outline categories of use cases that we can firmly say we will never support.
We expect these categories to evolve and grow as new applications of empathic AI come to our attention.
Want to learn more?
If you’re interested in joining the empathic AI movement, have questions, or would like to share your feedback, please don’t hesitate to get in touch.