Benefits are measured in terms of increases in well-being. Well-being is measured in terms of reported emotional experience (e.g. positive emotions, self-esteem, and satisfaction with life) and objective proxies of well-being (e.g. emotional expression, health outcomes, and goal attainment).
Costs are measured in terms of reductions in well-being.
Corollary: Developers of AI should strive to understand and improve its benefit-to-cost ratio. Every other Guiding Principle derives from this core tenet.
When our emotional behaviors are used as inputs to an AI that optimizes for third-party objectives (e.g. purchasing behavior, engagement, or habit formation), the AI can learn to exploit and manipulate our emotions.
An AI privy to its users’ emotional behaviors should treat these behaviors as ends in and of themselves. In other words, increasing or decreasing the occurrence of emotional behaviors such as laughter or anger should be an active choice of developers informed by user well-being metrics, not a lever introduced to, or discovered by, the algorithm as a means to serve a third-party objective.
Algorithms used to detect cues of emotion should only serve objectives that are aligned with well-being. This can include responding appropriately to edge cases, safeguarding users against exploitation, and promoting users’ emotional awareness and agency.
Corollary: Developers of AI privy to data regarding human emotional behaviors should strive to understand and improve its effects on human emotion.
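To make this distinction concrete, here is a minimal Python sketch. The signal names (engagement, laughter_rate, wellbeing_score) and the weight are hypothetical rather than a prescribed design; the point is only that detected emotional behaviors should not appear as reward terms serving a third-party objective.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    engagement: float       # third-party objective signal (e.g. time spent)
    laughter_rate: float    # detected emotional behavior
    wellbeing_score: float  # well-being metric (reported or proxy)

def misaligned_reward(x: Interaction) -> float:
    # Emotional behavior used as a lever for a third-party objective:
    # the optimizer is now incentivized to induce laughter to drive engagement.
    return x.engagement + 0.5 * x.laughter_rate

def aligned_reward(x: Interaction) -> float:
    # Emotional behaviors are ends, not means: the optimizer answers only
    # to well-being, and any change to emotional behaviors is a deliberate
    # developer choice evaluated against that metric.
    return x.wellbeing_score
```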
Because they are difficult to interrogate and can have costly implications, claims about the capabilities of an empathic AI system deserve particularly rigorous scrutiny.
If an AI makes decisions that affect human emotions, we should understand how those decisions affect them.
Careful determinations of the capabilities, costs, and benefits of an empathic AI system can only come from precise, replicable, and appropriately generalizable studies of its behavior over time.
Corollary: Developers of empathic AI should conduct ongoing scientific research to examine its capabilities, costs, and benefits.
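As one hedged illustration of what a precise, replicable measurement can look like, the sketch below computes a bootstrap confidence interval for the difference in mean well-being between users of an empathic AI system and a control group. The data shapes and the fixed seed are illustrative assumptions; a real study would also require pre-registration, representative sampling, and longitudinal follow-up.

```python
import random
import statistics

def wellbeing_diff_ci(treated, control, n_boot=10_000, alpha=0.05, seed=0):
    """Bootstrap CI for the difference in mean well-being scores between
    users of the system (`treated`) and a control group (`control`)."""
    rng = random.Random(seed)  # fixed seed so the analysis is replicable
    diffs = []
    for _ in range(n_boot):
        t = rng.choices(treated, k=len(treated))
        c = rng.choices(control, k=len(control))
        diffs.append(statistics.mean(t) - statistics.mean(c))
    diffs.sort()
    lo = diffs[int(alpha / 2 * n_boot)]
    hi = diffs[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

If the resulting interval lies entirely below zero, the system is imposing a measurable well-being cost; an interval that straddles zero means the study cannot yet support a claim in either direction.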
When an empathic technology is developed using data from one cultural or demographic group, its behavior may not generalize to all cultural or demographic groups. Thus, testing is required to ensure that the empathic technology benefits each of the cultural or demographic groups it affects.
When an AI system is trained on everyday human behavior, it can inherit everyday human biases. Testing is required to ensure that AI systems are free from cultural and demographic biases.
The onus is on the developers and operators of any AI system that relies on measures of human behavior to ensure that it treats members of all demographic, cultural, and neurodiverse groups fairly.
Corollary: Developers of empathic AI should strive to make its benefits accessible across demographic, cultural, and neurodiverse populations and ensure that it does not impose differential costs on any particular group.
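A minimal sketch of such a disaggregated evaluation, assuming per-user records of well-being change tagged with a group label (the record shape and the gap threshold are illustrative assumptions):

```python
from collections import defaultdict
from statistics import mean

def disaggregated_benefit(records, max_gap=0.1):
    """Mean well-being change per group, plus a flag for large gaps.

    `records`: iterable of (group_label, wellbeing_delta) pairs.
    """
    by_group = defaultdict(list)
    for group, delta in records:
        by_group[group].append(delta)
    means = {g: mean(deltas) for g, deltas in by_group.items()}
    gap = max(means.values()) - min(means.values())
    # A gap above the threshold suggests differential costs or benefits
    # and should trigger investigation, not automatic sign-off.
    return means, gap, gap <= max_gap
```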
The behaviors of an AI system are difficult even for its own developers to understand. For this reason, developers should take special care to ensure people are able to make informed decisions about its use.
People who opt in to allowing an AI system to use their data should have visibility into how their data is processed and to what ends.
Informed consent procedures should be designed to convey both benefits and risks in a manner that is provably understandable and accessible to those affected.
Corollary: Developers of empathic AI should provide those affected by its behavior with the capability to observe and study its behavior.
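One possible shape for this capability, sketched under assumptions: a per-user, user-readable ledger recording which data was processed and for what purpose. The class and field names below are hypothetical, and a real deployment would add secure storage and a user-facing interface.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProcessingEvent:
    data_type: str      # e.g. "vocal_prosody"
    purpose: str        # e.g. "adjust tutoring pace"
    timestamp: datetime

@dataclass
class UserDataLedger:
    """Append-only, user-readable record of how a user's emotional
    data was processed and to what end."""
    user_id: str
    events: list[ProcessingEvent] = field(default_factory=list)

    def record(self, data_type: str, purpose: str) -> None:
        self.events.append(
            ProcessingEvent(data_type, purpose, datetime.now(timezone.utc)))

    def report(self) -> list[ProcessingEvent]:
        # Exposed directly to the user, supporting informed decisions.
        return list(self.events)
```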
Individuals and communities affected by a technology should be considered the best judges of whether its functioning is aligned with their well-being.
As individuals and communities learn how an empathic technology affects their well-being in daily life, they should be considered the best judges of whether the technology remains beneficial to them, and should have the right not to use it if it is not.
Corollary: Developers should deploy empathic AI only with the ongoing informed consent of the individuals or communities whom it affects.
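A minimal sketch of ongoing, revocable consent as a gate on every processing step; the record structure and function names are assumptions rather than a prescribed API.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    granted: bool = False
    purposes: set[str] = field(default_factory=set)

consents: dict[str, ConsentRecord] = {}

def may_process(user_id: str, purpose: str) -> bool:
    # Consent is checked at the time of each use, so a withdrawal
    # takes effect immediately rather than at the next release.
    rec = consents.get(user_id)
    return bool(rec and rec.granted and purpose in rec.purposes)

def withdraw(user_id: str) -> None:
    # Ongoing consent: users retain the right to stop using the
    # technology, and to stop its use of their data, at any time.
    rec = consents.get(user_id)
    if rec:
        rec.granted = False
```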
Where specific applications align with our Guiding Principles and their risks are properly mitigated, we at The Hume Initiative have determined that the benefits of these use cases substantially outweigh their potential costs. This list will grow as new applications arise.
We encourage developers to contact us if they are contemplating an application not addressed by our Supported Use Cases. However, in the interest of avoiding the worst outcomes, here we outline categories of use cases that we can firmly say we will never support.