AI Humanifesto

Introduction

The AI Humanifesto is a simple framework for designing pro-human AI products. It balances the progress and power of this exciting new technology with human creativity, societal well-being, and environmental sustainability. By establishing clear principles and basic points of evidence, it empowers researchers, designers, engineers, data scientists, and business stakeholders to create AI systems that enhance human potential while minimizing the risk of harm or exploitation. Our goal is to provide product design professionals with a short, easy-to-use, well-thought-out summary of the most important considerations for ensuring that AI products serve human needs rather than exploit them.

D-BOTS is a mnemonic for five core concepts of pro-human AI product design: Diversity, Balance, Options, Trust, and Safety. Together these concepts form a five-point scorecard, one point for each of the core concepts described below. A score of 4.5 is the threshold for Cre8 Auditors to rate a reviewed AI product as "Pro-Human."
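As a purely illustrative sketch of the scorecard mechanic described above, the tally might look like the following. The function names, the per-concept 0-to-1 rating scale, and the dictionary structure are assumptions for illustration only, not an official Cre8 audit tool.

```python
# Hypothetical sketch of the D-BOTS scorecard: five core concepts,
# one point each, with "Pro-Human" awarded at a total of 4.5 or above.
CONCEPTS = ("Diversity", "Balance", "Options", "Trust", "Safety")
THRESHOLD = 4.5  # minimum total score for a "Pro-Human" rating

def dbots_score(ratings: dict[str, float]) -> float:
    """Sum per-concept ratings (each 0.0-1.0) into a total out of 5."""
    for concept in CONCEPTS:
        value = ratings.get(concept, 0.0)
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{concept} rating must be between 0 and 1")
    return sum(ratings.get(c, 0.0) for c in CONCEPTS)

def is_pro_human(ratings: dict[str, float]) -> bool:
    """Apply the 4.5 threshold to the five-point total."""
    return dbots_score(ratings) >= THRESHOLD

example = {"Diversity": 1.0, "Balance": 0.9, "Options": 0.9,
           "Trust": 1.0, "Safety": 0.8}
print(round(dbots_score(example), 2))  # 4.6
print(is_pro_human(example))           # True
```

One design note: modeling each concept as a fractional point (rather than pass/fail) is what makes a threshold like 4.5 meaningful.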

The AI Humanifesto is a living document created by Paul Bryan, producer of the Cre8 and STRAT product design conferences, and has been guided and shaped by attendees of these conferences over the past several years.

Diversity

Summary

Diversity in AI means fostering varied and inclusive experiences by ensuring that AI systems are trained on diverse, comprehensive datasets and are adaptable to a wide range of users and needs. The AI doesn't perpetuate biases, stereotypes, or exclusion but actively corrects gaps in representation and evolves toward greater inclusion, not only in terms of ethnicity but also other differentiating characteristics, such as neurodiversity, along with encouragement of varied life experiences.

Key Aspects of Diversity

Trained on Inclusive Data

The AI is trained on comprehensive, inclusive datasets that reflect diverse human experiences. It avoids skewed or biased data that excludes underrepresented populations.

Acknowledges and Addresses Biases and Gaps

The AI is transparent about its limitations and biases. The AI and its developers regularly assess which groups of people and types of experiences are missing from the datasets, actively addressing these gaps. For example, data representing marginalized groups such as Kurds, Roma, and Native Americans is gradually incorporated. The lack of inclusive data for a target group or set of experiences is called out when it has been identified but not yet addressed.

Evolves with Shifting Norms

As the concept of diversity shifts over time, the AI adapts, recognizing new social realities and updating its models to remain relevant and fair.

Breaches Barriers

The AI makes an effort to breach barriers of literacy, age, socioeconomic status, and other disparities and divides. It meets users where they are, adapting to their level of expertise and providing accessible, helpful cross-barrier interactions.

Reflects Cultural Differences

The AI reflects and respects cultural differences, offering solutions that are nuanced and sensitive to the social norms of different populations.

Synonyms and Related Terms

Inclusion, Bias, Equity, Fairness

Balance

Summary

Balance reflects an effort to consider both sides of conflicting goals in AI development, such as technological progress vs. human and environmental sustainability, human augmentation vs. automation, experiencing nature vs. immersion in screens, and sophisticated capabilities vs. safety from harm. Balance ensures that AI doesn't push extremes that could harm individuals, society, or the environment. Instead, it emphasizes a flexible, adaptive approach that meets shifting needs while finding equilibrium between efficiency, creativity, and impact on natural ecosystems.

Key Aspects of Balance

Human Empowerment

The AI empowers users to achieve their goals while amplifying human creativity, emphasizing augmentation options rather than automatically diminishing human capabilities.

Human vs. AI Contribution

The AI complements human capabilities rather than replacing them when it makes sense to do so. In creative or decision-making processes, the distinction between parts that are AI-generated and those that involve human input is clearly indicated.

Efficiency vs. Creativity

The AI balances efficiency with the human need for creative expression. Balance ensures the AI optimizes productivity while leaving room for flexibility, uniqueness, innovation, and imagination.

Environmental Impact

The AI minimizes its environmental footprint to the extent possible, given its capabilities, especially regarding energy usage per useful outcome, and avoids reliance on underpaid human labor (digital sweatshops). Developers should understand the fully-loaded environmental costs of their AI product, seeking simpler, more sustainable methods for every major system component.

Social and Interpersonal Relationships

The AI creates, supports, and enhances human relationships rather than diminishing or obviating them. The AI doesn't displace meaningful human interactions except by the user's choice. It doesn't encourage harmful behaviors that satisfy product KPIs, for example product addiction or covert dopamine-driven engagement loops.

Synonyms and Related Terms

Sustainability, Harmony, Wellbeing, Human Empowerment

Options

Summary

Options in AI design ensure that humans maintain meaningful control over the AI products they use and the experiences that AI enables. These systems are built to respect and adapt to user preferences, which can evolve over time. Control extends across critical aspects such as algorithms, data, decision-making processes, and security, providing users with tools to oversee or intervene as desired, without overly complicating the experience. Options foster autonomy, enabling individuals to balance power between themselves, AI systems, developers, and businesses. This approach ensures no single entity has unchecked influence, particularly in areas like data handling, decision-making, and system behavior.

Key Aspects of Options

User Autonomy

The AI is designed to empower users by providing them control over how the system interacts with their data and influences their experiences. Users can adjust settings, modify interactions, or opt out of specific functionalities, offering straightforward ways to tailor the way the AI works to meet their needs.

Transparency of Algorithms

Users are offered accessible insights into how the AI operates, with simple, intuitive options for understanding the AI's decision-making processes, suggestions, and actions. This transparency fosters trust, allowing users to see how their data influences the AI and ensuring clarity in the system's outputs and actions.

Control Over Data

Users are given straightforward tools to manage their data, including viewing, modifying, or deleting information as desired. Consent for data collection and processing is clear and adjustable, ensuring users remain in charge of their personal information throughout their interactions.
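As one hedged illustration of the data tools described above, a minimal sketch might look like the following. The class, field, and method names are assumptions invented for this example, not a prescribed API.

```python
from dataclasses import dataclass, field

@dataclass
class DataControls:
    """Illustrative user-facing data controls: view, modify, delete,
    and adjustable consent per purpose. Names are assumptions."""
    consent: dict = field(default_factory=lambda: {
        "analytics": False, "personalization": False, "sharing": False})
    records: dict = field(default_factory=dict)

    def view(self) -> dict:
        """Return a copy of the user's stored data."""
        return dict(self.records)

    def modify(self, key, value):
        """Let the user correct or update a stored record."""
        self.records[key] = value

    def delete(self, key):
        """Remove a record entirely, at the user's request."""
        self.records.pop(key, None)

    def set_consent(self, purpose, granted: bool):
        """Consent is adjustable at any time, per processing purpose."""
        if purpose not in self.consent:
            raise KeyError(f"unknown purpose: {purpose}")
        self.consent[purpose] = granted
```

The point of the sketch is that every control is a first-class user operation, not a buried setting: consent defaults to off and each purpose is toggled individually.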

Adjustable Levels of Automation

The AI provides users with options to calibrate the degree of automation they are comfortable with. From fully autonomous modes to augmentation modes that provide a reasonable level of manual control given the context, users can adjust the balance of control between themselves and the AI, ensuring the technology aligns with their preferences and needs.

Safety and Override Mechanisms

Built-in safety measures and override options empower users to step in and halt, correct, or redirect AI processes when needed. This ensures that users retain ultimate authority, especially in high-stakes situations where AI outputs might conflict with human judgment or ethical considerations.
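The adjustable automation levels and override mechanism above can be sketched as follows. This is a minimal illustration under assumed names (the level names, `Assistant` class, and method signatures are all invented for this example).

```python
from enum import Enum

class AutomationLevel(Enum):
    MANUAL = 0      # user performs the task; AI observes only
    SUGGEST = 1     # AI proposes actions; user approves each one
    AUGMENT = 2     # AI acts; user reviews and can edit results
    AUTONOMOUS = 3  # AI acts independently within set bounds

class Assistant:
    """Hypothetical AI assistant with user-calibrated automation."""

    def __init__(self, level: AutomationLevel = AutomationLevel.SUGGEST):
        self.level = level
        self.halted = False

    def set_level(self, level: AutomationLevel):
        """User calibrates how much control the AI takes."""
        self.level = level

    def override(self):
        """User halts the AI, retaining ultimate authority."""
        self.halted = True

    def may_act_without_approval(self) -> bool:
        """The AI only acts unprompted at higher levels, and never
        after a user override."""
        return (not self.halted
                and self.level in (AutomationLevel.AUGMENT,
                                   AutomationLevel.AUTONOMOUS))
```

Note the design choice: the override is checked before the automation level, so a user's halt always wins, even in fully autonomous mode.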

Synonyms and Related Terms

Governance, Accountability

Trust

Summary

The AI product is worthy of user trust. Trust is both an emotional and a cognitive response to the way an AI product is designed, the context of use, and the results achieved over time. Trust is not static but is calibrated to align with the level of complexity and impact of the AI's functionality. Trust can be built through human-centered design, fair treatment, consistency, transparency, and the responsible handling of data, but the context of use and past usage experiences can override these trust factors. For example, a cardiologist is likely to need a much more involved trust framework, with periodic checks and balances, than someone looking for gift suggestions. Outputs that conflict with human judgment and decisions without explaining why erode trust. The AI fosters trust by showing users that their needs and interests are taken into account, and that there is openness about data usage, decisions, and system behavior. The AI is transparent in its processes and acknowledges biases or mistakes. The AI does not do harm secretly, in any way that users would consider harmful if they knew what was happening deep down in the data tables and algorithms.

Key Aspects of Trust

Prioritizes User Benefit

The AI demonstrably acts in the user’s best interest, avoiding exploitation or deceptive practices. Measurable outcomes, such as improved efficiency, enhanced decision-making, or reduced cognitive load, should validate the user benefit. Trust is undermined if the system prioritizes corporate goals over user welfare, such as excessive data monetization or manipulative features.

Reliability, Consistency, and Calibration

The AI operates with consistent performance metrics, ensuring reliability across varying contexts and user interactions. Trust calibration aligns user expectations with actual system performance, preventing overtrust (leading to misuse) or undertrust (reducing adoption). The AI presents appropriate disclaimers and confidence levels for predictions or recommendations.

Error Transparency and Corrective Measures

The AI acknowledges when errors occur. Rather than obscure or hide mistakes, it provides a process for error reporting and clear communication about how issues are being resolved.

Ethical Data Stewardship

The AI deploys ethical and transparent data practices, clearly outlining data collection, storage, usage, and sharing policies. Regular audits and compliance with data protection standards, such as GDPR or CCPA, are measurable and readily available. Users have granular control over their data. When appropriate, the AI provides metrics showing opt-out rates and consent levels to further calibrate trust.

Accountability and Governance

The AI includes frameworks for developer and organizational accountability. This includes documented processes for reviewing AI outputs, handling grievances, and rectifying harms. Metrics like response times to user concerns, resolution satisfaction rates, and third-party audits of AI outputs build confidence that the AI product and its creators are answerable for their actions.

Synonyms and Related Terms

Transparency, Reliability, Explainability

Safety

Summary

The AI protects its users from harm — physically, emotionally, financially, and environmentally. Safety goes beyond physical well-being to include ethical concerns about privacy, consent, data security, and avoiding harmful or addictive behaviors. Safety requirements are not merely satisfied to meet standards; the AI actively protects its users in ways they may not even know they need protection.

Key Aspects of Safety

Privacy and Data Security

The AI protects user privacy and ensures secure handling of personal data. Without having to dig into privacy terms and conditions, users can easily discover how their data is being used and/or sold. The product has industry-standard security measures in place against breaches and misuse.

Consent and Education

The AI product design includes user consent mechanisms and offers clear, understandable explanations of how the AI works, including its risks, benefits, and biases. Users can easily determine how their information is used and any potential consequences of that data use.

Physical and Emotional Well-being

The AI prioritizes its users' physical safety (for example in autonomous systems or medical applications) and also guards against emotional or developmental harm. The AI actively defends against addictive behaviors, social isolation, and inappropriate levels of mental stress.

Risk Management

The AI has built-in mechanisms to mitigate risks, identifying potential harms and proactively designing safeguards against them.

Innovation vs. Safety

The AI balances innovation, progress, and efficiency with a strong commitment to safety and ethical behavior. It pushes functional and experiential boundaries without compromising user well-being, resulting in sustainable progress.

Synonyms and Related Terms

Security, Privacy, Robustness