Intro

Golem is a design system for embedded AI that draws inspiration from Jewish mythology and Isaac Asimov's Three Laws of Robotics.

In Jewish mythology, the Golem is an animated being created from inanimate mud and clay. In the best-known legend, Rabbi Judah Loew of Prague brought the Golem to life through mystical rituals, inscribing the word "emet" (truth) on its forehead. The Golem obeyed his commands but grew increasingly uncontrollable. To stop it, Rabbi Loew erased a letter from "emet," turning it into "met" (death), which deactivated the Golem.

Golem aims to create a framework for AI that seamlessly interacts with humans, prioritizing ethical behavior and user-centric design.

What is embedded AI?

Embedded AI refers to artificial intelligence systems that are integrated directly into user-facing products and services. These AIs are designed to interact with humans in natural, intuitive ways, often through voice commands, gesture recognition, or predictive interfaces. The challenge lies in creating AI systems that are not only functional but also ethical and user-friendly.

The Three Laws of Robotics for Golem and Embedded AI

  1. A Golem AI component must prioritize user well-being and privacy in all interactions.

    This law ensures that every AI interaction pattern is designed with the user's best interests in mind. It covers aspects such as:

    • Respecting user privacy and data protection
    • Providing clear opt-in/opt-out mechanisms for data collection
    • Ensuring transparency in AI decision-making processes
    • Preventing manipulative or addictive design patterns
  2. A Golem AI component must follow user instructions and respect human agency, except where such actions would conflict with the First Law.

    This principle emphasizes the importance of user control and informed consent:

    • AI should be responsive to user commands and preferences
    • The system should provide clear feedback on its actions and limitations
    • Users should have the ability to override AI decisions when appropriate
    • AI should complement human decision-making, not replace it entirely
  3. A Golem AI component must adapt and improve its interactions over time, as long as this evolution does not conflict with the First or Second Law.

    This law promotes the continuous improvement of AI systems while maintaining ethical boundaries:

    • AI should learn from user interactions to provide more personalized experiences
    • The system should be able to recognize and respond to changing user needs
    • Improvements should be made transparently, with user awareness and consent
    • AI evolution should never compromise user safety or autonomy
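The hierarchy above can be sketched in code as an ordered policy check, where lower-numbered laws take precedence, mirroring Asimov's original hierarchy. The names below (`ProposedAction`, `evaluateAction`, and their fields) are illustrative assumptions, not part of any published Golem API:

```typescript
// Hypothetical sketch: the three laws as ordered policy checks.
// Earlier checks (lower-numbered laws) take precedence over later ones.

interface ProposedAction {
  description: string;
  respectsPrivacy: boolean; // Law 1: user well-being and privacy
  userRequested: boolean;   // Law 2: follows user instructions
  isAdaptation: boolean;    // Law 3: a learning/improvement step
  userConsented: boolean;   // required for adaptations
}

type Verdict = { allowed: boolean; reason: string };

function evaluateAction(action: ProposedAction): Verdict {
  // First Law: well-being and privacy override everything else.
  if (!action.respectsPrivacy) {
    return { allowed: false, reason: "First Law: violates user privacy" };
  }
  // Second Law: act only on user instructions (unless blocked above).
  if (!action.userRequested && !action.isAdaptation) {
    return { allowed: false, reason: "Second Law: not user-requested" };
  }
  // Third Law: adaptation requires user awareness and consent.
  if (action.isAdaptation && !action.userConsented) {
    return { allowed: false, reason: "Third Law: adaptation without consent" };
  }
  return { allowed: true, reason: "All laws satisfied" };
}
```

The ordering of the checks is the point: a privacy violation is rejected even if the user requested the action, just as the First Law overrides the Second.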

Implementing Golem in Human Interaction Patterns

By grounding our design system in these adapted laws, Golem provides a framework for creating embedded AI that is both powerful and ethically sound. Here are some examples of how these laws might manifest in human interaction patterns:

  1. Voice Assistants: Golem ensures that voice-activated AI clearly distinguishes between commands and casual conversation, respecting user privacy by not recording or processing unintended interactions.

  2. Predictive Text: While offering suggestions, Golem-based systems make it clear when AI is generating content, allowing users to easily accept, modify, or reject suggestions.

  3. Smart Home Devices: Golem principles ensure that AI-driven home automation respects user preferences and provides clear override options, never locking users out of manual controls.

  4. Health Monitoring: Golem-designed AI health trackers prioritize user well-being by providing actionable insights while clearly communicating data usage and sharing policies.
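The predictive-text pattern above can be sketched as a small state handler in which the AI proposes and the user disposes. All names here (`Suggestion`, `propose`, `resolve`) are illustrative assumptions, not an actual Golem component:

```typescript
// Hypothetical sketch of the Second Law applied to predictive text:
// AI output is always labeled, and the user always has the final word.

type SuggestionState = "pending" | "accepted" | "modified" | "rejected";

interface Suggestion {
  aiGenerated: true; // transparency: AI content is always marked as such
  text: string;
  state: SuggestionState;
}

function propose(text: string): Suggestion {
  return { aiGenerated: true, text, state: "pending" };
}

// The user can accept the suggestion as-is, edit it, or dismiss it.
function resolve(
  s: Suggestion,
  action: "accept" | "reject",
  edit?: string
): Suggestion {
  if (edit !== undefined) return { ...s, text: edit, state: "modified" };
  return { ...s, state: action === "accept" ? "accepted" : "rejected" };
}
```

Keeping the suggestion immutable and returning a new object makes every user decision explicit and auditable, which supports the transparency requirement of the First Law.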

By adhering to these principles, Golem aims to create embedded AI systems that are not just functional, but also trustworthy, respectful, and genuinely beneficial to users. As we continue to develop and refine this design system, we invite designers, developers, and ethicists to join us in shaping the future of human-AI interaction.