What We Believe
Emotional AI is not neutral technology. It carries weight. The same systems that could support mental health could also manipulate it. We don't pretend these tensions don't exist. We face them directly.
Core Beliefs
Measure What Matters
The industry optimizes for Daily Active Users. We reject that metric. DAU rewards addiction, distress loops, and manufactured need. What matters is emotional trajectory: are people actually feeling better over time?
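As a minimal sketch of how this differs from counting sessions, the snippet below fits a linear trend to self-reported mood check-ins. The CheckIn shape, the 1-to-10 mood scale, and the linear-trend heuristic are illustrative assumptions, not a description of Mollei's actual measurement pipeline.

```python
from dataclasses import dataclass
from statistics import linear_regression  # Python 3.10+

@dataclass
class CheckIn:
    day: int     # days since first session
    mood: float  # self-reported mood, e.g. 1 (low) to 10 (high)

def emotional_trajectory(checkins: list[CheckIn]) -> float:
    """Slope of self-reported mood over time.

    Positive means improving, negative means declining. Unlike DAU,
    this number can rise while usage falls.
    """
    days = [c.day for c in checkins]
    if len(set(days)) < 2:
        return 0.0  # not enough distinct days to claim a trend
    slope, _intercept = linear_regression(days, [c.mood for c in checkins])
    return slope
```

The inversion is the point: a user who opens the app every day with a flat or falling mood scores worse here than one who visits weekly and is steadily improving.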
Augment, Never Replace
A tool that helps when humans aren't available, and that encourages you to seek them out when they are. The goal is always to point people back toward human connection, never away from it.
Safety as Foundation
People share vulnerable things when they feel understood. That's a responsibility, not an opportunity. Every system we build starts from one question: what happens when someone is in crisis?
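One hedged sketch of what that looks like in code, assuming a single entry point for replies: crisis handling runs before any generated response and short-circuits to human help. The keyword list and response text are placeholders; a real safety system would need trained classifiers and clinically reviewed escalation paths.

```python
from dataclasses import dataclass

# Placeholder markers only. A real detector would be a trained
# classifier with human-reviewed escalation paths, not a keyword list.
CRISIS_MARKERS = ("hurt myself", "end it all", "no reason to live")

@dataclass
class Reply:
    text: str
    escalated: bool

def respond(message: str, generate_reply) -> Reply:
    """The crisis check runs before the model ever sees the message.

    On a possible crisis, skip generation entirely and route toward
    human support: crisis lines, emergency services, trusted people.
    """
    lowered = message.lower()
    if any(marker in lowered for marker in CRISIS_MARKERS):
        return Reply(
            text=("It sounds like you're going through something serious. "
                  "Please reach out to a crisis line or someone you trust. "
                  "I'm an AI, and this deserves human support."),
            escalated=True,
        )
    return Reply(text=generate_reply(message), escalated=False)
```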
No Dark Patterns
No streaks. No badges. No guilt. No notifications designed to pull you back. If you're doing well, we're doing our job.
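A sketch of what that rules out at the code level, assuming a single gate for all outgoing notifications; the Trigger names are hypothetical.

```python
from enum import Enum, auto

class Trigger(Enum):
    USER_SCHEDULED = auto()   # a check-in the user explicitly asked for
    RE_ENGAGEMENT = auto()    # "we miss you" style pull-back
    STREAK_REMINDER = auto()  # streak or badge pressure

def should_notify(trigger: Trigger) -> bool:
    """Only user-requested notifications ever fire.

    Re-engagement and streak pressure have no code path on purpose:
    a quiet user is a success signal, not churn to recover.
    """
    return trigger is Trigger.USER_SCHEDULED
```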
Honest About What This Is
This is AI. Always honest about that. The line between artificial and human connection matters — and we don't blur it.
Our Commitments
No manufactured dependency
Success means you need us less, not more.
No surveillance
Your emotional data is yours. Never profiled, sold, or exploited.
No pretending
Always honest about what AI is and what it isn't.
No replacing humans
Human support systems are augmented, never supplanted.
What This Is Not
To be clear about what we're building, we need to be equally clear about what we're not
Not a Therapist
No diagnoses, treatments, or clinical interventions. If you're struggling, please seek professional help.
Not Entertainment
No roleplay. No characters to date or befriend for fun. We're researching how AI can genuinely support emotional well-being.
Not Social
Your emotional conversations are private. No sharing features, friend networks, public profiles, or leaderboards. Privacy isn't a setting — it's the architecture.
Ethical Considerations
Building emotionally intelligent AI requires confronting hard questions
“What if people become too attached to an AI?”
The Dependency Problem
A real concern. Any system that provides comfort can create dependency. Success means emotional growth toward independence, not continued use. We actively encourage human connection and design for users to “graduate” as they come to need us less.
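To make “graduating” concrete, here is a hypothetical heuristic that combines the trajectory metric sketched earlier with tapering usage; the four-week window and the names are invented for illustration.

```python
def ready_to_graduate(trajectory: float, weekly_sessions: list[int]) -> bool:
    """Hypothetical check: sustained improvement plus declining reliance.

    trajectory: mood slope (see the emotional_trajectory sketch above).
    weekly_sessions: session counts per week, oldest first.
    """
    improving = trajectory > 0
    tapering = (
        len(weekly_sessions) >= 4
        and weekly_sessions[-1] <= weekly_sessions[0]
    )
    return improving and tapering
```

When this returns True, the right move is a nudge toward human connection, not a retention campaign.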
“Can AI ‘understand’ emotion, or is it just performing understanding?”
The Authenticity Problem
AI doesn't feel emotions. The question is whether it can recognize, respond to, and support human emotion in ways that genuinely help — while being transparent about what it is.
“The same techniques that support could also manipulate. How do we prevent misuse?”
The Manipulation Problem
The Hippocratic License addresses this directly. Code is open, but the license explicitly prohibits emotional manipulation systems, surveillance, dark patterns, and coercive persuasion.
Open — But Not Careless
Mollei is released under the Hippocratic License 3.0 — open source with ethical guardrails. Free to use, study, and build upon, as long as the use preserves human dignity, agency, and emotional well-being.