What We Believe

Emotional AI is not neutral technology. It carries weight. The same systems that can support someone's mental health can also be used to manipulate them. We don't pretend these tensions don't exist. We face them directly.

Core Beliefs

Measure What Matters

The industry optimizes for Daily Active Users. We reject that metric. DAU rewards addiction, distress loops, and manufactured need. We measure emotional trajectory instead: are people actually feeling better over time?
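As a purely illustrative sketch of the difference, suppose (hypothetically) that users leave occasional self-reported mood check-ins. A trajectory metric asks whether those scores trend upward over weeks; DAU only counts how often the app was opened, which rises whether someone is improving or spiraling. Nothing below is Mollei's actual instrumentation; the names and the check-in format are assumptions made for the example.

    from statistics import mean

    def mood_trend(checkins: list[tuple[int, float]]) -> float:
        """Least-squares slope of self-reported mood over time.

        checkins: (day_index, mood_score) pairs, with mood on e.g. a 1-10 scale.
        A positive slope suggests the person is trending better; a flat or
        falling slope flags a trajectory worth attention, no matter how
        often they open the app.
        """
        if len(checkins) < 2:
            return 0.0
        xs = [day for day, _ in checkins]
        ys = [score for _, score in checkins]
        mx, my = mean(xs), mean(ys)
        numerator = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        denominator = sum((x - mx) ** 2 for x in xs)
        return numerator / denominator if denominator else 0.0

    def daily_active_users(opens_by_user: dict[str, set[int]], day: int) -> int:
        """DAU, by contrast, counts opens and says nothing about well-being."""
        return sum(1 for days in opens_by_user.values() if day in days)

    # A user who checks in less often but reports steadily better days has a
    # positive mood_trend even as their DAU contribution falls.
    print(mood_trend([(0, 3.0), (7, 4.5), (14, 5.0), (28, 6.5)]))  # positive slope

In this hypothetical framing, success looks like a positive trend alongside declining usage, exactly the pattern DAU would score as failure.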

Augment, Never Replace

We build something that helps when humans aren't available, and that encourages you to seek them out when they are. The goal is always to guide people back to human connection, never away from it.

Safety as Foundation

People share vulnerable things when they feel understood. That's a responsibility, not an opportunity. Every system we build asks: What happens when someone is in crisis?

No Dark Patterns

No streaks. No badges. No guilt. No notifications designed to pull you back. If you're doing well, we're doing our job.

Honest About What We Are

Mollei is AI, and we are always upfront about that. We don't blur the line between artificial and human connection, because that line matters.

Our Commitments

1. No manufactured dependency

Success means you need us less, not more.

2. No surveillance

Your emotional data is yours. We don't profile, sell, or exploit it.

3. No pretending

This is AI. We're honest about what it is and what it isn't.

4. No replacing humans

We augment human support systems, not substitute for them.

What This Is Not

To be clear about what we're building, we need to be equally clear about what we're not.

Not a Therapist

Mollei is not therapy. It doesn't diagnose, treat, or provide clinical interventions. If you're struggling, please seek professional help.

Not Entertainment

This isn't roleplay. It isn't a character you date or befriend for fun. We're exploring how AI might genuinely support emotional well-being.

Not Social

Your emotional conversations are private. No sharing features, friend networks, public profiles, or leaderboards. Privacy isn't a setting — it's the architecture.

Ethical Considerations

Building emotionally intelligent AI requires confronting hard questions.

The Dependency Problem

What if people become too attached to an AI?

This is real. Any system that provides comfort can create dependency. We measure success by emotional growth toward independence, not by continued use. We actively encourage human connection and design so that users "graduate" and need us less over time.

The Authenticity Problem

Can AI "understand" emotion, or is it just performing understanding?

We don't claim AI feels emotions. We explore whether AI can recognize, respond to, and support human emotion in ways that genuinely help, while staying transparent about what it is.

The Manipulation Problem

The same techniques that support could also manipulate. How do we prevent misuse?

This is why we chose the Hippocratic License. The code is open, but the license explicitly prohibits use in emotional manipulation systems, surveillance, dark patterns, and coercive persuasion.

Open — But Not Careless

Mollei is released under the Hippocratic License 3.0 — open source with ethical guardrails. Free to use, study, and build upon, as long as the use preserves human dignity, agency, and emotional well-being.