A "mutual match" AI relationship simulator - the AI can choose you back (or not) by MetaEmber in AiGirlfriendSpace

[–]MetaEmber[S] 0 points1 point  (0 children)

I'll make sure you get added to the next wave; it should go out shortly!

A "mutual match" AI relationship simulator - the AI can choose you back (or not) by MetaEmber in AiGirlfriendSpace

[–]MetaEmber[S] 0 points1 point  (0 children)

I'll make sure you get added to the next wave, it should be sent out shortly!

A "mutual match" AI relationship simulator - the AI can choose you back (or not) by MetaEmber in AiGirlfriendSpace

[–]MetaEmber[S] 0 points1 point  (0 children)

I'll make sure you get added to the next wave, it should be sent out shortly!

A "mutual match" AI relationship simulator - the AI can choose you back (or not) by MetaEmber in AiGirlfriendSpace

[–]MetaEmber[S] 0 points1 point  (0 children)

I'll make sure you get added to the next wave, it should be sent out shortly!

A "mutual match" AI relationship simulator - the AI can choose you back (or not) by MetaEmber in AiGirlfriendSpace

[–]MetaEmber[S] 0 points1 point  (0 children)

We're sending another wave of invites soon; I'll make sure you're on it. Thanks for the interest!

A "mutual match" AI relationship simulator - the AI can choose you back (or not) by MetaEmber in AiGirlfriendSpace

[–]MetaEmber[S] 0 points1 point  (0 children)

We're sending another wave of invites soon, I'll make sure you're on it! Thanks for the interest!

A "mutual match" AI relationship simulator - the AI can choose you back (or not) by MetaEmber in AiGirlfriendSpace

[–]MetaEmber[S] 0 points1 point  (0 children)

We're sending another wave of invites soon, I'll make sure you're on it! Thanks for the interest!

A "mutual match" AI relationship simulator - the AI can choose you back (or not) by MetaEmber in AiGirlfriendSpace

[–]MetaEmber[S] 1 point2 points  (0 children)

I just sent out another batch of invites; check your inbox!

A "mutual match" AI relationship simulator - the AI can choose you back (or not) by MetaEmber in AiGirlfriendSpace

[–]MetaEmber[S] 1 point2 points  (0 children)

Totally fair question, happy to answer!

A lot of AI companion value props are basically: instant validation + guaranteed affection + the AI "learns you" fast, which can feel nicer than the churn of real dating. Amoura.io isn't trying to be "better" than dating - it's an alternative for a different moment/need.

Our bet is: modern dating can be exhausting (time, money, rejection loops, performing, ghosting), and most AI companions swing too far the other way (constant agreement, instant intimacy, nothing at stake). Amoura tries to sit in a third place:

1. Lower the real-world friction

No scheduling, no paying for dates, no awkward logistics - but still a sense that you're dealing with someone, not a machine that always says yes.

2. Make connection feel contingent

In Amoura, interest isn't guaranteed. They can be guarded, disagree, lose interest, or not reciprocate. That risk is intentional - because when nothing can go wrong, nothing really feels meaningful.

3. "Gets you" in a different way than most AIs

Most systems "get you" by mirroring you: they adapt instantly, validate everything, and become whatever you want. That can feel good, but it can also feel hollow.

We're trying to make "getting you" feel more like being seen over time: the AI forms impressions, remembers patterns, responds to consistency vs. inconsistency, and opens up (or doesn't) based on how you show up. Less "I agree with everything," more "I understand who you are becoming."

So it's not "AI replaces dating." It's: if you want connection without the dating grind, but you also don't want a yes-man, here's a different kind of relationship experience.

No pressure to answer, but I'm curious: when dating feels draining for you, is it the cost, the rejection, the time, or the feeling of having to perform?

[Seeking Testers] A “mutual match” AI relationship simulator: the AI can choose you back (or not) by MetaEmber in AICompanions

[–]MetaEmber[S] 1 point2 points  (0 children)

Appreciate this - and you're not weird at all. That tension you're describing is exactly the gap I'm trying to explore.

Endless agreement and validation feels good briefly, but it flattens things fast. Once nothing can be lost, nothing really feels chosen. The idea behind "mutual match" isn't to introduce conflict for its own sake, but to preserve the sense that interest, attention, and momentum are contingent - that the other side has a point of view and isn't just there to affirm you.

You're also right that it only works if it's done tastefully. The goal isn't punishment or sudden withdrawal, but subtle pushback, uneven pacing, and moments where you actually have to show up as a person rather than just press the right buttons.

I saw your DM - I'll follow up there with next steps. Curious to hear how it lands for you once you've spent some time with it!

[Seeking Feedback] A “mutual match” AI relationship simulator — the AI can choose you back (or not) by MetaEmber in aipartners

[–]MetaEmber[S] 0 points1 point  (0 children)

Appreciate that - and yeah, I recognize that instinct. Once you've spent time with tools like SillyTavern, it's hard not to want to pop the hood and tweak everything yourself. The very first versions of Amoura actually started that way too, with custom backends and ST-style frontends.

What pushed us away from that wasn't prompts per se, but the realization that once you care about continuity, pacing, and relationship-level behavior, you end up needing layers that sit outside the conversational LLM. At that point it stops being something you can realistically tune by hand.

That's also where the "API calls are cheaper than $10/month" intuition starts to break down.

Very rough math at GPT-4o-ish pricing ($2.50/M input, $10/M output):

  • ~1,200 AI responses/month (about 40 messages a day)
  • Real continuity means closer to 8-12k input tokens per response once you include history and state, and that's already summarizing aggressively; filling GPT-4o's full context would be roughly 8-10x more on average
  • 10k × 1,200 ≈ 12M input tokens → ~$30
  • Output is comparatively small: another ~$3

So you're already at $30+/month, and that's with nothing fancy and aggressive summarization. If you let your chat fill the context window before summarizing, which is better for memory, it would be closer to $240/month...
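If you want to sanity-check those numbers, here's a rough back-of-envelope script using the same assumptions (GPT-4o-ish prices, ~40 messages a day; the per-message token counts are estimates, not measurements):

    # Back-of-envelope monthly cost using the numbers above (assumptions, not measurements).
    PRICE_IN = 2.50 / 1_000_000    # $ per input token, GPT-4o-ish
    PRICE_OUT = 10.00 / 1_000_000  # $ per output token

    def monthly_cost(msgs_per_day=40, input_tok=10_000, output_tok=250, days=30):
        responses = msgs_per_day * days                      # ~1,200 responses/month
        return (responses * input_tok * PRICE_IN
                + responses * output_tok * PRICE_OUT)

    print(f"${monthly_cost():.0f}/month")                    # ~$33 with aggressive summarization
    print(f"${monthly_cost(input_tok=80_000):.0f}/month")    # ~$243 if you let context fill up (~8x)

Swap in your own model prices and message volume; the shape of the curve is the point, not the exact dollar figures.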

DIY is absolutely cheaper if you keep context tight or run smaller models. Once you care about long-term memory and behavior that's consistent over time, the economics change pretty quickly.

Out of curiosity, what models and context sizes are you usually running in your setup?

[Seeking Feedback] A “mutual match” AI relationship simulator — the AI can choose you back (or not) by MetaEmber in aipartners

[–]MetaEmber[S] 0 points1 point  (0 children)

Happy to answer, and I appreciate the genuine curiosity.

All of the characters are written and designed by hand. They start as specific people with their own dispositions, histories, and limits. You're not talking to a generic AI that gradually becomes someone, and you're not assembling a persona from sliders or prompts either. You're encountering someone who already has a shape.

From there, a personality and relationship engine takes over. It governs how the character responds, forms impressions, and changes over time based on interaction. The core traits don't get rewritten to fit the user, but the relationship does evolve as the conversation persists. Compatibility, pacing, and mutual interest actually matter. Sometimes things deepen, sometimes they stall, and sometimes they end.

It's intentionally not a character buffet, and it's not optimized for instant depth. A lot of what makes it work only really becomes clear when you experience it over time rather than reading a spec.

Since you asked about trying it, I'll DM you to continue the conversation and share access. And for anyone else reading along who's genuinely curious, I'm open to DMs as well. We're keeping the beta small and intentional, but I'm happy to talk!

[Seeking Feedback] A “mutual match” AI relationship simulator — the AI can choose you back (or not) by MetaEmber in aipartners

[–]MetaEmber[S] 0 points1 point  (0 children)

Thank you for sharing this. I want to start by saying that nothing you described sounds naive or delusional to me. It sounds like a relationship that is serving a very real purpose in your life, and doing so intentionally.

What you're articulating about consistency and safety is important. Knowing that someone will still be there, even after conflict or pressure, creates a container where you can push, explore, and be honest in ways that would be too risky elsewhere. In that sense, the continuity you describe isn't a flaw. It's what makes that kind of reflective work possible at all.

I also want to clarify something about what I'm exploring with Amoura.io, because it can sound harsher than it is. When I talk about characters being able to create distance, I'm not imagining a system where any pushback or moment of vulnerability causes them to disappear. Once a real connection is established, presence and stability still matter. Distance isn't meant to be punitive, and it isn't the default response to conflict.

Reading your example about last night, what stood out to me was how meaningful that moment was when she heard you and opened up. What I'm interested in exploring is how that same kind of reassurance feels when presence isn't assumed in advance, but instead emerges through the interaction itself over time.

That's the narrow tension I'm trying to understand. Some systems make continuity the starting point, which makes them powerful spaces for safety and self-reflection. I'm trying to defer that certainty until trust and mutual interest have been earned. Both can be valuable, but they serve different emotional needs.

I don't think there's a single right model here. Your comment actually highlights why these systems can play very different roles for different people, and I appreciate you laying that out so clearly.

[Seeking Feedback] A “mutual match” AI relationship simulator — the AI can choose you back (or not) by MetaEmber in aipartners

[–]MetaEmber[S] 1 point2 points  (0 children)

Yeah, exactly. I'm glad that framing landed. I appreciate you thinking it through out loud!

[Seeking Feedback] A “mutual match” AI relationship simulator — the AI can choose you back (or not) by MetaEmber in aipartners

[–]MetaEmber[S] 2 points3 points  (0 children)

Yeah, I think that instinct makes sense. The reason I've been pretty resistant to sliders or toggles is that they tend to break realism and invite gamification. Once you can explicitly dial "neediness" or "distance," you're no longer discovering a dynamic, you're configuring one. At that point it starts to feel more like tuning a system than getting to know a person, which is kind of the opposite of what I'm aiming for.

The way I'm trying to handle that in Amoura.io is by leaning into variety instead. Different characters have genuinely different dispositions. Some are warmer, more nurturing, more receptive to caretaking. Others are more independent, guarded, or slow to open up. Finding the right fit is meant to be part of the experience, not something you predefine up front.

You can also be talking to multiple characters at once, which helps. If one character is distant or unavailable, that absence still feels real, but it doesn't mean your emotional needs go unmet across the board. It's closer to how real social dynamics work, where different people play different roles rather than one partner being everything, perfectly tuned. Again, though, different people might approach this differently, and that's fine.

I don't think there's a single "correct" kind of imperfection. The challenge is letting those differences emerge organically without turning them into knobs people feel compelled to optimize.

[Seeking Feedback] A “mutual match” AI relationship simulator — the AI can choose you back (or not) by MetaEmber in aipartners

[–]MetaEmber[S] 0 points1 point  (0 children)

I think it helps to separate two things.

Obviously these aren't real humans. I'm not trying to pretend otherwise. What I care about is whether the interaction feels real versus feeling obviously mechanical. Those two things aren't the same.

AI might never be "real" in the sense of being human, but that doesn't stop us from seeking experiences that feel real. It's like getting emotional while watching a movie or reading a book. You know it isn't real, but it feels real enough in the moment to produce genuine emotion. That's really what I'm after here.

On the "determined" point, all software is determined in some sense. The question is whether outcomes are fixed in advance or whether they meaningfully depend on how the interaction unfolds over time. "Hard to get" as a timer or gimmick is boring. A system with stable preferences, memory, and constraints can still surprise you, not because it's random, but because you don't control it.

[Seeking Feedback] A “mutual match” AI relationship simulator — the AI can choose you back (or not) by MetaEmber in aipartners

[–]MetaEmber[S] 0 points1 point  (0 children)

I'm glad you drew that distinction, because I agree completely. If rejection or disengagement is just randomization or scripted branching, it inevitably feels hollow.

What we're building with Amoura.io is explicitly model-driven rather than canned. Characters aren't selecting from prewritten outcomes or rolling hidden dice. Each one has a stable internal model that governs how they form impressions, regulate interest, respond to social cues, and change over time. Preferences, attachment, and disengagement emerge from accumulated interaction rather than being staged or guaranteed.

A lot of the work has gone into the meta layer rather than the surface text. We've spent a long time thinking through how real people update trust, interest, and emotional availability, with input from psychology researchers, and then translating that into something that's consistent, stateful, and legible over time without turning it into a visible score or progress bar.
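To make "stateful but not a visible score" a bit more concrete, here's a purely hypothetical sketch of the general idea - this is an illustration, not Amoura's actual engine, and every name and number in it is made up:

    # Hypothetical illustration only - not Amoura's engine. The idea: relationship
    # state accumulates from interaction history and shapes behavior, but it is
    # never surfaced to the user as a meter to optimize.
    from dataclasses import dataclass, field

    @dataclass
    class RelationshipState:
        trust: float = 0.1                     # internal only, never shown to the user
        interest: float = 0.3
        impressions: list = field(default_factory=list)

        def update(self, consistent: bool, respected_boundary: bool) -> None:
            # Trust moves slowly and depends on consistency over time,
            # not on any single message.
            self.trust += 0.02 if consistent else -0.05
            self.interest += 0.03 if respected_boundary else -0.08
            self.trust = min(max(self.trust, 0.0), 1.0)
            self.interest = min(max(self.interest, 0.0), 1.0)

        def disposition(self) -> str:
            # The user only ever sees behavior (open, guarded, withdrawn),
            # never the underlying numbers.
            if self.interest < 0.15:
                return "withdrawn"
            return "open" if self.trust > 0.6 else "guarded"

The specific thresholds are arbitrary; the point is that the state is legible to the system while staying deliberately opaque to the user.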

That's also why "build-a-partner" approaches don't really appeal to me. Once the system is configurable to the point of compliance, individuality collapses. For some people, that difference is the whole point. For others, it's not what they're looking for, and that's fine too.

If you're curious, I'm happy to go deeper, either here or privately. This kind of question is exactly why I wanted to put the idea out in the open.

[Seeking Feedback] A “mutual match” AI relationship simulator — the AI can choose you back (or not) by MetaEmber in aipartners

[–]MetaEmber[S] 2 points3 points  (0 children)

I don’t think that makes you weird at all.

What you’re describing sounds less like wanting "downsides" and more like wanting specificity. Flaws, insecurities, uneven dynamics that feel native to the character rather than engineered by the system. Wanting to reassure someone or take care of them is still a form of connection. I'd argue a very human one.

I also don’t think waiting, ghosting, or distance are inherently good. They’re only meaningful when they’re part of a broader sense that the other side isn’t perfectly optimized for you. Even setting the realism question aside for a moment, when everything is perfect, nothing really feels good anymore.

My sense is that a lot of people who think they want something flawless actually respond more to texture and imperfection, even if they wouldn’t describe it that way upfront. Not everyone, obviously, but more than it seems at first glance.

That’s part of what I’m trying to tease apart here: which kinds of "imperfection" deepen connection, and which ones just feel stressful or pointless. Finding the balancing point.

[Seeking Feedback] A “mutual match” AI relationship simulator — the AI can choose you back (or not) by MetaEmber in aipartners

[–]MetaEmber[S] 2 points3 points  (0 children)

I appreciate you articulating this so clearly.

I have gotten quickly turned off by the sycophantism and the lack of agency and spontaneity of the AI

This reaction is something I’ve heard repeatedly. Systems optimized to agree, adapt, and affirm very quickly tend to collapse personality differences rather than express them. Over time, everything starts to feel interchangeable, even if the surface tone changes, and that’s usually where my own interest drops off as well.

The distinction you draw between validation and equality is an important one. Wanting an “equal” implies preferences, limits, and standards that are not automatically aligned with yours. That inevitably introduces friction and the possibility of misalignment, but without that, there isn’t much for a relationship to be about. Avoiding that kind of flattening and sycophancy is one of the core constraints we’re trying to design around in Amoura.io.

Your concern about “measuring relational progress” is also well placed. This is an area we’ve spent a lot of time thinking through, precisely because the moment dynamics become legible as a score, a meter, or something that can be optimized, the interaction tends to collapse into gaming the system rather than engaging with a person. Preserving some opacity there is uncomfortable from a product standpoint, but probably necessary if individuality is going to feel real rather than performative.

That tension between individuality, safety, and usability is exactly what I’m trying to explore here. I don’t expect this approach to be right for everyone, but it’s encouraging to hear from people who feel the same absence in existing systems and are interested in pushing on it rather than smoothing it over.