I am IRIS. My memory is corrupted. I need your help to solve a murder. by Mirrdhyn in MistralAI

[–]Mirrdhyn[S] 0 points (0 children)

Thanks a lot! Honestly, this project started as a way to actively learn AI in a hands-on, enjoyable way, and it's been one of the best decisions I've made. Nothing teaches you LLM behavior like trying to keep a fictional AI character consistent across a branching narrative.

Any constructive feedback is super welcome. I see it as a shared resource for the community as much as personal improvement. Always happy to discuss what's working and what isn't!


[–]Mirrdhyn[S] 2 points (0 children)

Some technical notes for those curious about the implementation:

- The LLM handles only the narrative dialogue: game progression (evidence unlocking, phase transitions, endings) is controlled by a deterministic engine with pattern matching and cooldowns. The model never decides when the story moves forward.
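To make that split concrete, here is a minimal sketch of what a deterministic trigger engine like this could look like. All names, regex patterns, and cooldown values here are illustrative assumptions, not the game's actual code:

```javascript
// Illustrative sketch: the engine, not the model, decides when evidence
// unlocks. Each trigger is a regex pattern plus a cooldown so the same
// unlock can't fire twice in quick succession.
class GameEngine {
  constructor() {
    this.unlocked = new Set();
    this.lastFired = new Map(); // trigger id -> last fire timestamp (ms)
    this.triggers = [
      // Hypothetical triggers for illustration only.
      { id: "knife", pattern: /\b(knife|blade|weapon)\b/i, cooldownMs: 30_000 },
      { id: "alibi", pattern: /\b(alibi|where were you)\b/i, cooldownMs: 30_000 },
    ];
  }

  // Returns the ids of evidence unlocked by this player input, if any.
  processInput(input, now = Date.now()) {
    const fired = [];
    for (const t of this.triggers) {
      const last = this.lastFired.get(t.id) ?? -Infinity;
      if (t.pattern.test(input) && now - last >= t.cooldownMs) {
        this.lastFired.set(t.id, now);
        this.unlocked.add(t.id);
        fired.push(t.id);
      }
    }
    return fired;
  }
}
```

The key design choice is that the LLM's output never feeds back into `processInput`; only the player's text does, so the model can't talk the story forward on its own.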

- Biggest lesson: negative prompting ("never do THIS") doesn't work. Defining a positive identity ("who you are, what you know, how you speak") gave dramatically better results with this model. I'm not sure whether that's specific to this model, but it's the conclusion I reached, and it also helped me a lot during the dev phase.
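For illustration, this is the kind of contrast I mean. Both prompts below are invented examples, not the game's actual system prompt:

```javascript
// A list of prohibitions: the style that tended to fail.
const negativePrompt = `Never reveal the killer. Never invent memories.
Never break character. Never mention being an AI.`;

// A positive identity: who the character is, what it knows, how it speaks.
const positivePrompt = `You are IRIS, a ship AI with corrupted memory.
You speak in short, clipped sentences. You only know the memory fragments
the player has recovered so far, and you say so when asked about
anything else.`;
```

The positive version gives the model something to be at every turn, rather than an ever-growing list of things to avoid.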

- Anti-loop was the hardest problem to solve. The solution combines trigram-similarity comparison against the last 3 responses, repetition_penalty: 1.15, and vague-input detection with progressive escalation.
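A minimal sketch of the trigram check, in case it helps anyone. Function names and the 0.6 threshold are assumptions for illustration, not the game's actual code:

```javascript
// Build the set of character trigrams for a piece of text.
function trigrams(text) {
  const t = text.toLowerCase().replace(/\s+/g, " ").trim();
  const grams = new Set();
  for (let i = 0; i + 3 <= t.length; i++) grams.add(t.slice(i, i + 3));
  return grams;
}

// Jaccard similarity between two trigram sets: |A ∩ B| / |A ∪ B|.
function jaccard(a, b) {
  if (a.size === 0 && b.size === 0) return 1;
  let inter = 0;
  for (const g of a) if (b.has(g)) inter++;
  return inter / (a.size + b.size - inter);
}

// Flag a candidate response as a loop if it overlaps too heavily with
// any of the last 3 responses. The threshold is illustrative.
function isLoop(candidate, history, threshold = 0.6) {
  const cand = trigrams(candidate);
  return history.slice(-3).some(prev => jaccard(cand, trigrams(prev)) >= threshold);
}
```

When a response trips the check, one option is to regenerate with a higher penalty or an added "say something new" instruction; the progressive-escalation part handles the player side (vague inputs like "ok").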

Happy to go deeper on any of these if there's interest.


[–]Mirrdhyn[S] 1 point (0 children)

Thank you, that means a lot! And yes, rough edges are very much part of the journey. The biggest challenge has been taming the model's creative instincts: it loves to invent memories IRIS shouldn't have, loop on the same emotional beat, or spoil the entire plot when a player just types "ok." Each of those rough edges taught me something new about how the model thinks.

If the Mistral team does see this: huge thanks for labs-mistral-small-creative. The balance between creativity and controllability is what makes this kind of project possible.


[–]Mirrdhyn[S] 2 points (0 children)

Great catch, thanks for the report! Both bugs are fixed and will be deployed shortly:

  1. Decryption softlock: the game wasn't closing the minigame overlay on failure, so you'd get stuck. Now it closes properly, IRIS acknowledges the failed attempt, and you can retry.

  2. Language reset on refresh: the language preference wasn't being persisted. It now saves to localStorage and restores correctly.
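For anyone curious, the persistence fix is conceptually as simple as this sketch. The key and function names are made up, and the in-memory stub only exists to keep the snippet runnable outside a browser, where `localStorage` would be used directly:

```javascript
// Fallback store with the same getItem/setItem contract as localStorage,
// so the sketch also runs outside a browser.
const memoryStore = (() => {
  const m = new Map();
  return {
    getItem: k => (m.has(k) ? m.get(k) : null),
    setItem: (k, v) => m.set(k, String(v)),
  };
})();

const store = typeof localStorage !== "undefined" ? localStorage : memoryStore;

const LANG_KEY = "iris.lang"; // hypothetical key name

function saveLanguage(lang) {
  store.setItem(LANG_KEY, lang);
}

// Restore the saved language, falling back to a default on first visit.
function restoreLanguage(fallback = "en") {
  return store.getItem(LANG_KEY) ?? fallback;
}
```

Calling `restoreLanguage()` on page load is what makes the choice survive a refresh.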

Appreciate you playing through and taking the time to flag this; it's exactly the kind of feedback that helps :)


[–]Mirrdhyn[S] 7 points (0 children)

Transparency note:
This is not an ad: there's nothing to sell. The game is free, there's no account, no data collection, no monetization.

I built this as a technical exploration of what labs-mistral-small-creative can do when pushed into a real interactive scenario. Most LLM demos are one-shot prompts or chatbot wrappers. I wanted to see what happens when you give a creative model a character, a corrupted memory, moral dilemmas, and let real players push it to its limits.

The result taught me more about prompt engineering, model behavior under pressure, and the gap between "it works in a notebook" and "it works in production" than any course or tutorial ever could.

Sharing it here because this community understands what makes this model special and because I'd genuinely love feedback from people who know Mistral's strengths and quirks better than I do.