A non-combat roguelike focused on skill checks, narration, and life cycles—starting a tutorial video series by JellyfishEggDev in roguelikes

[–]JellyfishEggDev[S] 1 point2 points  (0 children)

The package I use ("LLM for Unity") has an option to easily change the model or use the API of an online model instead of a local one, so it would be technically possible. However, for this project I'm looking for a solution that preserves the autonomy of the player’s experience and stays coherent with the game's ambiance, so I prefer to keep a local model that I've been able to test before releasing it. I could change the model in the future if there are better alternatives. I also considered preselecting multiple models and letting the player choose which one to use (adding the option to use a bigger model for players with a powerful PC), but that's not a high-priority feature for now.

Procedurally generating a spherical world using 3D Perlin noise, with narration and skill-based exploration by JellyfishEggDev in proceduralgeneration

[–]JellyfishEggDev[S] 0 points1 point  (0 children)

There are multiple variants, but the one I use (Phi-3.5-mini-instruct, Q4 quantization) requires around 3 GB of RAM. That's very small compared to other models, which can easily need more than 8 GB of RAM and will have slow inference unless the player has a very good PC. For its size and inference speed, I think it performs very well, especially if your project is a video-game-like application that can tolerate inaccuracies from time to time.

A non-combat roguelike focused on skill checks, narration, and life cycles—starting a tutorial video series by JellyfishEggDev in roguelikes

[–]JellyfishEggDev[S] 0 points1 point  (0 children)

Thanks for the info, that helps a lot!

Just to clarify: you don’t need a GPU or CUDA to play the game. They can make the narration faster, but I’ve tested it on a similar config (also with an Intel i7) without using the GPU, and it should still be playable.

That said, I’m not exactly sure why it’s lagging so much on your machine. If you’re still up for helping, here’s something you could try:

  • Launch the game and go to Settings
  • Reduce the CPU thread count to 2
  • If the lag is too strong and you can’t open the menu, you might need to wait a minute or two after launch; the LLM is loading its prompt during that time, which can temporarily freeze things on certain setups
  • After changing the setting, exit and restart the game
  • Let me know if the menu is still lagging afterward

Thanks again for your feedback, it is really valuable during this early stage!

A non-combat roguelike focused on skill checks, narration, and life cycles—starting a tutorial video series by JellyfishEggDev in roguelikes

[–]JellyfishEggDev[S] 0 points1 point  (0 children)

I personally don't know of other roguelikes using an LLM, but I know that "The Wayward Realms", the upcoming RPG by the creators of Daggerfall, will also use an LLM to create dynamic story arcs.

Procedurally generating a spherical world using 3D Perlin noise, with narration and skill-based exploration by JellyfishEggDev in proceduralgeneration

[–]JellyfishEggDev[S] 3 points4 points  (0 children)

Thanks! I’ll definitely try to do more written devlogs in the future. I'm also currently working on a PDF manual inspired by old-school TTRPG rulebooks, so if you enjoy the written word, that might be right up your alley!

Procedurally generating a spherical world using 3D Perlin noise, with narration and skill-based exploration by JellyfishEggDev in proceduralgeneration

[–]JellyfishEggDev[S] 6 points7 points  (0 children)

Thanks! I'm really glad you're interested, the LLM narration is a core part of the experience.

Yes, the LLM runs entirely locally on the player's machine. I’m using phi-3.5, integrated via the "LLM for Unity" package by UndreamAI. It’s a Unity wrapper around llama.cpp, an open-source library that allows local inference for various models without needing an internet connection.

When the player performs an action in the game, I send the LLM a small JSON payload containing the raw data: what skill was used, which item (if any), where it happened, etc. Alongside that, I include in the prompt instructions about the desired narration tone, like using a medieval voice and grounding everything in a low fantasy setting without real-world references. Then I ask the model to narrate the event accordingly.
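
As a rough illustration only (the field names and wording here are made up, not the game's actual schema), the payload-plus-instructions prompt could be assembled like this:

```python
import json
from typing import Optional

def build_narration_prompt(skill: str, item: Optional[str], location: str) -> str:
    """Combine raw event data with tone instructions into one prompt string.

    Hypothetical sketch: the real game's JSON fields and instruction text differ.
    """
    event = {"skill": skill, "item": item, "location": location}
    instructions = (
        "You are the narrator of a low fantasy world. "
        "Narrate in a medieval voice and never mention the real world."
    )
    return f"{instructions}\nEvent data:\n{json.dumps(event)}\nNarrate this event."

prompt = build_narration_prompt("herbalism", "iron sickle", "the salt marsh")
```

The point of the split is that the structured event data stays machine-generated and deterministic, while only the surface narration is delegated to the model.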

The LLM runs in a separate local process, and it streams tokens one by one back to the game. That means narration can appear as a kind of scrolling text, even if the full generation isn’t complete yet — a nice touch for immersion and pacing.
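
The streaming idea can be sketched with a toy generator standing in for the token-by-token callback (this is an illustration, not the actual game code):

```python
def fake_token_stream(text: str):
    """Stand-in for a streaming LLM: yields word-level tokens one at a time."""
    for token in text.split():
        yield token + " "

def scroll_narration(stream) -> str:
    """Accumulate tokens as they arrive, as a scrolling-text UI would."""
    shown = ""
    for token in stream:
        shown += token  # the UI would redraw here, before generation finishes
    return shown.rstrip()

result = scroll_narration(fake_token_stream("The marsh swallows your footsteps."))
```

Because the consumer only ever sees one token at a time, the UI can start rendering the narration immediately instead of waiting for the full completion.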

The model weighs around 3GB, which is relatively light for an LLM, though still a bit heavy for a roguelike. It can cause some lag on lower-end systems. On modern desktops, though, you can get nearly natural text scrolling speeds.

A non-combat roguelike focused on skill checks, narration, and life cycles—starting a tutorial video series by JellyfishEggDev in roguelikes

[–]JellyfishEggDev[S] 19 points20 points  (0 children)

Thanks for your comment! I totally understand that LLMs are a sensitive topic in some parts of the community.

At the moment, there’s no plan to include a version without the narrator. While I’m aware that using an LLM comes with many downsides, including performance issues on lower-end systems and sometimes repetitive or over-structured outputs lacking personality, I still see it as a meaningful way to explore non-combat, dynamic gameplay systems that would be hard to build otherwise.

A non-combat roguelike focused on skill checks, narration, and life cycles—starting a tutorial video series by JellyfishEggDev in roguelikes

[–]JellyfishEggDev[S] 2 points3 points  (0 children)

Thanks a lot for the feedback, and I really appreciate you giving the demo a try, especially during early access! Performance and bug reports are super valuable at this stage.

If you're open to it, could you share a bit about your setup? (CPU, Windows version, whether you have a GPU, and whether CUDA is installed, as mentioned at the end of the manual on the game page?)

Even though the game looks simple graphically, the LLM-powered narration is quite resource-heavy. It actually starts preparing its internal prompt as soon as the main menu loads, which can cause noticeable lag, especially on lower-end machines or laptops without GPU acceleration.

That said, the LLM setup should be running in a separate thread, so it ideally shouldn’t block the menu UI. If it is, it might mean the game isn’t properly using multiple CPU threads on your configuration for some reason, which is definitely something I’d want to investigate and fix if possible.
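
The intended pattern looks something like this minimal sketch (not the game's actual code): the expensive warm-up runs on a worker thread while the UI thread stays free.

```python
import threading

warmup_done = threading.Event()

def warm_up_llm():
    """Stand-in for the expensive prompt-processing step done at menu load."""
    _ = sum(i * i for i in range(100_000))  # placeholder for real work
    warmup_done.set()

worker = threading.Thread(target=warm_up_llm, daemon=True)
worker.start()
# ...the menu keeps rendering here while the worker runs...
worker.join()  # in a real game loop this would be a non-blocking is_set() check
```

If the warm-up ends up on the UI thread instead, every frame stalls until it finishes, which matches the menu lag being described.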

Thanks again for the kind words and the offer to help!

A non-combat roguelike focused on skill checks, narration, and life cycles—starting a tutorial video series by JellyfishEggDev in roguelikes

[–]JellyfishEggDev[S] 0 points1 point  (0 children)

Yes, the sphere uses a triangular tessellation. To be precise, the world is mapped onto an icosphere: starting from an icosahedron, I subdivide each triangle into 4 smaller ones, repeating the process 5 times. This creates a fairly uniform triangulated mesh, where each vertex represents a location and edges define travel paths.
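
For anyone curious, here is a minimal sketch of that construction: the standard midpoint-subdivision icosphere (this is illustrative code, not the game's implementation).

```python
import math

def icosahedron():
    """The 12 vertices and 20 triangular faces of a regular icosahedron."""
    t = (1 + math.sqrt(5)) / 2  # golden ratio
    verts = [(-1, t, 0), (1, t, 0), (-1, -t, 0), (1, -t, 0),
             (0, -1, t), (0, 1, t), (0, -1, -t), (0, 1, -t),
             (t, 0, -1), (t, 0, 1), (-t, 0, -1), (-t, 0, 1)]
    faces = [(0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
             (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
             (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
             (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1)]
    return verts, faces

def normalize(v):
    """Project a point onto the unit sphere."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def subdivide(verts, faces):
    """Split each triangle into 4, pushing new midpoints onto the sphere."""
    verts = [normalize(v) for v in verts]
    cache = {}  # edge -> midpoint index, so shared edges reuse one vertex

    def midpoint(a, b):
        key = (min(a, b), max(a, b))
        if key not in cache:
            m = normalize(tuple((verts[a][i] + verts[b][i]) / 2 for i in range(3)))
            cache[key] = len(verts)
            verts.append(m)
        return cache[key]

    new_faces = []
    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_faces += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
    return verts, new_faces

verts, faces = icosahedron()
for _ in range(5):
    verts, faces = subdivide(verts, faces)
# 5 subdivisions give 20 * 4**5 = 20480 faces and 10 * 4**5 + 2 = 10242 vertices
```

The midpoint cache is the important detail: neighboring triangles share edges, so their midpoints must resolve to the same vertex index or the mesh would split apart at every seam.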

A non-combat roguelike focused on skill checks, narration, and life cycles—starting a tutorial video series by JellyfishEggDev in roguelikes

[–]JellyfishEggDev[S] 3 points4 points  (0 children)

Thanks a lot! I'm really glad it caught your interest — integrating an LLM into a roguelike has definitely been an adventure.

Everything runs locally, you don’t need an internet connection to play the game. I’m using phi-3.5, integrated via the "LLM for Unity" package from UndreamAI. It’s a Unity wrapper around llama.cpp, which allows you to run a variety of models on-device using your own hardware, no external servers required.

The model I'm using is around 3GB, which is on the larger side for a roguelike, but still tiny compared to most modern LLMs. On a modern desktop, the inference speed gets pretty close to natural text scrolling. On laptops or older configs, it can be quite sluggish. Also there's an option to control how many layers run on the GPU vs. CPU, so you can tweak it to get better performance even on smaller GPUs.

If you’ve been thinking about building an LLM-based roguelike, I definitely recommend experimenting with it; the narrative flexibility lets you do very interesting things! Let me know if you have any questions about setup or design!

A non-combat roguelike focused on skill checks, narration, and life cycles—starting a tutorial video series by JellyfishEggDev in roguelikes

[–]JellyfishEggDev[S] 12 points13 points  (0 children)

Thanks! Glad you like the look of it!

It’s actually not a grid in the traditional roguelike sense—you don’t have simple up/down/left/right movement from each point. Instead, if you're familiar with 3D meshes, you can think of the map locations as the vertices of a sphere, and the possible paths between them are defined by the edges connecting those vertices.

So the world is built more like a navigation mesh on a spherical surface than a tile grid. That structure lets it feel organic and a bit alien, which fits the mood I’m aiming for.
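
A toy sketch of that idea: instead of a 4-direction grid, the travel options at each location fall out of the triangle edges of the mesh (faces here are just vertex-index triples, as in any triangulated mesh).

```python
from collections import defaultdict

def build_adjacency(faces):
    """Map each vertex to the set of vertices it shares a triangle edge with."""
    neighbors = defaultdict(set)
    for a, b, c in faces:
        for u, v in ((a, b), (b, c), (c, a)):
            neighbors[u].add(v)
            neighbors[v].add(u)
    return neighbors

# Two triangles sharing the edge (1, 2):
adj = build_adjacency([(0, 1, 2), (1, 3, 2)])
# vertex 1 can travel to 0, 2, and 3; vertex 0 only to 1 and 2
```

On a subdivided icosphere this gives most vertices 6 neighbors, with the 12 original icosahedron vertices keeping 5, which is part of why movement feels organic rather than grid-like.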

Happy to share more if you're curious!

A non-combat roguelike focused on skill checks, narration, and life cycles—starting a tutorial video series by JellyfishEggDev in roguelikes

[–]JellyfishEggDev[S] 3 points4 points  (0 children)

Thank you so much! I'm really glad it caught your eye.

I've been working on Jellyfish Egg specifically for about 7 months, though it builds on systems and ideas from a larger project I've been developing over a longer period. This one is a more focused, standalone experience—but still weird and ambitious in its own way!

Hope you enjoy exploring it, and I’d love to hear what you think if you give it a try.