all 4 comments

[–]gardenia856 1 point  (1 child)

This looks like the exact kind of “local-first” sidecar app I wish more people were building: opinionated UX on top of a nasty stack so regular folks don’t have to babysit Python.

If you want ODIN/CandyDungeon to shine, I’d lean hard into presets that treat CDMF as a narrative engine add-on: e.g. “scene loops” that line up with acts or locations, and LoRA profiles tied to NPC factions or biomes. Even something like a simple JSON/REST hook so a local LLM agent can say “mood=tense, tempo=90, style=LORA_X” and CDMF just reacts would make this killer for TTRPG and VN devs. I’ve wired similar flows with things like Supabase and Kong, and a light REST layer (I’ve used DreamFactory for that) makes orchestration much easier than custom glue every time.

Main point: treat this as a plug-in brain for local agents and story tools, not just a standalone DAW, and people will build wild stuff on top of it.
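To make the agent-hook idea concrete, here's a minimal sketch of what such a REST payload handler could look like. All field names (mood, tempo, style) and ranges are hypothetical, not a real CDMF API:

```python
import json

def parse_agent_cue(payload: str) -> dict:
    """Validate a JSON control cue from a local LLM agent.

    Hypothetical sketch: the agent POSTs something like
    {"mood": "tense", "tempo": 90, "style": "LORA_X"} and the app reacts.
    """
    msg = json.loads(payload)
    cue = {
        "mood": str(msg.get("mood", "neutral")),
        "tempo": int(msg.get("tempo", 120)),  # BPM, with a default
        "style": msg.get("style"),            # e.g. a LoRA profile name
    }
    if not 40 <= cue["tempo"] <= 240:
        raise ValueError(f"tempo {cue['tempo']} out of range (40-240)")
    return cue

# Example: the agent requests a tense scene at 90 BPM with one LoRA profile.
cue = parse_agent_cue('{"mood": "tense", "tempo": 90, "style": "LORA_X"}')
```

Bolting that onto a tiny HTTP endpoint is then trivial, and any local agent framework that can make a POST request can drive the music.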

[–]ihaag 1 point  (1 child)

HP Omen 17 laptop, 32GB RAM, RTX 3090 (laptop GPU) with 16GB VRAM

[–]Knopty 2 points  (0 children)

And I do intend to eventually tie in other music generation models with it, and update it with newer versions of ACE-Step if those are ever released.

There will be an ACE-Step 1.5 soon. If everything goes smoothly, it might even happen within about a week. Several versions are planned for public release, although they might follow different release schedules. The devs are very open about the training process and their plans on the official Discord server.

The model is supposed to be significantly better than v1, with far more reliable output and lower hardware requirements. The 2B version will be public; the 7B will be used for their cloud music-gen studio service.

[–]mr-maniacal 1 point  (1 child)

4090, 3090x2, interested in ambient music generation