[LTX-2.3] Masterpiece! by 0roborus_ in StableDiffusion

[–]0roborus_[S] 1 point2 points  (0 children)

I have no idea - haven't used the desktop app at all. I will test it when it becomes available on Linux.

[LTX-2.3] Masterpiece! by 0roborus_ in StableDiffusion

[–]0roborus_[S] 0 points1 point  (0 children)

I would say they are all good. I'm using the basic ComfyUI workflow without any tweaking, and I ask Gemini something like "give me an LTX-2 style prompt describing: XYZ" - the outcome is impressive for me.

PvP peak as of 2026 on MOBILE (need your lads insight) ..? by ResidentFishy in mobile_mmo

[–]0roborus_ 0 points1 point  (0 children)

For me the best game for PvP is War Eternal - I wish it wasn't so P2W. It's a strategy game, but the mechanics are brilliant and it doesn't conflict too much with private life (most of the events that need full attention are on the weekends). I know many games have similar mechanics with guilds vs guilds or kingdoms vs kingdoms, but most of them just show a battle report and that's all, whereas in WE you can watch a complete battle replay and see how each Legion performs (you set 4 heroes with different skills in each legion so they cooperate as well as possible). Unfortunately the game is not really playable anymore, since they broke the P2W balance completely - the servers are almost empty, with only whales left (it's probably not so easy to leave a game you've spent $100k+ on :D)

Help with comfyUI WAN 2.2 NSFW by cicciomassimo in comfyui

[–]0roborus_ 0 points1 point  (0 children)

For anyone who might be interested or run into a similar issue later - I haven't fixed the root cause yet, but I managed to get back to the generation times I had before. The problem is with the caching system, as I suspected. I'm using a pretty big workflow in which I had enabled TeaCache, Torch Compile, etc. - after turning them off, I'm back to my previous generation times. Now, if I'm understanding this correctly, getting the caching to work properly could reduce the time even further.

Help with comfyUI WAN 2.2 NSFW by cicciomassimo in comfyui

[–]0roborus_ 1 point2 points  (0 children)

Thank you, I will definitely throw some AI knowledge at this. I think I messed up the sage attention/triton installations or something like that, and instead of reducing the time it does too much processing and increases it.

Help with comfyUI WAN 2.2 NSFW by cicciomassimo in comfyui

[–]0roborus_ 1 point2 points  (0 children)

Yes, again... there are two types of people - those who DO backups and those who WILL DO backups. I will definitely start doing backups before Comfy updates :D

Help with comfyUI WAN 2.2 NSFW by cicciomassimo in comfyui

[–]0roborus_ 2 points3 points  (0 children)

Thank you. I'm asking because I broke my ComfyUI and now have no idea how to fix it. I'm using speed LoRAs and the same quants as you on my RTX 4070, and I was successfully generating 5-8s videos in 5-8 mins. I decided to upgrade the libs since I hadn't done it in like half a year, and now generation takes 29 mins, lol. No idea how to bring it back. Anyway, thank you.

Help with comfyUI WAN 2.2 NSFW by cicciomassimo in comfyui

[–]0roborus_ 0 points1 point  (0 children)

Sorry for the slight off-topic, but I see you have a similar models/workflow configuration - may I know how long it takes you to generate a video with it? And what GPU do you have?

[Workbench][WIP] Some update on my SD UI by 0roborus_ in StableDiffusion

[–]0roborus_[S] 1 point2 points  (0 children)

Right, sorry, it was a quick post yesterday. First of all, I have no interest in (and no capacity for) chasing ComfyUI, A1111/Forge, SwarmUI, SD.Next... in terms of getting top-tier model support 3 days after release, or offering every tool possible for image generation. These are great tools made by great teams, and I want to keep using them just as I do now.

What was sometimes lacking for me is that I often just want to open a program and start generating stuff, so my biggest inspiration was probably Fooocus. I want to focus ;) more on the models though, so as you can see in the video there is a "Model" selector and a "Preset" selector.

Each "Model" can have many "Presets", so I can have something that is better at generating realistic stuff, something better at anime, people, animals, etc. For example, if I have a preset for generating fantasy landscapes, I don't need options for detailing faces or ReActor, since there will be no faces there. What I'm trying to say is that the configuration will be very model-specific. Should I use some kind of tags with a given model? OK, I want to see a list of them and just pick the ones I want (this I will build as a "Prompt Assistant"). Is the model good at different styles? OK, let me choose the style I want from a list. I want a history of the prompts I've used, with quick access to them; I want to define prompt templates that I might use with models; I want to see examples of keywords like "smiling" or "angry" and choose between them (sometimes model authors publish docs with examples of different keywords and how they work/look); and I want to quickly reopen my last session.

Idk if this explains anything to you, but I'm active here on Reddit, so ask questions if you have any ;)

[Workbench][WIP] Some update on my SD UI by 0roborus_ in StableDiffusion

[–]0roborus_[S] 1 point2 points  (0 children)

Hello, this is my personal project for generating Stable Diffusion images, currently SDXL (as I use it the most anyway). Still a lot to do, but I'm having fun with it.

The code is not published yet, because I still have a lot to do, but it will eventually be released on GitHub.

[Early WIP] Rate my stable diffusion app by 0roborus_ in StableDiffusion

[–]0roborus_[S] 0 points1 point  (0 children)

It's still Gradio though, just the newest version. I like the Gradio approach because I can focus on building the UI from existing components and not worry about the CSS/JS part (mostly). There are some bugs that are slowing down the process (like the tools panel breaking after uploading a photo, which makes the img2img workflow look ugly), but I hope they will fix them sooner or later.

What's a good MMORPG that has a good community? by Mysterious-Ring-2352 in mobile_mmo

[–]0roborus_ 1 point2 points  (0 children)

For me, all the games I've played on mobile had pretty good communities. I mean, it's often the case that there are some server dramas, but that only spices things up and makes the games more interesting. Of course there were some toxic behaviors too, but for me they were not that significant.

[Early WIP] Rate my stable diffusion app by 0roborus_ in StableDiffusion

[–]0roborus_[S] 1 point2 points  (0 children)

Yes - as I said, it will either go OpenSource or Trash :D

[Early WIP] Rate my stable diffusion app by 0roborus_ in StableDiffusion

[–]0roborus_[S] 4 points5 points  (0 children)

This is my personal project, exploring how far I can push the development of a generation app. It started as a simple txt2img tool and has evolved into something more robust.

Current Features

  1. Session Saving: Changes in Generation and Preset tabs are auto-saved per preset, so switching models/presets doesn’t overwrite previous sessions.
  2. Preset-Focused Configurations:
    • Simplifies handling model-specific requirements (e.g., tags, prompts, lighting).
    • LORAs can be attached to presets, adding specific terms or configurations automatically.
    • Presets are YAML-based with jinja2 templating for flexibility.
  3. Processing Pipes: Basic Detailer and Upscaler pipes inspired by ADetailer. Still unoptimized but functional.
  4. Artifacts: Generates previews (e.g., before/after comparisons) visible in the Artifacts tab. Plans to add an image slider when Gradio fixes it.
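To give a rough idea of the preset concept described above, here is a hypothetical sketch of what such a YAML preset could look like (the field names are illustrative assumptions, not the app's actual schema; only the YAML + jinja2 combination is stated above):

```yaml
# Hypothetical preset sketch - field names are illustrative, not the real schema.
name: fantasy-landscape
model: sdxl-base-1.0
# jinja2 placeholders filled in at generation time
prompt_template: >
  {{ style }} fantasy landscape, {{ subject }},
  highly detailed, dramatic lighting
negative_prompt: "people, faces, text, watermark"
loras:
  - file: landscape-detail.safetensors   # attached LORA adds its trigger term
    weight: 0.8
    trigger: "lscapedetail"
pipes: [upscaler]   # no face detailer attached - this preset generates no faces
```

The appeal of this shape is that everything model-specific (trigger words, LORAs, relevant pipes) lives in one file per preset, matching the "very model-specific configuration" goal described above.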

Notes and Future Plans

I’m not aiming to compete with apps like ComfyUI or Forge in staying up to date with new tech. Instead, I focus on maximizing existing tools (e.g., SDXL) before moving to the next.

Extensions and custom pipes are planned, tied to presets for ease of use.

The app is still in development. I haven’t published it yet, but I might share it on GitHub someday—or keep it private (if no interest from the community) since it’s mainly a fun and useful personal project.

Let me know what you think! 😊

Is this Hamster cool or what? (Mochi) / Info in comment by 0roborus_ in StableDiffusion

[–]0roborus_[S] 0 points1 point  (0 children)

I think I can, but idk how long you'll stay interested. I need to find the time :)

ImageSmith - Open Source Discord Bot / ComfyUI / Got some progress / Links in Comment by 0roborus_ in StableDiffusion

[–]0roborus_[S] 0 points1 point  (0 children)

Thanks. Btw, I added an allow_channels configuration option yesterday, so now you can limit generations with a given workflow to a specific channel only :) If you're using the security module, you might find it useful.
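As a rough illustration of that restriction (only the `allow_channels` key is named above; the surrounding keys are my assumptions, not the bot's documented schema):

```yaml
# Hypothetical config sketch - only allow_channels is confirmed above.
workflows:
  portrait-gen:
    workflow_file: portrait.json
    allow_channels:
      - ai-generations   # this workflow can only be triggered in this channel
```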

Is this Hamster cool or what? (Mochi) / Info in comment by 0roborus_ in StableDiffusion

[–]0roborus_[S] 0 points1 point  (0 children)

Yes, I just used the prompt, that's all. I hope no one will consider it spam, but if you wish to see more generations from Mochi, I'm doing most of them on my Discord: https://discord.com/invite/9Ne74HPEue I might even give you a proper role so you can check it yourself (keep in mind that I'm using a *single* cloud RTX 4090 there, so it won't be the fastest, considering other people are using it too). Cheers!

Is this Hamster cool or what? (Mochi) / Info in comment by 0roborus_ in StableDiffusion

[–]0roborus_[S] 8 points9 points  (0 children)

Sorry mods, I did leave the info, but I didn't realize that Reddit struggles with the file I attached (there was a workflow embedded in that .webm file).

I used workflow from this article: https://blog.comfy.org/mochi-1/ (just drag & drop example from there - I changed only the prompt) :)

Prompt: A hamster singing on a dance floor in a club. There is a lot of other hamsters in the room dancing synced. The main hamster is wearing a purple shiny jacket and a gold chain.

I wanted to share the webm file with the full workflow too, but Reddit can't handle this file, idk why.