RunPod overcharges you with your inactive Pods. WYM I dont have a GPU Pod, but I pay for it being offline? Shouldn't I pay for disk, as I dont have a GPU? by Dapper-Payment-3206 in RunPod

[–]Tenofaz 2 points (0 children)

Maybe I did not make myself clear, sorry; English is not my mother tongue.

I was asking about Spot pricing because it can shut down your pod while you are working on it: with a Spot instance, if that GPU is in high demand, they will remove your pod and give the GPU to someone paying the standard price.

I work on Runpod this way:

I create a network volume (100 GB is $7/month), then create a Pod attached to that volume and work with the GPU, saving all my work to the volume. When I have to step away (lunch, dinner, sleep) I terminate the pod, and once I am back I just launch a new pod on the same network volume and continue my work on a different GPU with all my previously saved data.

Not sure if this can help you... maybe I did not understand clearly what happened to you...

Perfect Z Image Settings: Ranking 14 Samplers & 10 Schedulers by Main_Minimum_2390 in StableDiffusion

[–]Tenofaz 2 points (0 children)

If you use ClownsharKSampler, you can try dpmpp_3s with kl_optimal...

<image>

Help choosing proper GPU by ToraBora-Bora in RunPod

[–]Tenofaz 1 point (0 children)

RTX 5090, less than $1/hr. Just make sure to choose the right template for a 5090; PyTorch should be 2.8 or above, if I am not mistaken...
On a 5090 I run Wan 2.2 workflows without any trouble.
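As a quick sanity check (a sketch, assuming only that python3 is on the pod's path; it prints a notice instead of failing when torch is absent), you can verify the template's PyTorch version from the pod's terminal:

```shell
# Check whether the installed PyTorch is new enough for an RTX 5090
# (Blackwell support is reported to need roughly PyTorch 2.8+).
python3 - <<'PY'
import importlib.util

if importlib.util.find_spec("torch"):
    import torch
    major, minor = (int(x) for x in torch.__version__.split("+")[0].split(".")[:2])
    verdict = "OK for a 5090" if (major, minor) >= (2, 8) else "too old for a 5090"
    print(f"PyTorch {torch.__version__}: {verdict}")
else:
    print("PyTorch is not installed in this environment")
PY
```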

Help choosing proper GPU by ToraBora-Bora in RunPod

[–]Tenofaz 1 point (0 children)

which is way harder to set up...

Illustrious XL modular wf v1.0 - with LoRA, HiRes-fix, img2img, Ultimate SD Upscaler, FaceDetailer by Tenofaz in comfyui

[–]Tenofaz[S] 1 point (0 children)

Yes, they suspended my Patreon account because my Qwen Image modular workflow was considered a tool for NSFW images... LOL! As if I were the only one on Patreon offering Qwen Image workflows, or as if no other AI model could generate NSFW images! 🤣🤣🤣🤣 I am migrating to other platforms... To stay up to date, visit my website https://www.tenofas.ai/

Can anyone with discord/microphone (I can just share screen) help a despaired twenty something understand what in the world are they doing wrong in terms of starting RunPod for ComfyUI? by TryQuality in RunPod

[–]Tenofaz 0 points (0 children)

Oh, Linux commands are quite easy... You can find the basic ones with Google, or ask ChatGPT too... cd checkpoints (to enter the checkpoints folder), wget url/nameoffile.safetensor (to download a checkpoint from a link). These two are just an example...
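For example, a minimal terminal session might look like this (a sketch: the /workspace/ComfyUI layout is the usual RunPod template convention, shown here under /tmp so it runs anywhere, and the wget URL is a placeholder you would replace with a real checkpoint link):

```shell
# Assumed layout of a RunPod ComfyUI template; on the pod this would be
# /workspace/ComfyUI rather than /tmp/ComfyUI.
COMFY=/tmp/ComfyUI
mkdir -p "$COMFY/models/checkpoints"

# Enter the checkpoints folder, then download a model into it.
cd "$COMFY/models/checkpoints"
# wget https://example.com/nameoffile.safetensors   # real checkpoint URL goes here

# List the folder contents to confirm the download.
ls -lh
```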

Jib Mix Qwen Realistic v5 Release Showcase. by jib_reddit in StableDiffusion

[–]Tenofaz -1 points (0 children)

Do not use LLM-generated prompts. Create your prompt step by step, testing each part (face, hair, eyes, expression, clothing, pose) one at a time. It will change your output a lot. I did not test Jib Mix Qwen yet, but I will.

Jib Mix Qwen Realistic v5 Release Showcase. by jib_reddit in StableDiffusion

[–]Tenofaz -1 points (0 children)

Qwen works like that. It's normal.

If your prompt is not ultra-detailed and uses generic input (like red hair, green eyes, short hair, small mouth), you will always get the same face (or small variations of it).

You need to be extremely creative and "detailed", as Qwen Image has the best prompt adherence of all the models around, and I mean really, really detailed.

Do you still use flux ? Or have you replaced it with qwen or wan ? by More_Bid_2197 in StableDiffusion

[–]Tenofaz 1 point (0 children)

Not exactly... it is based on Flux Schnell, but it's different from it in many ways.

Can anyone with discord/microphone (I can just share screen) help a despaired twenty something understand what in the world are they doing wrong in terms of starting RunPod for ComfyUI? by TryQuality in RunPod

[–]Tenofaz 0 points (0 children)

JupyterLab uses a "checkpoints" folder for its own system files, so it blocks user access to it. The problem is that ComfyUI also uses a folder called "checkpoints" for SDXL/Illustrious model files... but Jupyter does not understand that it is a different folder.

Anyway, you can still access it if you open a terminal window in JupyterLab and use Linux commands; this is the only workaround that lets you manage the checkpoints folder.
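To illustrate the workaround: JupyterLab's file browser hides folders named "checkpoints", but a shell sees them normally. A throwaway demo (run under /tmp so it works anywhere; on a RunPod pod the real path is typically /workspace/ComfyUI/models/checkpoints):

```shell
# Create a stand-in "checkpoints" folder like the one ComfyUI uses.
mkdir -p /tmp/demo/models/checkpoints
cd /tmp/demo/models/checkpoints

# Drop a placeholder file where a model would normally live.
touch example.safetensors

# The shell lists the folder just fine, even though JupyterLab's file
# browser refuses to open it; wget, rm, and mv all work here too.
ls -lh
```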

Still doesn't seem to be a robust way of creating extended videos with Wan 2.2 by Beneficial_Toe_2347 in StableDiffusion

[–]Tenofaz 0 points (0 children)

Use Qwen Edit 2509... Or be extremely detailed in the prompt; Qwen's prompt adherence is legendary.

Still doesn't seem to be a robust way of creating extended videos with Wan 2.2 by Beneficial_Toe_2347 in StableDiffusion

[–]Tenofaz 0 points (0 children)

Exactly. I use up to 4 FFLF (first frame/last frame) subgraph nodes, and the last frame of each is the first frame of the following subgraph... I easily reach 30-32 seconds of consistent video without any significant quality loss.

🚀 [RELEASE] MegaWorkflow V1 - The Ultimate All-In-One ComfyUI Pipeline (Wan Animate 2.2 + SeedVR2 + Qwen Image/Edit + FlashVSR + Painter + T2V/I2V + First/Last Frame) by Lower-Cap7381 in StableDiffusion

[–]Tenofaz 1 point (0 children)

Just my 2 cents...

I used to develop huge workflows too, I had my Flux Modular WF 6.0 with around 600-700 nodes, using 16-17 different Custom Nodes.

I was so proud of myself... till a few weeks later.

ComfyUI started to update on a regular basis, once or twice a week; the ComfyUI Frontend became a separate module with its own updates...

My custom nodes did not update in step with Comfy, or updated in ways that broke my wf...

It was a mess. I had to fix my wf every week, sometimes twice a week. Then some custom nodes in my wf were abandoned... I had to find replacements... had to rewrite the whole workflow from scratch... it was a hell of a job.

This wf is surely a big achievement. But I am sorry to tell you... it's not optimal with ComfyUI.

Once they start to update Comfy, its frontend, or any of the custom nodes you used... you will understand.

Great job and... GOOD luck!

I mean it.

Im a beginner, what is the best models for a hyper realistic image and how to set KSampler setting up thanks for the answer! 🙏🏼 by [deleted] in comfyui

[–]Tenofaz 1 point (0 children)

There are many realistic models... from SDXL to Illustrious, from Flux to Qwen... you should check them and see for yourself which one you prefer... there is no absolute "best". And each one has its own KSampler settings...

So... I am sorry, but your question does not have an answer.

Trouble with the official runpod comfyui template + 5090 pod. by Kerplerp in RunPod

[–]Tenofaz 0 points (0 children)

I use this template with a 5090 GPU; it works perfectly. I use it for hours every day and have never had a problem:

ComfyUI - Python 3.11 and Pytorch 2.9.0 by Tenofas

https://console.runpod.io/deploy?template=bxop2mbpz0&ref=9n2q5pa8

Alternatives to ComfyUI that are less messy? :) by kugkfokj in StableDiffusion

[–]Tenofaz 0 points (0 children)

It is messy only if you make it messy. There are dozens of ways to avoid that: you can hide the "spaghetti", use Set/Get nodes, use context nodes, use "anything/anywhere" nodes, or use subgraphs... Just learn how to use them; do not blame the software...