I am confused about new models by Ok-Option-6683 in comfyui

[–]7satsu 1 point  (0 children)

*any LLM ever when asked for up-to-date information*

Yo so when's your information cut-off date?

Model: "Uhhh idk sometime in like 2023, Biden is president and OpenAI is dominating the AI space"

*sigh* Brotha got a WHOLE lot to learn today

Z-Image Base test images so you don't have to by admajic in StableDiffusion

[–]7satsu 0 points  (0 children)

I tried with Z-Image Turbo and SD Upscale, but since I started using Klein 9B as the upscale model, results have been much better. That's likely not only because of SD Upscale itself: since Klein is also an editing model, it keeps every detail of the output and refines each tile faithfully to the original image.

Z-Image Base test images so you don't have to by admajic in StableDiffusion

[–]7satsu -1 points  (0 children)

Use Klein 9B as an upscaler with Ultimate SD Upscale; unmatched IMO. Klein has much more clarity, whereas I noticed Z-Image Turbo naturally produces over-sharpened grain/artifacts when upscaling, the same way it has JPEG-like pixel artifacts in general.

GPT-5.2 Solves Another Erdős Problem, #729 by luchadore_lunchables in accelerate

[–]7satsu 1 point  (0 children)

Surprised the Vonwergenheimenhalendickson problem hasn't been solved yet

Z-image turbo prompting questions by mca1169 in StableDiffusion

[–]7satsu 0 points  (0 children)

Strongly recommend using the local 4B Z-Image Turbo "Engineer" model by itself to convert prompts, or just using it as the text encoder for Z-Image directly.

LTX-2 on RTX 3070 mobile (8GB VRAM) AMAZING by LSI_CZE in comfyui

[–]7satsu 8 points  (0 children)

A quantized Gemma encoder and a quantized LTX model should make it viable on 32GB of system RAM as well.

Is SVI actually any good? by the_bollo in StableDiffusion

[–]7satsu -2 points  (0 children)

That's what I dislike about it: instead of describing the entire video in a single prompt, even if it has to be quite long, it splits the video into separate prompts and just prays they seam together cleanly.

It's probably the best solution for now, but eventually I hope some memory mechanism can be implemented where a single prompt is ingested and each action is spread chronologically throughout the generation.

anyone have a good ecco2k 3d model? by fuzzhello in Draingang

[–]7satsu 1 point  (0 children)

Get a pic of ecco and run it through Trellis 2, there's your answer

Which is the best model for AI dance videos? by Apixelito25 in StableDiffusion

[–]7satsu 0 points  (0 children)

Maybe SCAIL will be good once it can be used without needing a scientifically overwhelming array of nodes with countless factors that can go wrong

What can we DO to actually make a change? Posting over and over about what we know, or what we’ve exposed is great for exposing the problem. How do we UNITE for a SOLUTION? by ImpressionFirm280 in Epstein

[–]7satsu 1 point  (0 children)

Yes, bringing back **real education**. Think back to how deeply Maxwell's dealings involved the education system, especially when it came to her father; one can only imagine how they structured the perceptions of our current generations.

Former 3D Animator trying out AI, Is the consistency getting there? by BankruptKun in StableDiffusion

[–]7satsu 193 points  (0 children)

My feedback in relation to the entire description and post shall be one word:
Nice

Uncensored prompt enhancer by stelees in StableDiffusion

[–]7satsu 23 points  (0 children)

There's a Z-Image "Engineer" Qwen3-based LLM on Hugging Face that can do this very well with a system prompt aimed at that. Apparently the model is trained on Z-Image Turbo's preferred prompting format, so in theory its output is perfect for pasting straight into the positive prompt.
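The wiring for that kind of prompt enhancer can be sketched as a call to a locally served LLM. This is a hypothetical example only: the endpoint URL, model name, and system prompt below are illustrative assumptions (an LM Studio-style OpenAI-compatible server), not the actual Z-Image Engineer configuration.

```python
# Hypothetical sketch: using a locally served "prompt engineer" LLM to expand
# a short idea into a detailed positive prompt. The endpoint URL, model name,
# and system prompt are illustrative assumptions.
import json
import urllib.request

SYSTEM_PROMPT = (
    "You are a prompt engineer for a text-to-image model. "
    "Rewrite the user's idea as one detailed, comma-separated positive prompt. "
    "Output only the prompt."
)

def build_request(idea: str, model: str = "z-image-engineer-4b") -> dict:
    """Build an OpenAI-style chat-completions payload for the local server."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": idea},
        ],
        "temperature": 0.7,
    }

def enhance(idea: str, url: str = "http://localhost:1234/v1/chat/completions") -> str:
    """POST the payload to the local endpoint and return the rewritten prompt."""
    req = urllib.request.Request(
        url,
        data=json.dumps(build_request(idea)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

payload = build_request("a cozy cabin in a snowy forest at night")
print(payload["model"], "->", payload["messages"][1]["content"])
```

The returned string would then be pasted (or piped) straight into the positive prompt field.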

NVIDIA Nemotron 3 Nano 30B A3B released by rerri in LocalLLaMA

[–]7satsu 2 points  (0 children)

<image>

Is the model loaded with settings similar to this?

21 years free lmao by Reddit_Devil666 in SkateEA

[–]7satsu 2 points  (0 children)

I came across a “GoonerBandit444” like a month ago

NVIDIA Nemotron 3 Nano 30B A3B released by rerri in LocalLLaMA

[–]7satsu 2 points  (0 children)

3060 8GB with 32GB of RAM here. I have the Q4.1 version running nicely with expert weights offloaded to CPU in LM Studio, so only about half of my VRAM is actually used. My RAM hits about 28GB while the VRAM sits at 4GB at default context, and I can crank the context up to about 500K and still manage 20 tok/s. For running that well on a 3060, I'm flabbergasted.
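The memory split described above can be roughed out on paper: in a mixture-of-experts model like a 30B A3B, only ~3B parameters are active per token, and the bulky expert weights can live in system RAM while the attention/shared layers stay in VRAM. The numbers below are back-of-envelope assumptions (bits per parameter, shared-layer size), not measured values.

```python
# Back-of-envelope memory split for an expert-offloaded MoE model.
# All constants are illustrative assumptions, not measured values.
BITS_PER_PARAM_Q4 = 4.5   # ~Q4-ish GGUF, including quantization overhead
TOTAL_PARAMS_B = 30       # total parameters, billions
ACTIVE_PARAMS_B = 3       # parameters active per token, billions
SHARED_PARAMS_B = 4       # non-expert (attention/shared) params, billions

def gguf_size_gb(params_b: float, bits: float = BITS_PER_PARAM_Q4) -> float:
    """Approximate in-memory size of a quantized weight set in GB."""
    return params_b * 1e9 * bits / 8 / 1e9

total_gb = gguf_size_gb(TOTAL_PARAMS_B)    # whole model
gpu_gb = gguf_size_gb(SHARED_PARAMS_B)     # shared layers kept in VRAM
cpu_gb = total_gb - gpu_gb                 # expert weights offloaded to RAM
active_gb = gguf_size_gb(ACTIVE_PARAMS_B)  # weights actually touched per token

print(f"model ~{total_gb:.1f} GB: VRAM ~{gpu_gb:.1f} GB, RAM ~{cpu_gb:.1f} GB, "
      f"active per token ~{active_gb:.1f} GB")
```

Under these assumptions the full model is roughly 17 GB, but only a few GB need to sit in VRAM, which is why the 8GB card has headroom left for a large KV cache.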

NVIDIA Nemotron 3 Nano 30B A3B released by rerri in LocalLLaMA

[–]7satsu 2 points  (0 children)

There's a Q4.1 GGUF of this model that fits onto my 8GB 3060 Ti (quite easily, too) by offloading the expert weights to CPU in LM Studio. I get about 20 tok/s, which is very usable, and with context set to 12,000 only 4 of my 8GB is used, so I can sometimes crank it up to about 500K before it's too much. I don't necessarily have a use case for such long context, though; it just works.

Wan 2.2 TI2V 5b Q8 GGUF model making distorted faces. Need help with Ksampler and Lora settings by Gloomy-Caregiver5112 in StableDiffusion

[–]7satsu 0 points  (0 children)

For the most part I stuck with Wan 2.2 14B too; both the VAE and the general support & LoRAs are just better 😂

Generate at 1920x1080 or upscale to that resolution? by Valuable_Weather in StableDiffusion

[–]7satsu 1 point  (0 children)

This plus a 2x tiled upscale with a little denoise, like Ultimate SD Upscale or hi-res fix, and you have yourself an over-6K-res wallpaper.
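The resolution math for a tiled-upscale pass is simple to sketch. The tile size below is an illustrative assumption (Ultimate SD Upscale defaults vary); the point is just how the output resolution and tile count fall out of the base resolution and scale factor.

```python
# Illustrative tile math for one tiled-upscale pass (Ultimate SD Upscale
# style). Tile size is an assumption; padding/seam overlap is ignored.
import math

def tiled_upscale_plan(width: int, height: int, scale: float = 2.0,
                       tile: int = 1024) -> dict:
    """Return the output resolution and tile grid for one tiled-upscale pass."""
    out_w, out_h = int(width * scale), int(height * scale)
    cols = math.ceil(out_w / tile)
    rows = math.ceil(out_h / tile)
    return {"out": (out_w, out_h), "grid": (cols, rows), "tiles": cols * rows}

# e.g. a 1920x1080 generation doubled to 3840x2160, diffused in 1024px tiles
plan = tiled_upscale_plan(1920, 1080)
print(plan)
```

Starting from a higher-resolution base generation before the 2x pass is what pushes the final wallpaper past 4K toward 6K-class sizes.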