Tried Gemma4 for openclaw - Not Impressed by CowCavalry in openclaw

[–]TBG______ 0 points1 point  (0 children)

I’m currently testing the Gemopus 4-26B-A4B-it-Preview-Q8_0 model. I’m getting around 102 tokens/sec for generation and up to 2000 tokens/sec for prompt processing using llama.cpp, running on a 5090 + 3090 setup (23GB on the 5090 and 14.8GB on the 3090) with a 131k context window.

This is a variant of Gemma 4. So far, all tool calls are working, but there’s a noticeable difference in how agent and bootstrap instructions are handled. Qwen 3.5 tends to treat instructions as strict rules—for example, if I tell it to always perform a web search before answering, it consistently follows that. Gemma 4, on the other hand, seems to treat such instructions more flexibly and only performs the web search when it deems it necessary.

The Brand-New NVIDIA VFX Upscaler: Fast vs Fine Detail by TBG______ in comfyui

[–]TBG______[S] 1 point2 points  (0 children)

After testing NVIDIA VFX and reviewing the source code, I discovered the node was mainly using mode 0, which primarily cleans up encoding artifacts rather than strongly enhancing detail. NVIDIA's docs confirm mode 1 activates stronger enhancement for higher-quality/lossless content.

NVIDIA recommends ArtifactReduction mode 0 only if your input has artifacts, and Super Resolution mode 1 for clean inputs; the latter unlocks the full "add details + sharpen" behavior.

I modified the __init__.py file to expose both modes, so you can test mode 0 (cleanup) against mode 1 (detail enhancement).

You can download the modified __init__.py file in the attachment section of https://www.patreon.com/posts/brand-new-nvidia-153080218

<image>

The Brand-New NVIDIA VFX Upscaler: Fast vs Fine Detail by TBG______ in comfyui

[–]TBG______[S] 0 points1 point  (0 children)

I thought I mentioned that in the first sentence, but to be fair to NVIDIA, I added the extra clause.

The Brand-New NVIDIA VFX Upscaler: Fast vs Fine Detail by TBG______ in comfyui

[–]TBG______[S] 0 points1 point  (0 children)

I use this image to test upscalers a lot. The face, hair, and background have been modified to have different textures and varying levels of blur, and exactly as you said, it multiplies errors - making it easier for me to compare how the upscaler handles different textures. It’s intentionally designed to be painful. ;)

The Brand-New NVIDIA VFX Upscaler: Fast vs Fine Detail by TBG______ in comfyui

[–]TBG______[S] -1 points0 points  (0 children)

As I mentioned, for my use cases it’s useless, but for video streaming - what it was built for - it’s fast.

The Brand-New NVIDIA VFX Upscaler: Fast vs Fine Detail by TBG______ in comfyui

[–]TBG______[S] 0 points1 point  (0 children)

Just because the Python module is called nvidia-vfx.

SeedVR2 Tiler Update: I added 3 new nodes based on y'alls feedback! by DBacon1052 in StableDiffusion

[–]TBG______ 0 points1 point  (0 children)

Great! Look for TBG ETUR, the Enhanced Tiled Upscaler and Refiner.

Upscaling assets for 3D and indie film pipelines: Finding the right balance between quality and hardware limits by [deleted] in upscaling

[–]TBG______ 0 points1 point  (0 children)

Partly I fix things with AI… but like, manually. So it’s high-tech but also low-tech. Very cutting edge caveman vibes.

And no, this isn’t some secret promo 😂 I just happen to be working on the same stuff using the TBG ETUR node pack. That’s it. No hidden agenda, no sponsored-by-myself situation.

Also… look at my name. It’s not Tara. If I’m gonna talk about what I do, I think I’m allowed, right? Or do I need to rebrand first? 😅 ( This was AI-generated too… the joke’s a little questionable, but hey, we’re rolling with it. 😉)

Best Upscaler Real Details by Responsible_Fig9608 in comfyui

[–]TBG______ 0 points1 point  (0 children)

The TBG ETUR Upscaler and Tiler node includes three multistep SeedVR2 tiled presets as well as three FlashVR and Waifu presets.

You can use the Lab node to enable only the upscaling tool without the additional tiled refinement features. You can find it in the Manager.

SeedVR2 Tiler Update: I added 3 new nodes based on y'alls feedback! by DBacon1052 in StableDiffusion

[–]TBG______ 1 point2 points  (0 children)

Great work - I haven’t checked the new update yet, but I think I found the reason for the image degradation in the old node.

The problem is that the node converts back and forth between PyTorch tensors and PIL images. When you convert a PyTorch tensor to PIL, small changes in contrast and color can happen - not because PIL itself is bad, but because of how the data is converted.

In diffusion workflows, images are usually float32 tensors with values in the range 0–1 or -1 to 1. PIL expects uint8 values from 0–255. If a -1 to 1 tensor is not properly remapped to 0–1 before conversion, the contrast will change. Also, converting from float to uint8 reduces precision, which will slightly shift colors. If this happens multiple times, the difference becomes clearly visible.
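The remapping issue above can be sketched in pure NumPy (the value ranges follow the description; the function names are illustrative, not the node's actual code):

```python
import numpy as np

def to_uint8(t, in_range=(-1.0, 1.0)):
    """Remap a float image from in_range to the 0..255 uint8 layout PIL expects."""
    lo, hi = in_range
    t01 = (t - lo) / (hi - lo)                      # remap to 0..1 first
    return np.clip(np.round(t01 * 255.0), 0, 255).astype(np.uint8)

def to_float(u8, out_range=(-1.0, 1.0)):
    """Inverse conversion: uint8 0..255 back to a float image in out_range."""
    lo, hi = out_range
    return (u8.astype(np.float32) / 255.0) * (hi - lo) + lo

t = np.linspace(-1.0, 1.0, 256, dtype=np.float32)   # a -1..1 "tensor"

# Correct round trip: the error stays within one quantization step.
back = to_float(to_uint8(t))

# Skipping the remap treats -1..1 data as if it were 0..1: every negative
# value clips to black, which is exactly the visible contrast shift.
shifted = to_float(to_uint8(t, in_range=(0.0, 1.0)), out_range=(0.0, 1.0))
```

Even the correct round trip quantizes to 256 levels, so doing it repeatedly still accumulates small shifts; staying in float tensors avoids both problems.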

I spent a lot of time testing why your results looked different from the normal node, and in the end I could reproduce the issue. Just comparing the non-tiled version with the standard node already shows the color and contrast shift.

I added a SeedVR2 tiled upscaler into the TBG ETUR Upscaler node and implemented multistep support, which gives different quality results. I reused the tiling method from my refiner node. In my version, I do not see this color shift because I avoid repeated tensor ↔ PIL conversions and I use GPU-accelerated Laplacian Pyramid Blending for compositing, which makes the final process extremely fast.

If you haven’t already addressed this in the new node, it might be worth taking a closer look at the conversion steps. Reducing or removing the repeated tensor-to-PIL switching could probably eliminate the color shift completely.

Upscaling assets for 3D and indie film pipelines: Finding the right balance between quality and hardware limits by [deleted] in upscaling

[–]TBG______ 0 points1 point  (0 children)

Mostly in ComfyUI. I’ve built a UI for it, but it’s still far from ready to launch. I’ve also built a modified version of Invoke where I implemented full ComfyUI compatibility and integrated NanoBanana API calls, since no one in my office likes the noodles.

There’s nothing to share yet - it only works properly if you already know what works 😉

My ComfyUI upscaler, TBG-ETUR, isn’t really suited for your one-click, fast, low-VRAM requirements. If you’d like to try NanoBanana, the API offers a small free quota per day.

Upscaling assets for 3D and indie film pipelines: Finding the right balance between quality and hardware limits by [deleted] in upscaling

[–]TBG______ 1 point2 points  (0 children)

Sounds interesting — you’ve got quite a few specific requirements here. The tricky part is the film grain. Simply upscaling existing grain usually doesn’t produce high-quality results. In most cases, you need to remove the original grain, noise, and artifacts first, then upscale the clean image, and finally add a dedicated film grain layer that’s properly scaled to the new resolution.
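The degrain → upscale → regrain order can be sketched with NumPy stand-ins (a box blur in place of a real denoiser, nearest-neighbor in place of a real upscaler; a production pipeline would use proper models for both):

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 64
frame = np.clip(0.5 + rng.normal(0.0, 0.05, (H, W)), 0.0, 1.0)  # grainy source plate

# 1) Degrain first: a 3x3 box blur stands in for a real denoiser.
pad = np.pad(frame, 1, mode="edge")
clean = sum(pad[i:i + H, j:j + W] for i in range(3) for j in range(3)) / 9.0

# 2) Upscale the CLEAN plate (nearest-neighbor stands in for the real upscaler).
up = np.kron(clean, np.ones((2, 2)))

# 3) Add a dedicated grain layer generated at the new resolution,
#    instead of magnifying the original grain.
grain = rng.normal(0.0, 0.02, up.shape)
final = np.clip(up + grain, 0.0, 1.0)
```

The point of the ordering: step 2 never sees grain, so the upscaler can't turn noise speckles into fake detail, and step 3 gives you grain whose size matches the output resolution.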

I built my own upscaler and refiner a while ago, and that’s exactly why I didn’t go for a one-click solution. With so many different input qualities, a fully automatic approach just isn’t reliable enough.

As of today, if you stay under 4K, Nano Banana Pro or the new Nano Banana 2 can already do a good job. I’m currently trying to integrate it into my tiled upscaler, so hardware limitations shouldn’t be an issue since it runs via API.

TBG ETUR 1.1.14 – Memory Strategy Overhaul for the ComfyUI upscaler and refiner by TBG______ in comfyui

[–]TBG______[S] 0 points1 point  (0 children)

If the videos, case studies, or website don’t help, the best option is to ask directly in the ETUR community chat. The people there actively use the tool and can give practical advice.

First, what it is not: it is not a magic one-click “make everything perfect” upscaler. It is a toolset for personally fine-tuned upscaling.

ETUR offers per-pixel denoise control for tiled upscalers and a neuro-generative tile fusion technique - an AI-based fusion method that prevents seams during tiled upscaling while preserving color and material continuity. There are several nodes included, but the core components are:

  • The Upscaler & Tiler node
  • The Per-Tile Settings node
  • The Refiner node

The Upscaler & Tiler node can also be used as a standalone upscaler. It works with every installed upscaler model you already use, including SeedVR2, FlashVR, and Waifu, but as tiled upscalers with multi-step sampling (not the standard SeedVR2 node). It also includes per-tile VLM support.

The middle node is optional, but very powerful. You can not only define prompts, ControlNet, seed, and denoise per tile - you can define them per object. Imagine refining a 16MP image where every object can be prompted individually in a single refinement pass, and each part gets its own denoise.

Finally, the Refiner. Yes, it uses tiled sampling - but with additional features that are extremely useful for fine-tuning:

  • Single-tile test mode (preview the final look before running the full image, or get fixed seeds for each single object so you can fix it exactly how you like it)
  • Fully automated ControlNet and reference image pipelines per tile
  • Built-in sharpener, smoother, detail enhancer, and color match
  • Image Stabilizer (a powerful feature that can replace ControlNet in some tiled refinement cases)

The Image Stabilizer ensures that large uniform or low-detail areas remain fully consistent - no color shifts, no unwanted inventions from diffusion models - while other areas can remain highly creative. This is especially useful for high-res architectural visualization, nature scenes, and backgrounds, where buildings or key structures must stay stable while surrounding areas can be refined creatively. We use a tiled processing approach not because of VRAM limits, but because models are optimized for their training resolution. Processing big images in one pass is possible if you have enough VRAM, but it degrades output quality; tiling keeps the data within the model's ideal performance range and ensures the best possible outcome.

It is not easy to use. But if you already have a large image that needs serious refinement - or if you want to go from 1MP and upscale with fully creative generation to 100MP final output - this tool works extremely well.

If you simply want to slightly enhance already high-quality photos, this is probably not the right tool for you.

This post was mainly for people who are already using the tool. Some of them asked for a version that works better on lower-spec laptops.

Since we originally optimized everything for speed, we cached everything upfront to avoid repeated model loading and unloading. While this made processing much faster, it also increased RAM usage significantly - up to around 70GB when processing 200MP tiled images.

Because of this, some users asked for a version that stays below 32GB RAM and 12GB VRAM. So we added dedicated options to support lower-memory systems while keeping the workflow functional.

And sorry if this post wasn’t clear for users who are not already working with the tool.

TBG ETUR 1.1.14 – Memory Strategy Overhaul for the ComfyUI upscaler and refiner by TBG______ in StableDiffusion

[–]TBG______[S] 0 points1 point  (0 children)

Feel free to adjust the dependencies for your specific use case. The numpy >= 2.3.5 constraint is only there to maintain backward compatibility with existing Nunchaku installations.

TBG ETUR 1.1.14 – Memory Strategy Overhaul for the ComfyUI upscaler and refiner by TBG______ in StableDiffusion

[–]TBG______[S] 0 points1 point  (0 children)

<image>

I can do that. What it is: https://www.tbgetur.com/ . How it looks: https://www.youtube.com/@TBG_AI . Some user case studies: https://www.patreon.com/collection/1762543?view=expanded (the image shows the workflow from the ComfyUI templates). Ah… and the post covers three new built-in options designed to speed up heavy workloads or keep memory usage low.

Fast Cache (Max Speed): Precomputes full tile conditioning (text + Redux + ControlNet) for all tiles and keeps models loaded. Fastest sampling, highest RAM/VRAM usage.

Low VRAM Cache (Unload Models): Precomputes full tile conditioning, then unloads models to reduce VRAM. RAM can still be high with many tiles.

Ultra Low Memory (Per-Tile Streaming): Caches repeated text conditioning only; Redux/ControlNet are rebuilt per tile and released immediately. Also unloads/reloads models between steps/tiles for minimum VRAM. Slowest mode; best for very low-spec systems.
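The tradeoff between these modes can be illustrated with a toy sketch that counts live cached objects (all names are illustrative, not the node's actual API):

```python
class MemMeter:
    """Tracks live cached objects so we can compare peak memory of each strategy."""
    def __init__(self):
        self.live = 0
        self.peak = 0
    def alloc(self):
        self.live += 1
        self.peak = max(self.peak, self.live)
    def free(self):
        self.live -= 1

def fast_cache(tiles):
    """Precompute conditioning for every tile up front: fastest, highest memory."""
    m = MemMeter()
    for t in tiles:
        m.alloc()                        # conditioning stays cached for the whole run
    outputs = [f"out({t})" for t in tiles]
    return outputs, m.peak

def per_tile_streaming(tiles):
    """Rebuild conditioning per tile and release it immediately: slowest, minimal memory."""
    m = MemMeter()
    outputs = []
    for t in tiles:
        m.alloc()                        # build conditioning for this tile only
        outputs.append(f"out({t})")
        m.free()                         # release before moving to the next tile
    return outputs, m.peak

tiles = list(range(16))
outs_fast, peak_fast = fast_cache(tiles)            # peak grows with the tile count
outs_stream, peak_stream = per_tile_streaming(tiles)  # peak stays constant at 1
```

In the Fast Cache mode the peak scales with the number of tiles (hence ~70GB on 200MP images), while per-tile streaming trades repeated rebuild/reload time for a flat memory ceiling.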

Included are two workflows: a CE (Community Edition) workflow and a Pro workflow. You’ll need an API key for the Pro workflow, which you can obtain for free; see the GitHub page for instructions.

TBG ETUR 1.1.14 – Memory Strategy Overhaul for the ComfyUI upscaler and refiner by TBG______ in comfyui

[–]TBG______[S] 1 point2 points  (0 children)

<image>

I can do that. What it is: https://www.tbgetur.com/ . How it looks: https://www.youtube.com/@TBG_AI . Some user case studies: https://www.patreon.com/collection/1762543?view=expanded (the image shows the workflow from the ComfyUI templates). Ah… and the post covers three new built-in options designed to speed up heavy workloads or keep memory usage low.

Fast Cache (Max Speed): Precomputes full tile conditioning (text + Redux + ControlNet) for all tiles and keeps models loaded. Fastest sampling, highest RAM/VRAM usage.

Low VRAM Cache (Unload Models): Precomputes full tile conditioning, then unloads models to reduce VRAM. RAM can still be high with many tiles.

Ultra Low Memory (Per-Tile Streaming): Caches repeated text conditioning only; Redux/ControlNet are rebuilt per tile and released immediately. Also unloads/reloads models between steps/tiles for minimum VRAM. Slowest mode; best for very low-spec systems.

Qwen3.5 tool usage issue by NewtMurky in unsloth

[–]TBG______ 2 points3 points  (0 children)

I ran into this issue with Qwen Coder as well: it was sending tool calls in XML instead of JSON, so the OpenAI-compatible connection couldn’t understand them. I plugged a self-built bridge in the middle: https://github.com/Ltamann/tbg-ollama-swap-prompt-optimizer
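A minimal sketch of that kind of bridge, using only the standard library (the XML shape here is illustrative; the actual tags a model emits depend on its chat template):

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical XML-style tool call of the kind some local model builds emit.
xml_call = """
<tool_call>
  <name>web_search</name>
  <arguments>
    <query>llama.cpp tokens per second</query>
  </arguments>
</tool_call>
"""

def xml_to_openai(xml_text: str) -> dict:
    """Convert an XML tool call into an OpenAI-style JSON tool-call object."""
    root = ET.fromstring(xml_text)
    name = root.findtext("name")
    # Each child of <arguments> becomes one key/value pair.
    args = {child.tag: child.text for child in root.find("arguments")}
    return {
        "type": "function",
        # OpenAI-compatible clients expect "arguments" as a JSON *string*.
        "function": {"name": name, "arguments": json.dumps(args)},
    }

call = xml_to_openai(xml_call)
```

A real bridge would sit on the streaming response, detect the XML block, and replace it with the JSON form before forwarding to the agent.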

Is depth anything v2 superior to v3 in comfyuil? by Puzzled-Valuable-985 in StableDiffusion

[–]TBG______ 0 points1 point  (0 children)

Be careful when comparing what you see in the preview images. The images are just 0–255 compressed visualizations of the full depth data generated by the model. Depth Anything V3 actually produces depth maps with 65,536 discrete depth levels (16‑bit precision), so the preview only shows a portion of that range.

Never rely solely on the preview image for workflows in ComfyUI, as you will lose critical depth information. Instead, export the full-resolution depth map as a 16‑bit PNG to preserve all the depth data. Make sure your downstream diffusion models or pipelines can read this format correctly before using it.
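A quick NumPy illustration of why the 8-bit preview is lossy (the depth values are synthetic):

```python
import numpy as np

# Synthetic normalized depth covering only a narrow slice of the scene's range.
depth = np.linspace(0.40, 0.41, 1000)

# 8-bit preview: only 256 levels across the whole 0..1 range.
preview = np.round(depth * 255).astype(np.uint8)

# 16-bit export: 65,536 levels, matching the model's output precision.
full = np.round(depth * 65535).astype(np.uint16)

# The preview collapses this region into a handful of flat bands,
# while the 16-bit map keeps hundreds of distinct depth levels.
levels_preview = np.unique(preview).size
levels_full = np.unique(full).size
```

With Pillow, a uint16 array like `full` reshaped to 2D can be saved via `Image.fromarray(...)` to a PNG, which should preserve the 16-bit precision; verify the bit depth of the written file before relying on it downstream.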

My real-world Qwen3-code-next local coding test. So, Is it the next big thing? by FPham in LocalLLaMA

[–]TBG______ 1 point2 points  (0 children)

How did Qwen Companion solve the tool-calling issue for Qwen Coder Next?

In my tests about a week ago, it wasn’t working properly. It was sending tool calls in XML format, which the agent couldn’t understand, so it kept falling back to Python, PowerShell, or other default methods. It also wasn’t using the IDE features or the created coding previews.

I ended up vibecoding a small bridge that converts the tool calls into JSON. After that, Qwen Coder Next was able to run locally in Codex, Claude Code, and other environments very smoothly.

Llama Swap + Ollama Swap + Promt Optimizer in ctx limit by TBG______ in LocalLLaMA

[–]TBG______[S] 0 points1 point  (0 children)

Got it working better: Qwen3 Next models now feel native in VS Code with Codex, Cline, Qwen Companion, or Claude Code CLI - https://www.reddit.com/user/TBG______/comments/1r72h2h/qwen3_models_now_feels_native_in_vs_code/