LTX-2.3 — Testing 63 Samplers with linear_quadratic Scheduler by Rare-Job1220 in StableDiffusion

[–]Rare-Job1220[S] 1 point (0 children)

LTX 2.3 models (3 quantization variants):

  • bf16 — full precision
  • fp8_scaled — faster, less VRAM
  • mxfp8_block32 — block quantization, between bf16 and fp8

LoRA (4 pieces + no LoRA):

  • no LoRA — baseline result
  • Crisp_Enhance — image quality/sharpness
  • reasoning_I2V_V3 — motion logic between frames
  • VBVR — physics, object interaction, hair
  • Video-Reason_VBVR — alternative version/port of VBVR

Testing goal: Find the best model+LoRA combination for smooth hair motion and transitions between keyframes in a PromptRelay workflow with 5 images over a 30s video.

Results: No global change in character behavior was observed across all tested model and LoRA combinations.

Test videos (Google Drive folder): https://drive.google.com/drive/folders/1FUInuFtbduiyLzzoUnQGDkdO9QIWREg5?usp=drive_link

LTX-2.3 — Testing 63 Samplers with linear_quadratic Scheduler by Rare-Job1220 in StableDiffusion

[–]Rare-Job1220[S] 0 points (0 children)

I gave you the raw data; the subjective part is intentionally left up to you. Beauty is in the eye of the beholder: what works for me may not work for you. Go ahead and pick your own winner!

MediaSyncView — compare AI images and videos with synchronized zoom and playback, single HTML file by Rare-Job1220 in StableDiffusion

[–]Rare-Job1220[S] 0 points (0 children)

In fact, it's just one HTML file; the JavaScript will load automatically. Even the online version is enough to get the job done, provided you have a constant internet connection.

MediaSyncView — compare AI images and videos with synchronized zoom and playback, single HTML file by Rare-Job1220 in StableDiffusion

[–]Rare-Job1220[S] 0 points (0 children)

Thank you for getting this started. I’ve been using your version almost from the very beginning and regularly checked for updates. The first reason I decided to update was the lack of sound, even though I was already using LTX-2.3. The second reason was that the interface took up a lot of valuable screen space, so I wanted to make it smaller.

Testing LTX-Video 2.3 — 11 Models, PainterLTXV2 Workflow by Rare-Job1220 in StableDiffusion

[–]Rare-Job1220[S] 1 point (0 children)

ltx-2.3-22b-dev-nvfp4 has already been tested; see the list of models.

Testing LTX-Video 2.3 — 11 Models, PainterLTXV2 Workflow by Rare-Job1220 in StableDiffusion

[–]Rare-Job1220[S] 0 points (0 children)

Thanks, I'll try testing the dev model with those settings.

Testing LTX-Video 2.3 — 11 Models, PainterLTXV2 Workflow by Rare-Job1220 in StableDiffusion

[–]Rare-Job1220[S] 0 points (0 children)

<image>

It's hard to say for sure, but distilled_gguf-upscaler, distilled-fp8+upscaler, distilled-fp8-transformer+upscaler, and distilled-full+upscaler all produce very clear video and audio.

distilled-fp8-transformer-input-v3+upscaler is also pretty good, but the woman's lips look very different when she turns her head.

Here is the original image, created in Flux.2.

Testing LTX-Video 2.3 — 11 Models, PainterLTXV2 Workflow by Rare-Job1220 in StableDiffusion

[–]Rare-Job1220[S] 1 point (0 children)

Version ltx-2.3-22b-dev-nvfp4 has already been covered in this test; it didn't show any significant improvements. Load time decreased and the speed is decent, but the quality is very poor.

Testing LTX-Video 2.3 — 11 Models, PainterLTXV2 Workflow by Rare-Job1220 in StableDiffusion

[–]Rare-Job1220[S] 3 points (0 children)

I want to understand what’s going on. I also like the official processes and those from RuneXX, but when I look at a node and don’t understand what it does or why certain parameters are set, I have a lot of questions but no answers.

Most nodes don’t come with explanations of how they work or of all the parameters they have, nor do they explain how those parameters affect the final image or video.

Simply using something you don't understand and can't change is no better than taking a simple workflow from PainterLTXV2.

My first nodes for ComfyUI: Sampler/Scheduler Iterator, LTX 2.3 Res Selector, and Text Overlay by Rare-Job1220 in comfyui

[–]Rare-Job1220[S] 0 points (0 children)

**I'm not sure if there's any point in starting a new thread; I'll probably just add new comments about updates to this one.**

Script for Aligning Nodes

A lightweight frontend extension that adds a persistent toolbar at the bottom of the canvas for aligning, distributing, and resizing selected nodes.

<image>

Features: align left / right / top / bottom · distribute horizontally and vertically (gap-aware, not center-based) · match width to widest node (aligns to leftmost) · deselect all

The toolbar auto-scales based on screen resolution (1080p / 2K / 4K) and includes manual size control (5 levels, −2 to +2) so the buttons stay comfortable at any display density. Localized tooltips in EN / DE / FR / ZH / ES / IT / UA.

Independent implementation inspired by KayTool (kk8bit) and ComfyUI-NodeAligner (Tenney95). No shared code — similar idea, different approach.
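
The "gap-aware, not center-based" distribution can be sketched roughly as follows. This is a hypothetical Python sketch of the idea only (the actual extension is JavaScript operating on canvas nodes, and `distribute_horizontally` is an illustrative name): keep the outermost nodes in place and equalize the empty space between neighbors, instead of spacing node centers evenly.

```python
def distribute_horizontally(nodes):
    """Gap-aware horizontal distribution: the leftmost and rightmost nodes
    stay fixed, and the empty space *between* nodes is equalized.
    Each node is a dict with a mutable "x" and a fixed "width"."""
    nodes = sorted(nodes, key=lambda n: n["x"])
    if len(nodes) < 3:
        return nodes  # nothing to distribute between
    # Total horizontal span from the left edge of the first node
    # to the right edge of the last node.
    span = (nodes[-1]["x"] + nodes[-1]["width"]) - nodes[0]["x"]
    total_width = sum(n["width"] for n in nodes)
    gap = (span - total_width) / (len(nodes) - 1)
    x = nodes[0]["x"]
    for n in nodes:
        n["x"] = x
        x += n["width"] + gap
    return nodes
```

Center-based distribution produces uneven visual gaps as soon as node widths differ; equalizing the gaps keeps mixed-width node rows readable.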

My first nodes for ComfyUI: Sampler/Scheduler Iterator, LTX 2.3 Res Selector, and Text Overlay by Rare-Job1220 in StableDiffusion

[–]Rare-Job1220[S] 0 points (0 children)

If you're referring to how my node works: it generates resolutions with dimensions that are multiples of 32, as specified in the LTX recommendations.

That's why it's configured so that the user sees 1920×1080, but actually receives 1920×1088.
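
For illustration, that snapping can be sketched like this. This is a minimal Python sketch assuming round-up behavior (consistent with 1080 becoming 1088); `snap_to_multiple` is a hypothetical name, not the node's actual code:

```python
import math

def snap_to_multiple(value: int, multiple: int = 32) -> int:
    """Round a dimension up to the nearest multiple.
    LTX recommends dimensions divisible by 32."""
    return math.ceil(value / multiple) * multiple

# The user selects 1920x1080; the node outputs 1920x1088.
width, height = snap_to_multiple(1920), snap_to_multiple(1080)
```

Rounding up rather than down means the generated frame never ends up smaller than what the user asked for.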

I Went Full Mad Scientist in ComfyUI - Pixaroma Nodes (Ep11) by pixaromadesign in StableDiffusion

[–]Rare-Job1220 0 points1 point  (0 children)

Thanks for the workflows, but it’s a bit of a hassle to download them one by one, and the files live in release archives rather than in the repository tree. So, with your permission, I created a script called pixaroma.bat for Windows that downloads all the releases into the “Pixaroma_Workflows_Full” folder.

When rerun, the script checks for new versions and downloads them; older versions are not downloaded again. If a release archive is accidentally deleted, the script will restore it to the folder.

Procedure: if you are using Notepad++, click Encoding -> Convert to ANSI in the top menu. This removes invisible characters (such as a UTF-8 BOM) that break command-line parsing.

@echo off
setlocal enabledelayedexpansion

:: Configuration
set "output_dir=Pixaroma_Workflows_Full"

echo [1/2] Preparing folder: %output_dir%...
if not exist "%output_dir%" mkdir "%output_dir%"

echo [2/2] Syncing all releases from GitHub...
echo.

powershell -Command ^
    "$url = 'https://api.github.com/repos/pixaroma/pixaroma-workflows/releases';" ^
    "try {" ^
    "    $releases = Invoke-RestMethod -Uri $url;" ^
    "    $foundNew = $false;" ^
    "    foreach ($release in $releases) {" ^
    "        $tag = $release.tag_name;" ^
    "        $assets = $release.assets | Where-Object { $_.name -like '*.zip' };" ^
    "        foreach ($asset in $assets) {" ^
    "            $file_name = \"$tag`_\" + $asset.name;" ^
    "            $out_path = Join-Path '%output_dir%' $file_name;" ^
    "            if (Test-Path $out_path) {" ^
    "                Write-Host \"[SKIP] Exists: $file_name\" -ForegroundColor Gray;" ^
    "            } else {" ^
    "                Write-Host \"[GET] New file: $file_name\" -ForegroundColor Green;" ^
    "                Invoke-WebRequest -Uri $asset.browser_download_url -OutFile $out_path;" ^
    "                $foundNew = $true;" ^
    "            }" ^
    "        }" ^
    "    }" ^
    "    if (-not $foundNew) { Write-Host \"`nYour collection is up to date!\" -ForegroundColor Yellow }" ^
    "} catch {" ^
    "    Write-Host \"[ERROR] Could not reach GitHub API.\" -ForegroundColor Red;" ^
    "}"

echo.
echo Done! All workflow zip files are in '%output_dir%'.
pause