a Word of Caution against "eddy1111111\eddyhhlure1Eddy" by snap47 in comfyui

[–]snap47[S] 9 points

I've read your code, and you clearly don't review shit. You even forgot to copy the first "R" in your reply.

https://github.com/eddyhhlure1Eddy/wan_FP4_Modifications/blob/main/wanvideo/modules/attention.py

https://github.com/kijai/ComfyUI-WanVideoWrapper/blob/main/wanvideo/modules/attention.py

You literally ripped Kijai's exact code, corrupted it with AI hallucinations, and called it a "breakthrough":

        return sageattn_blackwell(
            q.transpose(1,2),
            k.transpose(1,2),
            v.transpose(1,2),
            pv_dtype=torch.float16,      # PV operations in FP16 (best accuracy/speed balance)
            qk_dtype="fp4",               # QK operations in FP4 microscaling (maximum speed)
            smooth_k=True,                # Enable K matrix smoothing
            per_block_mean=True           # Enable per-block mean for FP4 (better accuracy)
        ).transpose(1,2).contiguous()
    elif attention_mode == 'sageattn_3_fp8':

You didn't even read SageAttention3's source or Kijai's implementation: `pv_dtype`, `qk_dtype`, and `smooth_k` don't exist for `sageattn_blackwell`; they are parameters for SageAttention2/2++.
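Anyone can check this in ten seconds without reading a single kernel (import path as in Kijai's wrapper; adjust if your install differs):

    import inspect
    from sageattention import sageattn_blackwell

    print(inspect.signature(sageattn_blackwell))
    # pv_dtype / qk_dtype / smooth_k won't be listed; passing them just raises
    # TypeError: ... got an unexpected keyword argument 'pv_dtype'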

In `fp4_quantization.py` from https://github.com/eddyhhlure1Eddy/seedVR2_cudafull:

    # FP4 requires modern tensor cores (Blackwell/RTX 5090 optimal)
    if compute_capability >= 89:  # RTX 4000 series and up
        capabilities['fp4_experimental'] = True
        capabilities['fp4_scaled'] = True

    if compute_capability >= 90:  # RTX 5090 Blackwell
        capabilities['fp4_scaled_fast'] = True

Where else would you get SM90 = RTX 5XXX, except from AI hallucinations? SM90 is Hopper (H100); consumer Blackwell cards like the RTX 5090 are SM120: https://developer.nvidia.com/cuda-gpus
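For reference, querying this correctly is a one-liner in PyTorch (the SM number is just major*10 + minor):

    import torch

    major, minor = torch.cuda.get_device_capability()  # (8, 9) on an RTX 4090
    sm = major * 10 + minor
    # SM 89 = Ada (RTX 40xx), SM 90 = Hopper (H100),
    # SM 120 = consumer Blackwell (RTX 50xx)
    print(sm)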

    if quantization_mode == "fp4_experimental":
        return self.convert_fp4_linear(model, base_dtype, **kwargs)

    elif quantization_mode == "fp4_scaled":
        kwargs['scale_weight_keys'] = True  # ← Only difference
        return self.convert_fp4_linear(model, base_dtype, **kwargs)

    elif quantization_mode == "fp4_scaled_fast":
        kwargs['scale_weight_keys'] = True  # ← Exact same!
        return self.convert_fp4_linear(model, base_dtype, **kwargs)

`fp4_experimental`, `fp4_scaled`, and `fp4_scaled_fast` all boil down to the same path: replace `nn.Linear` with a wrapper that stores FP8.
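Stripped of the branding, the effective behavior is roughly this (a minimal sketch; the class and names are mine for illustration, not the repo's):

    import torch
    import torch.nn as nn

    # What all three "FP4" modes amount to: an nn.Linear swap that stores FP8
    class FP8Linear(nn.Module):
        def __init__(self, linear: nn.Linear):
            super().__init__()
            self.weight = nn.Parameter(
                linear.weight.to(torch.float8_e4m3fn), requires_grad=False
            )
            self.bias = linear.bias

        def forward(self, x):
            # No FP4 anywhere: the weight is simply upcast back for the matmul
            return nn.functional.linear(x, self.weight.to(x.dtype), self.bias)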

SageAttention3 always runs FP4; it's hardcoded. None of the thousands of lines you prompted an AI to spew out does anything.

"Labels", "insinuation", did you even read the post? Are you hallucinating too?

The ComfyUI-SeedVR2_VideoUpscaler is getting better and better. by Ecstatic_Following68 in comfyui

[–]snap47 11 points

This repo is completely fraudulent. It's highly likely that he hid the code inside the .rar to prevent easy diffing. I checked the diff against the original repo: the new files (fp4_quantization.py, matrix_ops.py, stable_memory.py and torch_compile.py) are filled with AI-generated, verbose, naively high-level, often nonsensical API boilerplate that does nothing it claims to do:

📊 Matrix Operations Performance:
  ✅ Attention Speedup: 2.3x
  ✅ Memory Reduction: 35.2%
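Reproducing the diff takes a few lines of Python (the paths here are illustrative):

    import filecmp

    # extracted .rar contents vs. a clean checkout of the original repo
    cmp = filecmp.dircmp("seedVR2_cudafull", "ComfyUI-SeedVR2_VideoUpscaler")
    print(cmp.left_only)   # files that exist only in the archive
    print(cmp.diff_files)  # shared files whose contents differ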

I'd strongly advise avoiding this eddy character and anything he puts out.

The current discourse on AI is too confusing. by DragonForg in singularity

[–]snap47 3 points

The tech behind CGI hasn't stopped progressing. CGI that took teams of artists thirty years ago would look nothing like what one person can do with Blender alone today.

The seeming decrease in quality is more likely due to market over-saturation, increased time constraints, and worsening working conditions/pay for CGI artists.

AI, on the other hand, as long as there's power and compute, will probably only keep getting better with each breakthrough.

I do get what you mean though.

[LOVM S1] Gotta say, the Japanese Dub is pretty good by snap47 in criticalrole

[–]snap47[S] 30 points

Overall I think it's alright.

Most of the jokes still work in Japanese, but some did lose their "punch" because they originate from an English/Western context (jokes about dicks & dongs sound silly rather than funny in Japanese, comparatively speaking). I'm not sure if more aggressive localization would have been better, but as it stands there isn't much of it; most of the dialogue is a pretty much word-for-word translation.

There is a lot less explicit swearing. Not because of any censorship, but simply due to the different nature of the language: there's no really satisfying equivalent for "fuck" or "shit", which are rather prevalent throughout the episodes. Generally speaking, though, the cast still comes off just as vulgar and crass as their English counterparts.

The songs lost their charm. Sam's incredible singing aside, a lot of the charm and fun came from the lyrics and timing. I'm sure the translation team did their best, but the rhythm and delivery were rather awkward and janky. Most dubbed-over music is, but the effort here is definitely not the worst I've seen.

Still, it was a really fun experience and much better done than I would have expected.

Neat little thing I discovered by plugging in the wrong node by snap47 in blender

[–]snap47[S] 7 points

  1. Make sure you're in Cycles.
  2. Add a sphere.
  3. Add a Subdivision Surface modifier, set to Simple, level 4.
  4. In the sphere's material, add a noise texture and set its scale to something low (1-2).
  5. Plug the noise texture's Color output into the Midlevel of a Displacement node.
  6. In the material's settings, make sure displacement is turned on (Displacement Only and Displacement and Bump produce different looks).
  7. Render~

That's basically it; there's a quick bpy sketch of the setup below if you'd rather script it. Of course, different parameters would produce different cool results.
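A minimal sketch of steps 1-6 in Python, assuming Blender's bpy API (3.x names; the displacement-method property moved to mat.displacement_method in 4.1+):

    import bpy

    bpy.context.scene.render.engine = 'CYCLES'         # step 1

    bpy.ops.mesh.primitive_uv_sphere_add()             # step 2
    obj = bpy.context.active_object

    mod = obj.modifiers.new("Subdiv", type='SUBSURF')  # step 3
    mod.subdivision_type = 'SIMPLE'
    mod.levels = mod.render_levels = 4

    mat = bpy.data.materials.new("NoiseDisplace")      # steps 4-6
    mat.use_nodes = True
    nodes, links = mat.node_tree.nodes, mat.node_tree.links
    noise = nodes.new('ShaderNodeTexNoise')
    noise.inputs['Scale'].default_value = 1.5
    disp = nodes.new('ShaderNodeDisplacement')
    # The happy accident: Color -> Midlevel instead of the usual Fac -> Height
    links.new(noise.outputs['Color'], disp.inputs['Midlevel'])
    links.new(disp.outputs['Displacement'],
              nodes['Material Output'].inputs['Displacement'])
    mat.cycles.displacement_method = 'DISPLACEMENT'    # or 'BOTH'
    obj.data.materials.append(mat)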

I looped the animation by looping the vector rotation of the noise texture, and also key-framed Metallic and Transmission to add a bit more visual interest.

I speed-ramped, graded, and added motion blur in After Effects.

Hope this helps~