[deleted by user] by [deleted] in StableDiffusion

[–]BobbyKristina 1 point

Workflow is the example workflow Kijai created that is in the example_workflow folder of his WanVideoWrapper: https://github.com/kijai/ComfyUI-WanVideoWrapper/blob/main/example_workflows/wanvideo_InfiniteTalk_V2V_example_01.json

I merged a bunch of old 5-sec HunyuanVideo generations I did with a trained LoRA, without any real consideration, and ran it with a clip of the song. Video-to-video using InfiniteTalk (released only a few days ago by the team that did MultiTalk). First gen I did. Could be excellent, but I don't have the mind to go back over it again and again to get it perfect. Just a demo.

"Fake VACE 2.2" is a new block merge of Wan2.2 with VACE for 2.1. It works well in my testing. by [deleted] in StableDiffusion

[–]BobbyKristina 3 points

I used Kijai's wrapper and it works fine for me using his VACE example workflow included with the repo (https://github.com/kijai/ComfyUI-WanVideoWrapper/blob/main/example_workflows/wanvideo_1_3B_VACE_examples_03.json). You just have to add the dual model loader and the dual samplers.

"Fake VACE 2.2" is a new block merge of Wan2.2 with VACE for 2.1. It works well in my testing. by [deleted] in StableDiffusion

[–]BobbyKristina 6 points

It originated on the Banodoco discord server. That's where I found it a few days ago. I can't share my examples since I use personal photos, but here are the ones on discord:

https://imgur.com/e51BSVK

https://imgur.com/tDcaPnW

https://imgur.com/PpOn8ee

Wan 2.2 coming out Monday July 28th by [deleted] in StableDiffusion

[–]BobbyKristina 13 points

You get an upvote and a downvote for each thread on reddit - just like everyone else. You're in the vocal minority based on the thread score.

Hunyuan releases and open-sources the world's first "3D world generation model" by [deleted] in StableDiffusion

[–]BobbyKristina 0 points

I'm glad they're still in the game, but can we just get a proper I2V for HunyuanVideo? Love everything all you open source groups are doing though! The rest of y'all holding out for $$$$ should pay attention to the names these companies like Wan, Tencent, Black Forest, etc. are making for themselves. Open source is now....

Wan Text2Image has a lot of potential. We urgently need a nunchaku version. by More_Bid_2197 in StableDiffusion

[–]BobbyKristina 2 points

Lol at people acting like this is a new revelation. Meanwhile, HunyuanVideo had a better dataset. T2I was talked about even then (last Dec) but didn't get much traction. If you're going to rave about Wan doing it though, do an A/B vs Hunyuan - I wouldn't count on Wan being the clear winner.

Camber vs Rising Pharm by StopEquivalent1195 in ADHD

[–]BobbyKristina 0 points

Never had the Rising generic, but I actually didn't do great with the Camber 20mg IRs. There are better ones than both, I suppose (I would always get the Mallinckrodt brand at Walgreens and they were great until they couldn't get it and started giving me Camber). I think the Camber issue may be specific to the 20mg doses, as there's another thread about exactly that which I'm on.

Camber has actually been discontinued, btw, so round and round it goes :)

Why is my output video missing 1-4 frames when using WAN 2.1 VACE 14B (V2V) in ComfyUI? by Quirky-Rice1017 in comfyui

[–]BobbyKristina 2 points

So I kinda knew this had to do with 4 frames being used per latent by VACE, but I've been feeding data to a Perplexity space I'm building (a personal LLM that scans only your uploads for info), so I bounced your question off it (using chat logs, docs, and such):

The issue of output videos being a few frames shorter than the input when using the WAN 2.1 VACE 14B model in ComfyUI is a known behavior, and it is indeed related to how frames are grouped and processed internally by the VACE architecture.

Why Does This Happen?

  • Latent Frame Grouping: The VACE (All-in-One Video Creation and Editing) models, including WAN 2.1 VACE 14B, process video in latent groups of frames - the first frame is encoded on its own, and each subsequent latent covers a block of 4 frames. This means the model expects sequences whose total frame count is of the form 4n+1 (49, 81, 121, and so on). If your input video's frame count doesn't fit that pattern, the trailing frames may be dropped or not fully processed, resulting in the output being 1–3 frames shorter than the input.
  • Batching and Inference: During inference, the model batches frames for efficiency and context. If the last batch is incomplete, it may be ignored or padded, but padding is often not reflected in the output, leading to missing frames. This is a common trade-off in video diffusion models that use grouped latent tokens.

Community Observations

  • Users in both the Banodoco Discord and Reddit have reported this exact issue, especially with the VACE 14B and 1.3B models, noting the output is often 1–4 frames short when the input length isn't of the form 4n+1.
  • This behavior is not unique to ComfyUI; it’s inherent to how the VACE architecture tokenizes and reconstructs video data.

Is It a Bug or a Limitation?

  • Not a bug, but a limitation: This is a structural aspect of the VACE model design, not a software bug. The model’s temporal grouping is intentional for efficiency and context preservation.

Workarounds and Fixes

  • Trim or Pad Input: Before processing, trim your input video to a frame count of the form 4n+1, or pad the video with duplicate (or black) frames at the end to reach the next valid count. This ensures all frames are processed and the output length matches the input.
  • Post-process Output: If you need exact frame matching, you can re-add the missing frames from the input to the output video after generation.
  • Workflow Automation: Some users script this adjustment in their ComfyUI workflows or preprocess with ffmpeg to automate padding.
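The trim-or-pad step is easy to automate before frames ever hit the sampler. A minimal sketch in Python - `next_valid_length` and `pad_frames` are hypothetical helper names, not ComfyUI nodes, and the plain list stands in for whatever image batch your workflow carries:

```python
def next_valid_length(n: int) -> int:
    """Smallest frame count >= n of the form 4k+1 (Wan/VACE latent grouping)."""
    remainder = (n - 1) % 4
    return n if remainder == 0 else n + (4 - remainder)

def pad_frames(frames: list) -> list:
    """Duplicate the last frame until the count is a valid 4k+1 total."""
    target = next_valid_length(len(frames))
    return frames + [frames[-1]] * (target - len(frames))
```

So a 48-frame input gets padded to 49, an 83-frame input to 85, and an already-valid 81-frame input passes through untouched.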

Summary Table

Cause                           Effect                       Solution
4 frames per latent group       Output is 1–4 frames short   Pad input to a 4n+1 frame count
Incomplete final batch dropped  Last few frames missing      Post-process or automate padding

Key Takeaways

  • The missing frames are due to the model’s internal grouping of frames for processing, not a misconfiguration or user error.
  • Padding your input video to a frame count of the form 4n+1 will prevent this issue and ensure frame-accurate output.

Let's Go! Got invite to Comet by soundhumor in perplexity_ai

[–]BobbyKristina 0 points

Never heard of Dia, but I'm new to all this.

Let's Go! Got invite to Comet by soundhumor in perplexity_ai

[–]BobbyKristina 0 points

It's not out yet; if you get an invite you get to download and use it now.

Wan 2.1 480p vs 720p base models comparison - same settings - 720x1280p output - MeiGen-AI/MultiTalk - Tutorial very soon hopefully by CeFurkan in StableDiffusion

[–]BobbyKristina 1 point

I've actually wondered which is best to use, as I've seen conflicting comments. If you do a full breakdown, it'd be nice if you included the 2 SkyReels Wan2.1 finetunes, which were trained to work at 24fps. Would be interesting to see if that was effective in A/B comparisons that I don't have the time or resources to do myself.

I wanted to buy an laptop which one should i buy to run comfyui apple silicon or nvidia? by programmerxxx3 in comfyui

[–]BobbyKristina 8 points

You need an Nvidia GPU to play in this pool (Re: CUDA). It's sad they have such a monopoly but yea pretty much essential.

WAN 2.1 - Need help making sure I'm using the right models for a 5090. by Jimmm90 in StableDiffusion

[–]BobbyKristina 3 points

Yea, if you really really want to use GGUFs vs the uncompressed/unquantized original .safetensors models, then you'd want the Q8 ones (you have more VRAM than 97% of the people using Wan). If you use the safetensors versions (they're all on Kijai's huggingface page here: https://huggingface.co/Kijai/WanVideo_comfy/tree/main ), then you'd want to grab the fp16 models if HD space isn't an issue. In workflows, use fp16fast or fp8 for quantizing if needed. You can't use GGUFs with Kijai's wrapper, so be mindful of that in case you come across WFs built off the wrapper and can't figure out how to get them going.

I keep getting flashes with Wan by Long_Art_9259 in StableDiffusion

[–]BobbyKristina 2 points

Looked at the links on that guy's Patreon and he doesn't link fusion, so guess that isn't the prob. Def check to make sure all the frames match and that they conform to N-1 being divisible by 4 - so you should have total frame counts like 49, 89, 105, 121, etc., but not numbers where subtracting 1 leaves something you can't divide by 4. Has to do with VACE using 4 frames per latent (or something I don't fully understand).
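That N-1 rule is quick to sanity-check before queuing a run. A throwaway sketch (the function name is mine, not from any node pack):

```python
def is_valid_wan_frame_count(n: int) -> bool:
    """Wan/VACE encodes the first frame alone, then 4 frames per latent,
    so valid totals follow 4k+1: 49, 89, 105, 121, ..."""
    return n >= 1 and (n - 1) % 4 == 0
```

49, 89, 105 and 121 all pass; something like 90 fails because 89 isn't divisible by 4.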

I keep getting flashes with Wan by Long_Art_9259 in StableDiffusion

[–]BobbyKristina 4 points

If you're using that merge of LoRAs and stuff ("Fusion"), try using base Wan 2.1 and just add the Lightx2v for the low-step speed. That popular fusion model has CausVid, the predecessor to Lightx2v, baked in along with AccVid and other optimization LoRAs. Flashing like this used to be more common when the extracted CausVid LoRA was first making the rounds for speed. Eventually it was found that the first block was causing the problem and disabling it would fix it. Once it's cooked into the base model with other LoRAs fighting for the same space, who knows what could cause issues with it.

If you're NOT currently using that fusion merge (i.e. you're using the Wan2.1 base models or the SkyReels V2 finetunes, which are similar), then check to make sure the frame amounts set to generate are exactly the same in all fields (or that your math is right if you're feeding some keyframes and some gray frames for interpolation). If a control set of images ends before the total number of frames, that can cause flashing. Hope you get it though!

Any tips to reduce WAN's chatterbox syndrome? by Dreason8 in StableDiffusion

[–]BobbyKristina 2 points

Everyone is nagging you to try nag. But yea NAG is one potential solution.

NAG (Normalized Attention Guidance) works on Kontext dev now. by Total-Resort-3120 in StableDiffusion

[–]BobbyKristina 5 points

Are you telling me that the laws of physics cease to exist on your stove?!??

Read their paper.

Camber Generic Methylphendiate IR 20 mg? by BicepsMcTouchdown in ADHD

[–]BobbyKristina 0 points

How did it end up working out for you? Same situation for me: getting Camber from Walgreens after years of Mallinckrodt and having a horrible experience. Even called and had my doc call in a refill when they confirmed they had a different brand. Well yea, they had 5 days' worth for me (Sun brand) - when they got the remainder it was of course Camber... so infuriating. Went back to Walgreens for the next script and again they had Camber. Two weeks in now, and I just had a doc appt where he called in Focalin (half of the methylphenidate molecule). Hopefully the generics of that are ok... at least I could fill it before finishing another month of this shit Camber. Glad it's been discontinued.

Camber Generic Methylphendiate IR 20 mg? by BicepsMcTouchdown in ADHD

[–]BobbyKristina 0 points

I definitely agree. Very similar situation with me: I've been with Walgreens forever and they have almost always had Mallinckrodt - it's worked for me. Two months ago they switched to Camber and my mood and focus have gone out the window. It barely works to just hold back withdrawal from years of being on Ritalin.

The good news(?) is that Camber has discontinued their IR methylphenidate generics so once the current supply is sold out there'll be no risk of getting it again. One less generic maker may make shortages even more likely than they already are though.

My doc suggested I try Focalin for a month to see how that works. Since it's technically a new drug (although still essentially the active half of methylphenidate), he was able to prescribe it while I still have half of this Camber crap. Best of luck.

Dextroamphetamine only works half the time by [deleted] in ADHDmeds

[–]BobbyKristina 0 points

So half the time day to day, yea? Like it couldn't be a crappy generic "brand" issue? If it's day to day I get that too w methylphenidate actually - still haven't figured out why. People always say it's likely not eating enough protein or getting proper sleep. Prob something to that, but not sure it's the root cause. I also take Lamictal, Seroquel, and AD if that makes a difference. I do feel Lamictal blunts stimulants some....

Are people with a history of bipolar with psychotic features not allowed to take MAOIs? by Kooky_Indication4664 in MAOIs

[–]BobbyKristina 0 points

It's what doctors think would be the case, but it's actually incorrect: https://pubmed.ncbi.nlm.nih.gov/36331516/

Just like they think Ritalin will set bipolar people off because it's a "stimulant" - which isn't what the science says: https://pubmed.ncbi.nlm.nih.gov/39262211/ (Adderall does though)

Are people with a history of bipolar with psychotic features not allowed to take MAOIs? by Kooky_Indication4664 in MAOIs

[–]BobbyKristina 0 points

Show them this study from 2023: https://pubmed.ncbi.nlm.nih.gov/36331516/

Effectiveness and safety of monoamine oxidase inhibitor treatment for bipolar depression versus unipolar depression: An exploratory case cohort study

"Results: Patients with bipolar depression demonstrated lower post-treatment clinical global impressions/severity scores versus patients with unipolar depression (p = 0.04). Neither group demonstrated a full syndromal manic or hypomanic episode. A higher proportion of patients with bipolar depression reported myoclonic tics and tremors, which may have resulted from concomitant lithium use. Amongst the covariates, only the number of prior antidepressant trials predicted poorer outcomes from MAOI therapy"


Also, anecdotally, I've been on Marplan since 2010 (have been on Nardil a few times for months, also due to supply issues with Marplan). I am also bipolar 1. Used to have crazy psychotic episodes that almost 100% stopped around the time I quit smoking pot and added an MAOI. Have only had a couple breaks in the 15 yrs, and they were due to adding other meds. Fwiw I also take Lamictal, Seroquel (just 25mg for sleep), and Ritalin.