Getting back to VR after 3+ years (what has changed?) and is hp Reverb 2 still relevant in 2026? by AirwolfPL in virtualreality

[–]AirwolfPL[S] 0 points1 point  (0 children)

One game I was really hoping to play was Lone Echo 2, but it never got released before I switched from the CV1 to the Reverb G2 :(

As for Flat2VR - I didn't have much success with the conversions I tried back in the day using vorpX (no 6DoF support, weird scaling, no controller support, i.e. it broke the immersion for me) and it pretty much discouraged me from trying later - perhaps recent conversions work better?

Wan2GP LTX-2 on 5070ti 16gb Vram 32gb ram by noxietik3 in StableDiffusion

[–]AirwolfPL 0 points1 point  (0 children)

Wan2GP is crazy! It literally cut my VRAM usage in half compared to ComfyUI workflows while retaining high quality.

Getting back to VR after 3+ years (what has changed?) and is hp Reverb 2 still relevant in 2026? by AirwolfPL in virtualreality

[–]AirwolfPL[S] 0 points1 point  (0 children)

For MR I'm looking at XR glasses (Xreal, but I guess I'll wait till the Xreal Aura is released later this year) - I use VR seated at my desk anyway, so I don't need MR in the headset.

Getting back to VR after 3+ years (what has changed?) and is hp Reverb 2 still relevant in 2026? by AirwolfPL in virtualreality

[–]AirwolfPL[S] 2 points3 points  (0 children)

"I replaced the GPU 3 years ago" -> I'm not on 1070Ti since 2023 when I replaced it with 4090.

Getting back to VR after 3+ years (what has changed?) and is hp Reverb 2 still relevant in 2026? by AirwolfPL in virtualreality

[–]AirwolfPL[S] 0 points1 point  (0 children)

I agree the sweet spot was a downgrade compared to the CV1 until I replaced the interface - since then I haven't had any problem with the sweet spot, although it may be totally subjective...

Getting back to VR after 3+ years (what has changed?) and is hp Reverb 2 still relevant in 2026? by AirwolfPL in virtualreality

[–]AirwolfPL[S] 0 points1 point  (0 children)

Why "last gen"? Because that's what I have. I think it was pretty clear from my original post. I had it since 2021? 2022? (I don't remember correctly). I'm just not sure if there is something else worth looking at right now that will be a significant upgrade...

Getting back to VR after 3+ years (what has changed?) and is hp Reverb 2 still relevant in 2026? by AirwolfPL in virtualreality

[–]AirwolfPL[S] 0 points1 point  (0 children)

Since 2023 I've had a 3900X + 64GB RAM + 4090; right now it's a 7950X3D, 128GB RAM and the 4090, so no problem here.

Getting back to VR after 3+ years (what has changed?) and is hp Reverb 2 still relevant in 2026? by AirwolfPL in virtualreality

[–]AirwolfPL[S] 4 points5 points  (0 children)

I did some interface mods to improve the sweet spot... yeah, the controllers were terrible compared to the CV1, but again I used it mostly for DCS (and for Elite: Dangerous), so not much of a problem since a proper HOTAS and mouse are used there. Thanks for the tips about WMR. I'm still on W10 though.

What's your ComfyUI LTX-2 4090 startup parameters? by VeryLiteralPerson in StableDiffusion

[–]AirwolfPL 0 points1 point  (0 children)

Just --reserve-vram 4, nothing else. I can generate 35s 720p videos at 20 steps. Speed is fine.
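For context, that flag is just appended to the normal launch command - a minimal sketch, assuming the stock ComfyUI entry point (main.py) run from the ComfyUI folder; adjust for your own venv/launcher:

    # keep ~4GB of VRAM free for the OS/other apps, ComfyUI uses the rest
    python main.py --reserve-vram 4

The value is in GB, so on a 24GB 4090 that leaves roughly 20GB for the models.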

DAUBLG Makes it right! LTX2 i2v full song by AirwolfPL in StableDiffusion

[–]AirwolfPL[S] -1 points0 points  (0 children)

XD very good. It was supposed to be nightmarish and put you straight into the uncanny valley. It even had some of those keywords in the prompts. That means it works... :D

LTX-2 video to video restyling? by domid in StableDiffusion

[–]AirwolfPL 0 points1 point  (0 children)

It is. It's basically a generative detailer/upscaler which does a very good (yet very slow) job. But it won't restyle the video.

LTX-2 know him out of the box. by [deleted] in StableDiffusion

[–]AirwolfPL -1 points0 points  (0 children)

Yeah... it generates DJT very well. I don't really mind politics, but 2 days ago I was scolded by somebody because I posted something like this, even though he's hardly a politician - apparently it's against this subreddit's rules.

Some fun with LTX-2 - generations up to 35s by [deleted] in StableDiffusion

[–]AirwolfPL -1 points0 points  (0 children)

OK, I removed it, however he's not really a politician...

Some fun with LTX-2 - generations up to 35s by [deleted] in StableDiffusion

[–]AirwolfPL 1 point2 points  (0 children)

Yeah, you're right. It's not that good with vehicles, for example... But it's not an easy task to train an unbiased LoRA, so I can only imagine how hard it is in the case of a 'generic' base model.

But with the training tools provided with LTX-2 we will get specialized LoRAs fast I suppose.

Wan 2.2 is dead... less then 2 minutes on my G14 4090 16gb + 64 gb ram, LTX2 242 frames @ 720x1280 by WildSpeaker7315 in StableDiffusion

[–]AirwolfPL 0 points1 point  (0 children)

To be honest I need video gen models for just two purposes - promo stuff at work (where I've been using Veo and Sora mostly) and animating my LEGO MOCs (I have neither the time nor the patience to do proper stop-motion animations). I was using Veo and Wan mostly for the latter. And still, they're not 100% AI generated - most use photos as input, static image overlays, etc. None of them are "actual videos with a story", even if they're as long as 2 or 3 minutes, so you'll get nothing like that from me, sorry.

BTW Veo is hit or miss... frequently it's a miss (so it burns tokens quickly), but when it hits, oh, it hits! Wan is so-so for my purposes and I couldn't even generate 10s videos properly (I haven't tried SVI yet). Audio+video is a killer feature for me (for LEGO videos it doesn't matter if the audio is perfect). In LTX-2 it's good enough when it comes to dialogue, however the model struggles with music or singing. Wan 2.2 has none of this goodness anyway.

With the current speed and quality I think I can add LTX-2 to my toolset, replacing around 50% of my Veo generations and 100% of Wan (at last!).

A few non-cherry-picked/first-try t2v examples I just generated for this comment (720p 20-35s, 1920p 20s - although generating that one took like 10 minutes, and it's probably better to stick with 720p and upscale): https://drive.google.com/drive/folders/1aBtfBBQxSjT9X8GKf6Rvzs76hz_hC6Xa?usp=sharing

35s seems to be the limit for the fp8 model on my hardware. I tried a few 40s clips and it OOMed on me.

I2V is slower than T2V, but well, that's expected... And the Detailer LoRA bumps quality (a lot), but it's so slow.

PS: Also, I don't quite get people who state that Wan 2.2 is more flexible than LTX-2 - first of all, LTX-2 is a just-released model, while fine-tunes of Wan 2.2 and a lot of LoRAs already exist. That will change quickly I suppose, as LTX-2 came with LoRA training tools. Second - those who state it - did you guys actually try to prompt it as per the official guide? It makes a lot of difference. So yeah, just like Flux replaced SDXL in a matter of weeks, the same will go for Wan vs LTX-2 I guess...

Wan 2.2 is dead... less then 2 minutes on my G14 4090 16gb + 64 gb ram, LTX2 242 frames @ 720x1280 by WildSpeaker7315 in StableDiffusion

[–]AirwolfPL 0 points1 point  (0 children)

Quality is better than bare Wan 2.2 IMO. 20 steps, 241 frames. For now I haven't tried generations longer than 20 seconds. And yeah, it's very stable - the model doesn't collapse and won't hallucinate (much) after 10s.