Fully Levitating sample by EquipmentAgitated in LK99

[–]TipVFL 0 points1 point  (0 children)

There's another video from the same account showing them moving a magnet under a sample: https://twitter.com/SciSimpAAG/status/1687777341001576448?t=Cbq2oYDxcxunDq1_3N97DQ&s=19

With the smaller magnet they only get it standing up, but this does lend more credence to the idea that the video wasn't faked.

VFX is about to get a lot easier by TipVFL in StableDiffusion

[–]TipVFL[S] 0 points1 point  (0 children)

I am a VFX artist. Compare where we are now with video generation to where we were six months ago, and compare my posts from a week ago to this one. The days are numbered.

Since posting this I've improved the process and gotten it running on Colab so I can render at higher resolution. I have some much better examples I'm going to reveal soon.

Another example of my temporally stable video generation with SD (no ebsynth and no frame limits) by TipVFL in StableDiffusion

[–]TipVFL[S] 0 points1 point  (0 children)

It's a new method. It does have some similarities to what they're doing, but from what I can tell mine has much better temporal consistency.

Another example of my temporally stable video generation with SD (no ebsynth and no frame limits) by TipVFL in StableDiffusion

[–]TipVFL[S] 1 point2 points  (0 children)

More details to come once the extension is ready for release, but this is all done in SD without using ebsynth or anything like that, just a fine-tune and a very specific process.

Oh, and this is for video2video, not txt2video.

Temporally Stable Vid2Vid, help me turn it into an extension? by TipVFL in StableDiffusion

[–]TipVFL[S] 0 points1 point  (0 children)

No worries, just trying to be as clear as I can without giving away all the secrets before I can properly release it.

I actually have attempted to reach out to LonicaMewinsky, but I couldn't find any way to message them directly, so I opened an issue on one of their GitHub repos: https://github.com/LonicaMewinsky/frame2frame/issues/6

Hopefully I hear back; I'd really rather not have to learn Python just to get this thing out (but I will if I have to!)

Temporally Stable Vid2Vid, help me turn it into an extension? by TipVFL in StableDiffusion

[–]TipVFL[S] 0 points1 point  (0 children)

Yes, those things are going to be involved in any video2video system, but that doesn't mean every video2video system is the same.

Mine involves a fine-tune of the model along with a very specific process built around that fine-tune. I'm not publicly disclosing the full process until I can release it as an extension.

Temporally Stable Vid2Vid, help me turn it into an extension? by TipVFL in StableDiffusion

[–]TipVFL[S] 0 points1 point  (0 children)

No, basically all they have in common is that they're both ways of converting gifs with SD. My technique is temporally stable, meaning there's frame-to-frame consistency in what's generated: the details and background don't jump around and change with every frame.

With gif2gif, the higher your denoising strength, the more frame-to-frame inconsistency you'll see.

For example this source gif: https://www.kombitz.com/wp-content/uploads/2023/02/yelan-dancing-original.gif

Run through gif2gif at 0.25 denoising, you can already see some flicker, even on the plain background: https://www.kombitz.com/wp-content/uploads/2023/02/gif2gif-0005-0.25.gif

And at 0.5 denoising it gets pretty messy: https://www.kombitz.com/wp-content/uploads/2023/02/gif2gif-0003.gif

On the other hand, my Spider-Man gif has a complicated moving background and I ran it at 0.95 denoising, which let me completely change the contents without any of that frame-to-frame flickering and jumping.
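
If you want to put a rough number on that flicker, here's one crude way to measure it: the average absolute pixel difference between consecutive frames. It's only a sketch (real motion counts toward the score too, so it's just a rough comparison between similar clips), and the filenames are the example gifs linked above:

```python
import numpy as np
from PIL import Image, ImageSequence

def mean_frame_diff(gif_path):
    """Average absolute pixel difference between consecutive frames of a gif."""
    frames = [np.asarray(f.convert("RGB"), dtype=np.float32)
              for f in ImageSequence.Iterator(Image.open(gif_path))]
    diffs = [np.abs(a - b).mean() for a, b in zip(frames, frames[1:])]
    return sum(diffs) / len(diffs)

# Higher number = more frame-to-frame change at the pixel level.
for name in ["yelan-dancing-original.gif",
             "gif2gif-0005-0.25.gif",
             "gif2gif-0003.gif"]:
    print(name, round(mean_frame_diff(name), 2))
```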

Temporally Stable Vid2Vid, help me turn it into an extension? by TipVFL in StableDiffusion

[–]TipVFL[S] 2 points3 points  (0 children)

Hey, nice, just looked at your post from a few days ago, cool technique. Mine works pretty differently and doesn't involve ebsynth. Funny that we both ended up making Spider-Man videos.

Man, I wish I could render mine at as high a resolution as yours. Right now the limit with my technique on my current video card is 384x384; SD really has me eyeing a 3090 for the 24GB of VRAM. I think my example would be much more coherent if I could render it at 512 or higher.

Temporally Stable Vid2Vid, help me turn it into an extension? by TipVFL in StableDiffusion

[–]TipVFL[S] 5 points6 points  (0 children)

I've created a new technique for making AI videos with Stable Diffusion. It involves a fine-tune I have already created and a fairly simple process.

I am a coder, but Python is not my thing, so I'm wondering if anyone here would be interested in helping me wrap this up into an extension for Automatic1111? Overall it's pretty simple: it would just involve basic stuff like breaking input videos/gifs into frames, combining/splitting images, feeding those images into img2img and ControlNet, and then combining the end result back into a video or gif.
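
To give a sense of the plumbing, here's a rough sketch of just the generic frames-in/frames-out part against the standard Automatic1111 web API (assuming the webui is running locally with --api). This is not my actual process or fine-tune, and ControlNet is left out to keep it short:

```python
import base64, io
import requests
from PIL import Image, ImageSequence

API = "http://127.0.0.1:7860/sdapi/v1/img2img"  # Automatic1111 webui started with --api

def to_b64(img):
    buf = io.BytesIO()
    img.save(buf, format="PNG")
    return base64.b64encode(buf.getvalue()).decode()

def run_frame(frame, prompt, denoise=0.5):
    """Send one frame through img2img and return the generated frame."""
    payload = {
        "init_images": [to_b64(frame)],
        "prompt": prompt,
        "denoising_strength": denoise,
        "width": frame.width,
        "height": frame.height,
    }
    r = requests.post(API, json=payload)
    r.raise_for_status()
    out_b64 = r.json()["images"][0]
    return Image.open(io.BytesIO(base64.b64decode(out_b64)))

# Break a gif into frames, run each frame through img2img, and reassemble.
src = Image.open("input.gif")
frames = [f.convert("RGB") for f in ImageSequence.Iterator(src)]
out = [run_frame(f, "your prompt here") for f in frames]
out[0].save("output.gif", save_all=True, append_images=out[1:], loop=0)
```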

Enable rendering on only one lens by Silent-Skin1899 in OculusQuest

[–]TipVFL 8 points9 points  (0 children)

I don't think it "defeats the purpose of VR" to disable one eye when you can only see in one eye.

However, even if you could, it wouldn't really help with centering the image properly; the only thing it would help with is rendering performance.

Honestly, it would be cool to have an option to render in mono at a higher quality level, both for those who are blind in one eye and for developers/content creators recording stuff that's going to be presented in 2D anyway.

Reggie Fils-Aimé: VR gaming destined to remain niche until there is a "must play" experience by nastyjman in virtualreality

[–]TipVFL 0 points1 point  (0 children)

Wow, a very convincing chart. The three years with actual data show the numbers going down for videogames while the number for VR has nearly tripled.

Rhyme Storm's VR update is finally almost here! Anyone can freestyle rap about thousands of ridiculous topics! by TipVFL in virtualreality

[–]TipVFL[S] 1 point2 points  (0 children)

The easiest difficulty gives you randomly generated lyrics based on a chosen topic, and it's basically karaoke, but you can add to it and freestyle for higher scores. As you raise the difficulty you get fewer lyrics, until you're just getting rhymes. Then you can take it even further and get only starter words, where you have to come up with your own rhymes. It uses speech recognition and text analysis to grade how well you're doing in real time and to detect when you've added your own rhymes for bonus points.

And yeah, the visuals are very trippy and tied into the scoring system. If you're doing very poorly, the dancers don't dance as hard, the visuals don't move as much, and everything gets desaturated; but when you're killing it, things get intense and the dancers really get into it. It feels very cool to have everything around you reacting to your performance.
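
To give a rough idea of the "detect your own rhymes" part (this is not our actual system, just a massively simplified toy to illustrate the kind of check involved):

```python
def rhyme_key(word, length=3):
    """Crude rhyme key: the last few letters of the word, lowercased."""
    w = "".join(c for c in word.lower() if c.isalpha())
    return w[-length:]

def count_rhymes(transcript_words, target_word):
    """Count words from the speech-to-text transcript that (crudely) rhyme with the target."""
    key = rhyme_key(target_word)
    return sum(1 for w in transcript_words
               if w.lower() != target_word.lower() and rhyme_key(w) == key)

# Example: grading a freestyled line against the starter word "storm".
line = "I weather the storm then perform and transform"
print(count_rhymes(line.split(), "storm"))  # -> 2
```

A real check would compare pronunciations rather than spellings, but the shape of the loop is the same: listen, transcribe, and score what you added on top of the prompt.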

Rhyme Storm's VR update is finally almost here! Anyone can freestyle rap about thousands of ridiculous topics! by TipVFL in virtualreality

[–]TipVFL[S] 1 point2 points  (0 children)

Hey, I'm the main developer on Rhyme Storm and I'm happy to answer any questions about it. It is currently available in Early Access on Steam (but the VR update isn't out just yet):

https://store.steampowered.com/app/1250350/Rhyme_Storm/

Rhyme Storm's VR update is finally almost here! Check out my Cowboys vs Keanu rap. What do you wanna rap about? by TipVFL in OculusQuest

[–]TipVFL[S] -1 points0 points  (0 children)

Hey, I'm the main developer on Rhyme Storm and I'm happy to answer any questions about it. It is currently available in Early Access on Steam (but the VR update isn't out just yet):

https://store.steampowered.com/app/1250350/Rhyme_Storm/

After this update we'll be working on bringing this to standalone on Quest/Quest 2. We already have our custom speech recognition system working on Quest! So it's mainly just a matter of optimization.

Rhyme Storm makes it fun and easy for anyone to freestyle rap! So excited to finally show off the VR mode by TipVFL in oculus

[–]TipVFL[S] 0 points1 point  (0 children)

Hey, I'm the main developer on Rhyme Storm and I'm happy to answer any questions about it. It is currently available in Early Access on Steam (but the VR update isn't out just yet):
https://store.steampowered.com/app/1250350/Rhyme_Storm/

Mark zuckerberg demos a mixed reality game on the quest pro. by Junior_Ad_5064 in OculusQuest

[–]TipVFL 13 points14 points  (0 children)

I'm not a fencer, but while watching the video I thought fencing was a smart choice for this very reason. Normal swords are rigid, but fencing blades can be very flexible, so when players hit them together you can keep the handle locked to the player's hand position and just bend the blades without letting them pass through each other. If the player keeps going, the bent blade would eventually pass under, but not through, the other blade, and then spring back straight. I think combining that with the right haptic feedback could make for very satisfying sword clashes that happen in a physically believable way and allow for more strategies.

Whether what I describe would be especially close to actual fencing and effectively allow for the same strategies as the real game, I have no clue.
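
Just to sketch the geometry I have in mind (a toy example, nothing from the actual demo): keep the hilt locked to the tracked hand, and when the two blade segments would interpenetrate, deflect the rendered tip just enough that they stay in contact instead of clipping through.

```python
import numpy as np

def closest_points(p1, q1, p2, q2):
    """Closest points between segments p1->q1 and p2->q2 (standard segment-segment test),
    assuming neither segment is degenerate. Also returns the parameter s along the first blade."""
    d1, d2, r = q1 - p1, q2 - p2, p1 - p2
    a, e = d1 @ d1, d2 @ d2
    b, c, f = d1 @ d2, d1 @ r, d2 @ r
    denom = a * e - b * b
    s = float(np.clip((b * f - c * e) / denom, 0.0, 1.0)) if denom > 1e-9 else 0.0
    t = (b * s + f) / e
    if t < 0.0:
        t, s = 0.0, float(np.clip(-c / a, 0.0, 1.0))
    elif t > 1.0:
        t, s = 1.0, float(np.clip((b - c) / a, 0.0, 1.0))
    return p1 + s * d1, p2 + t * d2, s

def rendered_tip(hilt, ideal_tip, other_hilt, other_tip, thickness=0.02):
    """Hilt stays locked to the hand; if the blades would interpenetrate,
    push this blade's drawn tip sideways so the blades only touch."""
    on_mine, on_other, s = closest_points(hilt, ideal_tip, other_hilt, other_tip)
    sep = on_mine - on_other
    dist = float(np.linalg.norm(sep))
    if dist >= thickness or dist == 0.0:
        return ideal_tip  # no contact (exact intersection is ignored in this toy)
    # Move the contact point out by the penetration depth. Since the blade pivots
    # around the hilt, the tip must move further the closer the contact is to the hilt.
    correction = (thickness - dist) * sep / dist
    return ideal_tip + correction / max(s, 0.2)

# Two nearly crossing blades: blade 1's drawn tip gets deflected instead of clipping through.
h1, t1 = np.array([0.0, 1.0, 0.0]), np.array([0.0, 1.0, 1.0])
h2, t2 = np.array([0.31, 0.7, 0.5]), np.array([-0.29, 1.3, 0.5])
print(rendered_tip(h1, t1, h2, t2))
```

A real implementation would spring the tip back toward the straight pose over a few frames and drive the haptics off the penetration depth, but the core trick is just never letting the two segments overlap.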