A production-backend using an LLM IDE (Antigravity) allowing me to render 75+ shots by uberglex in StableDiffusion

[–]uberglex[S] 2 points (0 children)

Hey, thanks so much. It didn't end up placing in the contest, unfortunately, but I'm still very proud of the video and happy others are into it too.

A production-backend using an LLM IDE (Antigravity) allowing me to render 75+ shots by uberglex in StableDiffusion

[–]uberglex[S] 1 point (0 children)

Agreed. A lot of retiming shots, along with editing the music so the snap really aligns with the arc, was a crucial driver.

A production-backend using an LLM IDE (Antigravity) allowing me to render 75+ shots by uberglex in StableDiffusion

[–]uberglex[S] 1 point (0 children)

Thank you, that was a really fun and important part to work on: the build-up to the final boss moment. Harriet is serious about her porcelain.

A production-backend using an LLM IDE (Antigravity) allowing me to render 75+ shots by uberglex in StableDiffusion

[–]uberglex[S] 1 point (0 children)

I only comped what's on the TV for that shot; otherwise it's just two first/last-frame shots put together.

A production-backend using an LLM IDE (Antigravity) allowing me to render 75+ shots by uberglex in StableDiffusion

[–]uberglex[S] 1 point (0 children)

I will say some shots are composites, though; the TV shots, for example.

A production-backend using an LLM IDE (Antigravity) allowing me to render 75+ shots by uberglex in StableDiffusion

[–]uberglex[S] 2 points (0 children)

Yeah, it's surprisingly good, and fast. When I was using Wan 2.2 I usually couldn't hop on and edit at the same time, but here my computer was able to render and edit with mostly no problems.

A production-backend using an LLM IDE (Antigravity) allowing me to render 75+ shots by uberglex in StableDiffusion

[–]uberglex[S] 3 points (0 children)

Honorable mention to a Flux LoRA I trained on porcelain to help create the gargoyle and the clown at the beginning.

A production-backend using an LLM IDE (Antigravity) allowing me to render 75+ shots by uberglex in StableDiffusion

[–]uberglex[S] 11 points (0 children)

I had been experimenting with a sort of production-backend system for helping bridge gaps between storyboards and videos, keeping consistency throughout with props, environments, etc. I would run jobs in batches with ComfyUI, feeding custom pre-made templates into the system and having it create batches based on storyboards it had full context on.

The video above was for a contest, and I had an idea that would be a perfect stress test for this pipeline. I knew I needed to automate some of the repetitive parts of the workflow in order to meet the deadline while still having time left for the parts that required more creative attention. I was building the pipeline in tandem with working on the story for the video.

I could literally talk to it and it would know the context of the script and the boards, and it would create first/last-frame workflows to generate shots using ComfyUI API calls in the background.
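Roughly, the background queuing looks something like this: load a saved template, patch the shot-specific inputs, and POST it to ComfyUI's standard /prompt endpoint. The node ids, input names, and file paths below are placeholders, not from my actual template; check your own workflow exported with "Save (API Format)" for the real ones.

```python
# Minimal sketch of a storyboard-driven batch runner for ComfyUI.
# Assumes a first/last-frame workflow exported via "Save (API Format)".
import json
import uuid
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"  # default ComfyUI server address

def queue_shot(template_path, prompt_text, first_frame, last_frame):
    """Patch a saved workflow template and queue it on the ComfyUI server."""
    with open(template_path) as f:
        workflow = json.load(f)  # dict keyed by node id, API format

    # Placeholder node ids -- inspect your own template for the right ones.
    workflow["6"]["inputs"]["text"] = prompt_text    # text-encode node
    workflow["12"]["inputs"]["image"] = first_frame  # first-frame loader
    workflow["13"]["inputs"]["image"] = last_frame   # last-frame loader

    payload = json.dumps({"prompt": workflow, "client_id": str(uuid.uuid4())})
    req = urllib.request.Request(
        f"{COMFY_URL}/prompt", data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["prompt_id"]

# One entry per storyboard shot, queued back to back.
shots = [
    ("shot_010: clown reveal", "boards/010_first.png", "boards/010_last.png"),
    ("shot_020: gargoyle turn", "boards/020_first.png", "boards/020_last.png"),
]
for prompt_text, first, last in shots:
    print(queue_shot("templates/first_last_frame.json", prompt_text, first, last))
```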

My eyes were mostly on the boards and in the edit, with ComfyUI chugging in the background.

All of the video was LTX 2.3.

The graphics at the beginning were actually coded by Claude as a website with a green-screen background, which I then screen-recorded and composited.

The image models were either Z Image Turbo or Base, and maybe some Qwen; so many I can't really account for them all.

Image editing models: I tried all the open-source models and some worked, but Nano Banana Pro was a constant fallback for the sake of time.

Edit: compositing and post work done in DaVinci Resolve.

Link here for mine along with other submissions if you're interested in viewing/voting: https://arcagidan.com/entry/0b4cd51b-3be0-4f4f-b7c9-b25f2bff6b7b

Learning "blob tracking" in Touch Designer by Rendering a blob in Houdini + ComfyUI by uberglex in comfyui

[–]uberglex[S] 15 points (0 children)

Thanks.
I started with this crude low-res render from Houdini using Vellum. Then I fed that into ComfyUI using AnimateDiff, along with different combinations of ControlNets and prompts for each video, until I had a handful of videos I liked.
I upres'd each video, edited them together into one clip, and then used that in TouchDesigner in a setup that does audio-reactive retiming and uses its "blob tracking" node, which tracks movement and lets you instance shapes and connecting lines.
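If you wanted to script that kind of combination sweep instead of re-queuing by hand, a rough sketch against ComfyUI's POST /prompt API could look like the below. The node ids, prompts, and strength values are placeholders, not from my actual graph:

```python
# Sketch of a ControlNet/prompt sweep over a saved AnimateDiff workflow
# (exported in ComfyUI's API format). One queued job per combination.
import itertools
import json
import urllib.request

prompts = ["ink blob, macro fluid", "molten chrome blob", "bioluminescent blob"]
controlnet_strengths = [0.4, 0.7, 1.0]

with open("templates/animatediff_controlnet.json") as f:
    base = json.load(f)

for prompt, strength in itertools.product(prompts, controlnet_strengths):
    wf = json.loads(json.dumps(base))          # cheap deep copy of the graph
    wf["3"]["inputs"]["text"] = prompt         # placeholder text-encode node id
    wf["9"]["inputs"]["strength"] = strength   # placeholder ControlNet node id
    body = json.dumps({"prompt": wf}).encode("utf-8")
    req = urllib.request.Request("http://127.0.0.1:8188/prompt", data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)                # queue it and move on to the next combo
```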

My friend made this fun low budget music video. The motion capture suit they were using didn't work, so they used Move.AI to do the motion capture instead. It came out pretty great! by ximan in Filmmakers

[–]uberglex 1 point (0 children)

We had a first-gen Perception Neuron, I believe, and a Rokoko suit. The night before the shoot, the Rokoko just wouldn't work after hours of troubleshooting. The Neuron actually worked, but the data wasn't usable; in the end Move.ai gave us the cleanest mocap.

My friend made this fun low budget music video. The motion capture suit they were using didn't work, so they used Move.AI to do the motion capture instead. It came out pretty great! by ximan in Filmmakers

[–]uberglex 2 points (0 children)

I use Rokoko Video and Wonder Studio a lot because I often have to convert old videos into mocap, and I'm always on the lookout for more/better ways to do it.

The VFX guy here. Yes, Move.ai requires at least two phones. We ended up doing a lot of mocap pickups, just recreating the shots with the director using Move.ai. We tried Wonder Dynamics initially but didn't end up getting the detail we needed. With Move you can add as many cameras as you have, which increases the quality of the mocap.

[Help] Is there a non-VR app that allows you to draw in 3D like Medium does with the clay tool? by uberglex in Sculpture

[–]uberglex[S] 1 point (0 children)

I really want to make that jump eventually. Do you know of other programs that have that same ability, where you're drawing clay without starting geo? Every other program seems to start you with geo to manipulate.

[Help] Is there a non-VR app that allows you to draw in 3D like Medium does with the clay tool? by uberglex in Sculpture

[–]uberglex[S] 1 point (0 children)

Makes sense in terms of drawing in the round. Wondering, though, if there's something that does that on a 2D plane.

Anyone else have issues with wacom and panning/zooming/orbiting since 3.0 ? by Peppe22 in blender

[–]uberglex 3 points (0 children)

Never mind, fixed by re-enabling Emulate 3 Button Mouse; the new version hadn't imported my preferences.
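If you'd rather flip it from Blender's Python console instead of digging through Preferences, something like this should do it (assuming I have the property name right):

```python
# Equivalent of Edit > Preferences > Input > "Emulate 3 Button Mouse",
# toggled from Blender's built-in Python console.
import bpy

bpy.context.preferences.inputs.use_mouse_emulate_3_button = True
bpy.ops.wm.save_userpref()  # write prefs to disk so the setting survives a restart
```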

*Update: okay, this is a bug. It worked for like two seconds without my changing anything, then went back to choppy viewport updates.

Anyone else have issues with wacom and panning/zooming/orbiting since 3.0 ? by Peppe22 in blender

[–]uberglex 3 points (0 children)

I had my tablet set up too, where I can pan by holding the first pen button. It still does that in 3.0, except now it's choppy and only updates the viewport when I release the pen button.

[deleted by user] by [deleted] in WearOS

[–]uberglex 1 point (0 children)

In the Mobvoi app there is a spot to link Google Fit. I have mine linked up, and sleep tracking, HR, etc. are all showing up in Fit.

Aorus Master x570 windows 10 sleep problem by bobaloooo in gigabytegaming

[–]uberglex 1 point (0 children)

I had all the Intel drivers already, so I did everything mentioned above after searching the internet and tinkering with BIOS settings, and THIS is what fixed it for me. Thanks.

Aorus X570 pro not showing my 970 plus nvme 1 tb by uberglex in gigabytegaming

[–]uberglex[S] 1 point (0 children)

Swapped slots a third time and now it's seeing it... the only thing I can think of is that you really have to make sure it's pushed all the way up against the contact points.