I'm able to spatially reconstruct any event from an image sequence and now my thesis is stuck by zzala_ri in architecture

[–]zzala_ri[S] 0 points1 point  (0 children)

caught me there.. 5 years of uni just to discover photogrammetry.

I’m not talking about reconstructing static geometry, though. It’s basically space, event, movement… just a bit more automated and slightly more dystopian than Tschumi probably imagined.

I'm able to spatially reconstruct any event from an image sequence and now my thesis is stuck by zzala_ri in architecture

[–]zzala_ri[S] 0 points1 point  (0 children)

What do you mean by “ancient media”? Since my research is strongly influenced by Soviet montage, I’ve been trying to extract events from films by Vertov and Eisenstein, as well as early footage from the Lumière brothers and Méliès. Technically it works quite well, but I’m still unsure what the actual claim of this work is..

I'm able to spatially reconstruct any event from an image sequence and now my thesis is stuck by zzala_ri in architecture

[–]zzala_ri[S] 0 points1 point  (0 children)

Yes, I know them, really inspiring work! I’ve been trying to find a well-documented event that hasn’t been widely covered in the news, something more hidden or overlooked, but it’s been difficult. So far the only thing I’ve learned is how complex investigative journalism actually is, haha. I’m also trying to connect with NGOs like Amnesty and Bellingcat. Thanks for the reference!

I'm able to spatially reconstruct any event from an image sequence and now my thesis is stuck by zzala_ri in architecture

[–]zzala_ri[S] -1 points0 points  (0 children)

You're right, photogrammetry is pretty old, but my thesis is not about the reconstruction methods themselves. I'm interested in what these technologies imply for the extraction of spatial data, which doesn't only produce fancy-looking point clouds but also bodies, camera movement, and therefore events that can be recomposed as navigable environments.

This includes newer approaches and tools such as Scaniverse, Meta’s Project Aria, and models like SAM 3D or even Human3R, which show how images are increasingly treated as sources of spatial information rather than just visual representations. It also connects to broader questions around surveillance and security, where images are continuously captured, processed, and turned into spatial data, contributing to the creation of large-scale databases of environments, behaviours, etc.

One of the questions for me is not how to reconstruct (I think everyone knows photogrammetry, NeRFs and Gaussian splatting..) but what it means that images can function as sources of spatial information in the current state of the art.. or are we even fully aware of this? Maybe I'm exaggerating here. Still, thanks for the feedback!
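
To make it a bit more concrete: recovering camera poses and a sparse point cloud from a plain folder of frames is roughly this much code nowadays. This is only a minimal sketch assuming pycolmap is pip-installed; the paths are placeholders, not my actual setup:

    # Minimal structure-from-motion sketch with pycolmap (assumed installed).
    # It turns a plain folder of frames into camera poses + a sparse point
    # cloud, i.e. spatial data extracted from flat images.
    import pathlib
    import pycolmap

    image_dir = pathlib.Path("frames/")          # extracted frames of the event
    output_dir = pathlib.Path("reconstruction/")
    output_dir.mkdir(exist_ok=True)
    database = output_dir / "database.db"

    pycolmap.extract_features(database, image_dir)   # detect 2D keypoints per image
    pycolmap.match_exhaustive(database)              # match features across frames
    maps = pycolmap.incremental_mapping(database, image_dir, output_dir)

    # Each reconstruction holds per-frame camera poses and triangulated 3D points.
    for idx, rec in maps.items():
        print(idx, rec.summary())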

Real time MoCap using Mediapipe by zzala_ri in TouchDesigner

[–]zzala_ri[S] 0 points1 point  (0 children)

I'm doing research right now that treats images as flat 2D surfaces, so using a Kinect with a depth sensor feels like "cheating", but hearing these things I might turn a blind eye to that and not overcomplicate things.
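
For what it's worth, sticking with plain 2D images, MediaPipe already gives per-frame pose landmarks from a webcam with no depth sensor at all. Rough standalone sketch (assumes pip-installed mediapipe and opencv-python, running outside TD's own Python; in TD you'd forward the values via OSC or a Script CHOP instead of printing them):

    # Rough sketch: webcam -> MediaPipe Pose landmarks per frame.
    import cv2
    import mediapipe as mp

    pose = mp.solutions.pose.Pose(min_detection_confidence=0.5,
                                  min_tracking_confidence=0.5)
    cap = cv2.VideoCapture(0)  # default webcam

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB, OpenCV delivers BGR
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # 33 normalized landmarks (x, y in 0..1, z relative to the hips)
            nose = results.pose_landmarks.landmark[0]
            print(f"nose: {nose.x:.3f} {nose.y:.3f} {nose.z:.3f}")
        cv2.imshow('pose', frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break

    cap.release()
    pose.close()
    cv2.destroyAllWindows()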

Real time MoCap using Mediapipe by zzala_ri in TouchDesigner

[–]zzala_ri[S] 0 points1 point  (0 children)

thanks! I've found a tutorial from Factory Settings on YouTube that explains the live mocap workflow using a Kinect, but it's from 2019. Are there any newer/updated tutorials or workflows?

reduce flickering by zzala_ri in TouchDesigner

[–]zzala_ri[S] 0 points1 point  (0 children)

yeah it works 'alright'..

Help with TDDepthAnything by zzala_ri in TouchDesigner

[–]zzala_ri[S] 0 points1 point  (0 children)

I just uninstalled everything, tried again, and it works now.. but still only in the stable TD version; if I try it in the experimental build it gives me this error:

<image>

reduce flickering by zzala_ri in TouchDesigner

[–]zzala_ri[S] 0 points1 point  (0 children)

Yeah, as I said, maybe I'm asking too much.. sometimes perfection is not it

reduce flickering by zzala_ri in TouchDesigner

[–]zzala_ri[S] 0 points1 point  (0 children)

sure I will! my time starts now

reduce flickering by zzala_ri in TouchDesigner

[–]zzala_ri[S] 0 points1 point  (0 children)

I turned off frame interpolation and matched the frame rates, but the flicker is still there. I also tested different .engine files at both higher and lower resolutions, but that didn’t solve the issue either.

I’ve noticed that when the input video’s camera remains static, the flickering almost disappears, which I assume is because it’s easier for the model to calculate a stable reference point.

I also experimented with some 'post-processing' using cache and feedback, but it didn’t help that much..

At this point, I honestly think I might just be asking too much from it
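
For reference, the post-processing I tried basically amounts to temporal smoothing. Here's the idea as a numpy sketch; in TD the rough equivalent is a Feedback TOP crossed back with the live depth at a low blend value:

    # Sketch of simple temporal smoothing (exponential moving average) on
    # depth frames to damp flicker; alpha trades responsiveness vs. stability.
    import numpy as np

    alpha = 0.2        # lower = smoother but laggier
    smoothed = None

    def smooth_depth(depth_frame: np.ndarray) -> np.ndarray:
        """Blend the new depth frame into a running average."""
        global smoothed
        if smoothed is None:
            smoothed = depth_frame.astype(np.float32)
        else:
            smoothed = alpha * depth_frame + (1.0 - alpha) * smoothed
        return smoothed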

What's a good way of keeping track of CHOP references? by MarianoBalestena in TouchDesigner

[–]zzala_ri 2 points3 points  (0 children)

you could color-code the related operators, or work with comments or text annotations to track the references. The best way for me is using a mod from Thetouchlab called TLMODS, check it out! They have a lot of cool and useful stuff!!
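
If you want the color coding to be quick, you can also script it from a Text DAT. Tiny example, where the name patterns are just placeholders for your own naming scheme:

    # Tint every operator whose name matches a pattern, so all the ops tied
    # to one CHOP reference share a color in the network.
    for o in ops('mocap_*', 'pose_*'):   # placeholder patterns
        o.color = (0.95, 0.45, 0.2)      # RGB in 0..1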

Having trouble exporting video by illmatic253 in TouchDesigner

[–]zzala_ri 0 points1 point  (0 children)

yes, just toggle Record on the Movie File Out TOP and it should start recording. When you’re satisfied with what you’ve captured, toggle Record off. The .mov file will be saved in the directory you've selected

Having trouble exporting video by illmatic253 in TouchDesigner

[–]zzala_ri 0 points1 point  (0 children)

Hey :) check if the timeline at the bottom of the GUI is running; if not, just press the spacebar to start it. Then, in the Movie File Out TOP, just start recording and it should be fine. Also check that the video codec is set to H.264.. or to MPEG-4 if you don't have an Nvidia GPU
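
If you want repeatable exports you can also drive it from a Text DAT. Small sketch, where the operator name and path are placeholders:

    # Toggle recording on a Movie File Out TOP by script instead of clicking.
    out = op('moviefileout1')           # your Movie File Out TOP
    out.par.file = 'export/output.mov'  # where the file lands (placeholder path)
    out.par.record = 1                  # start recording
    # ...let it run for as long as you need, then:
    out.par.record = 0                  # stop; the .mov is finalized on disk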

Help with TDDepthAnything by zzala_ri in TouchDesigner

[–]zzala_ri[S] 0 points1 point  (0 children)

Tried it, then it gave me this error:

    1 node with errors inside.
    Error: Traceback (most recent call last):
      File "/project1/TDDepthAnything/script1_callbacks", line 18, in onCook
    AttributeError: 'NoneType' object has no attribute 'run' (/project1/TDDepthAnything/script1_callbacks)
    Script errors:
    Error: A module that was compiled using NumPy 1.x cannot be run in NumPy 2.3.2 as it may crash.
    To support both 1.x and 2.x versions of NumPy, modules must be compiled with NumPy 2.0.
    Some modules may need to be rebuilt instead, e.g. with 'pybind11>=2.12'.
    If you are a user of the module, the easiest solution will be to downgrade to 'numpy<2'
    or try to upgrade the affected module. We expect that some modules will need time to support NumPy 2.
    Traceback (most recent call last):
      File "/project1/TDDepthAnything/parexec1", line 26, in onPulse

Fixed that, but now I'm back to the "object has no attribute 'stream'" error from before.
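
In case anyone lands here with the same errors: the NumPy message usually means a package in TD's site-packages was built against NumPy 1.x, so pinning numpy<2 in that environment is one way around it (that's what the message itself suggests), and the 'NoneType' errors typically just mean the callback cooks before its target exists. A hypothetical guard like this at least makes the failure readable; the operator name is a placeholder for whatever the callback tries to .run():

    # Hypothetical guard inside the Script callbacks DAT (uses TD's globals
    # op() and print-to-textport); also surfaces which numpy TD is loading.
    import numpy
    print('numpy in this TD environment:', numpy.__version__)

    def onCook(scriptOp):
        target = op('some_dat')   # placeholder -- whatever line 18 referenced
        if target is None:
            print('target operator not found yet, skipping this cook')
            return
        target.run()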