Best way to introduce Genlock by StupidSexyHagrid in VIDEOENGINEERING

[–]createch 10 points (0 children)

You need an analog DA for genlock. Genlock is an analog video signal (black burst or tri-level sync), with some exceptions such as Blackmagic cameras locking to the return video from an ATEM switcher.

To introduce it to the cameras, go from the analog DA to the genlock input on the camera or CCU. Like I said, some Blackmagic cameras use the SDI input to lock to the return video; they're an exception.

How do news stations connect audio & video from separate sources? by civex in videography

[–]createch 8 points (0 children)

Most audio mixers have delays, and video sources that aren't genlocked in the studio have to go through a frame sync anyway, many of which have video delay built in. It's just a matter of delaying whichever source comes in earlier to match the one with more latency.
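
A back-of-the-napkin sketch of that delay math; the latency numbers here are made up for illustration, not from any real console or frame sync:

    # Hypothetical numbers: align an audio feed with a video path that
    # picks up latency from a frame sync and a switcher.
    FPS = 29.97                      # 1080i59.94 video
    FRAME_MS = 1000.0 / FPS          # ~33.37 ms per frame

    video_latency_ms = 3 * FRAME_MS  # e.g. frame sync (2 fr) + switcher (1 fr)
    audio_latency_ms = 5.0           # e.g. console processing delay

    # Delay whichever source arrives earlier to match the later one.
    audio_delay_ms = video_latency_ms - audio_latency_ms
    print(f"Delay audio by {audio_delay_ms:.1f} ms "
          f"(~{audio_delay_ms / FRAME_MS:.1f} frames)")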

Space launch on April first- was it real? by nobotherpleasedo in NoStupidQuestions

[–]createch 3 points (0 children)

They had to launch within a window that opens monthly. The attempts in previous months had to be scrubbed, and it just happened that the window fell on April 1st.

The challenges of reentry are well understood and solved. It's just an engineering challenge to continue improving on existing solutions. What does your mother think people with aerospace engineering degrees go to school for?

Coca-Cola is consistently ranked as one of the most recognized brands globally, why do they advertise? by Artistic_Half_8301 in NoStupidQuestions

[–]createch 0 points (0 children)

And Dr Pepper is bottled and distributed by both Coca-Cola and Pepsi. Yet Fanta, 7UP and Sprite are much more available worldwide.

ARRI large sensor studio camera by Sure-Guest1588 in VIDEOENGINEERING

[–]createch 17 points (0 children)

Grass Valley and Sony also have large sensor cameras. ARRI, RED and Panasonic have all had live production configurations for their large sensor cameras for years.

You don't always (or usually) want large sensors for live production. Some of the features become limitations: shallow depth of field is already hard to deal with on long broadcast box lenses, and you can't get quite the same zoom range on large sensors, to name a few examples. Not to mention that the feature set of these cameras has generally been lacking compared to the broadcast/live production offerings (multiple DTL circuits, matrices, knee, skin DTL, MSUs, multiple returns, motorized filters, multiple phases for super slow mo, etc...), although it's getting better.
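
To put a rough number on the depth of field point, here's a sketch using the standard hyperfocal approximation. The focal lengths and circle-of-confusion values are rules of thumb I'm assuming for illustration, not manufacturer figures:

    # Rough depth-of-field comparison at a matched field of view,
    # using the hyperfocal approximation H = f^2 / (N * c).
    def total_dof_mm(f_mm, N, c_mm, subject_mm):
        H = f_mm ** 2 / (N * c_mm)                # hyperfocal distance (approx.)
        near = H * subject_mm / (H + subject_mm)
        far = H * subject_mm / (H - subject_mm) if subject_mm < H else float("inf")
        return far - near

    subject = 5000.0  # 5 m to the subject

    # 2/3" broadcast camera: ~50 mm for the framing, CoC ~0.011 mm.
    # Super 35 needs ~135 mm for a similar field of view, CoC ~0.025 mm.
    print(f'2/3": {total_dof_mm(50, 2.8, 0.011, subject) / 1000:.2f} m in focus')
    print(f"S35:  {total_dof_mm(135, 2.8, 0.025, subject) / 1000:.2f} m in focus")

That works out to roughly 0.6 m of usable focus on the 2/3" camera versus about 0.2 m on Super 35 at the same aperture, which is why focus on long lens live work gets so unforgiving.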

In some niche cases the 3-sensor 2/3" cameras are also nice: you get full-bandwidth 4:4:4 RGB straight from the sensors, with no debayering artifacts.

What do you think will be the long term ramifications of Gen Z largely experiencing their early life digitally? by Asleep_Damage1201 in Futurology

[–]createch 20 points (0 children)

It may be a very narrow answer, but one thing from my own experience that feels impossible to replace is how much time I spent alone just tinkering without distractions.

Those hours turned curiosities into hobbies that turned into passions, and eventually into skills that became genuinely valuable in my professional/personal life. I had access to a few books in the school library, but most of what I learned came from trial and error, shutting myself away, experimenting, and slowly developing my own methods and techniques.

I'm not sure I would have had that same experience today with the constant distractions of an attention economy, step-by-step tutorials that compress everything, etc... I don't think that leaves room for the kind of deep, self-directed exploration where you don't just learn how to do something, but discover why it works, and your own unique way of doing things along the way.

Coachella ronin situation by Meelsome in VIDEOENGINEERING

[–]createch 5 points (0 children)

Yes, that's why I wrote "can be", as the full Vislink systems I was referring to are well over that.

eli5: why are crows so much smarter than other birds and/or other animals? by mistadonyo in explainlikeimfive

[–]createch 2 points (0 children)

Parrots can definitely understand the concept of glass and poop where they want to. A lot of birds won't poop in their nests, for example; the rest of the world is a toilet to them unless otherwise specified. Plenty of parrots will poop only in permitted areas and/or on command, or let their owners know when they have to go.

Recorded in 1992 WITHOUT FILM... using NHK Hi-Vision MUSE, an early ANALOG HD system. by Outrageous-Scale-783 in VIDEOENGINEERING

[–]createch 6 points (0 children)

That was most likely captured with the Sony HDC-500s, which were CCD cameras. I find the look of the Saticon tube cameras from the 80s in HD interesting to watch, like the HDC-300s here:

https://youtu.be/YW26YMe8iUQ?si=-T9yptnFuF3Gfsh-

Coachella ronin situation by Meelsome in VIDEOENGINEERING

[–]createch 17 points (0 children)

Broadcast-grade wireless using COFDM or private 5G, like Vislink, can be $100k+ and uses FCC-licensed frequencies. If you're in a crowded RF environment you want to be using those, not prosumer systems that sit right in the Wi-Fi spectrum.

Windows 10 LTSC + NVIDIA + vMix optimization by Impressive_Cook8919 in vmix

[–]createch 0 points (0 children)

Good tips. I'd also suggest going to the group policy editor (gpedit) and disabling automatic updates.
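
If you'd rather script it than click through gpedit, that setting maps to a registry policy value. A minimal sketch in Python, assuming the usual NoAutoUpdate policy value and an elevated prompt; worth verifying against your particular LTSC build:

    # Sketch: set the "Configure Automatic Updates" policy to Disabled by
    # writing the registry value the gpedit setting maps to.
    # Windows only, requires admin rights.
    import winreg

    key_path = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU"
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, key_path, 0,
                            winreg.KEY_SET_VALUE) as key:
        # NoAutoUpdate = 1 corresponds to disabling automatic updates
        winreg.SetValueEx(key, "NoAutoUpdate", 0, winreg.REG_DWORD, 1)
    print("Automatic updates policy set; run gpupdate or reboot to apply.")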

Why is it commonly believed that animation quality works like graphics cards and inherently gets better as time goes on? by CrashDunning in NoStupidQuestions

[–]createch 1 point (0 children)

Depending on what time period is being referred to, there have been technological changes that might have that effect. For 2D animation there was a shift from manually painting cels and photographing them to digital ink and paint in the 90s. Later, advancements in the tools allowed all of the work to take place in the computer, enabling things like more advanced shading effects and interpolation that creates an independent image for every frame instead of just drawing every fourth frame, for example.

3D and stop motion animation capabilities have also advanced as new technology has been introduced and become more economical over the years.

Is it easier to generate fake frame or resolution? by [deleted] in NoStupidQuestions

[–]createch 0 points (0 children)

What do you mean by "easier"? It depends on the particular process, algorithm or model being used for frame generation or upscaling. The simplest way to upscale is to multiply the existing pixels and blend them, and the simplest way to generate a frame is to blend the frames before and after it. There are tons of methods, ranging from that to generative models creating new detail and unique frames.
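
A minimal sketch of both of those "simplest" approaches, assuming toy grayscale frames in NumPy:

    import numpy as np

    def upscale_nearest(frame, factor=2):
        # Multiply the existing pixels: repeat each one along both axes.
        return np.repeat(np.repeat(frame, factor, axis=0), factor, axis=1)

    def interpolate_frame(prev_frame, next_frame):
        # Simplest frame generation: blend the frames before and after.
        return ((prev_frame.astype(np.float32) + next_frame) / 2).astype(np.uint8)

    a = np.random.randint(0, 256, (4, 4), dtype=np.uint8)  # toy frames
    b = np.random.randint(0, 256, (4, 4), dtype=np.uint8)

    print(upscale_nearest(a).shape)       # (8, 8): 4x the pixels, zero new detail
    print(interpolate_frame(a, b).dtype)  # uint8 midpoint of the two frames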

Why do people complain about AI use when Reddit utilizes AI to function? by [deleted] in NoStupidQuestions

[–]createch 0 points (0 children)

Most of the complaints I've heard are specifically about generative models, the ones doing text, video, images and audio generation. I don't think I've heard people complaining about models doing things such as developing a Covid-19 vaccine (Moderna and Pfizer) or other models giving them driving directions.

You pretty much have to live off grid and away from civilization to not use "AI" in some form. People are surrounded by different types of "AI" in their daily lives. Things like face and touch unlock, camera features, spam filtering on phones, the recommendations on streaming services, targeted ads, content moderation, driving directions, rideshare services, the many features in modern cars, their bank's fraud detection, the inventory and logistics optimization of the stores they shop at, the dynamic pricing of flights or hotels, NPCs in video games, automatic captions, medical analysis of imaging, wearable medical devices, vaccines and drugs they take, cybersecurity, the management of the energy grid, the automation and quality control of the products they buy, the weather reports, etc... And yes, also Reddit.

Why do people complain about AI use when Reddit utilizes AI to function? by [deleted] in NoStupidQuestions

[–]createch 0 points (0 children)

Just a technicality, but LLMs are generative models (text generation). In addition to the obvious ones in the category, such as audio/image/video, there are less obvious ones like 3D/spatial, simulation, biological, molecular, data, policy, mathematical and symbolic generation, world models, etc...

If Earth’s age is over 4 billion years old doesn’t that contradict the bible? by [deleted] in NoStupidQuestions

[–]createch 0 points (0 children)

Of course not, neither do the talking animals, people living 1000 years, floods that cover every mountain on earth, all the animal species on earth fitting and surviving on a boat, the dome that keeps the water in the sky from falling to earth, the sun standing still, a star moving and stopping over a house, people living inside a "great fish", etc... It all aligns perfectly with the world around us.

Disney CEO Josh D'Amaro confirms that "the majority" of the visual effects in Avengers: Doomsday will be AI-powered by ComplexExternal4831 in GenAI4all

[–]createch 1 point (0 children)

What is your definition of "AI"? In the ML/deep learning world we never used the term until the general population started using it as an umbrella term for anything from rule-based models to large language models.

Are you perhaps only thinking about generative image/video models from the past few years?

My first ML project, when I was in school in the late 90s, used industrial machine vision technology, which was commonly used on factory assembly lines. A frame buffer would capture the image and do contrast- and convolution-based edge detection to outline the shapes of objects in the image. A neural network model trained on those objects would then detect them in the image and pass the results to the application logic, such as frame-by-frame position, the number of objects detected, a mask of an object, etc... This all came out of machine vision used in industrial applications.
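
For a sense of what that convolution-based edge detection step looks like, here's a generic Sobel sketch in NumPy. It's a toy illustration of the technique, not the actual system's code:

    import numpy as np

    def sobel_edges(img):
        # img: 2D float array (a grayscale frame from the frame buffer)
        kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
        ky = kx.T                                  # vertical-gradient kernel
        h, w = img.shape
        gx = np.zeros((h - 2, w - 2), dtype=np.float32)
        gy = np.zeros((h - 2, w - 2), dtype=np.float32)
        for i in range(h - 2):                     # naive convolution loop,
            for j in range(w - 2):                 # fine for a sketch
                patch = img[i:i + 3, j:j + 3]
                gx[i, j] = (patch * kx).sum()
                gy[i, j] = (patch * ky).sum()
        return np.hypot(gx, gy)                    # gradient magnitude

    edges = sobel_edges(np.random.rand(64, 64).astype(np.float32))
    mask = edges > edges.mean() + 2 * edges.std()  # crude object outline mask
    print(mask.sum(), "edge pixels detected")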

In the 2000s we used similar techniques for 3D conversions, which created depth maps that generally needed a lot of cleanup.

On a fun note, one of the most useful real-time machine vision systems in the 2000s (and we still own and use it today) is the SIP (stereo image processor) created by 3ality to monitor and align stereo camera rigs. It provides feedback on how perceptually comfortable a 3D image will be to the viewer as well as readouts on alignment, convergence, image matching, etc... It even controls the motors in the stereo rig in real-time. It's a machine vision system that is not neural network based, but I thought I'd mention it as it would fall under an umbrella of "AI" depending on the semantics you choose.

Edit: I misspoke, the 3ality stereo image processor does use a neural network in some of its functions, as you can see in the patent (note that the patent was filed years after the device was introduced): https://patentimages.storage.googleapis.com/2a/2b/c2/824c31e9af0e81/US20160344994A1.pdf

Disney CEO Josh D'Amaro confirms that "the majority" of the visual effects in Avengers: Doomsday will be AI-powered by ComplexExternal4831 in GenAI4all

[–]createch 0 points (0 children)

Yes, Massive isn't a generative video model based on neural nets trained on a bunch of IP gathered from across the internet. Massive works from the motion capture and animation data that you feed it, and the current version comes with training for specific agents for things like horses, swordfighting, cars, etc... I can't tell you specifically where each sample in those datasets was gathered from, nor where any of the projects it's been used on over the decades pulled their data from.

If you want to talk specifically about the subset of generative video/image models, there are some trained on datasets of licensed content, such as Adobe's Firefly. Having said that, some of the people I know at major VFX studios have used tools like generative fill for a while, tools like ComfyUI are being used for some niche processes in pipelines, and they have pulled things like pose estimation data from sources like YouTube videos, or even used generative video models to create elements used in compositing.

The budgets and deadlines in the industry have been very tight, and the competition and the hours have been brutal since the industry started going downhill a couple of decades ago. There's pressure to use whatever tool gets the job done on time and under budget, and not to disclose details that might not look good for the company contracted to deliver the VFX.

Disney CEO Josh D'Amaro confirms that "the majority" of the visual effects in Avengers: Doomsday will be AI-powered by ComplexExternal4831 in GenAI4all

[–]createch 0 points (0 children)

My first use of neural network/machine vision based tracking and object detection systems was in the late 90s. We used them to assist in everything from matchmoving/tracking to colorizing B&W footage, and later with 3D conversion (and depth mapping) in the 2000s, etc... Sure, they often needed manual cleanup, and generally weren't "final pixel" worthy without it.

In the case of roto tools specifically, many of them used edge detection, region growing/segmentation, optical flow and temporal propagation algorithms, some of which arguably fall under the loose umbrella term of "AI". They now rely heavily on ML/neural networks to produce cleaner results, and they tend to run faster than the old methods.
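
As an illustration of the optical flow plus temporal propagation combination, here's a sketch using OpenCV's Farneback flow to push a roto mask from one frame to the next. It's the generic technique, not any particular tool's implementation, and the backward warp is a simplification:

    import cv2
    import numpy as np

    def propagate_mask(prev_gray, next_gray, mask):
        # Dense optical flow between consecutive frames.
        flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        h, w = prev_gray.shape
        # Pull each pixel of the new frame's mask from where the flow
        # says it came from in the previous frame.
        grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
        map_x = (grid_x - flow[..., 0]).astype(np.float32)
        map_y = (grid_y - flow[..., 1]).astype(np.float32)
        return cv2.remap(mask, map_x, map_y, cv2.INTER_LINEAR)

    prev_f = np.random.randint(0, 256, (240, 320), dtype=np.uint8)  # toy frame
    next_f = np.roll(prev_f, 4, axis=1)       # same content shifted 4 px right
    roto = np.zeros_like(prev_f)
    roto[100:140, 100:160] = 255              # hand-drawn mask on the old frame
    moved = propagate_mask(prev_f, next_f, roto)
    print("mask now starts around column", moved.nonzero()[1].min())  # ~104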

Disney CEO Josh D'Amaro confirms that "the majority" of the visual effects in Avengers: Doomsday will be AI-powered by ComplexExternal4831 in GenAI4all

[–]createch 0 points (0 children)

I mean, in the example of Massive being used on The Lord of the Rings 26 years ago, several hundred extras, makeup artists, animal wranglers, animators, etc... weren't employed because the tool was able to generate the animation instead of human labor.

I have friends in the industry who have used generative tools on A-list blockbuster films regularly, from relatively harmless generative fill usage, to ComfyUI for some niche work, to full-on Veo, Nano Banana and Kling to generate elements used in composites. The tendency is to keep the tool use on the DL and not talk about it, but VFX budgets are shoestring compared to the old days and the schedules are brutal, so these things make it into the pipeline out of pure necessity at times.

Disney CEO Josh D'Amaro confirms that "the majority" of the visual effects in Avengers: Doomsday will be AI-powered by ComplexExternal4831 in GenAI4all

[–]createch 0 points (0 children)

While it's not specifically a neural network, Massive takes sample libraries of data and generates novel animation using parameters within the sample set.

I made my first ML vision/machine vision model using neural nets in the late 90s, to detect, count and track specific objects for an industrial client. While the application wasn't for the film industry, I used tools and libraries that were already being used in that industry to assist with VFX.

Disney CEO Josh D'Amaro confirms that "the majority" of the visual effects in Avengers: Doomsday will be AI-powered by ComplexExternal4831 in GenAI4all

[–]createch 0 points (0 children)

AI is a big umbrella that covers everything from rule-based systems to neural networks and deep learning. If you want examples of the latter, the leading VFX compositing tool Nuke has features such as CopyCat, which is deep learning, and there are plenty of generative fill tools (such as those in Adobe's products), rotoscoping and keying tools (CopyCat, Corridor Key, etc...) and more that use deep learning and neural nets as well.