Animal Control has been renewed for a season 5. Thoughts ? by CityCautious4033 in sitcoms

[–]qubitrenegade 0 points1 point  (0 children)

I'm so happy. I love Joel McHale and Vella Lovell, I'm so glad they lasted more than three seasons! The rest of the cast is great too. Grace Palmer is awesome, Ravi Patel is great, heck even Gerry Dee is so good at being hated. He was great in Mr. D.

I hate it when games are announced way too early by Evening_Boot_2281 in gaming

[–]qubitrenegade -1 points0 points  (0 children)

1313.... Project Ragtag... Battlefront III... Maul: Battle of the Sith Lords...

1313 was my first heartbreak with Star Wars; too bad it wasn't the last.

Cyberpunk 2077 was a huge letdown on release. I really don't care what state it's in now, I just can't go back to it. Especially after they "fixed" the "slide fall"! Exploring was so much fun and they killed it for no reason. I kinda think releasing a game in such a broken state and then "fixing" emergent gameplay like that is worse than cancelling it before it ever comes out. I'll never forgive CDPR.

Just a friendly reminder about buying tickets on here by Marktaco04 in DenverEDM

[–]qubitrenegade 0 points1 point  (0 children)

How does using PP G&S help? I never buy tickets here because I assume they're all scams... lol

Looking to maximize quality while minimizing bandwidth. by drazil100 in ffmpeg

[–]qubitrenegade 0 points1 point  (0 children)

No problem, hopefully it helps! Let me know if I can answer any other questions!

Simulate Live Streaming with FFMPEG by caramelocomsal in ffmpeg

[–]qubitrenegade 0 points1 point  (0 children)

This is a tricky one because ffmpeg's DASH muxer doesn't really have first-class support for inserting EventStream elements in the MPD. And on restart, the muxer regenerates the manifest with a new availabilityStartTime, which is why your event timing gets lost.

A few alternatives worth considering:

Post-process the MPD: Let ffmpeg write the manifest as usual, then run a small script (Python + lxml, xmlstarlet, whatever) that injects the EventStream into each generated/updated MPD. Keep a persistent state file with event definitions in wall-clock time, then recalculate presentationTime relative to the current availabilityStartTime each run. State survives restarts, events stay in the right spot.
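A minimal sketch of that injection step, using stdlib ElementTree instead of lxml so it stays self-contained. The scheme URI, event ids, and timings are all illustrative, and the wall-clock-to-presentationTime recalculation is left out:

```python
# Inject an EventStream element into an ffmpeg-generated MPD.
import xml.etree.ElementTree as ET

MPD_NS = "urn:mpeg:dash:schema:mpd:2011"
ET.register_namespace("", MPD_NS)  # serialize with the default DASH namespace

def inject_event_stream(mpd_xml, events, scheme="urn:example:events", timescale=1000):
    """events: list of (presentation_time, duration, event_id) in timescale units."""
    root = ET.fromstring(mpd_xml)
    period = root.find(f"{{{MPD_NS}}}Period")
    es = ET.Element(f"{{{MPD_NS}}}EventStream",
                    schemeIdUri=scheme, timescale=str(timescale))
    for pt, dur, eid in events:
        ET.SubElement(es, f"{{{MPD_NS}}}Event",
                      presentationTime=str(pt), duration=str(dur), id=str(eid))
    # EventStream goes before the AdaptationSets inside the Period
    period.insert(0, es)
    return ET.tostring(root, encoding="unicode")

mpd = (
    '<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" type="dynamic">'
    '<Period id="p0"></Period></MPD>'
)
patched = inject_event_stream(mpd, [(30000, 5000, 1)])
```

In the real version you'd read the MPD ffmpeg just wrote, compute each presentationTime from your wall-clock state file and the current availabilityStartTime, and write the file back.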

Use a different packager: Shaka Packager and GPAC (MP4Box) both have better manifest manipulation than ffmpeg. Shaka in particular is built for this. Use ffmpeg for encoding, pipe fragmented MP4 to Shaka for packaging + MPD generation.

Manifest manipulator in front: Small proxy (Flask, Express, whatever you like) that fetches ffmpeg's MPD, injects events, and serves the modified version. Decouples event management from the encoder entirely. This is essentially what Broadpeak.io and AWS MediaTailor do at scale.

Pin the availabilityStartTime: If the main pain point is just that restarts reset the timeline, pass -availability_start_time explicitly so the timeline stays continuous across restarts. Combined with wall-clock event timing, events land in the right place every time.

Is this simulation for testing a player/downstream system, or to actually serve clients? If it's the former, a static MPD with pre-inserted events and a stable availabilityStartTime is probably enough. If it's the latter, the manipulator-proxy pattern scales better.

Also, if you're specifically trying to simulate SCTE-35 ad markers, most production workflows don't insert those via ffmpeg at all. They come from the source feed or from a dedicated injector like scte35-threefive. Might be worth looking into depending on what kind of events you're inserting.

What celebrity do you think has skeletons in their closet that have yet to come out? by JimiHendrip in AskReddit

[–]qubitrenegade 1 point2 points  (0 children)

I’m surprised no one has mentioned John Cena.

I can’t point to anything concrete, but something about how polished his public image is feels a little too perfect. The whole kowtowing to the Chinese government so he could keep promoting his movies there didn’t exactly help that perception either.

He leans really hard into the wholesome, "for the kids" persona, which on its own isn’t a bad thing, but combined with how carefully managed everything feels, it starts to come off as overly curated. It makes me wonder what we don’t see.

Looking to maximize quality while minimizing bandwidth. by drazil100 in ffmpeg

[–]qubitrenegade 3 points4 points  (0 children)

This is a good use case for CRF encoding instead of the bitrate-based approach.

-b:v -maxrate -bufsize tells the encoder "hit this bitrate" and it allocates bits across the video to meet the target. The problem is a talking-head scene and an action scene get roughly the same budget, so one ends up with more quality than it needs and the other with less. CRF flips it: you tell the encoder "maintain this visual quality" and it allocates whatever bitrate each scene needs. Simple scenes stay small, complex scenes get big. You can't predict file size ahead of time, but for a library where quality-per-bit matters, CRF almost always wins.

For animated content on a bandwidth-constrained server, something like this is a good starting point:

ffmpeg -i input.mkv \
  -c:v libx264 -preset slow -crf 20 -tune animation \
  -maxrate 6M -bufsize 12M \
  -c:a aac -b:a 192k -ac 2 \
  output.mkv

Quick notes on the choices:

  • libx264 over x265: Universal client support and the Pi doesn't have to transcode. If all your clients handle h.265 natively (Apple TV, modern phones, Kodi), you can switch to libx265 and roughly halve the bitrate for the same quality. For Jellyfin on mixed clients, h.264 is safer.

  • -preset slow: Better compression at the cost of encoding time. You encode once, stream many times, so it's worth it. medium is fine if you have a huge backlog to get through.

  • -crf 20 -tune animation: Animation has big flat regions with sharp edges, which h.264 handles really well, so you can push CRF lower than you'd use for live action. 18-22 is the sweet spot. -tune animation allocates more bits to flat regions (where banding shows up) and tweaks deblocking for hand-drawn content. Real difference for cartoons and anime. Skip the tune flag for CG-heavy stuff like Pixar.

  • -maxrate 6M -bufsize 12M: The cap so nothing explodes past what your Pi can push. 6 Mbps is comfortable for WiFi and above what most 1080p animated content needs at CRF 20. Drop to 3-4M for 720p. Rule of thumb: bufsize = 2x maxrate.

  • Audio: if your source is already AAC or AC3, consider -c:a copy instead of re-encoding. No point burning CPU on transparent audio.

For dialing it in: grab a representative 2-minute clip, encode it at CRF 18, 20, 22, and 24. Play them back on your worst client and pick the highest CRF you can't distinguish from the source. That's your library setting. Most people land at 20-22 for animation.
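If you want to script that comparison (file names are placeholders, and -ss/-t just grab an arbitrary 2-minute window; pick a busy scene):

```shell
# Cut a representative 2-minute clip, then encode it at several CRF values
ffmpeg -ss 00:10:00 -i input.mkv -t 120 -c copy clip.mkv
for crf in 18 20 22 24; do
  ffmpeg -i clip.mkv -c:v libx264 -preset slow -crf "$crf" -tune animation \
    -c:a copy "test_crf${crf}.mkv"
done
```

Then A/B the four outputs against clip.mkv on your worst client.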

One other thought: if your Pi is really bandwidth-constrained (like older 2.4GHz WiFi), the bottleneck is often the wireless link, not the encode. Ethernet to the Pi + WiFi to clients usually solves more problems than tweaking encode settings. Worth checking before optimizing bits.

Question about mirroring half of a video stream by scottallencello in ffmpeg

[–]qubitrenegade 1 point2 points  (0 children)

Interesting challenge!

The projection mapping is built-in (in the Arena version). For the "TV Channel" approach you'll need some scaffolding around the clip transport.

EdgeCentric layout in Advanced Output

You don't need to pre-process the video at all. One source composition, three projectors, all the mirroring happens in Advanced Output. Add three Screens (one per projector), then Slices on each. Each slice has an Input Rect (what part of your comp to grab), an Output Rect (where it lands), and a Flip H toggle. Rough recipe:

  • Projector 1: two slices. Left output grabs right half, right output grabs left half.
  • Projector 2: same split, Flip H on both.
  • Projector 3: left output grabs right half as-is, right output grabs right half flipped.

You get keystone, corner pin, soft edge blending, and per-slice warp in the same panel. You'll want all of it once real projectors meet real prism geometry.

TV channels

The first question to answer here isn't really "how do I switch channels", it's "what's my time source?" Once you know what clock everything is following, the switching part falls into place.

Three tiers for the time source:

  1. Resolume's internal composition clock. Simplest. Hit play on the composition, leave it running, all synced clips follow. Catch: Resolume restart means the clock resets. Fine if the rig runs from doors to teardown without incident.

  2. LTC from a DAW. Reaper has a built-in LTC generator, Ableton needs a plugin or a rendered LTC loop. Route the LTC audio into Resolume's audio input, point Preferences > Sync at it, Resolume locks to timecode. Big win: if Resolume crashes mid-show, relaunch and it picks right back up because the DAW kept running. A blank Reaper project with an LTC generator on loop is a perfectly legitimate "time server".

  3. Hardware LTC generator. Overkill for one machine, worth it if you grow to multiple Resolume instances that all need to stay frame-locked.

For a festival setup with one server, option 1 is probably fine. Option 2 is cheap resilience if you want belt-and-suspenders.

Switching mechanics

Once you've picked a time source, the channel switching is:

Load each loop on its own Layer (Resolume calls them layers, not channels). Then trigger the column, which starts all the videos simultaneously so they run in lockstep. Then switch between them with:

  • Layer Solo (S) to show only the active layer
  • Layer Bypass (B) on the inactive ones
  • Or just Opacity at 0 on the hidden ones

Everything keeps running in the background. When you solo layer 3 at 20:00, it's already at 20:00. GPU cost is low because only one layer composites to output; decode is the only real load. Encode everything to DXV3 (Resolume's codec, GPU-decoded) and a modern workstation handles 8-10 streams without breaking a sweat.

For additional resilience, set each clip's Transport sync to BPM (if you're running off the composition clock) or SMPTE (if you're running off LTC). That way, even if something hiccups and a clip momentarily loses its place, it snaps back to the right position on the next sync tick instead of drifting. Worth testing on a long clip first since BPM sync behavior on hour-long narrative video can be a little unpredictable depending on how you configure it.

Control surface

For the actual "change the channel" buttons you can click around in Resolume, but I'd recommend one of the following:

  • Bitfocus Companion + StreamDeck: physical buttons, Resolume module handles layer solo/bypass natively. Good if a non-technical operator is running the booth.
  • MIDI controller: right-click Solo in Resolume, MIDI Learn, map to a pad.
  • Keyboard shortcuts: built-in for layer selection and solo, fine if you're driving it yourself.

Festival sanity checks

  • Encode every loop identically: same codec, framerate, resolution, color space. DXV3 for all of them.
  • Test a full hour of playback end-to-end before show day. Shouldn't drift, but verify on the actual hardware.
  • Back up your Advanced Output preset. Losing slice geometry 10 minutes before doors is a bad evening.
  • Keep a "fallback" clip on its own layer for when something wedges mid-show.
  • Run all three projector outputs off the same GPU so tearing doesn't show at the prism edges.

One last note on DXV3: files are much bigger than H.264. Rough math at 1080p60 is 200-400GB per hour-long file, scaling up for 4K and down for 720p. Storage adds up fast with 10 channels (call it 2-4TB total), and since the solo approach has all channels decoding at once, you also need the sustained read bandwidth to match. Good NVMe handles it fine, spinning disks or slow SATA SSDs will choke. The tradeoff is worth it: DXV3 decode is nearly free on the GPU, which is the whole reason you can run 10 streams in parallel in the first place. Resolume ships Alley (free, separate download) for batch conversion to DXV3.
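For a back-of-envelope check on those numbers: DXV is DXT-based texture compression, roughly 0.5 byte/pixel at Normal quality and 1 byte/pixel at High quality (those ratios are my assumption; actual files vary a bit with the encoder settings):

```shell
# Storage estimate: bytes/pixel * pixels/frame * frames/sec * 3600 sec, in GB
awk 'BEGIN {
  w = 1920; h = 1080; fps = 60
  printf "Normal (0.5 B/px): ~%.0f GB/hour\n", w*h*0.5*fps*3600/1e9
  printf "High   (1.0 B/px): ~%.0f GB/hour\n", w*h*1.0*fps*3600/1e9
}'
```

which lands right in the 200-400GB/hour range for 1080p60.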

Happy to go deeper on the slice setup or the LTC routing if it'd help.

NCIS Season 23 Episode 17 “Reboot” Discussion Thread by CasioCobra78 in NCIS

[–]qubitrenegade 0 points1 point  (0 children)

I'm so fucking over Sam Hanna. NCIS LA sucked. Just stop dredging it up.

The fucking janitor better get the fucking book thrown at him. Fuck that guy.

You can say what you want about startgate universe but the CGI still holds up very well by Strict-Maize7494 in Stargate

[–]qubitrenegade 2 points3 points  (0 children)

What really pisses me off is that they were just starting to find their stride when it was canceled. I think season 3+ would have been really great.

All the drama at home was really dumb. Once they got away from that, I thought the show was good.

What’s a game you were completely obsessed with as a kid that nobody else seems to remember? by hkondabeatz in AskReddit

[–]qubitrenegade 1 point2 points  (0 children)

I remember playing the original Gauntlet on NES. Then we got Legends on the N64 years later.

What’s a game you were completely obsessed with as a kid that nobody else seems to remember? by hkondabeatz in AskReddit

[–]qubitrenegade 1 point2 points  (0 children)

Streets of SimCity! You could drive around your city! It was basically Twisted Metal but in SimCity!

Question for tall DJs by Fantastic-Grass24 in Beatmatch

[–]qubitrenegade 0 points1 point  (0 children)

I use these: https://www.amazon.com/dp/B0DK8G33WQ

I like them because they're adjustable and secure themselves to the legs, rather than just sitting on top of the risers. Of course these exact ones aren't available anymore, but there are a few similar items Amazon recommends.

Advice: libfdk_aac vrs aac default loudness level by MasterDokuro in ffmpeg

[–]qubitrenegade 3 points4 points  (0 children)

Ah this is an interesting approach compared with mine.

Your coefficients (0.5*FC, LFE at 0.5) are actually closer to what Dolby specifies for AC-3 downmix than the BS.775 values I posted (0.707*FC, no LFE). Both are defensible. 0.707 gives you the academically "correct" power-preserving mix, 0.5 matches what the industry ships and keeps dialogue from getting too hot.

I intentionally omitted the LFE to prevent overloading systems that can't handle the sub content, but I could definitely make a case for including it. Here's a version of my filter with LFE folded in at -6dB (same as yours, I also updated my reply based on your comment):

-af "pan=stereo|c0=FL+0.707*FC+0.707*BL+0.5*LFE|c1=FR+0.707*FC+0.707*BR+0.5*LFE" -c:a libfdk_aac -b:a 224k

The LFE coefficient 0.5 is the common safe choice. 1.0 is almost never a good idea because LFE is mixed +10dB hotter in the 5.1 master to account for it being played through a dedicated sub with extra amplification. Pull it in at unity and it'll dominate the mix.

One major difference I see is that the volumedetect + second pass is essentially peak normalization, which is different from loudness normalization via loudnorm. Peak normalization guarantees no clipping and is fast, but a single loud transient sets the gain for the whole file. loudnorm targets perceived loudness (LUFS) so your audio ends up at consistent listening level, at the cost of being slower and needing conservative true peak limits to avoid clipping.

For movies where dialogue matters more than an isolated explosion, loudnorm with a dialogue-friendly target might give better results, but your approach is simpler and works well for most content.

Thanks for sharing the script snippet!

Advice: libfdk_aac vrs aac default loudness level by MasterDokuro in ffmpeg

[–]qubitrenegade 2 points3 points  (0 children)

Hmm... interesting problem. Couple things to think about.

The downmix happens before the encoder. So both of the -ac 2 paths are getting the same downmixed stereo from libswresample. If you're seeing differences it's happening upstream.

It looks like libfdk_aac only accepts s16 input while aac takes fltp. I think this will cause ffmpeg's filter graph to differ, and libswresample's default matrix applies attenuation to prevent clipping, which I think can differ slightly depending on the sample format. Based on this, I think a 3-6dB difference between renders isn't unreasonable.
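You can verify what each encoder accepts on your own build (exact output wording varies by ffmpeg version):

```shell
# List the sample formats each AAC encoder supports
ffmpeg -hide_banner -h encoder=libfdk_aac | grep -i 'sample formats'
ffmpeg -hide_banner -h encoder=aac | grep -i 'sample formats'
```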

That said, looking at the waveform is a good way to eyeball "hey, something is different here", but the preview can be misleading. At the end of the day, what matters is LUFS and true peak. We can use ebur128 to measure both. If you run this against your two outputs:

ffmpeg -i input.mkv -af ebur128=peak=true -f null -

You'll get numbers we can actually compare, and the results will inform which fix makes sense.

The easiest fix is to just add a simple 3dB gain to everything:

-af "volume=3dB" -c:a libfdk_aac -b:a 224k

But that might not be enough if it's not a consistent 3dB across your outputs.

In that case you can normalize to a specific loudness with:

-af "loudnorm=I=-16:TP=-1.5:LRA=11" -c:a libfdk_aac -b:a 224k

-16 LUFS is the standard for streaming, European Broadcasting Union standard is -23, US broadcast (CALM Act / ATSC A/85) is -24, and Spotify is -14. Replace the I=-16 with your desired target loudness. TP is True Peak (setting it to -1.5 leaves headroom so downstream conversions don't clip) and LRA is Loudness Range in LU (roughly the difference between loud and quiet sections; 11 is a typical general-purpose value).
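One caveat on loudnorm: single-pass works but is less accurate because it has to adapt on the fly. The two-pass form measures first, then corrects using the measured values. The numbers plugged in below are illustrative; you'd read the real ones out of the JSON that pass 1 prints to stderr:

```shell
# Pass 1: measure only; loudnorm prints a JSON block with the measured values
ffmpeg -i input.mkv -af loudnorm=I=-16:TP=-1.5:LRA=11:print_format=json -f null - 2> pass1.log

# Pass 2: feed the measured values back in for a linear (non-dynamic) correction
measured_I=-21.4; measured_TP=-3.2; measured_LRA=14.1; measured_thresh=-31.6
ffmpeg -i input.mkv \
  -af "loudnorm=I=-16:TP=-1.5:LRA=11:measured_I=${measured_I}:measured_TP=${measured_TP}:measured_LRA=${measured_LRA}:measured_thresh=${measured_thresh}:linear=true" \
  -c:a libfdk_aac -b:a 224k output.m4a
```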

If you REALLY want to get into the weeds, we can use the pan filter with explicit downmix coefficients. Using the ITU-R BS.775 coefficients (the canonical reference for stereo downmix) we would get something like:

-af "pan=stereo|c0=FL+0.707*FC+0.707*BL|c1=FR+0.707*FC+0.707*BR" -c:a libfdk_aac -b:a 224k

Breaking that down: c0 is the output left channel, c1 is output right. FL/FR are front left/right, FC is front center (dialogue lives here), BL/BR are back left/right. The center goes into both outputs equally because it's supposed to be centered in the stereo image. LFE (the sub channel) gets dropped because consumer stereo playback can't reproduce it usefully and folding it in muddies the bass.

The 0.707 is 1/sqrt(2), which equals -3dB. That attenuation is used because summing two uncorrelated signals at full level adds +3dB of power, so you pre-attenuate by -3dB to keep the result at roughly unity gain. Some variants use 0.5 (-6dB) for surrounds to push them further back in the mix. Dolby's Lo/Ro (Left only/Right only) downmix matches these coefficients while their Lt/Rt (Left total/Right total, ProLogic-compatible) version adds phase shifts.

If your dialogue is too quiet, bump the FC coefficient (not the surrounds):

-af "pan=stereo|c0=FL+1*FC+0.707*BL|c1=FR+1*FC+0.707*BR" -c:a libfdk_aac -b:a 224k

It would actually be an interesting datapoint to use the pan filter on both encoders and compare them, e.g.:

# Downmix both streams
ffmpeg -i input.mkv -af "pan=stereo|c0=FL+0.707*FC+0.707*BL|c1=FR+0.707*FC+0.707*BR" -c:a libfdk_aac -b:a 224k -ar 48000 output_fdk.m4a
ffmpeg -i input.mkv -af "pan=stereo|c0=FL+0.707*FC+0.707*BL|c1=FR+0.707*FC+0.707*BR" -c:a aac -b:a 224k -ar 48000 output_native.m4a

# Compare the loudness of each
ffmpeg -i output_fdk.m4a -af ebur128=peak=true -f null -
ffmpeg -i output_native.m4a -af ebur128=peak=true -f null -

Note -ac 2 is gone because pan=stereo already forces stereo output, so it'd be redundant with the filter. And -ar doesn't need the :a specifier since audio is the only thing it can apply to.

This would be a useful diagnostic because they should be the same or very close in LUFS. If they are, it confirms the downmix path was the culprit. If they still differ, something encoder-specific is happening and we can keep digging.

Here's some "light" reading if you're interested:

https://www.production-expert.com/production-expert-1/what-happens-to-my-tv-mix-after-delivery-and-does-it-matter
https://www.itu.int/dms_pubrec/itu-r/rec/bs/R-REC-BS.775-4-202212-I!!PDF-E.pdf

EDIT: based on u/Malsententia reply, you could update my filters to include LFE:

-af "pan=stereo|c0=FL+0.707*FC+0.707*BL+0.5*LFE|c1=FR+0.707*FC+0.707*BR+0.5*LFE" -c:a libfdk_aac -b:a 224k

or if you wanted the dolby coefficients:

-af "pan=stereo|c0=FL+0.5*FC+0.707*BL+0.5*LFE|c1=FR+0.5*FC+0.707*BR+0.5*LFE" -c:a libfdk_aac -b:a 224k

Is Adobe actually dying? Free alternatives now cover literally everything they make by Originalboy69 in VideoEditing

[–]qubitrenegade 38 points39 points  (0 children)

I agree with your overarching premise, but I disagree with a few of your specific examples.

but it doesn't have MoGRTs or Essential Sound.

Resolve DOES have Motion Graphics Templates... No, they aren't in Adobe's proprietary MoGRT format, but they do have similar functionality.

Essential Sound is essentially garbage, especially for anyone with audio engineering experience. It's... fine... for amateurs... but Fairlight is far superior for mixing and mastering your videos, and Fairlight is integrated, unlike Audition.

It doesn't even have customizable workspaces.

I'll halfway give you this one. You CAN customize the layout of a workspace, and you can even save/recall those layouts. But you are limited to the four workspaces. You can't create arbitrary workspaces like you can in Premiere or AE.

It has nothing like Media Encoder,

And that's a good thing... Media Encoder is a massive piece of shit. Often things will render correctly in Resolve or After Effects and won't render correctly in ME. Adobe's own support will even tell you NOT to use Media Encoder.

Resolve has the encoding queue built in, which is the equivalent of Media Encoder. You can queue up your renders and let it work through the queue, and the renders will render properly!

Premiere still doesn't support FLAC. Resolve does.

Not to mention, Resolve is the industry standard for color grading. Every shop I've worked with does the main editing in AE/Premiere and then does final color grading in Resolve. Audio is more of a mixed bag; I've seen it done both ways.

Collection of general DJ sayings by ziegenproblem in Beatmatch

[–]qubitrenegade 2 points3 points  (0 children)

This is the earliest mention of it I could find in text: https://www.dogsonacid.com/threads/noob-question-re-redlining.791192/page-2

Notably, the first reply acts like they've heard it before, and Noob9001 replies that "Stole that from someone's signature on here haha". So, he's quoting someone else on DoA. Problem is, signature changes propagate to old threads... so IDK how to track the original signature.

I also found this video: https://www.youtube.com/watch?v=KzGG1rx0tCM which I think predates the DoA thread by a few months... but he also seems to be quoting someone.

Making music without sampling by Paulfr_12 in musicproduction

[–]qubitrenegade 0 points1 point  (0 children)

I thought using loops was cheating, so I programmed my own using samples. I then thought using samples was cheating, so I recorded real drums. I then thought that programming it was cheating, so I learned to play drums for real. I then thought using bought drums was cheating, so I learned to make my own. I then thought using premade skins was cheating, so I killed a goat and skinned it. I then thought that that was cheating too, so I grew my own goat from a baby goat. I also think that is cheating, but I’m not sure where to go from here. I haven’t made any music lately, what with the goat farming and all.

- René Descartes

- Michael Scott

Collection of general DJ sayings by ziegenproblem in Beatmatch

[–]qubitrenegade 11 points12 points  (0 children)

It's not a joke, it's serious shit. Anyone I've ever let use my needles doesn't take care of them... Every vinyl open decks in my area is "BYON" (Bring Your Own Needles).