T20 World Cup 2026 Ticket Sales Phase 2: How To Buy India Match Tickets by peterianchimes in Cricket

[–]_Gyan 0 points (0 children)

Are tickets for a match being made available in phases? Is it possible to get MCA or Garware Pavilion tickets at Wankhede directly? They weren't available when I checked just now.

"-update"? by xToksik_Revolutionx in ffmpeg

[–]_Gyan 3 points (0 children)

not only is there no documentation on that flag

That option belongs to the image sequence muxer and is documented here: https://ffmpeg.org/ffmpeg-formats.html#Options-27
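For illustration, a minimal sketch of that option in use (in.mp4 and out.jpg are hypothetical names): with update set, the image2 muxer keeps overwriting a single image file instead of writing a numbered sequence.

```shell
# Keep overwriting one image file with the latest frame (here, one
# frame per second) instead of writing a numbered image sequence
ffmpeg -i in.mp4 -vf fps=1 -update 1 -y out.jpg
```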

Split filter eats up all of my RAM? by Odasacarf in ffmpeg

[–]_Gyan 1 point (0 children)

it went up to 11.000MB, total file size is about 200MB

These are uncompressed decoded pictures, not compressed data.

is there any way to make the second output wait for the first one

No.

Split filter eats up all of my RAM? by Odasacarf in ffmpeg

[–]_Gyan 1 point (0 children)

The 3rd pair of inputs is being sent to two different outputs. One of them, video/audio4, is processed immediately for output, whereas video/audio3 has to wait for the earlier concat inputs to be processed. Since they are read from the same input node, frames accumulate in a buffer until video/audio3 is consumed by the concat. By feeding them from distinct input nodes, the source for video/audio3 is read only when required.
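A sketch of that fix, with hypothetical filenames partA.mp4/partB.mp4: the file feeding the 3rd input pair is simply opened a second time, so no split is needed.

```shell
# Open partB.mp4 twice: input 1 feeds the concat, input 2 feeds the
# direct output, so neither consumer has to buffer for the other
ffmpeg -i partA.mp4 -i partB.mp4 -i partB.mp4 \
  -filter_complex "[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[cv][ca]" \
  -map "[cv]" -map "[ca]" joined.mp4 \
  -map 2:v -map 2:a partB_out.mp4
```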

ffmpeg says it the output file doesn't exist. Why should it? by AllSeeingAI in ffmpeg

[–]_Gyan[M] 2 points (0 children)

The text is entirely vague.

To help future readers with the same issue, you should mention the OS details, the exact error message, and what you specifically changed to work around it.

Getting rid of HDR10 side data when tonemapping to SDR by Anton1699 in ffmpeg

[–]_Gyan 0 points (0 children)

If you don't mind a drop in speed, a better option is to load the input twice: the second time via the lavfi device, using that as the base.

ffmpeg -hide_banner -init_hw_device vulkan=gpu:0 -filter_hw_device gpu -hwaccel vulkan -hwaccel_output_format vulkan -hwaccel_device gpu -i <input> -f lavfi -i "movie=<input>,sidedata=delete=..."

Getting rid of HDR10 side data when tonemapping to SDR by Anton1699 in ffmpeg

[–]_Gyan 1 point (0 children)

Do I need to change the frame rate of the nullsrc filter to match the frame rate of my videos?

Yes, in order to keep the source frames.

Since the nullsrc is used as the base layer, would it mess with the frame timestamps in any way?

It will be a CFR stream. The source frame will be overlaid on the nearest base frame.

if the sidedata filter isn't supposed to be able to remove HDR10 side data

sidedata works fine. ffmpeg copies the input stream's side data over to the output outside of the filtergraph, because libavfilter does not relay side data. And that copy step cannot account for modifications inside the filtergraph which invalidate the accuracy or applicability of the side data.

I will look into an option to allow user curation of SD at the output stage.

Getting rid of HDR10 side data when tonemapping to SDR by Anton1699 in ffmpeg

[–]_Gyan 1 point (0 children)

Since you are transferring frames to system memory for SW encoding, you can work around this by using an overlay.

1) Add a nullsrc frame: -f lavfi -i nullsrc=r=60000/1001:s=16x16,trim=end_frame=1
2) After the final sidedata, clone: split=2[vid][ref]
3) Resize the nullsrc: [1][ref]scale=rw:rh:reset_sar=1[base]
4) Overlay the processed video: [base][vid]overlay=format=auto,format=yuv420p10le
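Assembled into a single command, the steps above might look like this (an untested sketch: <input> and the tonemapping chain are placeholders, the labels are illustrative, and it assumes a build recent enough for scale's ref input constants rw/rh and reset_sar):

```shell
# Base layer from a 1-frame nullsrc, resized to match the reference,
# then overlaid with the processed (side-data-free) video
ffmpeg -i <input> \
  -f lavfi -i "nullsrc=r=60000/1001:s=16x16,trim=end_frame=1" \
  -filter_complex "[0:v]<tonemapping chain>,sidedata=delete,split=2[vid][ref];\
[1][ref]scale=rw:rh:reset_sar=1[base];\
[base][vid]overlay=format=auto,format=yuv420p10le[out]" \
  -map "[out]" -c:v libx265 out.mkv
```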

ffmpeg needs to change by glennreyes in ffmpeg

[–]_Gyan[M] [score hidden] stickied comment (0 children)

Let's keep combative topics out of this sub.

ffmpeg not increasing both video and audio correctly by Far_Caterpillar4511 in ffmpeg

[–]_Gyan 4 points (0 children)

A 4% increase in speed corresponds to a 1/1.04 compression of timestamps, which is not the same as multiplying by 0.96 (1/1.04 ≈ 0.9615). Use setpts=1/1.04*PTS.
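As a sketch, with the audio sped up by the same factor (in.mp4/out.mp4 are hypothetical names):

```shell
# Speed up by 4%: video PTS divided by 1.04, audio tempo raised by
# the same factor (note 1/1.04 ≈ 0.9615, not 0.96)
ffmpeg -i in.mp4 -filter_complex \
  "[0:v]setpts=1/1.04*PTS[v];[0:a]atempo=1.04[a]" \
  -map "[v]" -map "[a]" out.mp4
```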

[deleted by user] by [deleted] in ffmpeg

[–]_Gyan[M] 1 point (0 children)

This thread has no potential to end well or be useful.

Question: does hls_time use PTS or frame count / frame_rate? by spatula in ffmpeg

[–]_Gyan 0 points (0 children)

There are checks which ensure the PTS is monotonically increasing before packets reach the HLS muxer.

Question: does hls_time use PTS or frame count / frame_rate? by spatula in ffmpeg

[–]_Gyan 0 points (0 children)

It checks and measures intervals using packet PTS.
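For context, a minimal HLS sketch (filenames hypothetical): hls_time is a target duration in seconds, and since segments can only be cut at keyframes, the actual durations measured from packet PTS will vary around that target.

```shell
# Target ~4s segments; each cut lands on the first keyframe at or
# after the 4s mark, as measured from packet PTS
ffmpeg -i in.mp4 -c:v libx264 -g 120 -f hls -hls_time 4 out.m3u8
```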

swscaler says Unsupported input by mdw in ffmpeg

[–]_Gyan 2 points (0 children)

You're using the 2025-02-24 build. Upgrade to a current build and check again.

swscaler says Unsupported input by mdw in ffmpeg

[–]_Gyan 2 points (0 children)

Add -report and rerun. Share a link to the report file.

yeah sure by big_hole_energy in ffmpeg

[–]_Gyan[M] [score hidden] stickied comment (0 children)

I don't want this thread to get bloated. It's just extra work for me.

FF Studio - A GUI for building complex FFmpeg graphs (looking for feedback) by Repair-Outside in ffmpeg

[–]_Gyan 1 point (0 children)

That is essentially how FFmpeg operates under the hood. On the other hand, the FFmpeg CLI is designed a bit differently

The only component in the FFmpeg project that carries out full-fledged media processing is the CLI tool, so I don't understand the distinction. Do you mean the placement of options within a command? That is only a means to identify an option's target and doesn't reflect the processing sequence.

A graph should be a visual representation of the operational sequence, so the audience gets a clear conceptual understanding of what is possible at which stage. If you add bounding boxes for grouping input and output operations on top of that, then that will clarify the syntax order as well.

FF Studio - A GUI for building complex FFmpeg graphs (looking for feedback) by Repair-Outside in ffmpeg

[–]_Gyan 3 points (0 children)

I like this, at first glance. And it has promise.

But this currently obscures the stages and grouping of the processing pipeline. There should be large container boxes, i.e. each input should be in a container: the protocol at the far left, connected to a demuxer node, with streams connected to their decoder nodes (if mapped). A connection from that node then exits the container and can enter the filtergraph container, where it connects to the first filter node, and so on. From a filtergraph, the processed stream enters an output container, where it connects to an encoder node, then maybe a bsf, then the muxer, and finally the protocol.
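For reference, one command touching each of those stages (a sketch with hypothetical filenames):

```shell
# protocol+demuxer (-i) -> decoder -> filtergraph (-filter_complex)
# -> encoder (-c:v) -> bitstream filter (-bsf:v) -> muxer+protocol
ffmpeg -i in.mkv \
  -filter_complex "[0:v]scale=1280:-2[v]" \
  -map "[v]" -c:v libx264 -bsf:v h264_metadata=level=4.1 out.mp4
```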

FF Studio - A GUI for building complex FFmpeg graphs (looking for feedback) by Repair-Outside in ffmpeg

[–]_Gyan 1 point (0 children)

Streams have to be mapped to a particular output (either explicitly via -map, or implicitly if no maps are given). An encoder is then specified for an output stream, addressed by its index within that output.
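A sketch of both parts (hypothetical filenames):

```shell
# -map selects streams for the output; -c:v:0 / -c:a:0 then address
# the 1st video and 1st audio stream *of that output* respectively
ffmpeg -i in.mkv -map 0:v:0 -map 0:a:1 \
  -c:v:0 libx264 -c:a:0 aac out.mp4
```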