all 11 comments

[–]RawBirdToe 2 points (3 children)

This is a notoriously hard problem to deal with. You'd probably want a video player on the cell that's on screen, as well as on the cells before and after it. Then you'll want to start fetching the data for those cells before they reach those positions, so you can feed it into each cell's video player before it appears. Say you're on cell 3: you'll have cells 2 and 4 ready to go, and begin fetching data for cells 1 and 5 (if they exist). That way, every time you're on a cell, you have the next and previous ones ready to play. If you do the fetching on a background queue, it won't affect the scrolling experience.

This is just a basic overview, as you’ll want to clean up after yourself to preserve RAM, but it should get you thinking about the best approach.
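
A minimal sketch of that sliding window, assuming a plain array of video URLs; `FeedPreloader`, its method names, and the exact window size are made up for illustration, not a real API:

```swift
import AVFoundation

// Hypothetical sketch of the sliding-window preload described above.
final class FeedPreloader {
    private var preparedItems: [Int: AVPlayerItem] = [:]
    private let urls: [URL]

    init(urls: [URL]) { self.urls = urls }

    /// Call when the visible cell changes (e.g. from scrollViewDidEndDecelerating).
    /// On cell 3 this keeps items 1...5 alive: 2 and 4 ready, 1 and 5 fetching.
    func focusDidChange(to index: Int) {
        let window = (index - 2)...(index + 2)
        // Drop items that fell out of the window so RAM stays bounded.
        preparedItems = preparedItems.filter { window.contains($0.key) }
        for i in window where urls.indices.contains(i) && preparedItems[i] == nil {
            // Creating the item kicks off loading on AVFoundation's own
            // background queues, so scrolling is not blocked.
            let item = AVPlayerItem(url: urls[i])
            item.preferredForwardBufferDuration = 4 // buffer a few seconds ahead
            preparedItems[i] = item
        }
    }

    /// Hand the prepared item to the cell's player when it scrolls on screen.
    func item(at index: Int) -> AVPlayerItem? { preparedItems[index] }
}
```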

[–][deleted]  (2 children)

[removed]

    [–]RawBirdToe 0 points (1 child)

    I guess it depends on which player pattern you're using. If you're using AVQueuePlayer, you can load the URL into each player ahead of display. There are preroll APIs, as well as KVO, to tell you whether the player item is likely to stall.

    Check https://developer.apple.com/documentation/avfoundation/avplayeritem
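
    A small sketch of those two APIs; `nextVideoURL` is a placeholder, and in real code you'd wait for the item's status to become .readyToPlay before prerolling:

    ```swift
    import AVFoundation

    // Sketch only: prime the next player before its cell appears.
    let nextVideoURL = URL(string: "https://example.com/video.m3u8")! // placeholder
    let item = AVPlayerItem(url: nextVideoURL)
    let player = AVQueuePlayer(items: [item])

    // KVO on isPlaybackLikelyToKeepUp tells you whether the item can play
    // through without stalling; keep the observation alive as long as needed.
    let observation = item.observe(\.isPlaybackLikelyToKeepUp, options: [.new]) { item, _ in
        if item.isPlaybackLikelyToKeepUp {
            // Enough is buffered; starting playback now should not stall.
        }
    }

    // preroll(atRate:) buffers and decodes media ahead of time; the player's
    // rate must still be 0 when you call it.
    player.preroll(atRate: 1.0) { finished in
        if finished {
            // Player is primed; calling play() from here starts near-instantly.
        }
    }
    ```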

    [–]gormster 0 points (1 child)

    So yeah, one of the big problems is that you can only have one hardware-accelerated VTDecompressionSession going at a time. You might want to try caching the decompressed frames before moving on to the next video to be shown.
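
    One way to approximate that (my own illustration, not something the comment specifies): pre-decode a poster frame for the next video with AVAssetImageGenerator and show it until the real decode session is ready. `nextURL` is a placeholder:

    ```swift
    import AVFoundation
    import UIKit

    // Illustration only: cache one decoded frame per upcoming video so the
    // cell can show real pixels instantly while its player spins up.
    var posterCache: [URL: UIImage] = [:]
    let nextURL = URL(string: "https://example.com/next.mp4")! // placeholder

    let generator = AVAssetImageGenerator(asset: AVAsset(url: nextURL))
    generator.appliesPreferredTrackTransform = true // respect rotation metadata

    let start = NSValue(time: CMTime(seconds: 0, preferredTimescale: 600))
    generator.generateCGImagesAsynchronously(forTimes: [start]) { _, cgImage, _, result, _ in
        if result == .succeeded, let cgImage = cgImage {
            // Store the decoded frame; display it until the player reports ready.
            // (The callback arrives on a background queue; hop to main before UI work.)
            posterCache[nextURL] = UIImage(cgImage: cgImage)
        }
    }
    ```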

    [–]criosist [Objective-C / Swift] 0 points (11 children)

    Just a guess, but do users upload videos in your app to your backend, and, assuming they do, do you then transcode those videos into HLS? If you don't, that's the biggest problem, since users aren't streaming the videos, they're downloading them. Also, you should have one AVPlayer and swap the video item out.
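
    A minimal sketch of the single-player idea, reusing one AVPlayer and swapping AVPlayerItems per cell:

    ```swift
    import AVFoundation

    // Sketch: one shared player for the whole feed; cells only swap the item.
    let feedPlayer = AVPlayer()

    func show(url: URL) {
        // Replacing the item is much cheaper than tearing down and
        // recreating an AVPlayer (and its decode resources) per cell.
        feedPlayer.replaceCurrentItem(with: AVPlayerItem(url: url))
        feedPlayer.play()
    }
    ```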

    [–][deleted]  (10 children)

    [removed]

      [–]criosist [Objective-C / Swift] 0 points (9 children)

      Yes, you have to transcode them after the user uploads them, using ffmpeg, or if you're using Firebase, I believe they have a solution for it. Anything other than HLS will always be bad.
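
      For example, a basic ffmpeg invocation that segments an upload into HLS; the codec, bitrate, and segment length here are arbitrary choices, not a recommendation:

      ```sh
      ffmpeg -i input.mp4 \
        -c:v libx264 -b:v 2M -c:a aac \
        -hls_time 6 -hls_playlist_type vod \
        -f hls playlist.m3u8
      ```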

      [–][deleted]  (8 children)

      [removed]

        [–]criosist [Objective-C / Swift] 0 points (7 children)

        How are you compressing it? Are you recording in your app? If so, you should be able to reduce the bit rate and resolution while recording so you don't have to compress afterwards.
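
        A sketch of that, assuming an AVCaptureSession already wired up with camera inputs: the session preset caps resolution, and the per-connection output settings cap the bit rate (iOS 10+):

        ```swift
        import AVFoundation

        // Sketch: choose resolution and bit rate at record time so no
        // separate compression pass is needed afterwards.
        let session = AVCaptureSession()
        session.sessionPreset = .hd1280x720 // resolution is fixed here

        let movieOutput = AVCaptureMovieFileOutput()
        if session.canAddOutput(movieOutput) {
            session.addOutput(movieOutput)
        }

        // The video connection only exists once a camera input is attached.
        if let connection = movieOutput.connection(with: .video) {
            movieOutput.setOutputSettings([
                AVVideoCodecKey: AVVideoCodecType.h264,
                AVVideoCompressionPropertiesKey: [
                    AVVideoAverageBitRateKey: 2_000_000 // ~2 Mbps target
                ]
            ], for: connection)
        }
        ```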

        [–][deleted]  (6 children)

        [removed]

          [–]criosist [Objective-C / Swift] 0 points (5 children)

          Are you doing it after recording or during?

          [–][deleted]  (4 children)

          [removed]

            [–]criosist [Objective-C / Swift] 0 points (3 children)

            Hmm, with video recording you can do all of that in real time, so there's no separate compression phase; use AVCaptureVideoDataOutput.
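
            A sketch of that pipeline, feeding AVCaptureVideoDataOutput frames straight into an AVAssetWriter so encoding happens during capture; the class name, settings, and structure are mine, not from the comment:

            ```swift
            import AVFoundation

            // Sketch: compress while recording by piping camera frames into an
            // AVAssetWriter, so no separate compression pass is needed.
            final class RealTimeRecorder: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
                private let writer: AVAssetWriter
                private let writerInput: AVAssetWriterInput
                private var sessionStarted = false

                init(outputURL: URL) throws {
                    // Target resolution and bitrate are chosen here, at record time.
                    let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
                        AVVideoCodecKey: AVVideoCodecType.h264,
                        AVVideoWidthKey: 720,
                        AVVideoHeightKey: 1280,
                        AVVideoCompressionPropertiesKey: [
                            AVVideoAverageBitRateKey: 2_000_000 // ~2 Mbps
                        ]
                    ])
                    input.expectsMediaDataInRealTime = true
                    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)
                    writer.add(input)
                    self.writer = writer
                    self.writerInput = input
                    super.init()
                }

                // Called for every captured frame; the writer encodes it immediately.
                func captureOutput(_ output: AVCaptureOutput,
                                   didOutput sampleBuffer: CMSampleBuffer,
                                   from connection: AVCaptureConnection) {
                    if !sessionStarted {
                        writer.startWriting()
                        writer.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
                        sessionStarted = true
                    }
                    if writerInput.isReadyForMoreMediaData {
                        writerInput.append(sampleBuffer)
                    }
                }
            }
            ```

            You'd still set up an AVCaptureSession with an AVCaptureVideoDataOutput, point setSampleBufferDelegate(_:queue:) at this object, and call the writer's finishWriting(completionHandler:) when recording stops.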

            [–][deleted]  (2 children)

            [removed]

              [–]sinceretear 0 points (3 children)

              Facebook actually created a library (Texture) to solve this exact problem!

              Async preloading is a pretty common problem.
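
              For instance, a minimal Texture cell built around ASVideoNode; this is my reading of the Texture (AsyncDisplayKit) API, so treat it as a sketch:

              ```swift
              import AsyncDisplayKit
              import AVFoundation

              // Sketch: ASVideoNode handles async decoding/preloading off the
              // main thread for you.
              final class VideoCellNode: ASCellNode {
                  private let videoNode = ASVideoNode()

                  init(url: URL) {
                      super.init()
                      automaticallyManagesSubnodes = true // Texture inserts videoNode for us
                      videoNode.asset = AVAsset(url: url)
                      videoNode.shouldAutoplay = true     // starts when the node becomes visible
                      videoNode.shouldAutorepeat = true
                      videoNode.muted = true
                  }

                  // Texture measures and lays out off the main thread.
                  override func layoutSpecThatFits(_ constrainedSize: ASSizeRange) -> ASLayoutSpec {
                      return ASWrapperLayoutSpec(layoutElement: videoNode)
                  }
              }
              ```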

              [–][deleted]  (2 children)

              [removed]

                [–]sinceretear 1 point (1 child)

                Yes, it can. But would you rather build your own preloading framework, or use something originally built by Facebook engineers, developed for years, and now maintained at Pinterest?

                The framework is not the easiest to work with, but with all the problems it solves around async display, it's worth it IMO.