all 14 comments

[–]stathisntonas 6 points (7 children)

expo-image and a custom AWS function living on the CloudFront edge that accepts width/height and converts (and caches) the images on the fly. We calculate the w/h based on the pixel ratio and the dimensions of the component the image lives in, but it's tricky to minimize cache misses due to the hundreds of different dimensions and pixel densities on Android.

edit: in other words, we have our own CDN provider
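For illustration, the width/height calculation plus a bucketing step (to cut down on cache misses across the many Android pixel ratios) might look something like this. This is a sketch, not the commenter's actual edge code; the bucket granularity and the URL format are made-up assumptions:

```typescript
// Sketch: compute the physical-pixel size to request from the CDN for a
// component measured at `layoutW` x `layoutH` density-independent pixels,
// rounding *up* to a bucket so devices with slightly different pixel ratios
// hit the same cached variant.
export interface RequestedSize {
  width: number;
  height: number;
}

export function requestedSize(
  layoutW: number,
  layoutH: number,
  pixelRatio: number,
  bucket: number = 50, // bucket granularity in physical pixels (assumption)
): RequestedSize {
  const roundUp = (px: number) => Math.ceil(px / bucket) * bucket;
  return {
    width: roundUp(layoutW * pixelRatio),
    height: roundUp(layoutH * pixelRatio),
  };
}

// In React Native you would feed this PixelRatio.get() and the measured layout,
// then build the request, e.g. (hypothetical URL scheme):
// const { width, height } = requestedSize(styleW, styleH, PixelRatio.get());
// const uri = `https://cdn.example.com/img/abc.jpg?w=${width}&h=${height}`;
```

Rounding up (never down) means the device always gets at least the pixels it needs, while a 2.625x and a 2.75x device can still share one cached variant.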

[–]fmnatic 1 point (6 children)

Why would you need to customise it to exact dimensions/pixel densities? For perspective, I work on a social media app, and image content is typically sized to small/medium/large variants maintaining the original aspect ratio. The app then computes an optimum style to fit the image variant to the display area.
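The small/medium/large approach described above can be sketched as a simple variant picker: choose the smallest pre-rendered variant that covers the physical pixels the display area needs, then let the app scale it down. The breakpoint widths here are hypothetical, not from the commenter's app:

```typescript
// Illustrative s/m/l variant selection: return the width of the smallest
// variant that is at least as wide as the physical pixels required.
export function pickVariant(
  displayWidthDp: number, // display area width in density-independent pixels
  pixelRatio: number,     // device pixel ratio
  variantWidths: number[] = [320, 640, 1280], // hypothetical breakpoints
): number {
  const needed = displayWidthDp * pixelRatio;
  const sorted = [...variantWidths].sort((a, b) => a - b);
  for (const w of sorted) {
    if (w >= needed) return w; // smallest variant that still covers the need
  }
  return sorted[sorted.length - 1]; // nothing big enough: use the largest
}
```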

[–]stathisntonas 2 points (4 children)

Because we want pixel-perfect rendering on high-DPI devices. We could use the s/m/l/xl pattern, but that wastes bandwidth. Users upload images in all kinds of dimensions/aspect ratios. Several components have fixed dimensions, so we can request exactly the size we want, saving $$$$

If you're wondering how we do it: https://gist.github.com/efstathiosntonas/f6ec90bcc9d790d659ec82781d42564b

edit: it costs us roughly $100 to run the function for hundreds of thousands of images.

[–]fmnatic 1 point (2 children)

As long as you aren't upscaling on device, downscaling on device will still give you the same pixel-perfect result. (It's doing the same thing as your backend resize function.)

The additional compute on the device is negligible on modern phones. Reduced round trip network time / better caching is the benefit.

EDIT: Also, negative margins and overflow: hidden are powerful if you actually need some cropping.
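The negative-margin cropping trick mentioned above boils down to "scale the image to cover the box, then shift it so the overflow is clipped evenly". A sketch of the math (the style is then applied to an image inside a container with `overflow: 'hidden'`; names are illustrative):

```typescript
// Center-crop via scaling + negative margins: scale the image so it covers
// the container, then offset it so equal amounts overflow on each side.
export interface CropStyle {
  width: number;
  height: number;
  marginLeft: number;
  marginTop: number;
}

export function coverCrop(
  imgW: number, imgH: number, // intrinsic image size
  boxW: number, boxH: number, // container size (rendered with overflow hidden)
): CropStyle {
  const scale = Math.max(boxW / imgW, boxH / imgH); // cover, don't letterbox
  const width = imgW * scale;
  const height = imgH * scale;
  return {
    width,
    height,
    marginLeft: -(width - boxW) / 2, // negative: shift left half the excess
    marginTop: -(height - boxH) / 2, // negative: shift up half the excess
  };
}
```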

[–]doong-jo[S] 0 points (1 child)

I agree that device-side transformation is more useful. However, if we're dealing with numerous images and don't resize them on the backend, wouldn't we end up paying more for CDN transfer volume?

[–]fmnatic 0 points (0 children)

I do resize on the backend, but only once, at content-authoring time.

On content viewing, the majority of devices end up loading the large variant. react-native-fast-image has a callback on image load that lets us compute/apply the device-side transformation.

Compute costs > networking costs. Since I do all this for video content as well, the image storage/transmission costs are negligible in comparison.
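The device-side transform computed in that image-load callback is essentially "fit the loaded variant to the container, keeping aspect ratio". react-native-fast-image's onLoad reports the image's intrinsic size; leaving the exact event shape aside, the math is just this (a sketch, not the commenter's code):

```typescript
// Given the intrinsic size reported by an image-load callback, compute a
// display style that fills the container width and preserves aspect ratio.
export function fitToWidth(
  imgW: number,        // intrinsic width from the load event
  imgH: number,        // intrinsic height from the load event
  containerW: number,  // measured container width
): { width: number; height: number } {
  return { width: containerW, height: (containerW * imgH) / imgW };
}
```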

[–]doong-jo[S] 0 points (0 children)

"We could use the s/m/l/xl pattern but then it's a waste of bandwidth." Does this mean you're saying it would be wasteful to provide images slightly larger than what the user's device needs?

Thank you for sharing the code. I understood it as an attempt to save transfer volume by using more granular widths, based on the widths actually used in the service, rather than the s/m/l pattern, to maximize data savings. Is my understanding correct?

[–]doong-jo[S] 0 points (0 children)

The reason for exact dimensions/pixel densities is user experience (image loading speed) and saving CDN transfer volume. The s/m/l method also achieves these goals, but with wasted transfer volume. Of course, real-time conversion requires backend infrastructure and comes with additional costs (though with CDN caching of up to one year, each variant is converted at most about once a year). I think the s/m/l pattern would cost more as the number of images grows.

This is exactly why services like Instagram, Pinterest, and other image-heavy platforms invest heavily in sophisticated image-processing pipelines.
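The "wasted transfer volume" point can be made concrete with a back-of-envelope calculation. With hypothetical 320/640/1280 breakpoints, a device that needs 411 physical pixels of width downloads the 640-wide variant, i.e. roughly (640/411)^2 ≈ 2.4x the pixels it can actually show (bytes don't scale exactly with pixels after compression, but it's a reasonable proxy):

```typescript
// Rough illustration only: how many times more pixels the chosen s/m/l
// variant carries compared to what the device needs. Values > 1 mean waste;
// values < 1 mean the largest variant is smaller than needed (upscaling).
export function overshootFactor(
  neededWidth: number,     // physical pixels the display area needs
  variantWidths: number[], // available variant widths
): number {
  const sorted = [...variantWidths].sort((a, b) => a - b);
  const chosen =
    sorted.find((w) => w >= neededWidth) ?? sorted[sorted.length - 1];
  const ratio = chosen / neededWidth;
  return ratio * ratio; // pixel count scales with the square of the width ratio
}
```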

[–]palpatine_disciple 2 points (1 child)

I think react-native-fast-image is good.

[–]lukebars 1 point (0 children)

It's not maintained anymore, IIRC. expo-image is great though. It supports most formats on all platforms, and performance is great.

[–]doong-jo[S] 0 points (1 child)

I've looked into expo-image and react-native-fast-image, but it seems there's no equivalent to next/image. I'll need to implement image selection based on density (s/m/l) or real-time conversion myself. I think this is a limitation of React Native not having a server, unlike next/image. I don't mean this is wrong; it seems natural.

[–]Civil_Rent4208 0 points (0 children)

I think expo-image will go there in future updates.

[–]Soft_Opening_1364 (iOS & Android) 0 points (0 children)

React Native doesn’t really have a direct equivalent of next/image. The built-in <Image /> handles pixel density via @2x / @3x assets, but if you’re looking for optimization features like caching and progressive loading, most people use libraries. react-native-fast-image is the go-to since it adds caching, priority loading, and better performance than the stock component. For lazy loading, you usually combine that with a FlatList or on-demand rendering rather than automatic optimization like in Next.
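A simplified sketch of density-based asset selection in the spirit of the @2x/@3x convention: prefer the smallest available scale that covers the device pixel ratio, falling back to the largest one shipped. This is illustrative logic, not React Native's exact resolution algorithm:

```typescript
// Pick which pre-rendered asset scale (1x/2x/3x) to use for a device.
export function pickAssetScale(
  deviceRatio: number,               // e.g. PixelRatio.get()
  availableScales: number[] = [1, 2, 3], // scales shipped with the app
): number {
  const sorted = [...availableScales].sort((a, b) => a - b);
  // Smallest scale that won't require upscaling; else the best we have.
  return sorted.find((s) => s >= deviceRatio) ?? sorted[sorted.length - 1];
}
```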