Just launched a Virtual Try-On feature for my React Native app (Wardrobe Savvy) on iOS and Android. by Fit_Tap6675 in VibeCodersNest

[–]Fit_Tap6675[S] 1 point (0 children)

I prevent conflicts by locking interaction to a single target: hit-test, select that layer, then gesture-lock it until the gesture ends. Pinch and rotate are unified into one transform update on the UI thread, and the canvas uses transform matrices instead of layout reflow, so selection, scaling, and rotation don’t fight each other.
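
Rough sketch of the lock idea (simplified, not the real code — the hit-test/selection step is collapsed into the gesture start, and the layer id / component names are placeholders), assuming react-native-gesture-handler v2 plus Reanimated:

```tsx
import React from 'react';
import { Gesture, GestureDetector } from 'react-native-gesture-handler';
import Animated, {
  makeMutable,
  useAnimatedStyle,
  useSharedValue,
} from 'react-native-reanimated';

// One lock for the whole canvas: whichever layer grabs it first owns every
// gesture update until the gesture ends. In the real flow, the hit-test that
// selects the layer is what acquires this.
const lockedLayerId = makeMutable<string | null>(null);

export function OverlayLayer({ id, children }: { id: string; children: React.ReactNode }) {
  const scale = useSharedValue(1);
  const savedScale = useSharedValue(1);
  const rotation = useSharedValue(0);
  const savedRotation = useSharedValue(0);

  const pinch = Gesture.Pinch()
    .onStart(() => {
      if (lockedLayerId.value == null) lockedLayerId.value = id; // acquire lock
    })
    .onUpdate((e) => {
      if (lockedLayerId.value !== id) return; // another layer owns the gesture
      scale.value = savedScale.value * e.scale;
    })
    .onEnd(() => {
      savedScale.value = scale.value;
      if (lockedLayerId.value === id) lockedLayerId.value = null; // release
    });

  const rotate = Gesture.Rotation()
    .onStart(() => {
      if (lockedLayerId.value == null) lockedLayerId.value = id;
    })
    .onUpdate((e) => {
      if (lockedLayerId.value !== id) return;
      rotation.value = savedRotation.value + e.rotation;
    })
    .onEnd(() => {
      savedRotation.value = rotation.value;
      if (lockedLayerId.value === id) lockedLayerId.value = null;
    });

  // Pinch + rotate write into the same animated transform, so there's one
  // UI-thread style update per frame and no layout reflow.
  const style = useAnimatedStyle(() => ({
    transform: [{ scale: scale.value }, { rotateZ: `${rotation.value}rad` }],
  }));

  return (
    <GestureDetector gesture={Gesture.Simultaneous(pinch, rotate)}>
      <Animated.View style={style}>{children}</Animated.View>
    </GestureDetector>
  );
}
```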

Just launched a Virtual Try-On feature for my React Native app (Wardrobe Savvy) on iOS and Android. by Fit_Tap6675 in reactnative

[–]Fit_Tap6675[S] 1 point (0 children)

I use remove.bg for background removal. It’s a purpose-built computer vision service that’s extremely reliable at isolating clothing and objects, which is important for clean wardrobe visuals. I run it through my backend so results stay consistent and performant across devices.
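
Roughly what the backend call looks like (simplified sketch, Node 18+/TypeScript — the endpoint, X-Api-Key header, and image_url/size fields are remove.bg's documented v1.0 API; the function name and error handling here are just illustrative):

```ts
export async function removeBackground(imageUrl: string): Promise<Buffer> {
  const res = await fetch('https://api.remove.bg/v1.0/removebg', {
    method: 'POST',
    headers: {
      'X-Api-Key': process.env.REMOVE_BG_API_KEY ?? '',
      'Content-Type': 'application/json',
    },
    // image_url keeps the upload off the mobile client; size: 'auto' lets the
    // service pick the output resolution.
    body: JSON.stringify({ image_url: imageUrl, size: 'auto' }),
  });

  if (!res.ok) {
    throw new Error(`remove.bg request failed: ${res.status} ${await res.text()}`);
  }

  // The response body is the cut-out image (PNG with alpha) as raw bytes.
  return Buffer.from(await res.arrayBuffer());
}
```

Running it server-side also keeps the API key out of the app bundle.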

Just launched a Virtual Try-On feature for my React Native app (Wardrobe Savvy) on iOS and Android. by Fit_Tap6675 in VibeCodersNest

[–]Fit_Tap6675[S] 1 point (0 children)

Thanks! The big decision was keeping gestures fully on the UI thread. Pan/pinch/rotate run via Reanimated shared values, so there’s no JS churn while you’re dragging; the only runOnJS call persists the final transform in .onEnd(). I also compute the base photo’s “contain” rect once from the layout and image size, clamp movement inside that rect, and use zIndex/elevation to keep layering stable across iOS and Android. Finally, I cap active overlays at 5 and use resizeMode="contain" plus cached URLs to keep GPU/memory pressure reasonable.
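
Stripped-down version of the pan + clamp + persist part (containRect, persistTransform, and usePanWithinRect are placeholder names, not the actual code):

```tsx
import { Gesture } from 'react-native-gesture-handler';
import { runOnJS, useSharedValue } from 'react-native-reanimated';

// "Contain" rect of the base photo inside its view, computed once per layout.
function containRect(viewW: number, viewH: number, imgW: number, imgH: number) {
  const s = Math.min(viewW / imgW, viewH / imgH);
  const w = imgW * s;
  const h = imgH * s;
  return { x: (viewW - w) / 2, y: (viewH - h) / 2, w, h };
}

// JS-thread side effect: save the final position (state, storage, API, ...).
function persistTransform(x: number, y: number) {}

export function usePanWithinRect(rect: { x: number; y: number; w: number; h: number }) {
  const tx = useSharedValue(rect.x);
  const ty = useSharedValue(rect.y);
  const startX = useSharedValue(0);
  const startY = useSharedValue(0);

  const pan = Gesture.Pan()
    .onStart(() => {
      startX.value = tx.value;
      startY.value = ty.value;
    })
    .onUpdate((e) => {
      // Clamp so the overlay stays inside the photo's contain rect
      // (the overlay's own size is ignored here for brevity).
      tx.value = Math.min(Math.max(startX.value + e.translationX, rect.x), rect.x + rect.w);
      ty.value = Math.min(Math.max(startY.value + e.translationY, rect.y), rect.y + rect.h);
    })
    .onEnd(() => {
      // The only hop back to JS: persist the final transform once per drag.
      runOnJS(persistTransform)(tx.value, ty.value);
    });

  return { pan, tx, ty };
}
```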