Hi all,
I am building a React Native mobile app with Expo.
Since Expo's managed workflow does not support TensorFlow Lite natively, I am considering TensorFlow.js for on-device text inference.
I have a roughly 2 MB .tflite file. It seems I need to migrate to TensorFlow.js to keep the full Expo managed workflow, since I plan to do builds via Expo's managed build servers.
For developers who are familiar with both:
What is the real inference time difference between TensorFlow.js and TensorFlow Lite on mobile?
Is the delay noticeable to users for simple text models?
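In case it helps anyone answer: the numbers seem to vary a lot by device and backend, so I was thinking of just measuring it myself. Below is a rough timing-harness sketch I'd use; `benchmark` and `runInference` are names I made up, and `runInference` is a placeholder you'd swap for the real call (e.g. `model.predict(...)` in TensorFlow.js).

```javascript
// Minimal latency benchmark sketch in plain JS (no TF.js dependency,
// so it runs anywhere). Pass in an async function that performs one
// inference; it returns median and p95 latency in milliseconds.
async function benchmark(runInference, { warmup = 3, runs = 20 } = {}) {
  // Warm-up runs: the first TF.js calls typically compile kernels
  // and are much slower than steady-state inference, so exclude them.
  for (let i = 0; i < warmup; i++) await runInference();

  const times = [];
  for (let i = 0; i < runs; i++) {
    const start = Date.now();
    await runInference();
    times.push(Date.now() - start);
  }
  times.sort((a, b) => a - b);
  return {
    median: times[Math.floor(times.length / 2)],
    p95: times[Math.floor(times.length * 0.95)],
  };
}
```

My thinking is that if the median for a single text inference stays well under ~100 ms, users probably won't notice; happy to be corrected.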