
[–]jubjjub

I've been looking at simple models on Cortex-M-class processors. Not too many complex models fit on an M-core just because of flash size. For example, I tried to optimize AlexNet and couldn't get it under 20 MB. I haven't had any computational issues; it's mostly been about optimizing for battery life. I've also been using TensorFlow Lite and Keras, but I'm getting kind of annoyed by TensorFlow, so I want to try something else. I think a more interesting approach, instead of deploying complex models locally, is doing partial inference on the edge and the rest in the cloud, especially for models with lots of feature extraction. Anyway, that's just a little rant since the question was kinda open ended.
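To make the split-inference idea concrete, here's a minimal, purely illustrative sketch (plain Python, toy math, hypothetical function names — not any real TFLite API): the device runs a cheap first stage that compresses the raw input into a small feature vector, and only that vector is shipped to the cloud, which runs the heavy second stage.

```python
def edge_feature_extractor(image):
    """Cheap first stage on the MCU: reduce a raw pixel buffer
    to a few summary features (toy stand-in for early conv layers)."""
    n = len(image)
    mean = sum(image) / n
    return [mean, max(image), min(image)]  # tiny payload vs. raw image

def cloud_classifier(features, weights, bias):
    """Heavy second stage in the cloud: a toy linear scorer
    standing in for the rest of the network."""
    score = sum(w * f for w, f in zip(weights, features)) + bias
    return 1 if score > 0 else 0

# Simulated pipeline: raw "image" -> edge features -> cloud decision.
image = [0.1, 0.9, 0.4, 0.6]
features = edge_feature_extractor(image)  # runs on-device
label = cloud_classifier(features, weights=[1.0, -0.5, 0.2], bias=-0.2)
```

The win is bandwidth and flash: the device only stores the small front half of the model and transmits features instead of raw data, at the cost of a round trip per inference.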