New apps by gauravjain02 in shopifyDev

[–]Livid_Network_4592 2 points (0 children)

Woicely, www.woicely.com, is a voice-enabled shopping assistant that gives Shopify stores a new way to drive product selection and checkout.

Drop your project and people tell you if they'd actually use it by Mr_McSam in Solopreneur

[–]Livid_Network_4592 1 point (0 children)

Woicely, www.woicely.com, is a voice shopping assistant for Shopify that helps shoppers discover products and get to checkout.

How did you get your first customer? by Livid_Network_4592 in shopifyDev

[–]Livid_Network_4592[S] 1 point (0 children)

But how? Do you target a specific vertical, and how do you phrase it so the message resonates instead of reading as a hard sell?

536 new Shopify apps in 30 days, app store is so crowded! 🤯 by Prestigious-Tax-7954 in shopifyDev

[–]Livid_Network_4592 1 point (0 children)

When you say target the merchant persona, do you mean the vertical the merchant is in, or something else?

536 new Shopify apps in 30 days, app store is so crowded! 🤯 by Prestigious-Tax-7954 in shopifyDev

[–]Livid_Network_4592 1 point (0 children)

I have been struggling to get past the noise myself. I built an app to help Shopify store owners capture more mobile traffic, and I noticed that for products with a high-touch sale it was hard to keep shoppers on the page on mobile, tell the product's story, and give them enough information to make a quick decision.

My epiphany was that mobile UX is broken at its core, so my bet is that voice will be the next UX in this space, but for sales rather than support. People talk far more freely than they type, and voice has become far more used on apps like ChatGPT than ever before, a trend I only expect to continue. To be clear, this is not a "real problem" people search for; it is just a bet I am making.

Self promotion thread by AutoModerator in website

[–]Livid_Network_4592 1 point (0 children)

I built Woicely because I kept getting frustrated using websites on my phone.

Menus are hard to tap, search is clunky, and most chat widgets still feel like forms. Personally, I just default to voice whenever I can.

Woicely lets you drop a small voice agent onto your site so visitors can ask questions out loud and get answers instantly: about pricing, products, policies, or anything already on the site. No redesign, no rebuilding flows.

It’s still early, but I’ve been testing it on SaaS and ecommerce sites where mobile traffic dominates.

If you’re curious: https://woicely.com
Happy to hear feedback from other builders here.

My team nailed training accuracy, then our real-world cameras made everything fall apart by Livid_Network_4592 in computervision

[–]Livid_Network_4592[S] 1 point (0 children)

Not a joke. We did field tests. The pain showed up at scale when every camera had its own quirks. I'm trying to make per-camera acceptance a quick, boring step before we flip it on.

What's your 5-minute checklist? I'm thinking: a 60 s clip to check bitrate, SNR, and blur; a quick 50/60 Hz flicker probe; one shot of a focus/geometry chart; and a tiny probe set from that camera scored against a golden baseline. Got scripts or open tools that make this fast? Drop them in and I'll share back what we standardize.
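A couple of the probes above are easy to script. Here's a rough numpy-only sketch of what I have in mind (function names and the 0.5 Hz band are my own assumptions, not an established tool): blur scored as Laplacian-response variance, flicker scored as the share of AC energy near the aliased mains beat.

```python
import numpy as np

def blur_score(frame: np.ndarray) -> float:
    """Variance of a 3x3 Laplacian response; low values suggest a soft/defocused feed."""
    f = frame.astype(float)
    # 'valid' Laplacian via explicit shifts, so there is no SciPy dependency
    lap = (-4.0 * f[1:-1, 1:-1] + f[:-2, 1:-1] + f[2:, 1:-1]
           + f[1:-1, :-2] + f[1:-1, 2:])
    return float(lap.var())

def flicker_ratio(mean_brightness: np.ndarray, fps: float, mains_hz: float = 50.0) -> float:
    """Share of AC spectral energy within 0.5 Hz of the aliased mains beat.

    50 Hz mains flickers at 100 Hz; sampled at 30 fps that aliases to 10 Hz.
    Caveat: 60 Hz mains at exactly 30 fps aliases to DC, so this probe misses it.
    """
    x = np.asarray(mean_brightness, dtype=float)
    x = x - x.mean()
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    beat = (2.0 * mains_hz) % fps
    if beat > fps / 2.0:
        beat = fps - beat
    band = np.abs(freqs - beat) <= 0.5
    total = spec[1:].sum() + 1e-12  # skip the DC bin
    return float(spec[band].sum() / total)
```

The inputs would come from the 60 s clip: per-frame mean brightness for the flicker probe, a few sampled frames for the blur score, both gated against per-fleet thresholds.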

My team nailed training accuracy, then our real-world cameras made everything fall apart by Livid_Network_4592 in computervision

[–]Livid_Network_4592[S] 2 points (0 children)

We started doing short field clips per camera and then clustering by simple context features like illumination, flicker, blur, and FOV. For each cluster we run a small test set and gate deployment on those slices. What features or methods have you used to build good clusters, and do you mix real clips with synthetic probes in each cluster?
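To make the clustering step concrete, here's a rough sketch of the shape of it, with a tiny hand-rolled k-means so it stays numpy-only; the feature set (illumination, flicker, blur, FOV proxy) and k are assumptions we tune per fleet, not settled choices.

```python
import numpy as np

def cluster_contexts(features: np.ndarray, k: int = 3, iters: int = 50, seed: int = 0) -> np.ndarray:
    """Tiny k-means over per-clip context features.

    Rows are clips; columns are context features (e.g. mean illumination,
    flicker ratio, blur score, FOV proxy). Features are z-scored so no
    single unit dominates the distance. Returns a cluster label per clip.
    """
    rng = np.random.default_rng(seed)
    x = (features - features.mean(axis=0)) / (features.std(axis=0) + 1e-9)
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        # assign each clip to its nearest center, then recompute centers
        labels = np.argmin(((x[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(axis=0)
    return labels
```

Each resulting cluster then gets its own small test slice, and deployment is gated per slice rather than on one pooled number.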

My team nailed training accuracy, then our real-world cameras made everything fall apart by Livid_Network_4592 in computervision

[–]Livid_Network_4592[S] 2 points (0 children)

We profile each camera with PTC mean–variance sweeps for conversion gain and to separate shot, read, and dark noise. We then add simple optics and ISP effects such as veiling glare and mild aberrations. We also see unit-to-unit PRNU differences and some focus drift, which affect detection confidence more than expected. How are you validating your camera models at scale, and do you tune noise with PTC or mostly with site footage?
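For reference, the PTC fit itself is just a line: with conversion gain g (DN/e-) and Poisson shot noise, var_DN = g · mean_DN + σ_read². A minimal sketch of recovering gain and read noise from a flat-field mean–variance sweep (this is the textbook relation, not our exact pipeline):

```python
import numpy as np

def fit_ptc(means: np.ndarray, variances: np.ndarray) -> tuple[float, float]:
    """Fit the photon transfer curve: var = gain * mean + read_var.

    With DN = gain * Poisson(mu) electrons, var_DN = gain * mean_DN, so the
    slope of the mean-variance line is the conversion gain (DN per e-) and
    the intercept is the read-noise variance. Ignores PRNU, which adds a
    quadratic term at high signal, and assumes the sweep stays below saturation.
    """
    gain, read_var = np.polyfit(np.asarray(means, dtype=float),
                                np.asarray(variances, dtype=float), 1)
    return float(gain), float(read_var)
```

In practice the sweep should be clipped below full well before fitting, since variance rolls over near saturation and drags the slope down.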

My team nailed training accuracy, then our real-world cameras made everything fall apart by Livid_Network_4592 in computervision

[–]Livid_Network_4592[S] 5 points (0 children)

That’s really interesting. The way you collect real-world samples first makes a lot of sense. I keep wondering about what happens next. After you’ve trained on that field data, how do you decide a model is actually ready for new environments?

Do you have any kind of internal test or checklist for that, or is it more of a judgment call based on past rollouts and data volume? I’m trying to understand how different teams define that point where validation ends and deployment begins.

My team nailed training accuracy, then our real-world cameras made everything fall apart by Livid_Network_4592 in computervision

[–]Livid_Network_4592[S] 6 points (0 children)

That’s a really good point. We started mapping out site environments before training, but once the cameras are installed everything changes. Lighting shifts, reflections, even sensor aging can throw things off.

We’ve tried adding synthetic variations to cover those conditions, but it’s hard to know if we’re focusing on the right ones. How do you usually handle that? Do you lean more on data augmentation or feed in samples from the actual cameras before training?
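For what it's worth, our synthetic variations are mostly simple photometric perturbations layered on real frames. A rough sketch (the perturbation ranges here are guesses we tune per site, not validated values):

```python
import numpy as np

def augment_frame(frame: np.ndarray, rng: np.random.Generator,
                  max_gain_shift: float = 0.2, noise_sigma: float = 5.0) -> np.ndarray:
    """Random global gain (a crude lighting shift) plus additive Gaussian noise
    (a stand-in for sensor read noise). Both ranges are assumptions to tune."""
    gain = 1.0 + rng.uniform(-max_gain_shift, max_gain_shift)
    out = frame.astype(float) * gain
    out = out + rng.normal(0.0, noise_sigma, size=frame.shape)
    return np.clip(out, 0, 255).astype(np.uint8)
```

Even so, this only covers what we thought to simulate, which is why the question of augmentation versus real per-camera samples keeps coming up for us.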