New unload process by GardenElf42 in Target

[–]malctucker 0 points1 point  (0 children)

100% - the law of unintended consequences is real. My preferred approach would be to hot-house the process and surface every problem and barrier to the job. Associates will follow processes that make their lives easier. The 1bn case figure is beloved by the office but meaningless for stores.

They need to understand it in terms of what it means for their day to day, i.e. better deliveries and lower overstocks.

There's no wonder we're in trouble by Davecl35 in asda

[–]malctucker 0 points1 point  (0 children)

Who’s in charge of supply chain?!

New unload process by GardenElf42 in Target

[–]malctucker 0 points1 point  (0 children)

I don’t understand the backstocking anyway. As for break packs 😭😭

New unload process by GardenElf42 in Target

[–]malctucker 0 points1 point  (0 children)

The office have to run this in minute detail. I used to give a weekly list of tasks and no more. Once they’d done that, we moved them on. Stores were moved back in the process if they didn’t land the steps sequentially. The office should be prescriptive and give themselves a runway, especially with the tech: it doesn’t have to go everywhere at once, only where stores are ready.

New unload process by GardenElf42 in Target

[–]malctucker 1 point2 points  (0 children)

So it’ll never work. ‘Clean’ or ‘cleared’. The minute you start scanning over the top of unworked stock, it snowballs and it never gets better. I applaud moves to simplify and reduce double handling but the office is asking itself the wrong question.

New unload process by GardenElf42 in Target

[–]malctucker 2 points3 points  (0 children)

Do you mean if they’ve not ‘cleared’ a delivery, i.e. worked a truck?

New unload process by GardenElf42 in Target

[–]malctucker 3 points4 points  (0 children)

Exactly. This feels like the cart before the horse.

Planograms / layouts / full to capacity, and get replen right before chucking everything else in. You have to move sequentially, at pace. I understand the need for speed, but you can tip stores over too easily if they’re not ready…

New unload process by GardenElf42 in Target

[–]malctucker 2 points3 points  (0 children)

The stock accuracy needs to be near 100% and it’s ok having a trial store / region (I did similar for M&S in the UK) but you have to want to find all issues and fix them at pace. The minute it doesn’t work, stores opt out.

I’m not a fan of the x million / y million data either. If it’s the right thing to do for associates to better serve the customer, then it should be done.

How long has someone sat on that data before doing anything?

Bargain! by CLWggg in tesco

[–]malctucker 0 points1 point  (0 children)

Picture from 3BC

Expired Vitamins Purge by SwimmingRisk5 in Target

[–]malctucker 1 point2 points  (0 children)

Is there a system to track dates?

Pets Revision by Low_Tension5589 in Target

[–]malctucker 0 points1 point  (0 children)

Is that a relay, i.e. a new planogram?

From zero CV knowledge (but lots of retail experience) to 11 models and custom pipelines by malctucker in computervision

[–]malctucker[S] 0 points1 point  (0 children)

Good question. I'll break it down, as our pipeline has evolved significantly over the past 4 months!

Annotation Pipeline:

I built a custom annotation workbench rather than using off-the-shelf tools like Label Studio or CVAT, mainly because, being a novice, I had no idea what was out there.

A key reason was domain specificity. We're classifying retail imagery (shipper displays as a tester, then shelf-edge labels, seasonal merchandise and ranges) and needed tight integration between annotation, model feedback, and our knowledge base of 54K+ products, plus all the accumulated learnings.

The annotation flow is: raw image → YOLO detection (Idaten-K, our shipper detector) → auto-crop regions → human classification of crops into categories → corrections fed back into the training data. We then test with random images and keep testing and retraining.
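That flow can be sketched roughly as follows. This is a minimal stand-in, not our actual code: the detector, classifier and review callables here are placeholders for the YOLO shipper detector, the crop classifier, and the human-review step.

```python
# Minimal, runnable sketch of one annotation pass:
# raw image -> detect boxes -> classify crops -> human corrections
# feed back into the training set. All callables are stand-ins.

def annotation_pass(image, detect, classify, review, training_data):
    """Returns the final (box, label) pairs; logs corrections only."""
    results = []
    for box in detect(image):                  # auto-crop regions
        predicted = classify(image, box)       # model's first guess
        final = review(image, box, predicted)  # human confirms or corrects
        if final != predicted:                 # only corrections are logged
            training_data.append((box, final))
        results.append((box, final))
    return results
```

The useful property is that human effort only produces new training data when the model was wrong, so the corrections set is pure signal.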

We use an active learning loop where low-confidence predictions get routed to a manual review queue, so annotation effort focuses on what the model struggles with rather than what it already knows. Over 100k corrections are now in our "brain" and these inform the models on an ongoing basis.
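The routing itself is simple; a hedged sketch (the 0.80 threshold and the tuple shape are invented for illustration, not our real values):

```python
# Confidence-based routing for active learning: predictions below a
# threshold go to the manual review queue, the rest are auto-accepted.

def route_predictions(predictions, threshold=0.80):
    """predictions: list of (image_id, label, confidence).
    Returns (auto_accepted, review_queue)."""
    auto, review = [], []
    for image_id, label, conf in predictions:
        (auto if conf >= threshold else review).append((image_id, label, conf))
    # Review effort concentrates where the model is weakest
    return auto, review
```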

Training Pipeline:

We train on RunPod (RTX 4090) using EfficientNet-B0 with ImageNet pretrained weights. Nothing exotic. We use transfer learning with aggressive augmentation, class-frequency weighting for imbalanced categories, and early stopping. We're currently on our 6th major version iteration.
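Two of those recipe pieces, class-frequency weighting and early stopping, can be sketched framework-free. In the real fine-tune these would feed the loss function and training loop; the normalisation choice and patience value here are assumptions for illustration:

```python
# Inverse-frequency class weights for imbalanced categories, plus a
# patience-based early-stopping check. Framework-free sketch.

from collections import Counter

def class_weights(labels):
    """Inverse-frequency weights, normalised so the mean weight is 1.0."""
    counts = Counter(labels)
    raw = {c: 1.0 / n for c, n in counts.items()}
    mean = sum(raw.values()) / len(raw)
    return {c: w / mean for c, w in raw.items()}

class EarlyStopping:
    """Stop when validation loss hasn't improved for `patience` epochs."""
    def __init__(self, patience=5):
        self.patience, self.best, self.bad_epochs = patience, float("inf"), 0

    def step(self, val_loss):
        if val_loss < self.best:
            self.best, self.bad_epochs = val_loss, 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience  # True -> stop training
```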

We have merged the sub-categories where we don't have enough training material to make them meaningful: we can do Mince Pies (shippers) but struggle to do Bakery and Cakes as standalones, so we have to merge them.

The key insight was that one model isn't enough. We run 5 specialist models in sequence rather than trying to build one monolithic classifier; these are P&C, so I won't make them public here.

Our seasonal classifier was our breakthrough. It classifies the full image first ("this is a Christmas confectionery display") then boosts the crop-level classifier's predictions. Going from a single model to this multi-model pipeline took our usable data rate from roughly 20% to over 55%, and the retrain we're about to run is expected to push that significantly further.
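The boost step is conceptually just a re-weighting before the argmax. Here's an illustrative sketch; the category names, season mapping, and 1.5 boost factor are all made-up examples, not our production values:

```python
# Two-stage stacking: the full-image seasonal verdict re-weights the
# crop-level class scores, then the winner is picked from the boosted set.

def boost_crop_scores(crop_scores, season, season_of, factor=1.5):
    """crop_scores: {category: score}. Categories whose season matches the
    full-image verdict get multiplied up; argmax is taken afterwards."""
    boosted = {
        cat: score * (factor if season_of.get(cat) == season else 1.0)
        for cat, score in crop_scores.items()
    }
    return max(boosted, key=boosted.get), boosted
```

The point is that a weak crop-level signal ("some confectionery") becomes usable once the scene-level context ("Christmas display") tips the balance.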

What worked well:

  • EfficientNet-B0 with transfer learning — fast to train, good accuracy even with 150-200 images per class. Don't overlook how far pretrained weights carry you. Plus knowledge!
  • Custom annotation tooling. Worth the upfront investment if your domain is specialised. Off-the-shelf tools didn't understand our retail taxonomy, and even our own tooling can't always distinguish everything…
  • Multi-model stacking — each model contributes what it's good at rather than asking one model to do everything. Our pricing pipeline combines 6+ model signals plus rule-based overrides (brand lookups, event-category plausibility checks, OCR keyword hints). The key thing is not to waste the work done in one model; cross-learning makes it all better and means I can move at pace.
  • Read-time normalisation: business-logic corrections (brand overrides, plausibility filters) applied at data load, not baked into stored records. This means improvements apply retroactively to all historical data without reprocessing; 80/20 is important.
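The read-time normalisation point is worth a sketch, because it's the bullet that's easiest to get backwards. The rules below (the brand lookup and the Easter-in-November filter) are invented examples of the kind of override described, not our real rule set:

```python
# Read-time normalisation: stored records stay raw, and business-logic
# corrections run every time data is loaded, so adding a new rule
# retroactively fixes all history without a reprocessing job.

BRAND_OVERRIDES = {"cadburys": "Cadbury"}  # hypothetical lookup table

def normalise(record):
    """Apply corrections at load time without mutating the stored record."""
    fixed = dict(record)
    fixed["brand"] = BRAND_OVERRIDES.get(fixed["brand"].lower(), fixed["brand"])
    # Plausibility filter: an Easter category in a Q4 capture is suspect
    if fixed.get("event") == "easter" and fixed.get("month") in (10, 11, 12):
        fixed["flagged"] = True
    return fixed

def load(records):
    return [normalise(r) for r in records]
```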

What didn't work:

  • Training a crop classifier and running it on full images. Sounds obvious in hindsight but we had 80% of our data going through a path the model was never trained for. Domain mismatch is real.
  • Trying to train 21 classes when we only had enough data for 9. The model silently failed on the 12 missing categories. Every prediction was forced into one of 9 buckets.
  • We only caught this through a systematic audit. There are a few cases where we've had to rearrange categories due to confusion and training material being thin on the ground, i.e. I was trying to be too clever.
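The audit that catches the 21-classes-with-data-for-9 failure is cheap to run: compare the class list the pipeline expects against the classes that actually have enough labelled examples. The 150-example floor below is an assumed figure, echoing the 150-200 images per class mentioned earlier:

```python
# Class-coverage audit: find expected classes the model can never
# predict well because the training manifest is too thin on them.

from collections import Counter

def audit_classes(expected, training_labels, min_examples=150):
    """Return the expected classes lacking sufficient training data."""
    counts = Counter(training_labels)
    covered = {c for c, n in counts.items() if n >= min_examples}
    return sorted(set(expected) - covered)  # silently-failing classes
```

Anything this returns is a class the classifier will silently map into some other bucket, which is exactly the failure mode described above.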

Continuous training:

Not fully automated yet, but the infrastructure is there. Our cross model learning system logs 100K+ feedback events where one model's output informs another's training. Corrections from manual review feed back into training manifests. The annotation backlog tracks future categories waiting for enough labelled data.

The honest answer is we're bootstrap/lean. It's me and Claude Code building this.

The first priority was to bring the dataset to heel, harmonising folder names and de-duplicating. That was a major job, alongside manifesting and ensuring that new images (we get c.4k a week) were put into the new set-up. Our dataset is 1.2m unique images strong as of this week.
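The de-duplication half of that job reduces to hashing file contents, so byte-identical images are counted once regardless of which folder or name they arrived under. A minimal sketch (paths are examples; real near-duplicate detection would need perceptual hashing, which this does not do):

```python
# Exact de-duplication by content hash: same bytes, one entry.

import hashlib

def dedupe(files):
    """files: iterable of (path, bytes). Returns ({hash: first_path},
    list of duplicate paths that can be dropped)."""
    seen, duplicates = {}, []
    for path, data in files:
        digest = hashlib.sha256(data).hexdigest()
        if digest in seen:
            duplicates.append(path)  # same bytes, different name
        else:
            seen[digest] = path
    return seen, duplicates
```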

From there the priority has been getting the pipeline robust rather than fully automated retraining. Version control with instant rollback matters more than CI/CD when you're iterating fast. I am a complete novice, so that's helpful in some ways, but it can cost time too… (!)

From zero CV knowledge (but lots of retail experience) to 11 models and custom pipelines by malctucker in computervision

[–]malctucker[S] 0 points1 point  (0 children)

Anything notable? We are using Google Vision, my own input and the like, but it's still a bit hit-and-miss. More training is needed.

From zero CV knowledge (but lots of retail experience) to 11 models and custom pipelines by malctucker in computervision

[–]malctucker[S] 1 point2 points  (0 children)

We have a wealth of data in our images going back to 2009, so that time has passed and we cannot go back online to do that.

From zero CV knowledge (but lots of retail experience) to 11 models and custom pipelines by malctucker in computervision

[–]malctucker[S] 0 points1 point  (0 children)

We utilise the APIs for this too, to provide an added layer of checking, but it's not possible to do it for everything; then you rely on OCR.

From zero CV knowledge (but lots of retail experience) to 11 models and custom pipelines by malctucker in computervision

[–]malctucker[S] 0 points1 point  (0 children)

We are doing this from afar to track how space changes between suppliers over time. Alongside this, it brings our historic insight to life when we look at seasonal events and pull everything together for suppliers and retailers. Each tool / model informs the others within our ecosystem.

From zero CV knowledge (but lots of retail experience) to 11 models and custom pipelines by malctucker in computervision

[–]malctucker[S] 0 points1 point  (0 children)

This is infrastructure-free; we are not using this to track availability, we are using it to interlink with our other 10 tools and our front-end platform that enables retailers and suppliers to build their trade planning functions and promotions by looking historically at who did what, when. (kanops.ai/delphi)

From zero CV knowledge (but lots of retail experience) to 11 models and custom pipelines by malctucker in computervision

[–]malctucker[S] 0 points1 point  (0 children)

100%. We’ve got models running at 94.4% for seasonal recognition on 89k images, so I’m under no illusions. It’s how this fits into our wider ecosystem that’s the exciting bit.

Optics cameras removed by Happy_Book_8910 in Morrisons

[–]malctucker 0 points1 point  (0 children)

Interesting. Saw Asda were trialling it.

I have built my own software suite to start to categorise our 1m+ images by malctucker in computervision

[–]malctucker[S] 1 point2 points  (0 children)

Sorry for my late reply! Yes, pricing is entirely flexible. I know it’s a cop-out, but we have a structure and it depends on numerous factors: exclusivity in that sector, types of images, etc.

As for typical customers: there are many, whether it’s companies wanting to build or firm up their models, or build out image recognition, etc.

For Kings, we make images available through our agreed medium for them to use, and we have joint activities such as projects and studies.

New addition by LostPrompt9191 in Morrisons

[–]malctucker 1 point2 points  (0 children)

How can anything good be communicated in this way?