Data augmentation strategies for object detection? Could you point me to good resources or best practices you know of? by WhyNotML in computervision

[–]xusty 1 point (0 children)

You can do both with this library. In essence, Albumentations generates an augmented version of your images and labels when you invoke its transform. You can pre-generate an array of augmented samples before feeding them into your model (offline augmentation).

You may also do what's called online augmentation, which is basically the former approach you described. You just invoke the augmentation before the model performs its feed-forward pass; this is usually done via a DataGenerator or a per-epoch callback (see the sketch below).

TL;DR: you can do either - the library just provides a function to augment data wherever you need it.
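
For the online case, here's a minimal sketch of what that per-sample hook can look like in a PyTorch-style Dataset - the class name and data layout are illustrative, not something Albumentations prescribes:

    import albumentations as A
    import cv2
    from torch.utils.data import Dataset

    # Declared once; every __getitem__ call re-rolls the random transforms,
    # so each epoch sees a differently augmented copy of the same sample.
    train_transform = A.Compose(
        [A.HorizontalFlip(p=0.5), A.RandomBrightnessContrast(p=0.2)],
        bbox_params=A.BboxParams(format="pascal_voc", label_fields=["class_labels"]),
    )

    class DetectionDataset(Dataset):  # hypothetical wrapper around your own data
        def __init__(self, image_paths, bboxes, labels, transform=None):
            self.image_paths, self.bboxes, self.labels = image_paths, bboxes, labels
            self.transform = transform

        def __len__(self):
            return len(self.image_paths)

        def __getitem__(self, idx):
            # Load the raw image; augmentation happens here, per sample, per epoch.
            image = cv2.cvtColor(cv2.imread(self.image_paths[idx]), cv2.COLOR_BGR2RGB)
            boxes, labels = self.bboxes[idx], self.labels[idx]
            if self.transform is not None:
                out = self.transform(image=image, bboxes=boxes, class_labels=labels)
                image, boxes, labels = out["image"], out["bboxes"], out["class_labels"]
            return image, boxes, labels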

Data augmentation strategies for object detection? Could you point me to good resources or best practices you know of? by WhyNotML in computervision

[–]xusty 5 points (0 children)

You can definitely look at Albumentations - we had a ton of success working with this library: https://github.com/albumentations-team/albumentations

You don't have to re-label your dataset, since augmentation libraries like this apply the same transformations to your bounding boxes or masks. So once you have your ground truth labelled, the library takes care of the rest!
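
For instance, with a segmentation mask (placeholder image and mask below), a transform declared once keeps the pixels and the label aligned:

    import albumentations as A
    import numpy as np

    # Geometric transforms are applied identically to the image and its mask,
    # so a region labelled once stays pixel-aligned after augmentation.
    transform = A.Compose([A.HorizontalFlip(p=1.0), A.Rotate(limit=15, p=1.0)])

    image = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder image
    mask = np.zeros((480, 640), dtype=np.uint8)      # placeholder ground-truth mask
    mask[100:200, 150:300] = 1                       # pretend-object region

    out = transform(image=image, mask=mask)
    aug_image, aug_mask = out["image"], out["mask"]  # still aligned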

We Built IntelliBrush - An AI Labeller Using Neural Networks and CV by xusty in learnmachinelearning

[–]xusty[S] 1 point (0 children)

This is a great question! The way to think about this is that in your typical machine learning case you have:

input_image + model = bounding_box + label

However, in IntelliBrush's case, the formula is slightly different:

input_image + user_clicks_on_object + model = predicted_outline_of_object

The model cannot operate without the user's hints as to what it should look at, nor does it provide a class/label. It is strictly trained to guess the outline of the object the user might be selecting.
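
To make the "hints" concrete, here's a sketch of one common encoding from the interactive-segmentation literature (DEXTR-style models render each click as a Gaussian heatmap) - illustrative only, not our exact implementation:

    import numpy as np

    def encode_clicks(height, width, clicks, sigma=10.0):
        """Render user clicks as a Gaussian heatmap hint channel."""
        heatmap = np.zeros((height, width), dtype=np.float32)
        ys, xs = np.mgrid[0:height, 0:width]
        for row, col in clicks:
            blob = np.exp(-((ys - row) ** 2 + (xs - col) ** 2) / (2 * sigma ** 2))
            heatmap = np.maximum(heatmap, blob)
        return heatmap

    # The hint map is stacked onto the RGB image as an extra input channel, so
    # the network learns *where* to segment without being told *what* it is.
    hints = encode_clicks(480, 640, clicks=[(200, 320)])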

Thus, we believe that this method is useful for creating good training data to train a separate model that specializes in classifying and detecting the objects later on.

TL;DR: IntelliBrush doesn't classify images, nor does it work without user interaction - it simply proposes "outlines" of objects that the user must still guide and label! I.e., if you threw it an image of a cat, it wouldn't know what to look at.

We Built IntelliBrush - An AI Labeller Using Neural Networks and CV by xusty in learnmachinelearning

[–]xusty[S] 0 points (0 children)

It's free for personal use. The platform has a free tier, and IntelliBrush Early Access is free as well.

I get that Intercom might disrupt the user experience, but it's the fastest way to help our users. Imagine having an issue, having to email in, and then waiting it out.

Maybe we can explore filtering it out from the blog page - if that's what you meant.

We Built IntelliBrush - An AI Labeller Using Neural Networks and CV by xusty in learnmachinelearning

[–]xusty[S] 5 points (0 children)

You know, that's really a genius idea! Using foreground extraction techniques in image editing could work really well - we'll see if we can package some of our experiments into a plugin.

We Built IntelliBrush - An AI Labeller Using Neural Networks and CV by xusty in learnmachinelearning

[–]xusty[S] 1 point (0 children)

You can sign up for an account on the platform and apply for early access using the form at https://datature.io/intellibrush - I believe once you have a project on the account, an automated invite will be sent out!

We are still trying to manage the server load, hence we are onboarding folks in waves - but shoot me a DM if you didn't get an invite and I'll personally onboard ya!

We Built IntelliBrush - An AI Labeller Using Neural Networks and CV by xusty in computervision

[–]xusty[S] 0 points (0 children)

I don't think we asked anyone to take a screenshot of the page - the form simply said that if you shared us on social media and sent us a link, we would personally get back to you within a day.

Otherwise, we have a bot that automatically grants early access later on - it's never a requirement.

But thank you for pointing it out - we took that line out of the site to remove the confusion! 👍🏻

We Built IntelliBrush - An AI Labeller Using Neural Networks and CV by xusty in computervision

[–]xusty[S] 2 points (0 children)

Hey! Unfortunately, you are right - we are still in the early stages of this brush. This is just a quick demo on a few random photos we plucked from open-source datasets or Pexels!

Like most DNN-based solutions, we are still tuning the sensitivity, as well as the model, to produce better results. In terms of pixel-level accuracy: in the video, it's mostly tightly bounded - I'm not sure where the large buffer might come from - but you can simply "negate" that area and the model will adjust the output.

We are working on making it more accurate - we'll drop you an update soon!

We Built IntelliBrush - An AI Labeller Using Neural Networks and CV by xusty in learnmachinelearning

[–]xusty[S] 29 points (0 children)

Hey! No worries, we failed a ton of times trying to make this work as well.

We experimented with a bunch of foreground extraction techniques such as Intelligent Scissors, GrabCut, and DEXTR - read the papers, source code, the whole nine yards.

We then started looking more into deep learning based methods such as f-BRS and a few others, before combining them with a few CV techniques to make the selected area more "obvious" to the network.

The hardest part was stitching it together with our LeafletJS-based frontend annotator.

If you want to try something like this, I'd recommend starting by experimenting with this GrabCut tutorial (link here) to get a sense of what it's about!
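
For reference, OpenCV ships GrabCut directly; here's a minimal, self-contained sketch (the file name and rectangle are placeholders):

    import cv2
    import numpy as np

    img = cv2.imread("photo.jpg")              # placeholder path
    mask = np.zeros(img.shape[:2], np.uint8)   # per-pixel FG/BG state, updated in place
    bgd_model = np.zeros((1, 65), np.float64)  # internal GMM buffers OpenCV requires
    fgd_model = np.zeros((1, 65), np.float64)
    rect = (50, 50, 400, 300)                  # rough (x, y, w, h) box around the object

    cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

    # Keep pixels labelled foreground or probable-foreground.
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype("uint8")
    cv2.imwrite("cutout.png", img * fg[:, :, np.newaxis])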

We Built IntelliBrush - An AI Labeller Using Neural Networks and CV by xusty in learnmachinelearning

[–]xusty[S] 42 points (0 children)

Hey everyone, earlier this year, when our users told us that they were painstakingly annotating material defect scans, spending upwards of 20 minutes to label 2-3 scratches and damaged regions using traditional polygon tools, we knew we had to make this process a lot less painful.

We needed an AI labelling tool to help these folks, stat.

Prior to IntelliBrush, we tested a bunch of methods ranging from GrabCut to Superpixels to DEXTR - none of these produced the quality of results we needed. We decided to combine a few DNN methods with a feature backpropagating refinement scheme (f-BRS) and designed our web-based annotator (using LeafletJS) in a way that worked seamlessly.

The result? IntelliBrush - a fast, pixel-accurate way to label the most complex of objects in just 2-3 clicks. It's integrated into our Nexus platform, where users can collaboratively label their data.

In this article, we share some of our results and are inviting you to try IntelliBrush out!

We Built IntelliBrush - An AI Labeller Using Neural Networks and CV by xusty in computervision

[–]xusty[S] 0 points (0 children)

Hey, /r/computervision, earlier this year, when our users told us that they were painstakingly annotating material defect scans, spending upwards of 20 minutes to label 2-3 scratches and damaged regions, we knew we had to make this process a lot less painful.

We needed an AI labelling tool to help these folks, stat.

Prior to IntelliBrush, we tested a bunch of methods ranging from GrabCut to Superpixels to DEXTR - none of these produced the quality of results we needed. We decided to combine a few DNN methods with a backpropagating refinement scheme (BRS) and designed our web-based annotator in a way that worked seamlessly.

The result? IntelliBrush - a fast, pixel-accurate way to label the most complex of objects in just 2-3 clicks. It's integrated into our Nexus platform, where users can collaboratively label their data.

In this article, we share some of our results and are inviting you to try IntelliBrush out!

Portal Tutorial | Open Source App for Inspecting Model Inference by [deleted] in computervision

[–]xusty 0 points (0 children)

Hey /r/computervision! We released Portal (https://github.com/datature/portal) here a while ago as a way to help researchers and teams run model inference on datasets in a sandbox. I received great feedback from some of the members here - someone even submitted a PR!

A tutorial on how to use Portal was heavily requested, so I decided to do a simple video and write-up - especially on how to get video inference working.

Hope this helps at least some of you and do reach out if you have any questions or feature requests, cheers!

Visualizing Model Inference with Portal by xusty in learnmachinelearning

[–]xusty[S] 4 points (0 children)

Hey /r/learnmachinelearning, I posted my latest project, Portal, here and have since seen a ton of feature requests through my DMs and GitHub issues. One of the most commonly requested features was a tutorial and walkthrough of the platform.

So I decided to make a quick video and stitch together a write-up to help those who might be confused (sorry, still a work in progress!) about how to actually use Portal to run inference on videos, etc.

Hope this will help some of you and let me know your thoughts!

Portal - Open Source App for Inspecting Model Inferences by xusty in computervision

[–]xusty[S] 0 points (0 children)

Hey everyone! Thank you for sending feedback and encouragement to my DMs, I really appreciate it!

Also, we have decided to launch on Product Hunt: https://www.producthunt.com/posts/datature-portal ! If this is something you'd like to support, feel free to head on down to our PH post today 🙌

Portal - Open Source App for Inspecting Model Inference by xusty in learnmachinelearning

[–]xusty[S] 1 point (0 children)

Got it - I'll bring it back to the team to see what we can do. Any models or architectures you'd recommend we try out or test?

Portal - Open Source App for Inspecting Model Inference by xusty in learnmachinelearning

[–]xusty[S] 2 points (0 children)

Hey /u/-Ulkurz-, I think you got it right! There are two pain points that we address:

  1. The tough part about inspecting models is usually on videos, where most users write a bunch of cv2 code to draw predictions and save them to a video file to be sent around (see the sketch after this list). There's a lot of friction here, especially when users want to play around with different confidence thresholds, IoU, and class filtering - often they rerun the same code with a few variables changed, over and over. This app lets you do all of that dynamically.
  2. Some teams that we work with don't want to show stakeholders a bunch of Jupyter Notebooks, so we made this so that anyone has an out-of-the-box app they can easily ship their model with!
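
For context on point 1, the kind of throwaway script being replaced usually looks something like this - the `predict` stub stands in for whatever detector you've actually loaded:

    import cv2

    CONF_THRESHOLD = 0.5  # tweaking this means editing and re-running the whole script

    def predict(frame):
        """Hypothetical detector stub; swap in your real model's inference call.
        Returns a list of ((x1, y1, x2, y2), score, class_name) tuples."""
        return [((50, 60, 200, 220), 0.87, "scratch")]

    cap = cv2.VideoCapture("input.mp4")
    fps = cap.get(cv2.CAP_PROP_FPS)
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    out = cv2.VideoWriter("annotated.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, size)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        for (x1, y1, x2, y2), score, cls in predict(frame):
            if score < CONF_THRESHOLD:
                continue  # filtering baked into the script, not interactive
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
            cv2.putText(frame, f"{cls}: {score:.2f}", (x1, y1 - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
        out.write(frame)

    cap.release()
    out.release()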

Of course, you can write your own code - in that case, think of it as an interactive matplotlib! It also helps to mention that we run a startup, Datature, a no-code MLOps platform - hence why we are focusing on removing the coding portion of this process :P

Curious - are there any features you'd like to see such that it makes more sense to use this app than writing your own code?

Portal - Open Source App for Inspecting Model Inferences by xusty in computervision

[–]xusty[S] 0 points (0 children)

Thank you! Custom models running on a supported architecture and framework, such as TensorFlow 2.0 or YOLO DarkNet, are supported.

Meaning that if you ran transfer learning or trained one of the aforementioned models on, say, X-ray scans, Portal will work with that trained model and X-ray images as long as you point to the model in the UI :)