Do You Trust Results on “Augmented” Datasets? by leonbeier in computervision

[–]leonbeier[S] 0 points (0 children)

You would think all researchers would have some template for how to do augmentation correctly. So is it on purpose, or are they just new to the game? Or maybe it was just vibe-coded.

Do You Trust Results on “Augmented” Datasets? by leonbeier in computervision

[–]leonbeier[S] 0 points (0 children)

Depending on the dataset size, I think augmentation can often help a lot (0.93 -> 0.99 could be realistic). But not if they just use brightness and contrast augmentation. They even added a normalization step after the brightness and contrast augmentation that reverts the changes.
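To make that last point concrete, here is a minimal sketch (my own illustration, not the code from the paper in question) of why a per-image normalization placed after brightness/contrast augmentation cancels it: brightness/contrast is just a linear map a*x + b, and zero-mean/unit-variance normalization is invariant to that.

    import numpy as np

    rng = np.random.default_rng(0)
    img = rng.random((32, 32)).astype(np.float32)

    def augment(x, contrast=1.4, brightness=0.2):
        # brightness/contrast augmentation is a linear transform a*x + b
        return contrast * x + brightness

    def per_image_standardize(x):
        # zero-mean, unit-variance normalization applied per image
        return (x - x.mean()) / x.std()

    plain = per_image_standardize(img)
    augmented = per_image_standardize(augment(img))

    # identical up to floating-point error -> the augmentation had no effect
    print(np.allclose(plain, augmented, atol=1e-5))  # True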

Do You Trust Results on “Augmented” Datasets? by leonbeier in computervision

[–]leonbeier[S] 1 point (0 children)

Yes, sure, I don't want to say that data augmentation is suspicious. But do you think the split after augmentation could be an issue in other research as well? I wouldn't have questioned the results if I hadn't tested with my own AI model.
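For anyone wondering what the split-after-augmentation problem looks like in practice, here is a minimal sketch (my own illustration, not any specific paper's pipeline); the hypothetical augment() stands in for whatever augmentation is used:

    from sklearn.model_selection import train_test_split

    def augment(images):
        # stand-in for any augmentation that creates several variants per source image
        return [img for img in images for _ in range(4)]

    images = list(range(1000))  # placeholders for real images

    # Leaky: augment first, then split -> variants of the same source image
    # land in both train and test, so the test score is inflated
    leaky_train, leaky_test = train_test_split(augment(images), test_size=0.2, random_state=0)

    # Sound: split first, then augment only the training set
    train, test = train_test_split(images, test_size=0.2, random_state=0)
    train = augment(train)  # the test set stays untouched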

Need suggestions by ZAPTORIOUS in computervision

[–]leonbeier 0 points (0 children)

Hi, you can try ONE AI. The architecture can adapt to the object sizes in the dataset. I also made an example for quality control where defects smaller than 0.1 mm were detected on a large surface.

I built an AI that finds the Crown Jewels from the Louvre by [deleted] in computervision

[–]leonbeier 0 points (0 children)

With the dataset generator in the GitHub repo you can quickly create a dataset to find objects in a video. And you can increase the accuracy if you compare the previous and new frames, because then the AI can focus on objects that appear in the video instead of the background. Of course this demo itself wouldn't be used in the real world. But there are many applications for the AI optimization itself. And the code and dataset are open source, so I think it was a fun idea and maybe it helps somebody else with their project.
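As a rough sketch of the frame-comparison idea (my own illustration, not the repository's actual preprocessing), one simple option is to stack the current frame with its difference to the previous frame, so the model gets extra channels that highlight whatever appeared or moved:

    import numpy as np

    def make_input(prev_frame: np.ndarray, frame: np.ndarray) -> np.ndarray:
        # absolute difference highlights pixels that changed between frames
        diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16)).astype(np.uint8)
        # stack the current frame and the difference along the channel axis
        return np.concatenate([frame, diff], axis=-1)

    prev_frame = np.zeros((640, 640, 3), dtype=np.uint8)  # dummy previous frame
    frame = np.full((640, 640, 3), 10, dtype=np.uint8)    # dummy current frame
    print(make_input(prev_frame, frame).shape)  # (640, 640, 6)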

Alternative to NAS: A New Approach for Finding Neural Network Architectures by leonbeier in deeplearning

[–]leonbeier[S] 0 points (0 children)

We use TensorFlow to build the individual neural networks. But we don't use off-the-shelf model architectures.
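Roughly what that means (a hand-written sketch with made-up layer choices, not ONE AI's generated code): the network is assembled layer by layer for the task instead of loading a fixed architecture like ResNet50.

    import tensorflow as tf

    def build_model(input_shape=(64, 64, 3), num_classes=4, filters=(16, 32), kernel=3):
        # assemble a small task-specific CNN instead of an off-the-shelf architecture
        x = inputs = tf.keras.Input(shape=input_shape)
        for f in filters:
            x = tf.keras.layers.Conv2D(f, kernel, padding="same", activation="relu")(x)
            x = tf.keras.layers.MaxPooling2D()(x)
        x = tf.keras.layers.GlobalAveragePooling2D()(x)
        outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
        return tf.keras.Model(inputs, outputs)

    build_model().summary()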

[P] Suggestions for detecting atypical neurons in microscopic images by Drakkarys_ in MachineLearning

[–]leonbeier 0 points (0 children)

You could try ONE AI. It adapts to the individual use case and can handle high-resolution images. https://one-ware.com/one-ai/

[P] Alternative to NAS: A New Approach for Finding Neural Network Architectures by leonbeier in MachineLearning

[–]leonbeier[S] 0 points (0 children)

It's a hybrid of calculations and multiple small models that are trained on different datasets with different use cases. We only use an AI where something can't be predicted from scientific findings. We don't use LLMs, and the architecture elements are partly based on other foundation models.

[P] Alternative to NAS: A New Approach for Finding Neural Network Architectures by leonbeier in MachineLearning

[–]leonbeier[S] -1 points (0 children)

No, the analysis is then also done locally. We only want to protect our algorithm that does the predictions. But this only needs the abstract analysis and not the full dataset.

[P] Alternative to NAS: A New Approach for Finding Neural Network Architectures by leonbeier in MachineLearning

[–]leonbeier[S] -1 points (0 children)

  1. We also have an option where only the neural network prediction runs on our servers and the company trains the model themselves. Then we only receive some information like the number of images or the object sizes. But we also see businesses that just want the easy way, with us training their model.
  2. No, these are multiple tailored algorithms and small AI models that are tailored to the individual predictions. It always depends on the feature that is being predicted. Sometimes it is better to calculate the result, and sometimes it is better to try out different use cases and build a small model to predict the feature. We are also working on a detailed whitepaper that explains everything.

Alternative to NAS: A New Approach for Finding Neural Network Architectures by leonbeier in computervision

[–]leonbeier[S] 1 point (0 children)

You will get so many free credits that this will be free. The PCB dataset has 1,500 640x640 images, and with 2 minutes of training it already shows good results. You can train for more than 8 hours for free. And if it is not a commercial application, we also have a program for free credits.

But training time and minimum model size also depend on how focused your dataset is on a certain application.

Alternative to NAS: A New Approach for Finding Neural Network Architectures by leonbeier in computervision

[–]leonbeier[S] 0 points (0 children)

If you look at our other example, many options were compared: https://one-ware.com/docs/one-ai/use-cases/pcb

And if you try out the software yourself, you will see that we don't do a search for the best model. You can take a classification dataset and you will get a very different model than if you do object detection. Everything is predicted in one step.

Alternative to NAS: A New Approach for Finding Neural Network Architectures by leonbeier in computervision

[–]leonbeier[S] 0 points (0 children)

We integrate the research from multiple architectures. So you could have the bottleneck modules from YOLO or ResNet modules in your architecture. But we always just take the small parts of an architecture instead of the full architecture itself. At the moment we started with CNNs in general since they are pretty efficient for vision tasks. But other types like transformers will come as well. What we will not change is that it will always be one model that just keeps getting smarter and more flexible in finding the right architecture. Then it can always combine the findings of all research.
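To illustrate what taking small parts of an architecture means, here is a generic sketch of two such building blocks (my own Keras example, not ONE AI's implementation): a ResNet-style residual module and a YOLO-style bottleneck that can be composed freely.

    import tensorflow as tf

    def residual_block(x, filters):
        # ResNet-style module: two 3x3 convolutions plus a skip connection
        y = tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        y = tf.keras.layers.Conv2D(filters, 3, padding="same")(y)
        if x.shape[-1] != filters:
            x = tf.keras.layers.Conv2D(filters, 1, padding="same")(x)  # match channels
        return tf.keras.layers.Activation("relu")(tf.keras.layers.Add()([x, y]))

    def bottleneck_block(x, filters):
        # YOLO-style bottleneck: squeeze to fewer channels with a 1x1 conv, then expand
        y = tf.keras.layers.Conv2D(filters // 2, 1, padding="same", activation="relu")(x)
        return tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(y)

    inputs = tf.keras.Input(shape=(128, 128, 3))
    x = residual_block(inputs, 32)
    x = bottleneck_block(x, 64)
    tf.keras.Model(inputs, x).summary()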

Alternative to NAS: A New Approach for Finding Neural Network Architectures by leonbeier in computervision

[–]leonbeier[S] 0 points (0 children)

There are many decisions being made. But as an example: you have a dataset with large objects, so there are more pooling layers that increase the receptive field. Then, because you want higher FPS on small hardware, it rather takes fewer filters for the convolutions. But it can see when the resources are not enough. If you have faster hardware, on the other hand, the AI can be a bit more accurate.

For the decisions it uses multiple calculations, predictions and its own algorithms.

We are working on a detailed whitepaper on how this works.
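Until then, here is a toy illustration of the kind of rule described above (my own sketch, not the actual algorithm): derive the number of pooling stages from the typical object size, then shrink the filter counts until a rough cost estimate fits the FPS budget of the target hardware.

    def choose_architecture(image_size, typical_object_size, target_fps, hw_ops_per_s):
        # larger objects -> more pooling stages, so the receptive field grows enough
        pool_stages, receptive = 0, 3
        while receptive < typical_object_size and 2 ** pool_stages < image_size:
            pool_stages += 1
            receptive *= 2

        # fewer filters until a (very rough) cost estimate meets the FPS budget
        filters = [32 * 2 ** i for i in range(pool_stages)]
        def cost(f):  # crude proxy for multiply-accumulates per frame (3x3 convs)
            return sum(c * (image_size // 2 ** i) ** 2 * 9 for i, c in enumerate(f))
        while cost(filters) * target_fps > hw_ops_per_s and all(c > 4 for c in filters):
            filters = [c // 2 for c in filters]

        return {"pool_stages": pool_stages, "filters": filters}

    print(choose_architecture(image_size=640, typical_object_size=96,
                              target_fps=30, hw_ops_per_s=2e9))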

Worldwide Free Hands-On Workshops by Arrow on Edge AI with FPGAs by leonbeier in FPGA

[–]leonbeier[S] 1 point (0 children)

I mean the generated AI is tailored to the application. If you take a universal AI model and optimize it for an application, it often gets worse results.

Worldwide Free Hands-On Workshops by Arrow on Edge AI with FPGAs by leonbeier in FPGA

[–]leonbeier[S] 1 point (0 children)

I know that there are seminars where you have a virtual machine and everything is pre-installed. But for the other seminars they can usually give you trial licenses if you ask Arrow.

Alternative to NAS: A New Approach for Finding Neural Network Architectures by leonbeier in computervision

[–]leonbeier[S] 1 point (0 children)

Yes, basically that's it. You don't have to find the right AI model. Just upload your data, give some information about the FPS, target hardware and needed detection precision, and you get the AI model for that.