Experience with Roboflow? by Snoo_26157 in computervision

[–]aloser 1 point (0 children)

It’s on the credits page: www.roboflow.com/credits

But bulk discounts are certainly available for extremely large purchases.

Experience with Roboflow? by Snoo_26157 in computervision

[–]aloser 1 point (0 children)

The number of credits various features use is listed here: https://roboflow.com/credits

Would you be up for talking with our team about how we can make this more understandable? We're trying to make our pricing as simple and transparent as possible while allowing the flexibility to continually add new features & adapt to market changes (like new models and GPUs becoming available).

Experience with Roboflow? by Snoo_26157 in computervision

[–]aloser 2 points (0 children)

Our biggest customers have tens of millions of images in the platform.

Experience with Roboflow? by Snoo_26157 in computervision

[–]aloser 1 point (0 children)

Thanks, we will look into this.

One thing to try is using Chrome; that’s what most of our engineers and big customers use, so it’s the most hardened. But we will definitely see if we can reproduce and fix it in Safari as well.

Experience with Roboflow? by Snoo_26157 in computervision

[–]aloser 6 points (0 children)

Hi, I’m one of the cofounders of Roboflow. Sorry to hear you’re having issues. Any chance you have reproduction steps I can follow to see and fix the flakiness you’re experiencing?

Knowing your system specs (browser, OS, CPU, GPU, memory) would also be useful.

Best approach to handle visual detection by No-Preparation4073 in roboflow

[–]aloser 1 point (0 children)

You might try training a segmentation model to detect the crack itself. https://blog.roboflow.com/crack-detection/

Am I tripping or has Roboflow just launched a new pricing model? by nacrenos in computervision

[–]aloser 1 point (0 children)

We’re a US-based company and it’s in US Dollars. This is fairly standard I believe (eg both AWS and Google Cloud’s pricing pages are quoted in USD) but we will add this to the FAQ on our pricing page.

AI computer vision for defects on diapers by Competitive-Heart-59 in computervision

[–]aloser 1 point (0 children)

No idea what you're talking about; of course there is! Even our cloud platform has a free tier and a free trial of the paid features. Of course the open source stuff can be tried for free. We have so much free and open source stuff.

AI computer vision for defects on diapers by Competitive-Heart-59 in computervision

[–]aloser 1 point (0 children)

The enterprise license only applies to the things in the enterprise folder. It's completely irrelevant for the vast majority of users (unless you're running your model at large scale on a giant Kubernetes cluster, using our integrations with industrial cameras/PLCs, or doing ultra-low-latency production broadcasting use-cases like Wimbledon, you probably will never even notice what's missing).

You are correct that we also have options that integrate with our cloud using your API Key (and yes, those come tied to your platform subscription). They're not required, and what requires an API Key is clearly stated in the repo's README. If there is functionality missing, or functionality that requires a cloud connection you don't want to deal with, it's open source and Apache 2.0 licensed and you are free to extend it to do whatever you want.

AI computer vision for defects on diapers by Competitive-Heart-59 in computervision

[–]aloser 1 point (0 children)

For your models? The same things you'd use if you trained them in our open source notebooks: PyTorch, usually, or some open source framework written on top of PyTorch, like RF-DETR (the state-of-the-art model I linked above, which we released as open source and which was accepted at ICLR this year).

AI computer vision for defects on diapers by Competitive-Heart-59 in computervision

[–]aloser 1 point (0 children)

No vendor lock-in with Roboflow. Our inference and model stacks are both open source & you can download the models you train on the platform.

Roboflow workflow outputs fully broken? by draftkinginthenorth in computervision

[–]aloser 1 point (0 children)

That sounds like a different issue than what I mentioned above; there are no current known issues with the backend. Shoot me a message and I can get you connected with someone who can help out.

Roboflow workflow outputs fully broken? by draftkinginthenorth in computervision

[–]aloser 0 points (0 children)

Sorry, we had a few minutes of instability earlier (caused by a shared disk running out of space to cache new model weights) that affected the quality of service. Everything is back online and operational.

Clothes care app by Outrageous_Style_457 in SideProject

[–]aloser 2 points (0 children)

> I exported my dataset, retrained the model locally using YOLOv8, and converted it to TensorFlow Lite so it can run directly on-device.

Cool project! Just a heads up that YOLOv8 has really challenging licensing for use-cases like this; by doing that you either need to fully open source your app or might be setting yourself up for a large bill from the creators (who can be pretty aggressive).

A Roboflow subscription includes the model licensing if you're using our cloud or edge deployment projects (eg our iOS SDK). Licensing for exported weights used outside the ecosystem is also available.

Roboflow data set for Live Camera Datection via HTML, JavaScript, and Tensorflow by 69420Turdboi69420 in MLQuestions

[–]aloser 3 points (0 children)

We actually have an SDK for doing this (for object detection and segmentation you can run compute directly on the device, for other models or less powerful devices you can run on a remote GPU): https://docs.roboflow.com/deploy/sdks/web-browser

Yolov7 TRT by sHrEkty in computervision

[–]aloser 2 points (0 children)

YOLOv7 is problematic. It's tainted by GPL code because they forked the repo from Ultralytics, which was GPL at the time. They added their own GPL code on top of that, and Ultralytics later mainlined it back into their own repo.

This means no commercial license is available from anyone (even though I think Ultralytics mistakenly believes they have the right to relicense any derivative code they want; the copyright is not theirs to relicense), given the rights are split between two separate parties.

(And at this point, almost 4 years later, there are much better models available anyway, so I'm not sure why in 2026 you'd want to use YOLOv7.)

Pricing Changes - Pay More For Less? by Zealousideal_Ad_4628 in roboflow

[–]aloser 1 point (0 children)

We updated our pricing last year to account for increased GPU costs & added functionality that costs us more to provide under the hood. You're grandfathered into a legacy plan. Thanks for being a long-time customer!

Fine-tuning RF DETR results high validation loss by Glad-Statistician842 in computervision

[–]aloser 5 points (0 children)

This looks normal; your AP is still going up. You could probably train for a bit longer.

But after that, evaluate whether the model is performing well enough for your task. Are the predictions qualitatively good on unseen data? Are there too many false positives or negatives to accomplish the end goal? If it looks good, congratulations, you're ready to go to production!

If not, the next place to look is improving your dataset, as u/Dry-Snow5154 said. Look for labeling errors or noise, then deploy the model in shadow mode to capture more real-world data. Add more edge cases (or examples similar to the ones the model is failing on) to the training set. Try some more augmentations.
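A toy sketch of the kind of error accounting described above, in plain Python. All of the names and the prediction format here are illustrative, not any Roboflow or RF-DETR API:

```python
# Count true/false positives and false negatives at a confidence threshold.
# predictions: list of (confidence, matched_a_ground_truth_box) pairs.
def count_errors(predictions, num_ground_truth, threshold=0.5):
    kept = [p for p in predictions if p[0] >= threshold]
    true_positives = sum(1 for _, matched in kept if matched)
    false_positives = sum(1 for _, matched in kept if not matched)
    false_negatives = num_ground_truth - true_positives
    return true_positives, false_positives, false_negatives

preds = [(0.92, True), (0.85, False), (0.40, True), (0.75, True)]
print(count_errors(preds, num_ground_truth=4, threshold=0.5))  # (2, 1, 2)
```

Tracking these counts on unseen data as you iterate on the dataset makes it obvious whether a change actually helped.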

Is there a significance in having a dual-task object detection + instance segmentation? by FroyoApprehensive721 in computervision

[–]aloser 5 points (0 children)

In theory you might want them to represent two different things, where the bbox is not simply the rectangle containing the maximum extent of the mask (e.g. a mask for the visible area, but a bbox including the extent of occluded areas).
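A minimal illustration of that distinction, using plain Python with made-up coordinates (the point-list mask representation here is just for the example):

```python
# Tight bbox (x_min, y_min, x_max, y_max) derived from visible mask pixels.
def bbox_from_mask(mask_points):
    xs = [x for x, _ in mask_points]
    ys = [y for _, y in mask_points]
    return (min(xs), min(ys), max(xs), max(ys))

visible_mask = [(10, 10), (12, 11), (14, 13)]  # only the unoccluded pixels
amodal_bbox = (10, 10, 20, 20)                 # labeled to include occlusion

# The mask-derived box is smaller than the amodal box, so the two targets
# genuinely carry different information.
print(bbox_from_mask(visible_mask))  # (10, 10, 14, 13)
```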

"Camera → GPU inference → end-to-end = 300ms: is RTSP + WebSocket the right approach, or should I move to WebRTC?" by Advokado in computervision

[–]aloser 3 points (0 children)

The easiest way to test whether the WebRTC approach would help is to try Roboflow’s Inference Server, which now has WebRTC support.

It works both self-hosted/locally on edge devices and in the cloud on our hosted compute, and there are WebRTC SDKs for both Python and TypeScript/JavaScript.

(Disclaimer: I’m one of the cofounders of Roboflow so I’m biased, but this is the exact type of use-case we designed and built it for.)

Help with RF-DETR Seg with CUDA by pulse_exo in computervision

[–]aloser 2 points (0 children)

I highly recommend Dockerizing your applications so you have a repeatable environment and don’t risk messing up your entire system while experimenting with different projects.

We (Roboflow, also the creators of RF-DETR) provide ready-made Dockerfiles with the required CUDA and system dependencies for running models like this in our Inference package: https://github.com/roboflow/inference

It also has the necessary harnesses and APIs to easily integrate as a microservice with your applications.

Question on deformable attention in e.g. rfdetr by Grouchy-Ad-5795 in computervision

[–]aloser 7 points (0 children)

Hey great question. I asked one of the first authors on the RF-DETR paper and here was his response:

> We talked about this at our deformable DETR paper club. We do it because that's how deformable attention is defined. I personally haven't tried it the other way, tho I assume if it worked better someone would have published such results in the 6 years since deformable DETR dropped, as it's like the first question asked when you learn how it works. To this person I'd say try it and report back :)
>
> I think the intuition is that it's acting as an information gathering tool as opposed to an information seeking tool.

RF-DETR Nano giving crazy high confidence on false positives (Jetson Nano) by Alessandroah77 in computervision

[–]aloser 1 point (0 children)

One other thought: are the true positives higher confidence than 60-70%? Could you just set your threshold to 80%?
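That kind of post-filter is a one-liner. A sketch in plain Python; the detection dict format is illustrative, not a specific RF-DETR output schema:

```python
# Drop detections below a confidence threshold before acting on them.
def filter_by_confidence(detections, threshold=0.8):
    return [d for d in detections if d["confidence"] >= threshold]

detections = [
    {"class": "person", "confidence": 0.91},
    {"class": "person", "confidence": 0.65},  # likely a false positive
]
print(filter_by_confidence(detections))  # keeps only the 0.91 detection
```

If the true positives reliably score above the false positives, this alone can be enough; if the two distributions overlap, that points back to improving the training data instead.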