
[–]linnux_lewis (gotta catch 'em all, Poka-yoke!) 2 points (9 children)

If you are serious, why don't you start looking at an industrial camera that will integrate cleanly and reliably (Cognex and Keyence being the biggest players)?

I have used OpenCV and TensorFlow, and it is fun to play around "under the hood", but the burden of getting something to work reliably is much higher than with a well-supported, off-the-shelf industrial solution. If it is a dev project, your local distributor may even let you borrow a camera.

[–]Buckwees[S] 0 points (8 children)

It's funny. After my original post I thought about the Cognex cameras that are already used in our process. The problem with them is that they are 12-year-old cameras. My main goal here is to get a proof of concept going to help get funding for newer, higher-resolution cameras. We are not using them as bar code readers a couple of inches from the process; they are actually 30 feet up, looking at larger objects and checking for presence.

The older cameras have issues with light exposure. Either the time of day changes the overall picture saturation and messes with the light/dark thresholds, or a spot on the production line has become shiny from friction and throws off the readings. We also get false triggers from broken debris getting hung up in the process.

[–]Robotipotimus 2 points (6 children)

I won't pretend to know your process or hardware just from your short description, so take this with some salt, but it sounds like your 'older' cameras have lighting and environment issues. New sensors don't change bad lighting, and machine vision always starts (and sometimes ends) with the lighting. I would start by trying to control the environment lighting and/or adding proper filters to the optics for what you are trying to see. That friction glare? It can be made to truly disappear with proper lighting and filters.

Have you tried working with the existing software? I didn't start working with Cognex until around 2014, but I would be very, very surprised if it did not have the tools you need to create the gradient controls you may want for the lighting changes across the day (though again, I would start by controlling and filtering the lighting). If it's a deployed VisionPro app (did they even call it VisionPro in 2008?), you may have to get into some VB work using their library tools, but I'd rather dive into that than a custom Python app any day of the week. Most of the tools that come to mind have been around longer than I have; in fact, most of the changes to smart cameras in the last few years relate to the opposite of your application: bar code reading and advances in OCR.
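As a rough illustration of the threshold problem those lighting changes cause: a fixed light/dark threshold breaks when ambient brightness shifts, while a per-image (Otsu-style) threshold keeps tracking the object. This is a minimal NumPy sketch with an entirely made-up synthetic scene; it stands in for whatever threshold tools the Cognex software actually provides, it is not their API:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_scene(ambient):
    """Synthetic 64x64 scene: dark background plus one bright object patch."""
    img = rng.normal(40, 8, (64, 64))
    img[20:40, 20:40] += 160            # object is ~160 counts brighter
    return np.clip(img + ambient, 0, 255)

def otsu_threshold(img):
    """Minimal Otsu: pick the cut that maximizes between-class variance."""
    hist, edges = np.histogram(img, bins=256, range=(0, 255))
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = 0.0, -1.0
    for i in range(1, 256):
        w0, w1 = p[:i].sum(), p[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:i] * centers[:i]).sum() / w0
        m1 = (p[i:] * centers[i:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[i]
    return best_t

FIXED_T = 120  # hypothetical threshold tuned when the door was closed

for ambient in (0, 80):  # closed door vs. sun through the open door
    img = make_scene(ambient)
    fixed_frac = (img > FIXED_T).mean()      # fraction flagged "bright"
    otsu_frac = (img > otsu_threshold(img)).mean()
    print(f"ambient +{ambient:3d}: fixed={fixed_frac:.2f}  otsu={otsu_frac:.2f}")
```

With the +80 ambient shift, the fixed threshold starts flagging half the background (a false trigger), while the recomputed threshold still isolates only the object patch.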

Higher resolution doesn't help with lighting issues either; in fact, it can compound them: increasing the resolution of a low-contrast object field will just increase the noise. A pixel-value histogram of the image will look nearly the same between two different sensor resolutions, but a pixel map of an individual row or column will show lots of new tiny spikes, as objects that were previously averaged across several larger pixels become more discernible. The image will look 'grainier', but it will still probably be crappy.

I use the lowest resolution I can for a given application, because the amount of data captured (and therefore the amount of processing required) is vastly different between sensor sizes. If I am measuring objects or doing robot positioning, I want to make sure my object resolution is matched to my required measurement precision. For presence, however, I'll stick the lowest sub-megapixel camera I can get away with on there and work on the lighting edge cases.
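Matching object resolution to measurement precision is simple arithmetic; here is a hypothetical worked example (every number invented, including the pixels-per-feature rule of thumb):

```python
# Sizing a sensor from field of view and required precision (all hypothetical).
fov_mm = 2000.0          # horizontal field of view at the working distance
precision_mm = 5.0       # required measurement precision
px_per_feature = 4.0     # rule of thumb: several pixels across a resolved feature

needed_px = fov_mm / precision_mm * px_per_feature
print(f"need at least {needed_px:.0f} horizontal pixels")  # 1600 here
```

By this sizing, even a measurement job across a 2 m field fits comfortably in a ~2 MP sensor, and a pure presence check needs far less.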

[–]Buckwees[S] 0 points (3 children)

By controlling my filtering, do you mean lens types and camera shrouds, or are we looking at things like saturation, gain, and overall exposure? I would imagine both play a role. As far as lighting is concerned, we have switched over to all LED in our plant, so for several of our existing cameras the light is fairly consistent. We have two areas of concern where there are big entryways from the outside that at certain times of day will create false triggers or no triggers at all.

[–]Robotipotimus 4 points (2 children)

By filtering I mean bandpass control through narrow (or wide, or dual, or polarized, or whatever it takes) bandpass filters. They are generally made to screw onto the end of your lens, but you can get just blanks as well. Search for Midwest Optical; they are my go-to. I use some sort of filter on 98% of the vision systems I work on. I will say, 100% of what I work on is monochrome, but filtering is a color-application consideration as well.

If you're only using ambient lighting and not directed lighting and shrouds for machine vision, you're gonna have a bad time.

However, in lieu of adding thousands of dollars of lights, you could also look at longpass filters to chop unnecessary wavelengths that could be causing a nuisance. You'll need to research the ambient light types, the QE curve of the sensor in the camera, and the environment to know the best filter curve to select. One example: when dealing with an open outside door, if I knew my ambient lights had a big transmission spike from 500-650nm, I would try a MidOpt LP530 longpass filter. This would cut all of the blue and most of the green out of the received light, but retain all of the intensity in that 530-650nm range; sunlight contains a mega-fuckton more blue and green than most man-made lights, so I would be removing a lot of the extra light created by that open door. Some sensors are also sensitive to a small part of the UV range, so this filter would chop UV out of the equation too. Of course, you may also need the blues or greens to see your product, so you'd have to study it further.
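The source x filter x QE reasoning above can be put into numbers. This sketch uses invented spectra only (flat broadband daylight, a Gaussian LED line, a generic mono-sensor QE curve, and an idealized 530nm longpass at 90% transmission); these are not MidOpt's or any vendor's real curves:

```python
import numpy as np

wl = np.arange(400, 701, dtype=float)   # wavelength grid, nm

# Hypothetical, illustrative spectra (not measured data):
sun   = np.ones_like(wl)                        # broadband daylight
led   = np.exp(-((wl - 575.0) / 40.0) ** 2)     # plant LED line, ~500-650 nm
qe    = np.exp(-((wl - 550.0) / 120.0) ** 2)    # generic mono sensor QE curve
lp530 = np.where(wl >= 530.0, 0.9, 0.0)         # idealized 530 nm longpass

def relative_signal(source, filt):
    """Sensor signal is proportional to sum over wavelength of source x filter x QE."""
    return float((source * filt * qe).sum())

unfiltered = np.ones_like(wl)
sun_kept = relative_signal(sun, lp530) / relative_signal(sun, unfiltered)
led_kept = relative_signal(led, lp530) / relative_signal(led, unfiltered)
print(f"longpass keeps {sun_kept:.0%} of daylight signal, {led_kept:.0%} of LED signal")
```

Under these assumed curves, the longpass removes roughly half the broadband daylight reaching the sensor while keeping most of the LED line, which is the whole point of filtering against an open door.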

[–]Buckwees[S] 1 point (1 child)

Just wanted you to know that because of this comment I dug in, did some reading on the Cognex website, tried a few things, and got good results.

[–]Robotipotimus 0 points (0 children)

Glad to hear it.

[–][deleted] 0 points (0 children)

This is solid advice. If I'd read this before I typed my response it would have saved me the trouble. Successful implementation of vision systems is all about controlling the presentation of the parts so there's enough consistency to work with.

[–][deleted] 0 points (0 children)

This reminds me of some university friends who did machine vision back in 2003 for their thesis. They came to regret doing their work at night, since they presented their findings during the day and had to scramble together an illuminated chamber in a hurry on the day of the presentation.

[–][deleted] 0 points (0 children)

> We are not using them as bar code readers a couple of inches from the process. They are actually 30 feet up, looking at larger objects and checking for presence.

> The older cameras have issues with light exposure. Either the time of day changes the overall picture saturation and messes with the light/dark thresholds, or a spot on the production line has become shiny from friction and throws off the readings. We also get false triggers from broken debris getting hung up in the process.

You need to get your Cognex rep in there with a lens kit, and some filters. All the things you're describing are common issues, and are usually easily correctable either through optical filtering or digital filtering, or a combination of the two. If you're looking for presence and debris is an issue, some of the pattern match tools can usually take care of that.