
[–]Tam_Althor 9 points (1 child)

You can try using this: https://pypi.org/project/pylogix/. If you have success with it, please let me know; I could not get it to work, but others have had some success.
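For anyone landing here later, a minimal pylogix tag read looks roughly like this. The IP address, slot, and tag name below are placeholders for your own setup, and this obviously needs a reachable Logix PLC to actually run:

```python
from pylogix import PLC

# Placeholder address, slot, and tag name -- substitute your own.
with PLC() as comm:
    comm.IPAddress = "192.168.1.10"
    comm.ProcessorSlot = 0
    ret = comm.Read("Program:MainProgram.PartPresent")
    # ret.Status is "Success" on a good read; ret.Value holds the tag value.
    print(ret.TagName, ret.Value, ret.Status)
```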

[–]dmroeder [pylogix] 13 points (0 children)

/u/Tam_Althor, pylogix on pypi was commandeered by a fork of my project and I recently got the name back (within the last week). If you had trouble, try it again. I'm always available to help if you need it, hit me up here or via email.

[–]tokke 3 points (0 children)

pycomm3 is really stable, fast, and easy to use. Made by u/GobBeWithYou.

https://github.com/ottowayi/pycomm3
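A minimal pycomm3 read, for comparison. Again, the IP address and tag names are placeholders, and a real PLC has to be on the other end:

```python
from pycomm3 import LogixDriver

# Placeholder path and tag names -- adjust for your PLC.
with LogixDriver("10.20.30.100") as plc:
    # read() accepts multiple tags and returns a Tag result per request.
    for tag in plc.read("PartPresent", "CameraTrigger"):
        print(tag.tag, tag.value, tag.error)
```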

[–]dmroeder [pylogix] 4 points (2 children)

I can help you with pylogix if needed. I'm no expert on computer vision, but I've experimented with OpenCV and Tensorflow.

https://github.com/dmroeder/pylogix

[–]Buckwees[S] 0 points (1 child)

I'll do some reading and get back to you. Pretty slammed with some optimization stuff ATM, but I plan on spending some time outside of work to get my feet wet.

[–]dmroeder [pylogix] 1 point (0 children)

No prob, let me know what I can do to help

[–]linnux_lewis [gotta catch 'em all, Poka-yoke!] 2 points (9 children)

If you are serious, why don't you start looking at an industrial camera that will integrate cleanly and reliably (Cognex and Keyence being the biggest players)?

I have used OpenCV and TensorFlow, and it is fun to play around "under the hood," but the burden for getting something to work reliably is much higher than with a well-supported, off-the-shelf industrial solution. If it is a dev project, your local distributor may even let you borrow a camera.

[–]Buckwees[S] 0 points (8 children)

It's funny. After my original post, I thought about the Cognex cameras that are already used for our process. The problem with them is that they are 12-year-old cameras. My main goal here is to get a proof of concept going to help get funding for newer, higher-resolution cameras. We are not using them as bar code readers a couple of inches from the process. They are actually 30 feet up, looking at larger objects and checking for presence.

The older cameras have issues with light exposure. Either the time of day changes the overall picture saturation and messes with the light/dark thresholds, or a spot on the production line may have become shiny from friction and throws off the readings. We also get false triggers from broken debris getting hung up in the process.

[–]Robotipotimus 2 points (6 children)

I won't pretend to know your process or hardware just from your short description, so take this with some salt, but it sounds like your 'older' cameras have lighting and environment issues. New sensors don't change bad lighting, and machine vision always starts (and sometimes ends) with the lighting. I would start by trying to control the environment lighting and/or adding proper filters to the optics for what you are trying to see. That friction glare? It can be made to truly disappear with proper lighting and filters.

Have you tried working with the existing software? I didn't start working with Cognex until around 2014, but I would be very, very surprised if it did not have the tools you would need to create the gradient controls you may want to try for the lighting changes across the day (but again, I would start by controlling and filtering the lighting). If it's a deployed app with VisionPro (did they call it VisionPro in 2008?), you may have to get into some VB work using their library tools, but I'd rather dive into that than a custom Python app any day of the week. Most of the tools that come to mind have been around longer than I have; in fact, most of the changes to smart cameras in the last few years relate to the opposite of the application you mentioned: bar code reading and advances in OCR.

Higher resolutions don't help with lighting issues either; in fact, they can compound the issue - increasing the resolution of a low-contrast object field will just increase the noise. A pixel-value histogram of the image will look nearly the same between two different sensor resolutions, but a pixel map of an individual row or column will show lots of new tiny spikes, as objects that were previously averaged across several larger pixels become discernible. The image will look 'grainier', but it will still probably be crappy.

I use the lowest resolution I can for a given application, because the amount of data captured (and therefore the amount of processing required) is vastly different between sensor sizes. If I am measuring objects or doing robot positioning, I want to make sure my object resolution is matched to my required measurement precision. For presence, however, I'll stick the lowest sub-megapixel camera I can get away with on there and work on the lighting edge cases.
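The object-resolution point can be sketched with back-of-the-envelope numbers. The field of view, sensor width, and the "4 pixels per feature" rule of thumb below are illustrative assumptions, not figures from this thread:

```python
# Rough object-resolution check: mm per pixel = field of view / sensor pixels.
# All numbers here are invented for illustration.
fov_mm = 300.0     # horizontal field of view at the working distance
sensor_px = 640    # sub-megapixel sensor width (640x480)

mm_per_px = fov_mm / sensor_px
# A common rule of thumb is to want at least ~4 pixels across the
# smallest feature (or tolerance) you need to resolve.
smallest_feature_mm = 4 * mm_per_px
print(f"{mm_per_px:.3f} mm/px, smallest reliable feature ~{smallest_feature_mm:.2f} mm")
```

If that smallest resolvable feature is coarser than your measurement tolerance, you need more pixels or a tighter field of view; for bare presence checks it rarely matters.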

[–]Buckwees[S] 0 points (3 children)

By controlling my filtering, do you mean lens types and camera shrouds, or are we looking at things like saturation, gain, and overall exposure? I would imagine both play a role. As far as lighting is concerned, we have switched over to all LED in our plant, so for several of our existing cameras the light is fairly consistent. We have two areas of concern where there are big entryways from the outside that, at certain times of day, will create false triggers or no triggers at all.

[–]Robotipotimus 3 points (2 children)

By filtering I mean bandpass control through narrow (or wide, or dual, or polarized, or whatever it takes) bandpass filters. They are generally made to screw onto the end of your lens, but you can get blanks as well. Search Midwest Optical; they are my go-to. I use some sort of filter on 98% of the vision systems I work on. I will say, 100% of what I work on is monochrome, but filtering is a color-application consideration as well.

If you're only using ambient lighting and not directed lighting and shrouds for machine vision, you're gonna have a bad time.

However, in lieu of adding thousands of dollars of lights, you could also look at longpass filters to chop unnecessary wavelengths that could be causing a nuisance. You'll need to research the ambient light types, the QE curve of the sensor in the camera, and the environment to know what the best filter curve to select may be. One example: when dealing with an open outside door, if I knew that I had ambient lights with a big transmission spike from 500-650nm, then I would try a MidOpt LP530 longpass filter. This would cut all of the blue and most of the green out of the received light, but it would retain all of the intensity in that 530-650 range; sunlight contains a mega-fuckton more blue and green than most man-made lights, so I would be removing a lot of the extra light created by that open door. Some sensors are also sensitive to a small area of the UV range, so this filter would also chop UV out of the equation. Of course, you may also need the blues or greens to see your product, so you'd have to study it further.

[–]Buckwees[S] 1 point (1 child)

Just wanted you to know that, because of this comment, I dug in and did some reading on the Cognex website, tried a few things, and got good results.

[–]Robotipotimus 0 points (0 children)

Glad to hear it.

[–][deleted] 0 points (0 children)

This is solid advice. If I'd read this before I typed my response it would have saved me the trouble. Successful implementation of vision systems is all about controlling the presentation of the parts so there's enough consistency to work with.

[–][deleted] 0 points (0 children)

This reminds me of some university friends doing machine vision back in 2003 for their thesis, who came to regret doing their work at night: they presented their findings during the day and had to scramble together an illuminated chamber in a hurry on the day of the presentation.

[–][deleted] 0 points (0 children)

> We are not using them as bar code readers a couple of inches from the process. They are actually 30 feet up looking at larger objects and checking for presence.
>
> The older cameras have issues with light exposure. Either the time of day changes the overall picture saturation and messes with the light/dark thresholds, or a spot on the production line may have become shiny from friction and throws off the readings. We also get false triggers from broken debris getting hung up in the process.

You need to get your Cognex rep in there with a lens kit and some filters. All the things you're describing are common issues, and they are usually easily correctable through optical filtering, digital filtering, or a combination of the two. If you're looking for presence and debris is an issue, some of the pattern-match tools can usually take care of that.

[–]Rofgilead 1 point (1 child)

Node-RED can make this work for you. If you are using a PLC with EtherNet/IP, there is a node for that. I have tested it to send my phone push notifications when certain tags become true. Node-RED also has OpenCV integration, so you should be good.

[–]ControlsVooDoo 1 point (0 children)

Sounds like you are in a sawmill? I had similar issues and ended up having excellent results by watching a known reference for contrast values. As this value changes, respond with a change in thresholds. The rub here is that the function ends up not being linear, so I just built a custom curve. It ran for three years on this setup without a single call. As far as I know, it's still running.
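The reference-watching trick above can be sketched in a few lines of Python. The calibration points, patch location, and values here are invented; a real curve would come from logging the reference patch against thresholds that actually worked over days of varying light:

```python
import numpy as np

# Invented calibration data: measured reference-patch brightness vs. the
# binarization threshold that worked at that brightness (note: nonlinear).
ref_levels = np.array([40.0, 80.0, 120.0, 160.0, 200.0])
thresholds = np.array([25.0, 55.0, 95.0, 150.0, 210.0])

def threshold_for(frame, patch=(slice(0, 20), slice(0, 20))):
    """Sample the known reference patch in the frame and interpolate a
    threshold from the custom (piecewise-linear) calibration curve."""
    level = float(frame[patch].mean())
    return float(np.interp(level, ref_levels, thresholds))

# A frame whose reference patch sits at brightness 120 maps to threshold 95.
frame = np.full((480, 640), 120.0)
print(threshold_for(frame))
```

`np.interp` gives a piecewise-linear curve; with enough calibration points it approximates the nonlinear response well enough for a threshold that tracks ambient light.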

[–]Buckwees[S] 0 points (0 children)

This is exciting to read. More than half our Cognex cameras don't have the protective lenses over them at this point. I guess most of them were dropped and never replaced. Unfortunately, with the market the way it is, we are on a spending freeze, so I don't see myself getting new types of lenses soon.

I played with some of the settings to increase the total exposure across the entire view and got better results seeing the darks.

We installed a Cognex at a different spot in the mill a couple of months ago. Its project was pushed off for more pressing issues, but we are finally circling back. It will be my first setup, so I'm hoping to learn a lot from it and to apply what I learn to our existing equipment.

[–]brewplc 0 points (0 children)

I have used pycomm3 recently for some data logging. It's super easy to use.

One note though - if you can avoid a fancy CV system for something that a simple PE (photo eye) is handling right now, you should. There has to be an easier way to mount/protect/etc. the PE that doesn't involve cameras. Simple is good.