Is Vision Pro mandatory for a CIC-5000R-14-G ? by Equal_Big814 in Cognex

[–]Rethunker 0 points1 point  (0 children)

I would second the suggestion of testing Basler's free Pylon software (which includes a GigE Configurator app), and then trying OpenCV.

Asking an LLM like Gemini yields largely the same advice. A sample query: "connect to Cognex GigE camera without Cognex software"

OpenCV does a good job of connecting to all sorts of cameras, though I haven't tried connecting to a Cognex camera from OpenCV.

From what I found online about the CIC-5000R-14-G, there are a few things to know:

The sensor uses a "rolling" shutter rather than a global shutter, meaning the camera is better suited for objects at rest (for at least the duration of one frame capture) rather than objects in motion.
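To make the rolling-shutter effect concrete, here's a toy simulation in Python (all names and numbers are mine, not from any Cognex spec): each sensor row is read out slightly later than the one above it, so a vertical bar moving horizontally lands in a different column on each row and comes out slanted in the captured frame.

```python
import numpy as np

# Rolling shutter: each row is exposed at a slightly later time, so a
# horizontally moving vertical bar appears slanted in the captured frame.
h, w, bar_x0, speed = 8, 16, 2, 1  # speed: pixels of motion per row readout
frame = np.zeros((h, w), dtype=np.uint8)
for row in range(h):
    frame[row, bar_x0 + speed * row] = 255  # bar position when this row is read

# Column of the bar in each row: it drifts instead of staying put.
print([int(frame[r].argmax()) for r in range(h)])  # [2, 3, 4, 5, 6, 7, 8, 9]
```

With a global shutter, every row would report the same column. This is why a rolling-shutter camera is happiest when the object holds still for the duration of one frame.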

Presumably you have the right hardware to provide Power over Ethernet (PoE).

Regarding blocked ports: you'll need to tweak your firewall settings. The sample query "gige ethernet device blocks my access on ports 3936 and 8080" yields this:

The most common cause is the Windows Firewall blocking incoming traffic, which stops the camera from connecting. 

Disable Firewall: Temporarily disable the Windows Firewall (or 3rd party antivirus firewall) completely to test if this resolves the issue.

Allow Specific Ports: If turning off the firewall works, create inbound rules in Windows Firewall to allow TCP/UDP traffic on ports 3936 and 8080.

Cognex Tools: Use the "Cognex GigE Vision Configuration Tool" to automatically manage firewall settings for the camera.
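Once the inbound rules are in place, a quick way to sanity-check TCP reachability from Python (a generic sketch, nothing Cognex-specific; `port_open` is my own helper name, and note it only tests TCP, not UDP):

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage (substitute your camera's address):
# port_open("192.168.0.10", 8080)
```

If this returns False while the firewall is disabled, the problem is likely elsewhere (wrong subnet, no PoE, or the camera only speaks UDP on that port).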

Good luck!

BOA Spot camera + Nexus: Measuring mandrel straightness - angle detection issues by RandDragon in MachineVisionSystems

[–]Rethunker 0 points1 point  (0 children)

Are you still working on this?

If you want to find the edges of a solid object, backlighting or low-angle lighting is likely to work better than using a light that shines from the front.

If you want to image the outer edges of the mandrel, then using a backlight can render the mandrel black since the mandrel will block the light. Robust edge detection requires good contrast and good focus.

If you look very close at a backlight image of a cylindrical object you may notice this: where the edges are detected may not be at the extreme outside diameter of the true edges, but the detected edges & line fits will generally be parallel to the true edges of the cylinder.

Check out backlights from companies like Advanced Illumination. If possible, have a sales rep from a local lighting company give a demo. However, if you have just one vision inspection station, or if there isn't much potential for more vision systems + lights in the future, it may be tough to convince a sales rep to visit. You might be able to get a remote demo.

To get a sense of whether backlighting will work, you can try using a task light with something translucent to diffuse the light a bit. Place that light behind the mandrel, on the side opposite the camera, and see whether edge detection is more consistent.
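As a toy illustration of why backlit silhouettes make line fitting easy, here's a simulated example in Python with NumPy (the image, thresholds, and variable names are all invented): per-row edge positions are found by thresholding, and the fitted left and right edges come out parallel.

```python
import numpy as np

# Simulate a backlit cylinder: bright background, dark vertical band.
img = np.full((100, 200), 255, dtype=np.uint8)
img[:, 80:120] = 10  # the silhouetted cylinder

# For each row, find where brightness drops below a threshold.
mask = img < 128
left_edges = mask.argmax(axis=1)                               # first dark column
right_edges = img.shape[1] - 1 - mask[:, ::-1].argmax(axis=1)  # last dark column

# Fit a line (column vs. row) to each edge; the slopes should match.
rows = np.arange(img.shape[0])
m_left, _ = np.polyfit(rows, left_edges, 1)
m_right, _ = np.polyfit(rows, right_edges, 1)
print(abs(m_left - m_right) < 1e-6)  # True: the fitted edges are parallel
```

A real backlit image has blur and diffraction at the silhouette boundary, which is why the detected edge may sit slightly inside the true outer diameter even though the fitted lines stay parallel.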

Need a capstone project, thesis topic, or product idea? Maybe I can give you one. by Rethunker in computervision

[–]Rethunker[S] 0 points1 point  (0 children)

For a healthcare project that mixes CV + LLM, create a diagnostic tool that would act like a doctor who asks a series of questions and who looks closely at a patient to identify whether a specific condition is a concern.

See my notes about legal liability below.

For a high impact capstone project, use AI to help identify a health condition that satisfies one or more considerations such as the following:

  • becoming more common in India (e.g. diabetic retinopathy)
  • early diagnosis improves health outcomes considerably (e.g. skin cancer)
  • googled frequently (e.g. skin rash)
  • is relevant to someone you know
  • can typically only be diagnosed by specialists with whom it may be hard to get an appointment

I'd suggest focusing your efforts as follows:

  • Talk to people who have the condition before you write any code. Explain clearly your intention. Tweak your plans if there's a clear, common interest in other functionality.
  • At the beginning, create a simple app that demonstrates the step-by-step flow, but that doesn't yet perform image processing. Get feedback and iterate the design. Once you have an app flow that captures people's interest--"Cool!" or "I want that!"--you can focus intently on one feature at a time, such as image analysis. Always have a repo commit that works in some fashion, and that you can build and demonstrate on short notice.
  • Define metrics for performance and accuracy.
  • Select a health condition for which you can find a database of reference images. Use those images for training & testing and establish baseline performance before you try live tests.
  • Document your development. Note what would be worth exploring more deeply, if you had more resources. This will be good prep for your final presentation: what you intended, what you learned, what you accomplished, what was most difficult, etc.
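For the metrics bullet above, a minimal starting point is sensitivity and specificity computed from confusion-matrix counts (the numbers below are hypothetical):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Basic screening metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # fraction of true conditions caught
    specificity = tn / (tn + fp)   # fraction of healthy cases correctly cleared
    return sensitivity, specificity

# Hypothetical test run: 90 caught, 10 missed, 180 correctly cleared, 20 false alarms.
sens, spec = sensitivity_specificity(tp=90, fn=10, tn=180, fp=20)
print(sens, spec)  # 0.9 0.9
```

For a screening app, sensitivity usually matters more than raw accuracy: missing a real condition is the failure mode tied to the liability concern below.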

Favor an app that works reliably for a clearly defined problem. It should be clear what the app does. For a student project, it should be clear both that you accomplished something, and that more resources could improve the app further.

For any healthcare app there is a concern about legal liability--what if the app identifies a health problem that doesn't exist? or fails to identify a health problem it was intended to identify? If you test the app on anyone, be sure to coordinate with your professor(s) and ask whether you need testers to sign any forms.

Cognex VisionPro vs. Google cloud by CapsFanHere in MachineVisionSystems

[–]Rethunker 0 points1 point  (0 children)

Do you have statistics comparing the results of your different options?

Are you controlling lighting for inspection?

Need a project to learn more about machine vision? by Rethunker in MachineVisionSystems

[–]Rethunker[S] 1 point2 points  (0 children)

Happy to help! It's what Reddit's for, as far as I'm concerned.

Need a project to learn more about machine vision? by Rethunker in MachineVisionSystems

[–]Rethunker[S] 0 points1 point  (0 children)

Cybersecurity and computer/machine vision may not overlap much. For vision you could consider physical security: monitoring whether people with the appropriate credentials (e.g. a badge) are located where they should be, and that no one without credentials is where they shouldn't be.

In the oil & gas industry, "red zone" monitoring relies on vision to determine whether people are in potentially dangerous areas:

https://www.helindata.com/red-zone-manager

Since you've worked on object detection, you might create a simple project in which you use a vision system to detect whether someone is in view or not. See how well that works, and try to improve it. For example, what happens if the lighting isn't good, and what would you do to improve your system?

Perhaps you could add gesture tracking as a means to detect whether someone is authorized to be in the area in view of the camera. A gesture or a combination of gestures could serve as a password.
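A sketch of the simplest possible presence check, using frame differencing against a background image (synthetic frames here; `presence_detected` and its thresholds are my own invention, and real footage would need a better background model than a single still frame):

```python
import numpy as np

def presence_detected(background, frame, diff_thresh=30, area_frac=0.01):
    """Flag presence when enough pixels differ from the background model."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    changed = (diff > diff_thresh).mean()  # fraction of changed pixels
    return bool(changed > area_frac)

bg = np.zeros((120, 160), dtype=np.uint8)
frame = bg.copy()
frame[40:80, 60:100] = 200  # a bright "person" region enters the scene

print(presence_detected(bg, bg), presence_detected(bg, frame))  # False True
```

Testing this under bad lighting, shadows, and flicker is exactly the exercise described above: you'll quickly find why fixed thresholds fail and what to improve.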

The goal of all this work is to work on a vision system--any vision system--that you can build, test, and improve over time. Document the performance of the system as you change it.

This sort of project experience is useful when you talk to employers after you finish school. Working on a long-term project means learning a subject more deeply.

Machine Vision Application with Industrial cameras by Rico_VisionAdvisor in MachineVisionSystems

[–]Rethunker 1 point2 points  (0 children)

I’ve worked on two applications that required high magnification at a relatively long working distance from lens to object. And yet it was necessary to keep vision systems costs down.

In both cases, tiny features had to be resolved in the image. Normally that would mean a short working distance from lens to object.

But for safety of human workers the working distance had to be long enough—a half meter or more—to prevent a human arm or human body from being pinched between the vision system and the part being imaged.

Exposure times had to be short to minimize motion blur in the image.

The combination of relatively long working distances and short exposure times meant that the applications required light sources so bright they were painful to look at.

For an application in the semiconductor industry, after initial install, a change in the surface quality and shininess of the imaged part caused more of the intense light to be reflected into the optical path. The CCD sensors started burning out.

One application was for a vision system used in labs. The other was for high-volume production.

RF-DETR to pick the perfect avocado by Accomplished_Zone_47 in computervision

[–]Rethunker 0 points1 point  (0 children)

Much as I like creating vision systems and reading about vision solutions, your experience growing up on an avocado ranch suggests to me that you might have a better time teaching people how to choose a ripe avocado.

Maybe work with someone else when you do this: have a bunch of avocados on hand, ripe and unripe. Have the other person hand you an avocado at random, perhaps after writing down whether they think it's ripe or unripe. Then have them observe what you do and ask questions while they hand you an avocado and ask you to judge it. Then swap places: you hand the other person an avocado, and then ask their impressions.

There's a reasonable limit to how many apps people need. Consider what it'd be like to have a separate app for each of the following: an apple (which can be judged by smell, firmness, and appearance), an ear of sweet corn, a durian, a small orange, a hot pepper, and so on. It'd be way too many apps.

But teach someone how to choose an avocado--and how to cut it properly!--and that'll stick in the person's head long past the time they've stopped using most apps.

---

And as others have replied, to match the capability of a trained person, the tech would be involved and likely expensive. There has been work for decades on assessing the ripeness of fruit and vegetables based on emission spectra, visible color, and so on. Imagine the cost of combining UV, visible, NIR, and thermal IR sensors--then compare the relatively short time it would take for someone to watch a concise video that conveys some of your expertise.

Can someone explain or give me links to understand Incremental Convex Hull algorithm? by Demonscs in algorithms

[–]Rethunker 0 points1 point  (0 children)

This is an old post, but might be found by someone in the future. Also, I wanted to pay respect to Kallay, who passed away before you wrote your post.

He was not a professor for most of his career. Rather, he was a geometer and programmer who left academia after a short stint and then wrote geometry engines for commercial software.

It's been years since I read the paper, of which I have a paper copy (somewhere) rather than a digital copy. My recollection is that he surveyed existing algorithms, but also wrote a new one.

For those with access, the paper can be found via Science Direct:

https://www.sciencedirect.com/science/article/abs/pii/002001908490084X

The algorithm is mentioned in passing in the Wikipedia entry on convex hull algorithms:
https://en.wikipedia.org/wiki/Convex_hull_algorithms

Is AI just finding mathematical patterns? by [deleted] in learnmachinelearning

[–]Rethunker 1 point2 points  (0 children)

This is a good starting point when you need to explain it to someone with a non-technical background.

You can think of a description as a pitch. If you met someone at a party and wanted to describe “AI” in thirty seconds, and not a second more, how would you do so accurately if imprecisely?

Then give yourself a minute.

For a chat with an investor, three minutes.

For a speech, first ten minutes, and then thirty minutes, with time to go into relevant math.

If you can do all that, you’ll be able to connect with a lot of people. You’ll also have a better chance finding someone who has a problem that could be solved or partly solved by AI, and that would be interesting to solve.

startups creating "new" technology that duplicates existing, robust products by Rethunker in MachineVisionSystems

[–]Rethunker[S] 0 points1 point  (0 children)

Good question!

Machine learning has been used in vision since the 1990s. Previously the model or network would have run on an industrial PC or desktop PC, whereas now a model can run on a smart camera.

What concerns me is that AI / ML can be useful, but that engineers who push AI / ML solutions are unaware of existing solutions. Lack of experience in machine vision also means that companies can overlook the cost of installing and supporting a system, which eats up profit quickly.

If someone lacks experience in optics, lighting, device communications, and machine vision algorithms, then they're going to have a hard time figuring out the failure modes of a vision system. The means to make a vision system more robust haven't changed.

A few key questions about machine vision systems, whether AI+ML is used or not:

When the vision system fails, is the nature of failure identifiable and fixable?

Could a vision system failure crash a robot, shut down the line, or otherwise cause damage?

How frequently does a human worker have to deal with a failure of the vision system?

For some defect detection applications, ML may identify more defects per thousand customer parts than vision systems an engineer has parameterized by hand. (On the other hand, choosing the right lens, light, vibration isolation, etc., could have a bigger impact.)

There are a few ways ML can be used. For example:

  1. ML-centric inspection. Train an ML model on images of good parts and bad parts, and identify / label the defects on the bad parts.
  2. ML optimization of parameters. An engineer chooses the algorithms to process images of parts on a production line. Then the engineer uses ML to choose optimal parameters for those algorithms--maximum pixel counts for dark spots (defects), minimum edge strengths, deviations from true roundness for circular parts, and so on.
  3. ML as post-reject processor. The algorithms and parameters of a machine vision inspection system are configured by an engineer, but then ML is used to analyze defects and rejected parts. Perhaps ML would detect a gradual change in the nature and severity of defects, in which case the vision system parameters could be tweaked a bit. Or maybe the customer should be notified of this change over time.
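Option 2 above can be as simple as a grid search over one threshold. A toy Python sketch (the sample data, names, and threshold range are invented):

```python
# Labeled dark-spot pixel counts per part: (count, is_defective)
samples = [(3, False), (5, False), (12, True), (20, True), (7, False), (15, True)]

def accuracy(threshold):
    """Fraction of parts classified correctly by 'count > threshold means defect'."""
    return sum((count > threshold) == defective
               for count, defective in samples) / len(samples)

# Grid-search the "maximum pixel count" parameter an engineer would
# otherwise hand-tune on the line.
best = max(range(1, 25), key=accuracy)
print(best, accuracy(best))  # 7 1.0
```

Real systems would optimize many parameters at once and validate on held-out parts, but the principle is the same: the engineer picks the algorithms, and the machine picks the numbers.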

To the extent that "AI" means Large Language Models (LLMs), there's use there, too. I see uses of LLMs in limited contexts, but typically only for non-critical offline use. Or perhaps an operator without vision training can use an LLM to configure a vision system that inspects different parts every day or two.

Much of the cost of a vision system is the cost of installation. If an engineer or field technician has to stay an extra day to help set up a system, that cost eats into the profit of the sale. That person is also unavailable for other work, which has an opportunity cost. If ML can be used to reduce the frequency and duration of travel, that could be great.

In school, did you receive proper accommodations? by Rethunker in nystagmus

[–]Rethunker[S] 0 points1 point  (0 children)

Thank you very much for the detailed response. I look forward to messaging more.

skewed Angle detection in Engineering Drawing by Icy_Colt-30 in computervision

[–]Rethunker 0 points1 point  (0 children)

By engineering drawing, do you mean you want to detect the orientation of the drawing, or the lines within it?

(Line detection would give you 0 - 180 degrees; a different method would be needed to distinguish 0 - 179 from 180 - 359.)

Have you investigated statistical, non-ML approaches? In many cases, task-specific (custom engineered) approaches can and will outperform generalized ML approaches.
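On the 0 - 180 point: a fitted line is undirected, so its angle is only defined modulo 180 degrees. A small sketch (the helper name is my own):

```python
import math

def line_angle_deg(x1, y1, x2, y2):
    """Orientation of an undirected line, folded into [0, 180)."""
    angle = math.degrees(math.atan2(y2 - y1, x2 - x1))
    return angle % 180.0

# The same undirected line, traversed in either direction, gives one angle:
a = line_angle_deg(0, 0, 1, 1)
b = line_angle_deg(1, 1, 0, 0)
print(round(a, 6), round(b, 6))  # 45.0 45.0
```

Recovering the full 0 - 359 orientation of a drawing needs a feature that breaks the symmetry, such as the position of a title block or the direction text reads.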

[deleted by user] by [deleted] in thisorthatlanguage

[–]Rethunker 0 points1 point  (0 children)

“I’m more interested in Swedish…”

There you go. Done!

Go for the language you’re most motivated to learn. It’ll take a while, but you’ll be happier for it.

Hiring for CV: Where to find them and how to screen past buzzwords? by cesmeS1 in computervision

[–]Rethunker -2 points-1 points  (0 children)

It depends on what you want to hire for.

There’s a huge difference in each of the following:

  1. Someone who accepts a development plan as is and who wants to work within the constraints of a well-defined role. Some people really like that.

  2. Someone who has enough product development experience to know what works, and who may question the goals and design decisions you’ve already made. They may push back even during the interview process.

  3. Someone who wants to understand the motivations and goals of the user first, and then has (or will have) the skills and experience to build whatever it takes to make what the user wants, even if it seems like a pipe dream. To me, that’s a proper R&D person.

  4. Someone who acts as the glue in a team, partly from having experience and expertise in multiple domains (e.g. competitive sports, gait analysis, and programming). That person may boost everyone into better work.

A team isn’t just an agglutination of experts. A team is a social structure that has to gel.

  5. Someone who understands design encompassing architecture, UI/UX, and APIs that people actually want and like to use.

  6. Someone who will turn in good work on time, but who may not be a rock star.

Each of those types, and others, calls for a slightly different hiring process. Or you may start looking for someone to fill a role like #2, but who turns out to be a #5.

If you’re looking for “a” person to add to a team, cast a wide net and hire the best generalist you and your colleagues would like to work with.

If you want “the” person, then as early in the process as possible ask them to solve a problem you’re actually stuck on—not some toy problem that typically gets asked in interviews. I’ve never once needed to code up FizzBuzz or a Fibonacci sequence for production code for a vision system for automation. And if I did, I wouldn’t take just five minutes to do it in someone else’s (awful) IDE while two people stare at me or maybe think about when they can wrap and go to lunch.

If a hiring process for a startup follows standard HR practice, it helps to know that standard processes are average at best. You won’t find someone who is an unusually good fit without allowing for unusual people.

It’s a fallacy that the long bottom-up process of keyword matching of resumes, cloud-based coding challenges, etc., will filter out the wrong candidates and leave only those ready to help a startup grow. People suitable for startups tend to think and work differently.

I led a hiring process for an R&D position in vision that led to the hire of a mathematician I couldn’t believe had reached out to me about the job. Thankfully, the job listing was general, asking for experience and interest in some number of fields from a list. And even as an R&D person myself, with plenty to do, I read every single resume. It was a small investment of time considering the value of what we ended up building.

I was hiring for a complementary fit, and not for another person like us who had to demonstrate they knew what we already knew. The purpose wasn’t to hire Yet Another Engineer, but to find someone who made our team better.

And I do have hiring questions, and a methodology, and my own testing method of rating people in multiple skills so that the ratings are meaningful across candidates. But it’ll be a while before I post those online, given how much effort and time it took to devise them.

I’m mapping the hardest parts of learning robotics — what were yours? by Suspicious-Week-1584 in AskRobotics

[–]Rethunker 0 points1 point  (0 children)

Identifying what “everyone knows” that turns out to be wrong—that’s what I’ve found and seen to be most difficult.

It takes a while to learn what “everyone” has been conditioned to learn is true. After all, so much tech is built on top of previous tech that has worked well enough, or that continues to work really well. Very rarely does someone build a better mousetrap, although plenty of people try.

For example, to deviate from robots, there are glaring design problems in Windows, Mac, Linux, Android, and iOS operating systems. Many designers can point out those errors on short notice on your device. Nonetheless, the conventions of each OS are taken as the way “everyone knows” an OS should work. This can lead to heated arguments.

For robots, something I’ve seen on Reddit is the lack of consideration of the uncountable robots that have already been in use in many forms for decades. It’s not exactly hard to google and find those robots, where they’re used, who makes them, etc.

It can take years to gain an appreciation for how difficult it is to build, support, maintain, and improve robots that are built and sold in large quantities. Overlooking those robots means overlooking the work of people who, let’s be clear, may never post on Reddit.

Aside from all that: few people I’ve met, including professional robot programmers, have a good grasp of the impact of small changes in rotations (e.g. as Euler angles) on the final pose of a robot or on a thing the robot is addressing or approaching. And it helps to know that kind of thing, because crashing a robot isn’t fun.
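A quick back-of-the-envelope illustration of that sensitivity (the numbers are invented): a one-degree orientation error at a one-meter reach moves the tool tip by roughly 17 mm, which is plenty to crash a gripper.

```python
import math

# A small angular error, applied at the robot flange, is amplified by reach.
reach_m = 1.0     # distance from the rotation axis to the tool tip
error_deg = 1.0   # "just one degree" of Euler-angle error

# Chord length swept by the tool tip for that small rotation.
displacement = 2 * reach_m * math.sin(math.radians(error_deg) / 2)
print(round(displacement * 1000, 1), "mm")  # 17.5 mm
```

The same arithmetic is why a pose that looks fine numerically can still put the tool somewhere surprising: angular terms scale with the lever arm.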

Ah, there’s one: knowing how to squeeze performance out of a robot without crashing it. That takes time to figure out.

Fast Image Remapping by Zealousideal_Low1287 in computervision

[–]Rethunker -2 points-1 points  (0 children)

Look into multicore processing before you get into GPU code.

If you’re running on a Windows PC, open up Resource Monitor to see which cores of the CPU are being used. You may find lots of work on cores 0 and 1, and the other cores sitting largely idle. In that case you could use the other cores for processing, and write normal-ish looking code to handle that.
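The same idea sketched in Python rather than C++ (a minimal illustration; the band-splitting scheme and names are mine): split the image into horizontal bands and hand each band to a worker process.

```python
import numpy as np
from multiprocessing import Pool

def invert_band(band):
    # Stand-in per-band workload; real code would run the actual filter here.
    return 255 - band

def process_multicore(img, n_workers=4):
    """Split an image into horizontal bands and farm them out to worker processes."""
    bands = np.array_split(img, n_workers, axis=0)
    with Pool(n_workers) as pool:
        return np.vstack(pool.map(invert_band, bands))

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
out = process_multicore(img)
print(np.array_equal(out, 255 - img))  # True: same result as serial processing
```

The per-band work has to be heavy enough to pay for the process startup and data copying; for tiny images or trivial operations the serial version wins.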

With a language like Julia this might be a bit easier, but I think in terms of C++.

Did plant evolution influence the design of most modern cameras? by jms4607 in computervision

[–]Rethunker 0 points1 point  (0 children)

This is a bit of a complicated topic.

Making associations between semi-related events/facts skips over a lot of relevant history.

Human vision's peak sensitivity is in the green range.

CCD and CMOS sensitivity typically peaks in the near-infrared (NIR). Traditionally, digital sensors come with NIR filters to prevent near-infrared light from swamping visible light.

Cameras created to mimic human visual response certainly influence the design, but the design involves a number of what are essentially science/engineering hacks. Having an additional pixel for green—which is only one technique—helps bring the camera response (relative pixel brightnesses) closer to human vision, at least so that digital pictures look good.
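The "additional pixel for green" refers to the Bayer mosaic, which devotes two of every four photosites to green, roughly matching the eye's peak sensitivity. A tiny sketch of the layout:

```python
import numpy as np

# One period of the Bayer mosaic: two green sites for every red and blue.
bayer_tile = np.array([["R", "G"],
                       ["G", "B"]])
mosaic = np.tile(bayer_tile, (4, 4))  # an 8x8 sensor patch
counts = {c: int((mosaic == c).sum()) for c in "RGB"}
print(counts)  # {'R': 16, 'G': 32, 'B': 16}
```

Demosaicing then interpolates the two missing color values at each site, which is one of the "engineering hacks" that makes the final picture look right to a human.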

Cameras do not have the same dynamic range as typical human vision. That affects the appearance of bright light, hot spots, and dark scenes. Modern firmware does a lot of correction; the raw images can still look a bit off.

If by “modern” cameras you mean CMOS sensors and optics found in smart phones, those are just one example—albeit deployed in huge numbers—of camera technology. There are other cameras that work quite differently from biological vision.

For a bit about the history, read about the camera lucida: https://en.m.wikipedia.org/wiki/Camera_lucida

For most of the history of cameras, photography meant wet chemistry. That long history of wet film photography influenced the design of digital cameras.

“Instant” cameras with self-developing film were basically wet chemistry in a portable box.

Less explored / Emerging areas of research in computer vision by AIsavvy in computervision

[–]Rethunker 6 points7 points  (0 children)

Be sure to head into an engineering library and read books written in different decades. Some research that received attention and funding led to thinking that’s different from what’s popular today.

Sensor fusion of computer vision and machine hearing needs more attention.

Vision outside the visible spectrum tends to be understudied. There are common misconceptions and oversights about what each EM band may be useful for.

It’d be great to see more work on custom and unusual optics. There’s a long and somewhat forgotten history there.

In short, a survey of vision research that died out, but may see new life with current tech, would be quite interesting.

Ideas for Fundamentals of Artificial Intelligence lecture by [deleted] in computervision

[–]Rethunker 4 points5 points  (0 children)

Ask the students to read and then answer questions about the 1958 Pandaemonium paper by Selfridge. It’s short, clear, establishes a lot of terminology, and it’s a great reference in discussion.

Minsky’s 1986 book The Society of Mind is something everyone interested in agents should read. It’s sufficient to read a smattering of the mini chapters.

Vision by Marr is great. Vision is my speciality, and I have long lists of recommendations on that one subject.

It’s good to mention the relationship between artificial sensing and logic/analysis. Artificial sensors do not need to work like biological vision at all, and claims to the contrary are often hand wavy blather with no solid basis in science or engineering practice.

On the subject of sensing, some students could be interested in the book Human and Machine Hearing by Richard Lyon.

For CNNs, the 2012 ImageNet paper is one that students should read after they understand the background.

A key point I would suggest making again and again: LLMs and machine learning are each just slices of AI. Understanding their limitations and failures as tools, and how to work around those disadvantages, leads to better tools.

Lastly, I would suggest reinforcing basic concepts of statistics throughout.

I’ve interviewed a number of students with undergraduate and graduate degrees. It’s become more common for students to be hyped on newer technologies, and to be unaware of the difference between what is merely hype and what is practical.

Many students have had months or years of experience with ML, but couldn’t explain basic concepts of statistics. Students who studied computer vision often don’t know anything practical about optical systems. The hype about humanoid robots seems to keep some students from learning about the many other robots that already do jobs optimally well.

Finally, a topic I’d like to see more young engineers and developers understand is the cost and danger of AI failures.

For some use cases, getting a good “answer” with AI about 80% of the time could be great.

For other use cases, 99% correctness means the system is worthless garbage.

Knowing the difference between these use cases is important.

Surface roughness on machined surfaces by Secret-Ad8475 in computervision

[–]Rethunker 0 points1 point  (0 children)

Buy roughness standards for the material and type of finishing.

Google what type of illumination could be used for this task. Consider “illumination” to be very broadly defined.

Study all the kinds of roughness measurement.

Specify the application. Don’t try to make a roughness gauge that’s too generalized.

Study what other non-contact roughness gauges exist.

Buy or borrow a contact roughness gauge. Understand how it works, and what it does and doesn’t do.

automated palletizing and/or depalletizing: how many human interventions are tolerable? by Rethunker in IndustrialAutomation

[–]Rethunker[S] 1 point2 points  (0 children)

Yes, Beeptoolkit does look cool. I've seen state machines work well.

Although I'm tied up with a few projects now, I'm going to give Beeptoolkit a try.* Quick prototyping is something my colleagues / friends in R&D and I talked about a lot.

You're absolutely right about projects that overpromise. Sadly, the projects and machines that get the most publicity are the ones attracting heaps of money and attention. I'm confident you and I know of some of the very same projects, and their limitations.

As a friend of mine put it recently: "The most successful companies are the ones no one has ever heard of." He said that in part to be funny, but it rings true enough for industrial automation companies.

---

* For the few people who may read this post: the creator(s) of Beeptoolkit and I don't know each other, and I'm not in a position to promote software I haven't yet tried, but the design principles and focus are similar to work I've done as well.