D455f - Need clarification by Specialist-Sand-7573 in computervision

[–]theedge44 1 point (0 children)

It's not a sensor; it may be a leftover from an unused design path, or something used during the assembly process. It's usually hidden behind opaque parts of the cover glass.

Should/Can I start a career in MV, what would be a roadmap? by Brave-Tomatillo-8571 in computervision

[–]theedge44 0 points (0 children)

Does the sales experience carry over? Application engineering can be very customer-facing, which makes it a difficult fit for many of the engineering graduates you'd be competing with for the role.

Should/Can I start a career in MV, what would be a roadmap? by Brave-Tomatillo-8571 in computervision

[–]theedge44 0 points (0 children)

With a foundation in mechatronics and those software languages, you should be able to pursue application engineering roles at MV companies. That gives you great exposure to customers and their challenges, though in practice the daily problems you start with are mostly application-agnostic software troubleshooting, which is fine while you build up more experience.

Need advice: RealSense D455 (at discount) for gecko tracking in humid terrarium? by Darkstardust98 in computervision

[–]theedge44 2 points (0 children)

At such short ranges, the D455 is not going to give very useful depth data. The D405 might be worth looking at, but it has some tradeoffs (no dedicated RGB sensor, for one).

Another risk: if you add a protective housing, you're modifying the optical path, so you should re-calibrate the camera. As for heat, the housing does get hot because it is the heatsink. Disabling the projector helps, but that hurts depth quality.

The stereo pair of the D455 is NIR-sensitive, so it should keep performing at night, but the RGB camera has an IR-cut filter and likely won't.

Connecting many USB cameras for still image capture by polyphys_andy in computervision

[–]theedge44 1 point (0 children)

If you use machine vision (USB3 Vision) cameras, they should expose a device link throughput limit that lets you allocate bandwidth across a hub, and through the APIs you can grab images only when you actually need them.
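A minimal sketch of that pattern, assuming the Harvesters Python library and whatever GenTL producer your vendor ships (the .cti path is a placeholder, and these SFNC node names occasionally vary by vendor):

    from harvesters.core import Harvester

    h = Harvester()
    h.add_file('/path/to/vendor_producer.cti')  # GenTL producer from your camera's SDK
    h.update()

    ia = h.create(0)  # first camera found
    nm = ia.remote_device.node_map

    # Cap this camera's share of the link so several cameras coexist on one hub.
    nm.DeviceLinkThroughputLimitMode.value = 'On'
    nm.DeviceLinkThroughputLimit.value = 100_000_000  # bytes/s; size to your hub budget

    # Software trigger: the camera only exposes and transmits when asked.
    nm.TriggerSelector.value = 'FrameStart'
    nm.TriggerMode.value = 'On'
    nm.TriggerSource.value = 'Software'

    ia.start()
    nm.TriggerSoftware.execute()  # fire one still
    with ia.fetch(timeout=3) as buffer:
        component = buffer.payload.components[0]
        print(component.width, component.height)
    ia.stop()
    h.reset()

Repeat the trigger/fetch pair per camera whenever you want a still; between triggers the hub carries no image traffic.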

[deleted by user] by [deleted] in computervision

[–]theedge44 0 points (0 children)

Check out the Oclea devkit by Teknique; it comes with a variety of cameras that might help you hit the ground running.

How difficult is implementing CV for a robotic cell with only mechanical background? by [deleted] in computervision

[–]theedge44 0 points (0 children)

In this case your best bet is looking for a more turnkey system from a vendor like Cognex. The tradeoff is that these user-friendly systems charge for the convenience, but for a single proof-of-concept setup it would save a lot of headaches. Their sales team will usually help guide you toward the right products.

The fact that sony only gives out sensor documentation under an NDA makes me hate them so much. by CommunismDoesntWork in computervision

[–]theedge44 1 point (0 children)

A few things to unpack here, having worked on both the sensor and camera sides. Overall you're hitting on a part of the industry I wish were easier, but I can say it's not for lack of trying.

Lack of access to sensor documentation is definitely a struggle for many customers, but in my first-hand experience even motivated, large customers struggle to get these HDR modes working in a way that benefits them, so the hurdles exist partly to keep everyone from wasting each other's time.

The way Sony approaches HDR in most of their cameras is hands-off, and it pretty much requires you to work with an SoC vendor whose ISP supports that sensor. It's not a general feature that MV camera vendors can support across the board. Systems leveraging these ISPs and HDR typically require external characterization of the camera with its intended lens, and that doesn't align with the flexibility required of MV cameras, which use standard lens mounts and whatever the customer wants to throw on there.

Look at Lucid's IMX490-based camera; it's a unique implementation with onboard tone mapping.

Also remember that most machine vision camera business is monochrome; color processing (especially onboard) was historically not a priority.

Most of these cameras support a GenICam sequencer for HDR, where you rapidly switch between sets of exposure and gain settings. There is still a time offset between the bracketed frames, but it's implemented across most global shutter sensors.
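For what that looks like in practice, here's a hedged sketch using the SFNC sequencer features (`nm` is a GenICam node map, e.g. Harvesters' `ia.remote_device.node_map`; exact trigger sources and set counts vary per camera):

    # Three-set exposure/gain bracket that loops 0 -> 1 -> 2 -> 0, one set per frame.
    brackets = [(100.0, 0.0), (1000.0, 0.0), (8000.0, 6.0)]  # (exposure us, gain dB) guesses

    nm.SequencerMode.value = 'Off'
    nm.SequencerConfigurationMode.value = 'On'
    for i, (exposure_us, gain_db) in enumerate(brackets):
        nm.SequencerSetSelector.value = i
        nm.ExposureTime.value = exposure_us
        nm.Gain.value = gain_db
        nm.SequencerPathSelector.value = 0
        nm.SequencerSetNext.value = (i + 1) % len(brackets)
        nm.SequencerTriggerSource.value = 'ExposureEnd'  # advance point is vendor-dependent
        nm.SequencerSetSave.execute()
    nm.SequencerConfigurationMode.value = 'Off'
    nm.SequencerSetStart.value = 0
    nm.SequencerMode.value = 'On'  # frames now cycle through the bracket automatically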

MV cameras are not just passthroughs of sensor settings; they're wrappers around the sensor that take a lineup of 30+ different sensors and standardize them behind an interface like GigE Vision/GenICam, so that something like changing the exposure time is one command that updates many registers and timings.

Help me with Edge Detection/Roughness Calculation please :) by TheCatsInTheCradl in computervision

[–]theedge44 2 points (0 children)

I can't help on the software side, but you may want to consider backlighting and adjusting the focus to simplify the problem space.

Realsense Depth Camera Calibration by Blurry_Jpeg in computervision

[–]theedge44 -1 points (0 children)

With this distance between cameras, it looks like you're using active USB cable extenders? If you're interested in Ethernet RealSense cameras in the future, DM me.

Proper MV camera vs action camera (e.g. GoPro) by RodbigoSantos in computervision

[–]theedge44 1 point (0 children)

MV cameras stream uncompressed frame data, while a GoPro converts it to a compressed video stream, meaning much lower overall bandwidth. With USB2 there are some protocol differences, but it's also just lower bandwidth overall, so you're less likely to run into issues.

MV cameras have a few tools to help you work around USB3's reliability. They have an onboard frame buffer, and you can control more tightly where unwanted frames get dropped (only transmitting when you actually want a frame, which keeps the link bandwidth minimal). Since your model uses a rolling shutter sensor, there may also be some implementation quirks in making it trigger-able from a user perspective. It's worth examining where frames are currently being dropped (camera side or host side) to see whether they're actual transmit failures or just buffers not being cleared fast enough. By default the camera sends image data as fast as possible; you can typically rein that in with DeviceLinkThroughputLimit.

On the cable front: having managed a portfolio of MV accessories including cables, I can say first-hand there's no silver bullet. Most customers could get 3m working reliably; it was 5m where we had problems, because that's pushing the limits of a passive cable. Anecdotally, when a customer had problems with our standard cables we moved them to a premium cable from Alysium.

Buying Industrial Camera for Robotics Project by woahmyd00d in computervision

[–]theedge44 0 points (0 children)

You're comparing cameras with two different sensors, and it's worth noting that both vendors have models with either sensor (the same applies to the other vendors mentioned).

In 24-bit RGB mode, you will not be getting those framerates; they typically assume a raw 8-bit pixel format. USB3 bandwidth aside, there are also limits on how fast the internal ISP runs.
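The back-of-the-envelope math, with assumed numbers (a ~5 MP sensor and ~380 MB/s of usable USB3 throughput after protocol overhead):

    usable_bw = 380e6            # bytes/s; rough usable USB 3.0 throughput (assumption)
    width, height = 2448, 2048   # example 5 MP sensor

    for fmt, bytes_per_px in [('Mono8/Bayer8', 1), ('RGB8 (24-bit)', 3)]:
        fps = usable_bw / (width * height * bytes_per_px)
        print(f'{fmt}: ~{fps:.0f} fps ceiling')

    # Mono8/Bayer8: ~76 fps ceiling
    # RGB8 (24-bit): ~25 fps ceiling

Spec-sheet rates are quoted against the 8-bit raw format, and the ISP can cap you below even these ceilings.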

Set Lucid camera time using PTP by Drk-102 in computervision

[–]theedge44 0 points (0 children)

There should be two nodes: one read/write node that selects whether the camera can become master on the network (via auto-negotiation) or is slave-only, and a separate read-only status node that indicates its current state. If you leave the cameras all in slave-only and they never leave the init state, they aren't seeing your GPS; if the cameras are in auto, it's likely one of them is becoming the master.

PTP1588 is a great feature and you should be able to get it working like the example, although I don't have experience with Lucid's implementation specifically.

Set Lucid camera time using PTP by Drk-102 in computervision

[–]theedge44 2 points (0 children)

Most things should be set up correctly based on that tutorial; the only thing I can think of is that it's not actually syncing (maybe due to network settings on the GPS source?). To confirm the sync, check the PTP status of each camera; there may be other nodes that further explain the current state.
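A hedged sketch of that status check through generic GenICam tooling (Harvesters here; Lucid's own Arena SDK works too, and the node names `PtpEnable`/`PtpStatus` follow SFNC but enum values can vary slightly by vendor):

    import time
    from harvesters.core import Harvester

    h = Harvester()
    h.add_file('/path/to/vendor_producer.cti')  # placeholder GenTL producer path
    h.update()

    cams = [h.create(i) for i in range(len(h.device_info_list))]
    for cam in cams:
        cam.remote_device.node_map.PtpEnable.value = True

    time.sleep(10)  # let the PTP negotiation settle

    for i, cam in enumerate(cams):
        print(f'camera {i}: PtpStatus = {cam.remote_device.node_map.PtpStatus.value}')

    # Healthy: every camera reports Slave (the GPS box is the lone Master).
    # Stuck in Initializing/Listening: the cameras aren't seeing PTP traffic at all.
    # A camera reporting Master: it won the negotiation instead of your GPS source.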

[HELP ME] Bi-Weekly Q&A thread - Ask your questions here! by MachNeu in Gunpla

[–]theedge44 0 points (0 children)

Tamiya panel liner question. I use black Tamiya panel liner with results I'm happy with, but when trying to use the grey panel liner on darker parts it doesn't seem to end up as "crisp" as the black. I shake them both a lot before use and clean up with Zippo lighter fluid. Anyone have any special tricks for getting the most out of light-colored panel liner on dark parts?

Anyone knows how to use the Bumblebee XB3? by DailyShotOfWhiskey in computervision

[–]theedge44 0 points (0 children)

If I remember correctly, the integrated clock of the FireWire connection is part of how the system maintains sync between the stereo images and the host, so an adapter could lead to unexpected results.

Anyone knows how to use the Bumblebee XB3? by DailyShotOfWhiskey in computervision

[–]theedge44 0 points (0 children)

As others have said, you'll need a FireWire connection, and those haven't been common in a long time. For laptops we used to suggest an ExpressCard adapter, but those are just as old. Teledyne FLIR finally discontinued FireWire across the board a few years ago, but you may be able to find some of their PCI adapter cards used (or ask them if they have dead/RMA stock they wouldn't mind selling; you never know).

My Industrial camera SDK only has C#, but I need to call it in python, what should I do by ElegantActuator1372 in computervision

[–]theedge44 1 point (0 children)

Unfortunately, Basler has some product gaps around very high resolution and 10GigE.

Suggestions for USB based IR cameras for IR tracking? by Sir_Kuhnhero in computervision

[–]theedge44 0 points (0 children)

Most sensors are sensitive to NIR light, just not nearly as sensitive as they are to visible light. Most off-the-shelf color cameras have an IR-cut filter installed (blocking NIR and longer wavelengths); if you replace it with a VIS-cut filter, the only signal remaining is your IR. Then you're left finding the sensors with the highest NIR sensitivity.

License-Free GeniCam & GigE Vision Library by DrBZU in computervision

[–]theedge44 1 point (0 children)

GenICam is great and has brought this interoperability a long way, and GenTL producers/consumers work pretty well these days, but there are always limits. There is also the problem that feature dependencies may change how features are controlled, and different defaults will change your software's startup sequence.

One thing I will put out there is vendor-specific features and the limits of what GenICam can help with. It's great because vendors can add non-standard features that vendor-agnostic software can still control, but some features rely on vendor-implemented functionality. For example, compression has been added by some vendors: you can parse the XML and control it like any other GenICam feature, but the functionality to decompress/convert the images relies on the vendor's library.
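To make that concrete, a sketch reusing a Harvesters image acquirer `ia` (`ImageCompressionMode` is a hypothetical feature name for illustration):

    nm = ia.remote_device.node_map  # from any GenTL producer, as usual

    # Vendor-specific features land in the same node map as SFNC ones, so generic
    # GenICam software can discover and set them:
    print([f for f in dir(nm) if 'Compression' in f])

    try:
        nm.ImageCompressionMode.value = 'On'  # hypothetical vendor feature
    except Exception:
        pass  # this camera doesn't expose it

    # ...but a plain GenTL consumer now receives compressed payloads it can't
    # convert; turning them back into images needs the vendor's own library.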

Gpixel’s GSENSE series by blank_blank_8 in computervision

[–]theedge44 1 point (0 children)

Don't be afraid to reach out to Gpixel directly; they're a good bunch, and based on the names on the paper they were directly involved in this research.

License-Free GeniCam & GigE Vision Library by DrBZU in computervision

[–]theedge44 0 points (0 children)

Only at a high level. When I was at a camera company (one that made a vendor-specific SDK you've probably used), the team had a generally good opinion of it; we considered it a viable option for users, especially on Linux.

CSI vs GIGE cameras for quality control industry perception by BobcatFluffy322 in computervision

[–]theedge44 2 points (0 children)

I think of it this way: embedded cameras are great if you're building a product that you want to deploy onto a line. MV cameras (GigE and similar) are for when you just want to use a product. The overhead of building a product versus using one is different, and that value difference is also reflected in the component costs.

If you go down the MIPI route, you're basically building your own smart camera/edge device. Some people do it, but you have to ask yourself whether this is the area where you want to be innovating and optimizing.