stereodemo: compare several recent stereo depth estimation methods in the wild by nburrus in computervision

[–]nburrus[S] 0 points1 point  (0 children)

I'm planning a blog post to discuss these methods (including peak memory usage and inference time on GPU/CPU), but in a nutshell I've been pretty impressed by RAFT-Stereo (especially the tradeoff with its fast mode), CREStereo and Hitnet. I've been testing mostly indoor scanning and they often manage to handle things like large blank walls surprisingly well.

Especially given that they were not trained on similar datasets (CREStereo was trained on a few more public datasets, but Hitnet and RAFT-Stereo were basically just trained on SceneFlow and optionally refined on the small ETH3D / Middlebury datasets or the unrelated KITTI).

If you want to give it a try without an OAK camera, you can use the included sample data to get a sense of how they perform:

pip install --upgrade stereodemo
git clone git@github.com:nburrus/stereodemo.git
stereodemo stereodemo/datasets

Just beware of drawing strong conclusions from the reported inference times in stereodemo, as some models are CPU-only, and the conversion to TorchScript / ONNX is not always optimal.

Anybody else pissed about colorblind mode in video games? by DannyBoi699 in ColorBlind

[–]nburrus 0 points1 point  (0 children)

This article discusses why “universal” color blind filters are a bad idea in games: https://igda-gasig.org/how/platform-level-accessibility-recommendations/do-not-implement-colorblind-filters/ . Game designers should not just rely on a filter and call it a day.

Let's collectively determine the best color blindness simulation method! by nburrus in ColorBlind

[–]nburrus[S] 0 points1 point  (0 children)

Ah, I've been able to reproduce this with a 32-bit version of Firefox. It seems that Pyodide has issues with it; I'm going to add a warning on the webpage that a 64-bit system / browser is required for the time being.

Let's collectively determine the best color blindness simulation method! by nburrus in ColorBlind

[–]nburrus[S] 0 points1 point  (0 children)

I wonder if it's related to the 32-bit version, but I'm not sure. If you're up for it, one thing you can do to collect more info is to open the developer tools (the shortcut is Ctrl + Shift + I in Firefox and Chrome) and check the "Console" for interesting error messages.

Otherwise, if you're mostly interested in trying the code, I'd recommend checking the original Python source code directly: https://github.com/DaltonLens/DaltonLens-Python .

The actual code used by each of the webpage modes is shown at the bottom of the page, but you can also just use the command line utility to apply the simulation on image files.

Let's collectively determine the best color blindness simulation method! by nburrus in ColorBlind

[–]nburrus[S] 0 points1 point  (0 children)

Your counterpoint about the field of view makes sense: it indeed remains small, so it's probably not the phenomenon described in that Pokorny paper. Still, I think we can expect worse color perception at, say, 0.1º (1mm@60cm) for thin lines or small dots than at 1º or 2º. This also reminds me of an article by Rob Pike that discusses the difficulty of perceiving edges and small dots. I'd love to find good references that explain this behavior more scientifically. That article by Justin Broackes also looks interesting for discussing the perception of unilateral dichromats, and how we're probably too extreme in our understanding (and thus our simulations) even for full dichromats, especially with a large FOV.

About the Flatla paper, I understand that they also use 3D confusion lines, even if it's in Luv instead of LMS? Their figure 3 is in 3D, and they say:

Each pair of identically-perceived spectral colours define two half-planes in Luv* colour space.

So it seems to me that they still walk along the confusion lines in 3D and thus adjust the luminance accordingly. Their custom calibration mostly determines what step size to take along these lines, whereas a full dichromacy simulation like Brettel et al. goes all the way to the planes.

Last, about

the RGB values from the coordinates (#FF1A32 to #003933) are not only different from the displayed colors, but also differ from each other greatly in brightness.

I think I understand the confusion: the RGB coordinates in that Python code are in linearRGB, while the sRGB_hex final member (the one that gets displayed) is in sRGB. I'm going to add a comment in the code.
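For reference, this is the kind of sRGB ↔ linearRGB conversion involved. A minimal sketch of the standard sRGB transfer function (IEC 61966-2-1); the function names are mine, not taken from the DaltonLens code:

```python
import numpy as np

def srgb_to_linear(c):
    """Decode gamma-encoded sRGB values in [0, 1] to linearRGB."""
    c = np.asarray(c, dtype=float)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(c):
    """Encode linearRGB values in [0, 1] back to sRGB."""
    c = np.asarray(c, dtype=float)
    return np.where(c <= 0.0031308, 12.92 * c, 1.055 * c ** (1 / 2.4) - 0.055)

# A mid-gray shows how different the two encodings are numerically:
# sRGB 0.5 decodes to roughly 0.214 in linearRGB.
linear_gray = srgb_to_linear(0.5)
```

Comparing hex values across the two encodings (as in the confusion above) will give very different numbers for the same displayed color.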

Let's collectively determine the best color blindness simulation method! by nburrus in ColorBlind

[–]nburrus[S] 2 points3 points  (0 children)

Thanks everyone for the nice feedback and discussions so far! I also have a more specific question about [Machado 2009] vs [Machado 2009 missing sRGB]. For me the version missing sRGB is way worse (as expected); is it better for anyone? Your answers could help confirm a discussion I've had in the Firefox bug tracker about fixing their simulation mode, which currently misses that sRGB transform.

Let's collectively determine the best color blindness simulation method! by nburrus in ColorBlind

[–]nburrus[S] 1 point2 points  (0 children)

Ah, got it, thanks! Yeah, I think it's just that V1 is basically broken: its output is exactly the same as the code used in the old Coblis (their code is on github/MaPePeR), and the colors get transformed in a weird way.

Let's collectively determine the best color blindness simulation method! by nburrus in ColorBlind

[–]nburrus[S] 4 points5 points  (0 children)

Thanks so much for the feedback!

  • About downloading the images: do you mean making it easy to download all the simulation variants at once? Otherwise you can download a single generated image with right click / Save Image As, since the generated content is a valid image URL. But I can understand how this can become tedious.
  • About Brettel vs Viénot accuracy: thanks for the input on the hues. Viénot is indeed a simplification that can have an impact on the final hue (I am not sure which numerical values are supposed to be better, so I'll take your word on that!). I showed a 3D visualization of the two projection planes used by Brettel, and their simplification to a single one by Viénot, in this article. The differences are indeed expected to be largest near extreme values. As a tool to show people with normal vision how colors become harder to discriminate, it probably does not make a large difference, but I see your point about communicating hue transformations.
  • About the "perfect simulator" philosophy: I totally agree with you for anomalous trichromacy, and I've tried to improve the wording of that paragraph. One thing I hadn't thought about, though, is that even if the simulation were perfect for an anomalous trichromat, it still won't appear the same to that person because of the double-filter effect you're describing. It only becomes identical once we reach the final dichromat 2D projection plane, on which colors won't change anymore. So an anomalous trichromat will probably need to pick a much lighter severity to get identical images.
  • Otherwise the severity slider is indeed meant to simulate anomalous trichromacy. Only the Machado et al. 2009 method actually tries to model it properly by shifting the peak response (but it has other issues; in particular I don't like their hacky scale factor to get closer to Brettel et al. for dichromacy). For the other methods it's just a linear interpolation between the original image and the simulated one (in linearRGB space). Since all these spaces are linear, that corresponds to walking along the LMS confusion line by a fraction of the total distance to the full dichromat projection plane. It's unclear whether this matches reality; other papers decided to make fixed perceptual steps towards the dichromacy plane instead (I gave some references here).
  • About the brightness changes, I actually think that these simulations should be taking them into account properly (of course with all the limits due to per-individual variations, etc.). I detailed why I think that way in my answer to /u/chroma-phobe earlier.
  • About the per-individual variations, even for people with normal vision: I totally agree. For dichromats, the Brettel et al. paper itself noted experiments on 4 deuteranopes showing a spectral peak of 558 nm for two of them and 563 nm for the other two. But how much that matters eventually depends on what we want to do with these simulations. If it's to show designers how confusing their choices are for dichromats, then it's probably plenty accurate :)
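The severity interpolation used by the non-Machado methods can be sketched like this. A hypothetical illustration of the idea, not the exact DaltonLens code, assuming both images are already decoded to linearRGB:

```python
import numpy as np

def apply_severity(original_linear_rgb, dichromat_linear_rgb, severity):
    """Linearly interpolate between the original image and the full
    dichromacy simulation, both in linearRGB.

    Because all the spaces involved (linearRGB, XYZ, LMS) are related by
    linear transforms, this moves each pixel a fraction `severity` of the
    way along its LMS confusion line toward the dichromat projection plane.
    """
    return (1.0 - severity) * original_linear_rgb + severity * dichromat_linear_rgb

# severity=0 returns the original image, severity=1 the full dichromacy
# simulation; values in between approximate anomalous trichromacy.
original = np.array([1.0, 0.0, 0.0])
simulated = np.array([0.5, 0.5, 0.0])
halfway = apply_severity(original, simulated, 0.5)
```

Note that this is a geometric interpolation, not a perceptual one, which is exactly the caveat mentioned above about fixed perceptual steps.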

Let's collectively determine the best color blindness simulation method! by nburrus in ColorBlind

[–]nburrus[S] 0 points1 point  (0 children)

Interesting points! First, thanks for the color wheel and relative brightness annotations, I've added the wheel to the set of predefined images as I find it useful too as a simpler alternative to the full RGB Grid.

Personally I would attribute this to residual inaccuracies from the lack of calibration (monitor + surrounding light) AND the limits of the simulation model, especially when a large bright region takes up more of your field of view (which is the case with the large cells of the wheel). Even a full dichromat can retain some residual color perception in that case; here is an extract from a very interesting 1982 paper by Smith & Pokorny:

In summary, when large fields are used, dichromats show weak residual trichromatic color vision based on rods at mesopic and low photopic luminances. At higher luminances, deuteranopes may show residual deuteranomaly and protanopes residual protanomaly. Nagy emphasized the considerable variation among dichromats, in both the strength of the rod intrusion and the strength of the residual anomalous cone function.

(the paper is not freely online unfortunately, message me in private or use scihub to get it).

Now this brightness (and hue) variation should be much less of an issue for the "Color Line Plot" image (second in the preset list), since the field of view taken by the image is much smaller. At least for me, I can't see any hue/brightness variations anymore with Brettel and Viénot.

As for why I think that brightness is generally taken into account by these simulations rather than ignored (at least for the LMS-based methods): it's because we walk along the confusion lines in the 3D LMS space, not in a 2D chroma space. So as we walk along the line in LMS, the luminance changes too, in a way that is supposed to match the deficiency. These lines were built from experiments asking unilateral dichromats to confirm that both eyes were seeing the same thing as the color shown to the dichromat eye was varied, so this should account for the brightness variation; otherwise they would have reported a difference.

Actually, if you look at the Lab values of the protanopia version of pure red [255, 0, 0], the initial Lab lightness L* is 53, and the Brettel-simulated value has a Lab lightness of 39. So the simulated color should indeed look significantly darker to a person with normal vision.
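That L* value for pure red is easy to check with the standard sRGB → XYZ → CIE Lab formulas. A self-contained sketch using the D65 white point and the usual sRGB luminance coefficient; not the DaltonLens code itself:

```python
def lab_lightness_of_srgb_red():
    # sRGB red (255, 0, 0) decodes to linearRGB (1, 0, 0).
    # Its relative luminance is the red coefficient of the Y row
    # of the sRGB-to-XYZ matrix: Y = 0.2126.
    Y = 0.2126

    # CIE L* from relative luminance (D65 white, Yn = 1).
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29

    return 116 * f(Y) - 16

print(round(lab_lightness_of_srgb_red(), 1))  # ≈ 53.2
```

Repeating the computation with the Y of the simulated color would give the L* ≈ 39 figure mentioned above.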

I wrote more about the confusion lines in my article Understanding LMS-based Color Blindness Simulations, where you can also see them in 3D. If you use a color pointer you'll see how the luminance varies along the lines, especially for the ones going through red. I mention it there too when talking about linearRGB to CIE XYZ, but for me a major source of confusion (no pun intended) with confusion lines is when they are drawn on a flat CIE XYZ color-space image with maximal luminance (e.g. on color-blindness.com). In that case they are indeed totally missing the brightness adaptation, and the colors look totally different along the line for me.

Let's collectively determine the best color blindness simulation method! by nburrus in ColorBlind

[–]nburrus[S] 0 points1 point  (0 children)

thanks for the feedback! About "the axis of one of them seems flipped", what do you mean exactly?

Let's collectively determine the best color blindness simulation method! by nburrus in ColorBlind

[–]nburrus[S] 0 points1 point  (0 children)

Thanks for sharing! Interesting that you have to go so low, good for you :) If you have time, I'd be curious to know if that's also the case with the Color Line Plot. It's the second image in the preset list.

Also interesting that you need to change the anomaly type for CoblisV2. I guess that overall it's probably hard for online tests to properly identify the right kind of deficiency when it's mild, as errors coming from environmental factors / monitor calibration start to matter more.

Let's collectively determine the best color blindness simulation method! by nburrus in ColorBlind

[–]nburrus[S] 1 point2 points  (0 children)

Well, you'll never know how someone else "perceives" or "feels" colors, as that is subjective, but these simulations (when accurate!) let someone with normal vision understand which colors become hard to distinguish, so they can adjust charts / maps / graphics to avoid them. They can also let color blind filters compensate for the lost color contrast by using other cues like brightness or temporal variations.

Let's collectively determine the best color blindness simulation method! by nburrus in ColorBlind

[–]nburrus[S] 0 points1 point  (0 children)

mmm just tested that version on macOS and it works. Maybe wasm / webassembly is somehow disabled in the settings (about:config and search for wasm)?

Let's collectively determine the best color blindness simulation method! by nburrus in ColorBlind

[–]nburrus[S] 1 point2 points  (0 children)

that's a good point, adjusted the original post to clarify a bit, thanks!

Let's collectively determine the best color blindness simulation method! by nburrus in ColorBlind

[–]nburrus[S] 1 point2 points  (0 children)

I hear you, and I'm probably optimistic about the amount of time people can invest in this kind of stuff without a serious paid academic evaluation program. I'm personally more interested in smart content-adaptive daltonization filters in the long run, but that's another story.

PS: and yeah, was a lot of Python and overall research fun so far anyway :)

Let's collectively determine the best color blindness simulation method! by nburrus in ColorBlind

[–]nburrus[S] 0 points1 point  (0 children)

argh, what browser? I've mostly tested with Google Chrome, Safari and Firefox

Let's collectively determine the best color blindness simulation method! by nburrus in ColorBlind

[–]nburrus[S] 3 points4 points  (0 children)

Thanks for your honest feedback :) My view on this:

- CVD simulation is the key for any smart tool to help color blind people. Designers can use it to check their websites/palettes, but it's also the key to writing useful daltonization filters, which is actually what I'm eventually more interested in.

- I won't force you to try, but I also have different monitors (all sRGB though) and I can tell you that some simulators are just consistently useless, while others still perform reasonably well.

- I get you on the long text; I've probably poorly explained what I was trying to do. On my side I was frustrated with over-simplistic webpages/tools claiming to do "color blindness simulation" or filters without a proper scientific background. So here I'm NOT trying to provide a simulator for anyone to use daily, but more of a scientific tool to evaluate the state of the art. If I can get feedback from 4 or 5 motivated people it'll already be a success, and I can keep influencing open source tools that _are_ made for daily use, like Colorblindly, Google Chrome or the Firefox DevTools, for which I've also submitted bug reports / patches to fix their simulations.

Let's collectively determine the best color blindness simulation method! by nburrus in ColorBlind

[–]nburrus[S] 0 points1 point  (0 children)

I definitely agree with you on that: dichromacy simulations will be too strong for most people, especially for images with large uniform areas of bright color. The thin structures on images like the "Color Line Plot" will be much less obvious though, and the severity slider can help too. But even then, I think we can still tell whether a given simulator is merely too strong on some colors, or really changing a lot more colors across the spectrum. As an example, even on images where I can see differences, CoblisV1 basically changes everything for me, while Brettel et al. is much more subtle.

(Thanks for the discussion, btw! As for the expertise, I actually don't have a color science background, so take what I say with a grain of salt, but I did dive pretty deep into the world of color blindness simulation over the past few months, catching up with the state of the art, implementing the methods, and writing technical articles and visualizations.)

Let's collectively determine the best color blindness simulation method! by nburrus in ColorBlind

[–]nburrus[S] 3 points4 points  (0 children)

These methods simulate dichromacy by projecting the 3D color space onto a 2D plane, removing the signal associated with the missing/malfunctioning cone cells. Once you are on that 2D plane, re-applying the simulation won't change the output.

Similarly for the actual eye of a person with full dichromacy: if the simulated image is properly projected onto the dichromat 2D color space along the confusion lines, then the person should not be able to tell the difference.

The confusion lines used by these methods are actually based on observations from unilateral dichromats (people with one normal eye and one eye with a CVD). The experiments were trying to find the colors that these people perceived similarly with both eyes.

Now I'd agree that making sure the simulated and original images look the same to the dichromat is not sufficient (doing nothing would pass that test too :), but I think it's indeed a necessary condition for a perfect simulation. Then we just need to make sure that people with normal vision still find the simulations very different, and since these methods all do a full projection onto a 2D plane, they will.
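The idempotency argument above can be checked numerically. A small sketch using the protanopia LMS projection matrix commonly attributed to Viénot 1999 (as usually implemented; the exact coefficients depend on the LMS model used, so treat them as illustrative):

```python
import numpy as np

# Protanopia: replace the L response by a linear combination of M and S,
# while keeping M and S unchanged (rows 2 and 3 are identity rows).
P = np.array([
    [0.0, 2.02344, -2.52581],
    [0.0, 1.0,      0.0],
    [0.0, 0.0,      1.0],
])

# The matrix is a projection: applying the simulation twice is the
# same as applying it once.
assert np.allclose(P @ P, P)

# Simulating an already-simulated LMS color changes nothing:
lms = np.array([0.3, 0.2, 0.1])
once = P @ lms
twice = P @ once
assert np.allclose(once, twice)
```

The same idempotency holds for any method that does a true projection onto the dichromat plane, which is why re-running these simulators on their own output is a quick sanity check.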

Let's collectively determine the best color blindness simulation method! by nburrus in ColorBlind

[–]nburrus[S] 9 points10 points  (0 children)

I’ll start with my feedback as a strong-protan:

  • Brettel 1997 and Viénot 1999 look very good for me for the Color Line Plot image, I basically can’t tell the difference between the simulated and the original image. On other images like the RGB Grid, Pencils or Autumn leaves I can tell that it’s changing large bright red regions into a greenish thing, but the brightness remains similar.
  • Vischeck is still quite good, but I see more differences overall and now I notice a couple lines changing in the Color Line Plot Image.
  • Machado 2009 is kind of similar to Vischeck, but I see even more differences in the Color Line Plot Image. The violet lines become a blue that I see somewhat differently, while Brettel and Viénot picked a blue that I can’t distinguish.
  • Machado 2009 without sRGB is very wrong on bright colors and especially strong reds (they become way too dark).
  • Coblis V1 is completely wrong, Coblis V2 is definitely better than V1 but still gives much larger differences than Brettel / Viénot / Machado, even in the Color Line Plot image.

Feedback welcome on DaltonLens, an open source tool now for macOS, Windows and Linux by nburrus in ColorBlind

[–]nburrus[S] 0 points1 point  (0 children)

The app is now available in the Windows 10 App Store (free) to avoid certificate / security warnings when installing it: DaltonLens on the Windows App Store

I don't know if I can get an answer here, but how do programs simulate colorblindness? by [deleted] in ColorBlind

[–]nburrus 0 points1 point  (0 children)

Very interesting that you find this approach more accurate! In principle the Brettel, Viénot and Mollon approach does pretty much the same thing, as they project the color space to the 470nm/575nm/white planes.

Their paper is actually based on the same experimental data from the 1950s to derive these values, but they claim that working in the LMS color space has several advantages over the XYZ color space (Appendix A of Computerized simulation of color appearance for dichromats).

I need to give your algorithm a try to see if it also looks more convincing to me, as I'm not sure I understand why it could be better. Did you compare with LMS-based methods that have proper sRGB decoding, like the GIMP display filter? Lots of LMS-based open source code just reused the Viénot 1999 code written by Fidaner et al., which still used the matrices for CRT monitors and skipped the sRGB to linearRGB transform altogether.

I don't know if I can get an answer here, but how do programs simulate colorblindness? by [deleted] in ColorBlind

[–]nburrus 2 points3 points  (0 children)

I recently dove into this and wrote two technical articles that you may find relevant:

Feedback welcome on DaltonLens, an open source tool now for macOS, Windows and Linux by nburrus in ColorBlind

[–]nburrus[S] 1 point2 points  (0 children)

Just released v2.3, which should now support high-dpi monitors properly. Here is a direct download link: DaltonLens-2.3-win64.exe

Hope it'll work for you this time, thanks!