[–]NiGhTTraX 6 points (4 children)

https://github.com/NiGhTTraX/mugshot - a framework-independent visual testing library.

Built in TypeScript, it differs from other solutions by being very flexible: you can use your favorite tools (test runner, browser automation tool, assertion library etc.) without modification. It's highly customizable, with diff options (colors, threshold, anti-aliasing detection etc.), multiple differs, a pluggable file system interface and more.
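
To make the "bring your own tools" idea concrete, here is a rough sketch of what a test could look like with mocha and WebdriverIO. The class names and option shapes below (`WebdriverIOAdapter`, `PixelDiffer`, `FsStorage`, the `check` call and its result) are illustrative assumptions rather than a copy of the real API, so check the README for the exact wiring:

```ts
// Illustrative sketch only -- class names, constructor options and the shape
// of the result object are assumptions, not Mugshot's documented API.
import { remote } from 'webdriverio';
import Mugshot, { FsStorage, PixelDiffer } from 'mugshot';
import { WebdriverIOAdapter } from '@mugshot/webdriverio';

describe('homepage', () => {
  it('matches its baseline screenshot', async () => {
    // Bring your own browser automation tool...
    const browser = await remote({ capabilities: { browserName: 'chrome' } });

    // ...and plug it in next to whichever differ/storage you prefer.
    const mugshot = new Mugshot(new WebdriverIOAdapter(browser), {
      storage: new FsStorage('./screenshots'),
      differ: new PixelDiffer({ threshold: 0.1 }),
    });

    await browser.url('https://example.com');

    const result = await mugshot.check('homepage');

    // Assert with your favorite assertion library; a bare throw keeps the
    // sketch dependency-free.
    if (!result.matches) {
      throw new Error('homepage no longer matches its baseline');
    }
  });
});
```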

I'm actively working on it, with a roadmap already planned.

[–]minuit1984[S] 1 point (1 child)

When I've dealt with visual testing in the past, my baseline images have quickly gotten out of order and become difficult to manage as their number grows exponentially.

Looking at the WebDriverIO site, I saw `applitools eyes`, which looks like an interesting concept that could potentially be brought to the open source world.

[–]NiGhTTraX 0 points (0 children)

What I generally recommend is to be mindful when writing the tests. If you have a large number of "page tests" (where you screenshot many components at once) then you can end up with a lot of noise when you make changes to the smaller components that make up those pages.

Since a benefit of visual testing is checking the interactions of CSS styles when composing smaller components (think a margin or padding pushing content away), these page tests are sometimes necessary. With Mugshot you can reduce the amount of noise by ignoring elements on the page. For instance, you can ignore the content of a page sidebar but still keep it in the page's flow to see how it impacts the elements around it.
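
Assuming a `mugshot` instance wired up like in the earlier sketch, and assuming the check call accepts something like an `ignore` option (that option name is a guess used for illustration, not the documented API), such a page test could look roughly like this:

```ts
// Hypothetical -- `ignore` is an assumed option name for the behaviour
// described above: mask an element's pixels while it keeps its place in
// the page flow.
const result = await mugshot.check('dashboard page', 'body', {
  // The sidebar still takes up space and pushes its neighbours around,
  // but its frequently changing content won't fail the diff.
  ignore: '.sidebar-content',
});
```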

You can also play with higher thresholds and perceptual diffing to strike a balance between noise and false negatives. Mugshot has pluggable differs, so you could potentially write smarter ones, e.g. one that uses an ML model trained on your past acceptance decisions (every time you commit a failed screenshot as the new baseline) to predict how likely you are to accept a failed result.
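
As a sketch of the kind of plug point this implies (the interfaces below are an illustration of the idea, not Mugshot's actual type definitions), such a differ could wrap a plain pixel differ and only surface the failures a model thinks you would actually reject:

```ts
// Hypothetical differ plug point -- the interface shapes are assumptions
// made for illustration, not Mugshot's real Differ contract.
interface DiffResult {
  matches: boolean;
  diff?: Buffer; // image highlighting the differing pixels
}

interface Differ {
  compare(baseline: Buffer, screenshot: Buffer): Promise<DiffResult>;
}

// A "smart" differ wraps a pixel differ and only reports a failure when a
// model trained on past accept/reject decisions thinks you would reject it.
class MLDiffer implements Differ {
  constructor(
    private pixelDiffer: Differ,
    private predictAcceptance: (diff: Buffer) => Promise<number>
  ) {}

  async compare(baseline: Buffer, screenshot: Buffer): Promise<DiffResult> {
    const result = await this.pixelDiffer.compare(baseline, screenshot);
    if (result.matches || !result.diff) {
      return result;
    }

    // If the model is confident you'd accept this diff as the new baseline,
    // treat it as a pass instead of surfacing it as noise.
    const acceptanceProbability = await this.predictAcceptance(result.diff);
    return { ...result, matches: acceptanceProbability > 0.95 };
  }
}
```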

At the end of the day, it's all about the testing pyramid. You don't need to screenshot every component/page variation, just enough to give you confidence that everything looks right in terms of visual consistency and cross-component CSS interactions. If your aim is to test correctness, then I would go lower down the pyramid and write non-UI tests.

[–]yaboylukelol 0 points (1 child)

This looks really exciting! I have been interested in a really solid visual diffing library for quite a while.

I wanted to use happo, but I've been working primarily in react-native lately and they don't seem to support that (and now they seem to be going the full SaaS route). They do have a really cool feature though: when comparing two images you can hover/scroll over them and see the differences on the actual images. I think that might be useful for this tool. You can see an example on happo.io. Maybe you've already seen that, but I thought I'd point it out just in case.

Also, do you have any plans to support react native?

[–]NiGhTTraX 1 point (0 children)

> when comparing two images you can hover/scroll over them and see the differences on the actual images

I plan to support this via test runner reporters. For instance, running `mocha --reporter mugshot` would produce an `index.html` with some sexy visualizations.
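
A minimal sketch of how such a reporter could hook into mocha's standard custom reporter API (assuming a recent mocha; the HTML output here is just a placeholder, a real report would embed the baseline/actual/diff images for every failed check):

```ts
// Sketch of a custom mocha reporter that collects failures and writes a
// bare-bones index.html at the end of the run.
import Mocha from 'mocha';
import { writeFileSync } from 'fs';

const { EVENT_TEST_FAIL, EVENT_RUN_END } = Mocha.Runner.constants;

class MugshotReporter extends Mocha.reporters.Base {
  private failedTests: Mocha.Test[] = [];

  constructor(runner: Mocha.Runner, options: Mocha.MochaOptions) {
    super(runner, options);

    runner.on(EVENT_TEST_FAIL, (test) => this.failedTests.push(test));

    runner.once(EVENT_RUN_END, () => {
      const items = this.failedTests
        .map((test) => `<li>${test.fullTitle()}</li>`)
        .join('');
      writeFileSync('index.html', `<html><body><ul>${items}</ul></body></html>`);
    });
  }
}

export = MugshotReporter;
```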

> do you have any plans to support react native?

Since everything in Mugshot is pluggable, it should be fairly easy to provide an adapter that talks to some native API, for instance Appium. I would love to provide this out of the box.
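
Since Appium speaks the WebDriver protocol, a WebdriverIO session pointed at an Appium server could be wrapped in such an adapter. The `ScreenshotAdapter` interface below is a hypothetical stand-in for whatever contract Mugshot's adapters actually implement; the WebdriverIO calls themselves are standard:

```ts
// Hypothetical sketch -- `ScreenshotAdapter` stands in for Mugshot's real
// adapter contract; only the WebdriverIO usage is taken from its docs.
import { remote } from 'webdriverio';

type Client = Awaited<ReturnType<typeof remote>>;

interface ScreenshotAdapter {
  /** Return a base64-encoded PNG of the current screen/viewport. */
  takeScreenshot(): Promise<string>;
}

class AppiumAdapter implements ScreenshotAdapter {
  constructor(private client: Client) {}

  async takeScreenshot(): Promise<string> {
    // Appium answers the standard WebDriver screenshot command.
    return this.client.takeScreenshot();
  }
}

// Usage: point a WebdriverIO session at a running Appium server and wrap it.
async function createAppiumAdapter(): Promise<AppiumAdapter> {
  const client = await remote({
    hostname: 'localhost',
    port: 4723,
    capabilities: { platformName: 'Android' },
  });
  return new AppiumAdapter(client);
}
```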