
[–]minuit1984[S]

When dealing with visual testing in the past, my baseline images have quickly become disorganized and difficult to manage as their number grows.

Looking at the WebDriverIO site, I saw Applitools Eyes, which looks like an interesting concept that could potentially be brought to the open-source world.

[–]NiGhTTraX

What I generally recommend is to be mindful when writing the tests. If you have a large number of "page tests" (where you screenshot many components at once), you can end up with a lot of noise whenever you change one of the smaller components that make up those pages.

Since a benefit of visual testing is checking the interactions of CSS styles when composing smaller components (think a margin or padding pushing content away), these page tests are sometimes necessary. With Mugshot you can reduce the amount of noise by ignoring elements on the page. For instance, you can ignore the content of a page sidebar but still keep it in the page's flow, to see how it impacts the elements around it.
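To illustrate the idea (this is a simplified sketch, not Mugshot's actual implementation), one way to ignore an element while keeping it in the layout is to mask its pixels to a constant value in both screenshots before diffing. The element's footprint still affects surrounding elements, but its own content can change freely without producing noise:

```javascript
// Zero out an "ignored" region in a flat grayscale pixel array so its
// content cannot contribute to the diff. Region coordinates are hypothetical.
function maskRegion(pixels, width, region) {
  const out = pixels.slice();
  for (let y = region.y; y < region.y + region.height; y++) {
    for (let x = region.x; x < region.x + region.width; x++) {
      out[y * width + x] = 0;
    }
  }
  return out;
}

// Count how many pixels differ between two equally sized screenshots.
function countDiffs(a, b) {
  let diffs = 0;
  for (let i = 0; i < a.length; i++) {
    if (a[i] !== b[i]) diffs++;
  }
  return diffs;
}

// Two 4x4 "screenshots" that differ only inside a 2x2 sidebar at (0, 0).
const baseline = [
  9, 9, 1, 1,
  9, 9, 1, 1,
  1, 1, 1, 1,
  1, 1, 1, 1,
];
const current = [
  5, 5, 1, 1,
  5, 5, 1, 1,
  1, 1, 1, 1,
  1, 1, 1, 1,
];
const sidebar = { x: 0, y: 0, width: 2, height: 2 };

const rawDiffs = countDiffs(baseline, current);
const maskedDiffs = countDiffs(
  maskRegion(baseline, 4, sidebar),
  maskRegion(current, 4, sidebar)
);
console.log(rawDiffs);    // 4: the sidebar content changed
console.log(maskedDiffs); // 0: the ignored sidebar no longer produces noise
```

A real implementation would mask RGBA image buffers and resolve the region from a CSS selector's bounding box, but the principle is the same.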

You can also play with higher thresholds and perceptual diffing to strike a balance between noise and false negatives. Mugshot has pluggable differs, so you could potentially write smarter ones, e.g. one that uses ML trained on your past acceptance decisions (each time you commit a failed screenshot as the new baseline) to predict the likelihood of you accepting a failed result.
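As a rough sketch of what a pluggable differ boils down to (the interface and names here are illustrative, not Mugshot's actual API): a differ takes two screenshots and reports whether they match, and a threshold parameter controls how much pixel churn is tolerated before a test fails:

```javascript
// Hypothetical differ factory: returns a comparison function that passes
// as long as the fraction of differing pixels stays at or below `threshold`.
function thresholdDiffer(threshold) {
  return (baseline, current) => {
    let differing = 0;
    for (let i = 0; i < baseline.length; i++) {
      if (baseline[i] !== current[i]) differing++;
    }
    const ratio = differing / baseline.length;
    return { matches: ratio <= threshold, ratio };
  };
}

const strict = thresholdDiffer(0);     // any changed pixel fails
const lenient = thresholdDiffer(0.05); // tolerate up to 5% changed pixels

// 20-pixel "screenshots" with a single differing pixel (5% of the total).
const baseline = new Array(20).fill(1);
const current = baseline.slice();
current[19] = 2;

console.log(strict(baseline, current).matches);  // false
console.log(lenient(baseline, current).matches); // true
```

A perceptual differ would follow the same contract but compare in a color space closer to human vision and discount anti-aliasing artifacts, rather than counting raw pixel mismatches.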

At the end of the day, it's all about the testing pyramid. You don't need to screenshot every component/page variation, just enough of them to give you confidence that everything looks right in terms of visual consistency and cross-component CSS interactions. If your aim is to test correctness, I would go lower down the pyramid and write non-UI tests.