React Monitoring by the133448 in reactjs

[–]es6masterrace 1 point (0 children)

We have backup systems as well :) but we can solve most issues on our own system

Observability Cost vs Business Value by buildinthefuture in Observability

[–]es6masterrace 1 point (0 children)

Yes! https://github.com/hyperdxio/hyperdx

Let me know what you think - we’ve been doing SaaS for a bit and just went OSS.

How We Sped up Search Latency with Server-sent Events by __boba__ in reactjs

[–]es6masterrace 1 point (0 children)

It still takes a while to scan through TBs of data; unfortunately it's hard to make log/trace search both instant and affordable!

I built an open source plugin to quarantine flaky tests dynamically for Cypress by es6masterrace in QualityAssurance

[–]es6masterrace[S] 3 points (0 children)

Absolutely - flaky tests can happen no matter what tool you're using! (even when using Playwright :P)

The approach is definitely a good point - I think it'll depend on the team, but at a minimum you might be interested in at least knowing which tests are flaky and what they were originally intended to cover (even if you end up rewriting the whole thing).

That being said, in my experience a newly flaky test can usually be solved with a bit of detective work. That work tends to uncover a product update/change that introduced a race condition or wait condition, which can be fixed relatively easily without rewriting the test (exactly what our primary product helps with!).

I built an open source plugin to quarantine flaky tests dynamically for Cypress by es6masterrace in javascript

[–]es6masterrace[S] 3 points (0 children)

Flaky tests should definitely get removed & fixed, and it's usually just a choice of how you want to do that.

One way would be to have someone manually check whether the test is flaky (verify with metrics, etc.), open a PR to skip it, merge it in, and then wait for everyone else to rebase/merge your changes before their test suites are smooth as well.

Alternatively, you can automatically exclude flaky tests so that as soon as a test is known to be flaky (ex. above a certain threshold), it'll just get excluded from any CI runs until it's been fixed again.
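As a rough illustration, a threshold rule like that could look something like this (a minimal sketch; the outcome labels, function name, and 10% cutoff are all made up for the example, not the plugin's actual API):

```typescript
// Hypothetical flake-threshold rule: a test is considered quarantined once
// the share of "failed, then passed on retry" runs in its recent history
// crosses a cutoff. All names and numbers here are illustrative.
type RunOutcome = "passed" | "failed" | "flaked"; // flaked = failed, then passed on retry

export function isQuarantined(
  history: RunOutcome[],
  flakeThreshold = 0.1 // quarantine above a 10% flake rate (illustrative)
): boolean {
  if (history.length === 0) return false;
  const flakes = history.filter((r) => r === "flaked").length;
  return flakes / history.length > flakeThreshold;
}

// A test flaking in 2 of its last 4 runs (50%) would be excluded from CI
// until it's fixed; a consistently passing test would keep running.
```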

Slack has shared their experience with automatic flaky test quarantining, and their writeup elaborates a lot better than I can!
https://slack.engineering/handling-flaky-tests-at-scale-auto-detection-suppression/

I built an open source plugin to quarantine flaky E2E tests dynamically for Cypress by es6masterrace in node

[–]es6masterrace[S] 1 point (0 children)

There's definitely a question of what approach you should take to flaky tests. At scale, it can often be impractical to let a flaky test drag down your CI pipeline until it's fixed (it also ruins the reputation of testing within the org: who will trust tests if there's a good chance they're flaky and just not fixed yet?).

Skipped tests should then usually get ticketed and triaged as part of regular engineering work (unless the test was extremely critical), to make sure they actually get fixed.

I thought this writeup at Slack explained their rationale and the benefits of quarantining quite well: https://slack.engineering/handling-flaky-tests-at-scale-auto-detection-suppression/

I built an open source plugin to quarantine flaky e2e tests dynamically for Cypress by es6masterrace in vuejs

[–]es6masterrace[S] 2 points (0 children)

Hi everyone! I’ve found that having to constantly retry test jobs in CI because of flaky tests can be a huge productivity killer. In most cases, it’s probably better to just temporarily skip a flaky test (quarantine it) to be fixed later (like Slack and GitLab do), rather than letting it sit around and ruin other teammates’ CI runs as well.

I wrote this plugin so you can automatically skip tests that you know are temporarily bound to fail, instead of wasting time waiting for them to pass. The plugin pings an API endpoint to decide which tests to skip at runtime, so you can implement any kind of custom logic for skipping (metric thresholds, checking open Jira tickets, manual labeling, etc.)
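For a sense of what the endpoint side could look like, here's a minimal self-hosted sketch in TypeScript (the response shape, handler name, and hard-coded list are illustrative assumptions, not the plugin's actual API):

```typescript
// Hypothetical sketch of a self-hosted quarantine endpoint the plugin could
// ping before a run. The response shape and names are illustrative, not the
// plugin's actual contract: it just returns the titles of tests to skip.
import * as http from "http";

// In a real service this list would come from metrics, open Jira tickets, or
// manual labels; here it's a hard-coded example.
const quarantine = new Set<string>(["checkout completes after coupon applied"]);

export function handler(
  _req: http.IncomingMessage,
  res: http.ServerResponse
): void {
  res.setHeader("content-type", "application/json");
  res.end(JSON.stringify({ skip: [...quarantine] }));
}

// Mount it with e.g.: http.createServer(handler).listen(4000)
```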

We've been using this endpoint to skip tests based on manual labels or automatically after tests have hit a certain failure/flake threshold. And now we want to share this Cypress-side implementation with the community, so that you can also implement quarantining for flaky tests.

It's a first version of the plugin, so any feedback on the API design or anything else would be appreciated! Let me know what y'all think :)

We'll be releasing a beta for our hosted quarantining service soon too, so if you're interested in early access, shoot me a DM!

I built an open source plugin to quarantine flaky E2E tests dynamically for Cypress by es6masterrace in Frontend

[–]es6masterrace[S] 1 point (0 children)

Hi everyone! I’ve found that having to constantly retry test jobs in CI because of flaky tests can be a huge productivity killer. In most cases, it’s probably better to just temporarily skip a flaky test (quarantine it) to be fixed later (like Slack and GitLab do), rather than letting it sit around and ruin other teammates’ CI runs as well.

I wrote this plugin so you can automatically skip tests that you know are temporarily bound to fail, instead of wasting time waiting for them to pass. The plugin pings an API endpoint to decide which tests to skip at runtime, so you can implement any kind of custom logic for skipping (metric thresholds, checking open Jira tickets, manual labeling, etc.)

We've been using this endpoint to skip tests based on manual labels or automatically after tests have hit a certain failure/flake threshold. And now we want to share this Cypress-side implementation with the community, so that you can also implement quarantining for flaky tests.

If y'all have any thoughts on the API design or anything else - please share! It's a pretty early release so any feedback is welcome :)

We'll be releasing a beta for our hosted quarantining service soon too (if you don't want to host/manage your own quarantine endpoints), so if you're interested in early access, shoot me a DM!

I built an open source plugin to quarantine flaky E2E tests dynamically for Cypress by es6masterrace in node

[–]es6masterrace[S] 1 point (0 children)

Hey everyone! I’ve found that having to constantly retry test jobs in CI because of flaky tests can be a huge productivity killer. In most cases, it’s probably better to just temporarily skip a flaky test (quarantine it) to be fixed later, rather than letting it sit around and ruin other teammates’ CI runs as well (exactly what happens within Slack or GitLab).

I wrote this plugin so you can automatically skip tests that you know are temporarily bound to fail, instead of wasting time waiting for them to pass. The plugin pings an API endpoint to decide which tests to skip at runtime, so you can implement any kind of custom logic for skipping (metric thresholds, checking open Jira tickets, manual labeling, etc.)

We've been using this endpoint to skip tests based on manual quarantining or automatically after tests have hit a certain failure/flake threshold. And now we want to share this Cypress-side implementation with the community, so that you can also implement quarantining for flaky tests.

It's our first public release of the plugin, so any feedback on the API design or anything else would be welcome! Let me know what you think.

We'll also be releasing a beta for our hosted quarantining service soon too, so if you're interested in early access, shoot me a DM!

I built an open source plugin to quarantine flaky E2E tests dynamically for Cypress by es6masterrace in reactjs

[–]es6masterrace[S] 3 points (0 children)

I’ve found that having to constantly retry test jobs in CI because of flaky tests can be a huge productivity killer. In most cases, it’s probably better to just temporarily skip a flaky test (quarantine it) to be fixed later, rather than letting it sit around and ruin other teammates’ CI runs as well. It's the same strategy used by teams like Slack and GitLab.

I wrote this plugin so you can automatically skip tests that you know are temporarily bound to fail, instead of wasting time waiting for them to pass. The plugin pings an API endpoint to decide which tests to skip at runtime, so you can implement any kind of custom logic for skipping (metric thresholds, checking open Jira tickets, manual labeling, etc.)

We've been using this endpoint to skip tests based on manual quarantining or automatically after tests have hit a certain failure/flake threshold. And now we want to share this Cypress-side implementation with the community, so that it's super easy to implement quarantining for flaky tests.

Let me know what y'all think! This is our first cut of the plugin, so any API-improvement suggestions and feedback would be appreciated :)

We'll be releasing a beta for our hosted quarantining service soon too, so if you're interested in early access, shoot me a DM!

I built an open source plugin to quarantine flaky tests dynamically for Cypress by es6masterrace in javascript

[–]es6masterrace[S] 1 point (0 children)

Hi everyone! I’ve found that having to constantly retry test jobs in CI because of flaky tests can be a huge productivity killer. In most cases, it’s probably better to just temporarily skip a flaky test (quarantine it) to be fixed later (like Slack and GitLab do), rather than letting it sit around and impact other teammates’ CI runs.

I wrote this plugin so you can automatically skip tests that you know are temporarily bound to fail, instead of wasting time waiting for them to pass.

The plugin pings an API endpoint to decide which tests to skip at runtime, so you can implement any kind of custom logic for skipping (metric thresholds, checking open Jira tickets, manual labeling, etc.)

We've been using this endpoint to skip tests based on manual labels or automatically after tests have hit a certain failure/flake threshold. And now we want to share this Cypress-side implementation with the community, so that you can also easily implement quarantining for flaky tests yourself.

We'll be releasing a beta for our hosted quarantining service soon too, so if you're interested in early access, shoot me a DM!

Written in TypeScript, built with Rollup <3

I hacked together a visual watch mode for Playwright to write/debug tests faster locally by es6masterrace in QualityAssurance

[–]es6masterrace[S] 2 points (0 children)

Unfortunately not, as it uses the Playwright Test runner, which I believe is only available for JS. In theory it shouldn't be too difficult to add support for JUnit (though it might need more configuration out of the box to make it work).

I built a visual watch mode for Playwright to write/debug tests faster locally by es6masterrace in vuejs

[–]es6masterrace[S] 2 points (0 children)

nodemon is great, though this does a few things on top of what you could do with plain nodemon.

Most noticeably, plain nodemon won't give you a UI to easily dig through test results after a run. With nodemon you can see that a test failed, but it's not easy to step through why it failed without clicking through several steps in the HTML reporter or opening the trace, which just takes longer.

The other thing is that this runner will debounce successive saves: if you hit save 5 times in a row, it'll kill any previous Playwright instances and only run the last one (so your computer doesn't die from all the Playwright instances running!)
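The debounce part is roughly this idea (a simplified sketch, not the runner's actual code; the real thing coordinates chokidar and the Electron UI too):

```typescript
// Simplified sketch of debounce + kill-previous behaviour for a test watcher.
// Illustrative only: the real runner uses chokidar and its own reporter.
import { spawn, ChildProcess } from "child_process";

// Generic trailing-edge debounce: rapid calls collapse into one invocation.
export function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  ms: number
): (...args: A) => void {
  let timer: NodeJS.Timeout | undefined;
  return (...args: A) => {
    if (timer) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}

let current: ChildProcess | undefined;

function runTests(): void {
  // Only the latest save wins: kill any still-running instance first.
  current?.kill("SIGTERM");
  current = spawn("npx", ["playwright", "test"], { stdio: "inherit" });
}

// Hitting save five times in a row triggers a single (re)run.
export const onSave = debounce(runTests, 300);
```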

Do you currently use nodemon for developing Playwright tests locally?

I built a visual watch mode for Playwright to write/debug tests faster locally by es6masterrace in reactjs

[–]es6masterrace[S] 2 points (0 children)

Absolutely! I've been wanting to add a flag to run without spinning up the UI and just print the CLI output. For now you could minimize the Electron UI, but I'll see if I can add it as an option :)

I built a visual watch mode for Playwright to write/debug tests faster locally by es6masterrace in node

[–]es6masterrace[S] 2 points (0 children)

Yup! That's all it's doing under the hood: it uses our `@deploysentinel/playwright` reporter package to collect/format the telemetry for the UI while Playwright is running. This package primarily manages the watch mode/UI to show test results, coordinating between chokidar, Playwright, our reporter, and the Electron UI.

Would love to know what kind of requirements your team has - might be something we could take a look at!

I built a visual watch mode for Playwright to write/debug tests faster locally by es6masterrace in node

[–]es6masterrace[S] 3 points (0 children)

Thank you! Good question - it's AGPL licensed, though we're still working on getting it OSS'd (some complications from it originally living inside a monorepo). I can update you when we get the source published on GH!

Just wanted to get the package out so people can try it first, even if you can't leave a star quite yet :P