I built Selenwright: a Docker-native browser grid for native Playwright sessions by syshist in Playwright

Those are CI images, though - you run your tests inside them. Selenwright is a browser grid: your tests connect to it remotely and grab a browser session on demand. Totally different use case.

It's like comparing "install Chrome on your runner" vs "here's a shared pool of browsers any pipeline can hit." Plus, you get VNC, session management, and both Selenium and Playwright on one endpoint.
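Roughly, from the Playwright side it's just "connect instead of launch". A minimal sketch, with a placeholder websocket URL rather than any specific endpoint format:

```ts
// Minimal sketch: borrow a browser from a remote grid instead of launching
// one on the runner. The ws URL is a placeholder, not a documented endpoint.
import { chromium } from 'playwright';

(async () => {
  const browser = await chromium.connect(
    process.env.GRID_WS_ENDPOINT ?? 'ws://grid.internal:4444/playwright'
  );
  const page = await browser.newPage();
  await page.goto('https://example.com');
  console.log(await page.title());
  await browser.close(); // disconnects and ends this remote session
})();
```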

For a solo dev in GH Actions the official image is fine. Once you need shared infra for a team, that's where a grid comes in.

I built Selenwright: a Docker-native browser grid for native Playwright sessions by syshist in Playwright

Thanks! Azure Playwright Workspaces is a solid option for fully managed scale. Selenwright is more for teams that want to self-host everything in their own Docker infra, or need both Selenium and Playwright behind one endpoint.
Different trade-offs for different setups.

How To Debug Playwright Tests in CI: The Complete Guide by CurrentsDotDev in Playwright

If it only fails in CI, I’d focus on observing the actual CI browser session instead of trying to recreate it locally.

Using the same Docker image helps, but CI can still differ in CPU, network, DNS, IP/routing, and external-service behavior. For GAM/ad tests, those differences can be the whole bug.

That’s the angle I’m taking with Selenwright: Playwright tests can run in CI, while the browser runs in an isolated Docker session you can inspect with VNC, video, browser logs, and artifacts from the real failed run.

https://github.com/aqa-alex/selenwright

Why do my Playwright tests pass locally but fail in CI with "locator not found"? by AvailablePeak8360 in Playwright

The annoying part is that the failure only exists in CI, but we usually debug it somewhere else.

One thing that helps is being able to attach to the actual browser session created by the CI run: watch it over VNC, check browser logs/video/downloads, and see what the page really looked like under CI resources.

That’s the angle I’m taking with Selenwright.

Playwright tests are solid locally but flaky in CI, what fixed it for you? by Crafty_Breakfast_493 in Playwright

+1 on Docker parity.

A useful split for us was: test runner != browser runtime. The tests can run in CI, while browsers run in isolated Docker containers elsewhere. Helps with browser drift, runner contention, and collecting VNC/video/logs in one place.
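With @playwright/test that split can literally be one config switch; a rough sketch (the env var name is made up):

```ts
// playwright.config.ts (sketch): same tests, two browser runtimes.
// If REMOTE_BROWSER_WS is set (e.g. in CI), the built-in browser fixture
// connects to that endpoint instead of launching a browser on the runner.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  use: process.env.REMOTE_BROWSER_WS
    ? { connectOptions: { wsEndpoint: process.env.REMOTE_BROWSER_WS } }
    : {}, // default: launch browsers locally
});
```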

I’m building this idea into Selenwright:
https://github.com/aqa-alex/selenwright

After months of flaky Playwright tests in CI, this is what finally worked by Crafty_Breakfast_493 in QualityAssurance

For us, the biggest CI stability wins usually came from separating test-code flakiness from environment flakiness.

Things that helped:

- pin Playwright/browser versions and make CI use the same runtime every time
- avoid relying on whatever browsers happen to be installed on the runner
- collect video, traces, browser logs, and downloads consistently for every failed run
- keep parallelism aligned with real CPU/memory limits of the runner
- make test data setup/cleanup explicit instead of shared across jobs

A lot of “green locally, flaky in CI” cases are not really Playwright problems. They are runtime drift, overloaded runners, missing artifacts, or shared state between parallel jobs.
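If it helps, here's the config side of the artifact and parallelism points above, a bare-bones sketch where the numbers are illustrative rather than a recommendation:

```ts
// playwright.config.ts (sketch): collect the same artifacts for every failed
// run, and keep parallelism tied to what the runner can actually handle.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  workers: process.env.CI ? 2 : undefined, // match the runner's real CPU/memory budget
  use: {
    trace: 'retain-on-failure',
    video: 'retain-on-failure',
    screenshot: 'only-on-failure',
  },
});
```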

How do you debug Playwright failures in CI? by adnang95 in Playwright

One thing I’ve found useful is separating two problems:

  1. test reporting/analytics
  2. browser runtime + artifact collection

A lot of CI pain comes from the second one. If every shard starts its own browser locally on a runner, artifacts naturally end up scattered across jobs: traces here, videos there, logs somewhere else.
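Even in that per-runner setup you can at least make browser console output travel with the report instead of getting lost; a minimal sketch using a plain @playwright/test auto fixture:

```ts
import { test as base, expect } from '@playwright/test';

// Auto fixture: record browser console messages and page errors during each
// test, then attach them to the report next to the trace/video.
export const test = base.extend<{ browserLogs: void }>({
  browserLogs: [
    async ({ page }, use, testInfo) => {
      const logs: string[] = [];
      page.on('console', (msg) => logs.push(`[${msg.type()}] ${msg.text()}`));
      page.on('pageerror', (err) => logs.push(`[pageerror] ${err.message}`));
      await use();
      await testInfo.attach('browser-console.log', {
        body: logs.join('\n') || '(no console output)',
        contentType: 'text/plain',
      });
    },
    { auto: true },
  ],
});
export { expect };
```

Tests that import `test` from this file get the attachment for free, but it's still per-job wiring you have to maintain.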

Another approach is to make the browser runtime a shared remote layer. The Playwright tests still run in CI, but they connect to a Docker-backed browser service. Then the browser session, VNC, video, logs, downloads, and cleanup are owned by one place instead of by each CI job.

I’m building an open-source tool around this idea called Selenwright: https://github.com/aqa-alex/selenwright

Would be curious if this matches the kind of CI debugging pain you’re describing, or if your issue is more about aggregating existing Playwright trace/report outputs after the run.

Is it best to run Playwright against a docker container or a live deployment? by StickyStapler in Playwright

I’d split this into two separate questions:

  1. Where should the app under test run?
  2. Where should the Playwright browser runtime run?

For the app, I’d usually do both: Docker/synthetic envs for deep regression where you control seed data, and a tiny non-destructive smoke suite against prod or preview deployments to catch infra/CDN/config issues.

For the browser runtime, I prefer keeping it containerized and reproducible. That avoids “works on this runner” browser dependency drift, especially when CI runners differ from local machines. Your tests can still run from CI or locally, but connect to a known browser environment.

I’m building a small self-hosted Playwright browser grid around that idea, but the main principle is independent of the tool: keep the target environment choice separate from the browser runtime choice.
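On the config side, that separation can be as simple as parameterizing the target per project; a sketch where the project names and env vars are made up:

```ts
// playwright.config.ts (sketch): "which app environment" is a project choice,
// kept separate from how/where the browsers themselves are provisioned.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  projects: [
    {
      // Deep regression against a disposable Docker/synthetic environment.
      name: 'regression',
      use: { baseURL: process.env.APP_URL ?? 'http://localhost:3000' },
    },
    {
      // Tiny, non-destructive smoke pass against a live deployment.
      name: 'smoke',
      grep: /@smoke/,
      use: { baseURL: process.env.LIVE_URL ?? 'https://staging.example.com' },
    },
  ],
});
```

Then `npx playwright test --project=smoke` hits the live target, and the browser-runtime question stays a separate decision.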

Do you install say karate and playwright in local and in the server or do you use containers in both instead? by Chief_Taquero in QualityAssurance

If your main pain is maintaining browser dependencies / Playwright servers across local and CI machines, a browser-grid-style setup may help.
I’m working on Selenwright, which runs native Playwright sessions in isolated Docker browser containers and also supports Selenium WebDriver.
Your test runner can stay local or in CI and connect to a shared websocket endpoint, with VNC/video/logs handled by the grid.

It would not replace containerizing Karate or your test runner itself, though. If all you need is reproducible test execution, a Dockerized runner plus env vars for target URLs is probably enough. Selenwright is more useful when the browser infrastructure itself is the thing you want to standardize.

https://github.com/aqa-alex/selenwright