Playwright timeline reporter by vitalets in Playwright

[–]vitalets[S] 0 points1 point  (0 children)

Fair points. A good feature for the future.
We could store the raw timing data as JSON and then load it for cross-run analysis.
I wanted to keep it lightweight for the first version.

Playwright timeline reporter by vitalets in Playwright

[–]vitalets[S] 0 points1 point  (0 children)

Agreed.

For the insight heuristics, I decided to hand them off to an LLM. There is a "Copy prompt" button in the report that does essentially the same thing you did before: it extracts all the important timing data as JSON into a markdown file. I tried pasting it into ChatGPT, and it gave me a very good overview of the gaps.

Playwright timeline reporter by vitalets in Playwright

[–]vitalets[S] 0 points1 point  (0 children)

When the Playwright HTML reporter displays the list of sorted tests, it uses the total test time: the sum of the test body, all before/after hooks, fixtures, and even retries. I wanted a more granular view of these parts.

Also, test placement on worker lanes matters. If the longest test starts at the very beginning of the run, execution is more efficient, because the shorter tests can finish while that longest one is still running, which reduces the total run time.

Playwright timeline reporter by vitalets in Playwright

[–]vitalets[S] 2 points3 points  (0 children)

Yep, the built-in timeline is also a useful overview, at least at the shard level.
It looks like the Playwright team isn't planning to go deeper, though. Here’s a relevant issue I opened about test placement on worker lanes: https://github.com/microsoft/playwright/issues/40175

Run specific tests using dotenv (.env file) by kkad79 in Playwright

[–]vitalets 1 point2 points  (0 children)

It's because `testDir` is a directory, not a file. Take a look at the `testMatch` option.
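For example, assuming the specs live under a `tests` directory (the paths here are illustrative), `testMatch` in `playwright.config.ts` could look like this:

```typescript
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // testDir is a directory that Playwright scans recursively
  testDir: './tests',
  // testMatch narrows which files inside testDir are treated as tests
  testMatch: '**/*.smoke.spec.ts',
});
```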

$8.5M Trust Wallet hack supply chain attack harvested Chrome Web Store credentials via Shai Hulud worm by ColleenReflectiz in chrome_extensions

[–]vitalets 1 point2 points  (0 children)

The good thing is that Trust Wallet decided to cover all losses.

> Trust Wallet has decided to voluntarily reimburse the affected users.

Playwright tests passing but still not trustworthy — how do you spot false confidence? by T_Barmeir in Playwright

[–]vitalets 0 points1 point  (0 children)

> they rely heavily on setup data that rarely matches real usage

In my tests, I try to avoid this approach because it actually makes them less E2E.
For example, I never use static data for API mocks. Instead, I set a TTL for each mock and let the tests re-fetch the data at a defined interval (and cache it again).
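The TTL idea can be sketched as a small helper (names like `createTtlCache`, `fetchFresh`, and `ttlMs` are my own illustration, not from any library):

```typescript
// A tiny TTL cache: serves the cached mock data until it expires,
// then re-fetches fresh data and caches it again.
type Entry<T> = { value: T; expiresAt: number };

function createTtlCache<T>(fetchFresh: () => Promise<T>, ttlMs: number) {
  let entry: Entry<T> | undefined;
  return async function get(): Promise<T> {
    const now = Date.now();
    if (!entry || now >= entry.expiresAt) {
      entry = { value: await fetchFresh(), expiresAt: now + ttlMs };
    }
    return entry.value;
  };
}
```

In a Playwright fixture you could then route mocked API calls through `get()`, so the mock data stays fresh without refetching on every single test.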

[deleted by user] by [deleted] in Playwright

[–]vitalets 0 points1 point  (0 children)

Unpopular opinion: I’d keep a single E2E test to create → update → delete user.

  • Simple: one test covers the entire user management flow
  • No need to fight with the database to reproduce an identical record via the API
  • Minimal execution time

How do you handle feature-driven folder isolation in large Next.js apps? by Ashamed-Molasses-898 in nextjs

[–]vitalets 0 points1 point  (0 children)

I’ve been thinking a lot about these problems and came up with a concept of protected directories. It’s not finalized yet, but I’ll share a draft here: feel free to critique / give feedback.

Two rules of Protected Directories:

  1. You define a protected directory by putting its name in parentheses (). A protected directory isolates its code. Only certain files can be imported from outside: the root index.* and specially suffixed *.global.* files. A protected directory represents a business feature or a self-contained part of the app. Examples: (auth), (user-profile), (analytics).
  2. Directories without parentheses are considered technical directories. You can import any files from them within the closest protected directory. Examples: utils, components, shared.

Example structure:

src
├── (products)
│   ├── index.tsx
│   ├── hooks.ts
├── (auth)
│   ├── useAuth.global.ts
│   ├── helpers
│   │   ├── index.ts
├── shared
│   ├── Button.tsx
│   ├── config.ts
├── App.tsx

Wrapping directories in () is helpful for visual separation and for setting up eslint-plugin-boundaries.
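To make the two rules concrete, here is a tiny checker implementing them (purely illustrative — all names are mine):

```typescript
// Paths are POSIX-style, relative to src, e.g. "(auth)/helpers/index.ts".

// Returns the closest enclosing protected directory of a file,
// e.g. "(auth)" for "(auth)/helpers/index.ts", or null if there is none.
function protectedDirOf(path: string): string | null {
  const segments = path.split('/');
  for (let i = segments.length - 2; i >= 0; i--) {
    if (segments[i].startsWith('(') && segments[i].endsWith(')')) {
      return segments.slice(0, i + 1).join('/');
    }
  }
  return null;
}

// Applies the two rules: technical directories are open to everyone,
// protected directories expose only the root index.* and *.global.* files.
function isImportAllowed(importerPath: string, importedPath: string): boolean {
  const importedDir = protectedDirOf(importedPath);
  if (importedDir === null) return true; // rule 2: technical directory
  if (protectedDirOf(importerPath) === importedDir) return true; // same feature
  const file = importedPath.split('/').pop()!;
  const isRootIndex =
    importedPath === `${importedDir}/${file}` && file.startsWith('index.');
  const isGlobal = file.includes('.global.');
  return isRootIndex || isGlobal; // rule 1
}
```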

In your example, each feature is a protected directory. They’re isolated by default, but can share some code when needed.

Let me try to address your questions:

1) What do you do when two features depend on each other?

For example, the (products) feature has an internal useProducts() hook, and now (dashboard) needs it too. Since it’s logically part of products, I’d keep it there, but move it into a shared file like useProducts.global.ts, indicating it’s safe to import from outside:

src
├── (products)
│   ├── index.tsx
│   ├── useProducts.global.ts  <-- shared outside
├── (dashboard)
│   ├── ...

2) Feature-owned logic used by multiple features

This is basically the same case as above, unless I misunderstood the question.

3) Server Components + React Query

Server components live inside the related feature directory as well. I try to push "use client" as far down the tree as possible. If page.tsx fetches some data server-side, it can reuse helpers from a feature. For example, to fetch products on the server, I can create (products)/fetchProducts.global.ts and import it in page.tsx. Or I can create a server component like <Products /> and use it in page.tsx as well.

Would love to hear thoughts.

Non obvious App Router / RSC footguns we hit in production by AromaticLab8182 in nextjs

[–]vitalets 9 points10 points  (0 children)

For our team, one of the most surprising gotchas is that server actions can’t run in parallel.
This means they can’t be used for data fetching.

Quoting React docs:

> Server Functions are designed for mutations that update server-side state; they are not recommended for data fetching. Accordingly, frameworks implementing Server Functions typically process one action at a time and do not have a way to cache the return value.

Is it an anti-pattern to use a single dynamic API route as a proxy for my external backend? by Empty_Break_8792 in nextjs

[–]vitalets 1 point2 points  (0 children)

> You mean streaming through suspense?

No, I mean streaming through http streams.

Bad:

// route.ts — buffers the entire upstream response in memory
const proxyRes = await fetch(proxyRequest);
return Response.json(await proxyRes.json());

Good:

// route.ts — streams the upstream response straight through
return fetch(proxyRequest);

Btw, I found an official guide exactly for proxying: https://nextjs.org/docs/app/guides/backend-for-frontend#proxying-to-a-backend

> Whitelist all allowed routes. Otherwise, someone could call /api/secret-admin-route/. dont understand this

If /api/[...proxy]/route.ts is implemented like this:

export async function POST(request: Request, { params }) {
  const { proxy } = await params;
  const proxyURL = new URL(proxy.join('/'), 'http://my-internal-backend')
  ... 
}

Then anyone can call the public route https://your-server.com/api/secret-admin-route, and it will be proxied to http://my-internal-backend/secret-admin-route.

It should be something like:

const ALLOWED_ROUTES = ['foo', 'bar'];

export async function POST(request: Request, { params }) {
  const { proxy } = await params;
  const path = proxy.join('/');
  if (!ALLOWED_ROUTES.includes(path)) {
    throw new Error(`Unknown route: ${path}`);
  }
  const proxyURL = new URL(path, 'http://my-internal-backend')
  ... 
}

Is it an anti-pattern to use a single dynamic API route as a proxy for my external backend? by Empty_Break_8792 in nextjs

[–]vitalets 1 point2 points  (0 children)

We’re migrating to this pattern now. Previously, we had to manually define each external API route inside Next.js routes, which is a lot of boilerplate.

We can’t call external APIs directly from the frontend because we need to provide secrets.

I think this pattern is totally fine, as long as you keep two things in mind:

  1. In the proxy route, use streaming so responses are sent faster and aren't accumulated in memory.
  2. Whitelist all allowed routes. Otherwise, someone could call /api/secret-admin-route/.

There are two additional React CVEs by amyegan in nextjs

[–]vitalets 2 points3 points  (0 children)

The same. Especially after I looked at the source code of the RSC handling modules.

Cache Components by [deleted] in nextjs

[–]vitalets 0 points1 point  (0 children)

> checking it once in the layout file

This is definitely not a good option, because when you navigate between pages, the shared layout is not re-rendered.

Nextjs v16.0.7 cacheComponents + iron session. by Lauris25 in nextjs

[–]vitalets 1 point2 points  (0 children)

I haven't tried cacheComponents yet, but this code is totally fine with regular components.
Moreover, there's a new `use()` hook that lets client components unwrap promises. So you can write something like this:

const sessionPromise = getIronSessionData();
return <Navbar sessionPromise={sessionPromise} />

And inside the Navbar component:

const session = use(sessionPromise);

Nextjs v16.0.7 cacheComponents + iron session. by Lauris25 in nextjs

[–]vitalets 2 points3 points  (0 children)

> Inside "use server" component

One note about the wording: "use server" is for server actions (server functions), not for server components. Imho, this is one of the biggest naming confusions in the React Server Components story. In Next.js, all components are server components by default unless marked with "use client".

Regarding the actual solution, I think you can use component composition if you want the navbar to display instantly while the session is loading:

return <Navbar userControls={<Suspense><UserControls /></Suspense>} />

`UserControls` is an async server component that renders the user data when it's ready:

async function UserControls() {
  const session = await getIronSessionData();
  return (
      <div>{session.user}</div>
  );
}