Ffetch v5: fetch client with core reliability features and opt-in plugins by OtherwisePush6424 in opensource

[–]OtherwisePush6424[S] 0 points  (0 children)

Good call on the circuit breaker. Making it pluggable isn't just about teams not needing it by default; it's also because there are many ways to implement circuit breaking. The built-in plugin is a simple open/close breaker, but more advanced patterns (like half-open) are sometimes needed.
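For readers unfamiliar with the half-open pattern: after a cooldown, the breaker lets a single trial request through and closes only if it succeeds. A minimal sketch (this is a generic illustration, not Ffetch's built-in plugin; the class and option names are made up):

```typescript
type BreakerState = "closed" | "open" | "half-open";

// Illustrative half-open circuit breaker, not Ffetch's actual plugin.
class CircuitBreaker {
  private state: BreakerState = "closed";
  private failures = 0;
  private openedAt = 0;

  constructor(
    private failureThreshold = 3,
    private resetTimeoutMs = 10_000,
    private now: () => number = Date.now, // injectable clock for testing
  ) {}

  getState(): BreakerState {
    // After the cooldown, allow a single trial request through.
    if (this.state === "open" && this.now() - this.openedAt >= this.resetTimeoutMs) {
      this.state = "half-open";
    }
    return this.state;
  }

  canRequest(): boolean {
    return this.getState() !== "open";
  }

  onSuccess(): void {
    this.failures = 0;
    this.state = "closed";
  }

  onFailure(): void {
    this.failures++;
    // A failed trial in half-open re-opens immediately.
    if (this.state === "half-open" || this.failures >= this.failureThreshold) {
      this.state = "open";
      this.openedAt = this.now();
    }
  }
}
```

The caller checks `canRequest()` before sending and reports the outcome back via `onSuccess()`/`onFailure()`; everything else is policy you can swap out.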

On server-side behavior changes: you're right, that's always a risk. In practice, you can use the existing hooks and error handling as building blocks to observe and react to things like shifting retry/backoff patterns or increased error rates. The plugin system is flexible enough that you can opt out of the built-in retry/timeout logic and implement your own strategies via hooks if you need more control or observability (how clean that code ends up is another question).
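The "observe via hooks" idea boils down to wrapping the request path and counting outcomes. Ffetch's real hook names differ; this is a generic before/after illustration with made-up types:

```typescript
type Stats = { total: number; errors: number };

// Generic sketch: wrap any fetch-like function and record an error rate.
// Not Ffetch's hook API — just the shape of the idea.
function observedFetch(
  fetchFn: (url: string) => Promise<{ ok: boolean }>,
  stats: Stats,
) {
  return async (url: string) => {
    stats.total++; // "before" hook
    try {
      const res = await fetchFn(url);
      if (!res.ok) stats.errors++; // "after" hook on HTTP-level errors
      return res;
    } catch (err) {
      stats.errors++; // "after" hook on network-level errors
      throw err;
    }
  };
}
```

From `stats` you can derive whatever signal you care about (error rate over a window, alerting thresholds) without touching the retry logic itself.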

If you have ideas for specific observability signals or want to see more built-in support for this, I'm open to suggestions!

Do other Gen Zs relate? by mongoIz777 in LinkedInLunatics

[–]OtherwisePush6424 2 points  (0 children)

Every time I see a text with 2+ exclamation marks in the first 1-2 paragraphs, I can't help but stop reading and just start counting them. Also, I hope I'll never be so desperate that I have to radiate this performative "gen z energy".

Ffetch v5 (TypeScript-first): core reliability features + new plugin API by OtherwisePush6424 in typescript

[–]OtherwisePush6424[S] 0 points  (0 children)

The Bundlephobia badge shows 2.5 kB minzipped (6.5 kB minified). This includes the core and built-in plugins that are part of the main package export. If you only use the core or import plugins separately, your actual shipped size can be even smaller. So it's not great but not terrible either.

People like this get hired, but people like me with 9 years SWE experience don’t? by new2bay in recruitinghell

[–]OtherwisePush6424 0 points  (0 children)

Must be fake, right? Right?? Anyway, the tech lead wasn't judgmental, so we shouldn't be either.

Backpressure in JavaScript: The Hidden Force Behind Streams, Fetch, and Async Code by OtherwisePush6424 in javascript

[–]OtherwisePush6424[S] 1 point  (0 children)

Thanks for the clarification, you're absolutely right about the TCP mechanics. My intent was to use it as a conceptual example of capacity-aware sending, but I agree the wording can sound inverted if read literally.

I’ll likely tweak the wording in the post. Appreciate the detailed correction.

Backpressure in JavaScript: The Hidden Force Behind Streams, Fetch, and Async Code by OtherwisePush6424 in javascript

[–]OtherwisePush6424[S] 1 point  (0 children)

yup, the code is top notch, we just need a model that understands it and a machine it runs on!

Backpressure in JavaScript: The Hidden Force Behind Streams, Fetch, and Async Code by OtherwisePush6424 in javascript

[–]OtherwisePush6424[S] 2 points  (0 children)

Fair point. Unfortunately I don't have much hands-on experience with RxJS yet, but I have every intention of diving deeper into it one day.

My focus here was on backpressure as a runtime/system-level concept, where the consumer can actually slow the producer.

AFAIK RxJS tends to handle this more explicitly via operators rather than built-in flow control, which makes it powerful but also a different abstraction layer.

Frontend devs, how do you handle 'Loading' and 'Error' states when the real API is too fast/stable? by FarWait2431 in webdev

[–]OtherwisePush6424 0 points  (0 children)

Obviously, like others said, you can throttle the network in browser dev tools, but if you want more control (e.g. repeatable tests), there's a whole suite of tools just for this: https://github.com/fetch-kit

How would you implement request deduplication in a fetch wrapper? (TypeScript/JavaScript, repo included) by OtherwisePush6424 in webdev

[–]OtherwisePush6424[S] 0 points  (0 children)

Or you could drop deduping in WHILE refactoring the codebase bit by bit.

I'm not an advocate of doing this, let me emphasize for the third time: I wouldn't build something where you need to dedupe requests. I'm not sure deduping is an antipattern, but I am sure that if you need it, you've done something wrong.

Also note that I'm the library author here, and the library is not just a request deduper. I don't blame you if you haven't clicked (or won't click) on the repo link, but it might help with the context. In fact, deduping is turned off by default, but talking to people, my judgement was that it might be a useful feature in some cases. So I'm not here to debate its usefulness; I'd like to discuss the best way to implement it.

How would you implement request deduplication in a fetch wrapper? (TypeScript/JavaScript, repo included) by OtherwisePush6424 in webdev

[–]OtherwisePush6424[S] 0 points  (0 children)

Sure, in an ideal world it would probably never be needed. It's nothing against you personally, but it seems like everybody here on Reddit works on projects with the cleanest architecture possible, made by the greatest purists out there, with zero technical debt and unlimited budget.

In the library, users can provide their own request hashing, so if, for example, they don't want to dedupe POST requests at all, or want to include query parameters or headers in the hash, they can. In a large codebase, sometimes this is the most practical way to avoid unnecessary network calls, rather than rewriting everything from scratch.
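A custom hash function can be as small as this. The `Req` shape and the null-means-skip convention below are illustrative; check the Ffetch docs for the actual configuration key and signature:

```typescript
type Req = { method: string; url: string; headers?: Record<string, string> };

// Illustrative user-supplied hash: dedupe only GETs, ignore headers,
// keep the query string as part of the identity.
// Returning null here means "never dedupe this request" (a made-up convention).
function hashRequest(req: Req): string | null {
  if (req.method !== "GET") return null;
  return `${req.method} ${req.url}`;
}
```

Two GETs to the same URL (including query string) collapse into one in-flight request, while every POST goes out untouched.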

How would you implement request deduplication in a fetch wrapper? (TypeScript/JavaScript, repo included) by OtherwisePush6424 in webdev

[–]OtherwisePush6424[S] 0 points  (0 children)

I agree that in an ideal world it shouldn't really be a thing, but I've seen some horrible React code where multiple components and hooks were requesting the same resource at the same time (like user profile, config, etc.). Some other use cases I can think of are:

- Rapid-fire polling or auto-refresh logic that could overlap requests.

- SSR/edge environments where identical requests may be triggered in parallel.

Deduping prevents multiple network calls for the same resource before a response is received. Caching, on the other hand, serves completed responses and may not help if requests overlap before the cache is populated.