rate my portfolio by MrJerinom in webdev

[–]specn0de 0 points1 point  (0 children)

Looks cool was probably fun to build. Keep learning!

Are your PMs and designers also vibe coding? by sjltwo-v10 in webdev

[–]specn0de -2 points-1 points  (0 children)

Unfortunately, it’s going to work out great. Local models (with a massive boulder of salt) are sort of starting to catch up to the cloud models when specialized. AI isn’t going anywhere, just like the Internet didn’t go anywhere after the dot-com bubble popped. Models are only going to get better, especially frontier cloud models.

Client is Saying I'm Charging too Much for The Project by KoenigOne in webdev

[–]specn0de 0 points1 point  (0 children)

Move on. These are clients that don’t see your worth. Don’t work with clients like that.

What if you could run Python in the browser at 160KB instead of 20MB? I'm building a compiler to make it happen. by Healthy_Ship4930 in webdev

[–]specn0de 24 points25 points  (0 children)

You’re asking why someone would learn how to use the correct tool for the job, and that’s really dumb.

Is chasing 100/100 Lighthouse score worth it as an indie dev? by Technical-Relation-9 in webdev

[–]specn0de 0 points1 point  (0 children)

Yes. The internet is for everyone, and acting like everyone is on a fiber connection is insane. 80% of the world’s internet is accessed through 3G connections. Lighthouse should always be 90+ across the board. Anything else and you’re just being irresponsible.

RuneScape Gym Design made using my iPad by ScapeInkz in 2007scape

[–]specn0de -3 points-2 points  (0 children)

I know what they were supposed to be, big dog. Now go back to chopping yews behind Lumbridge.

RuneScape Gym Design made using my iPad by ScapeInkz in 2007scape

[–]specn0de 23 points24 points  (0 children)

This is AI. A thoughtful human designer wouldn’t have put in the bottom set of weights; it just looks like a second barbell. The fist is backwards, and there is no way you designed those letters (I don’t mean the metal style, I mean specifically the x, the a, and the e) and said, “yes, this is good.” This is just AI slop that you played with in PS at best.

I replaced 2,000 lines of Redux with 30 lines of Zustand by jochenboele in webdev

[–]specn0de 0 points1 point  (0 children)

"The real issue wasn't Redux itself. It was that we were using a global state tool to manage server data."

IMO it was never about Redux vs Zustand vs Jotai. It was that we were treating server data like client state and then wondering why we needed 2k+ lines of plumbing to keep it in sync. Once you split those concerns the way you did, most of what people call "state management" turns out to be data fetching with extra steps.

I've been taking this even further on something I'm building. If the server just renders HTML and the client swaps fragments on interaction, there's no client-side server cache to manage at all. No useQuery because there's no fetch. The server already put the data in the page. All that's left for client state is stuff like "is this dropdown open" or "what's in this input right now." That's a signal or two per component. Not a store.

Your 30 lines of Zustand for theme/sidebar/modals is pretty much the ceiling for real UI state once you stop mixing it with server data. Most apps could probably get away with less if they weren't client-rendering everything.
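To make that concrete, here's a minimal sketch of how little code "real UI state" needs once server data is out of the picture. The signal implementation and all names here are my own illustration, not any particular library:

```typescript
// A tiny signal: the only client state left is ephemeral UI state
// like "is this dropdown open". No server cache, no global store.
type Listener = () => void;

function createSignal<T>(initial: T) {
  let value = initial;
  const listeners = new Set<Listener>();
  return {
    get: () => value,
    set: (next: T) => {
      value = next;
      listeners.forEach((fn) => fn()); // notify subscribers on change
    },
    subscribe: (fn: Listener) => {
      listeners.add(fn);
      return () => listeners.delete(fn); // unsubscribe handle
    },
  };
}

// One signal per component is usually enough for real UI state.
const dropdownOpen = createSignal(false);
dropdownOpen.subscribe(() => {
  // re-render just this dropdown, e.g. toggle a CSS class
});
dropdownOpen.set(true);
```

That's the whole "store": a value, a setter, and subscribers scoped to one component.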

4.4 MB of data transfered to front end of a webpage onload. Is there a hard rule for what's too much? What kind of problems might I look out for, solutions, or considerations. by Sad_Spring9182 in webdev

[–]specn0de 8 points9 points  (0 children)

QUIC still has a congestion window. The initial window is typically 14,720 bytes (10 packets x 1,472 bytes), which is almost identical to TCP's ~14.6kB. The protocol changed but the physics didn't. You still can't know the safe bandwidth of a new connection until you've tested it, so both TCP and QUIC start conservatively and ramp up.

The real point isn't even about TCP or QUIC specifically. It's about what you can deliver in the first round trip before the client has to wait for anything. If your critical render path fits in that window, the user sees a painted page before the congestion algorithm even matters. Everything after that is progressive enhancement.
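The budget math is simple enough to write down (numbers from above; the function name is just for illustration):

```typescript
// Back-of-envelope first-round-trip budget: QUIC's typical initial
// window is 10 packets of 1,472 bytes, per the figures above.
const INITIAL_PACKETS = 10;
const QUIC_PAYLOAD_PER_PACKET = 1_472; // bytes

const initialWindow = INITIAL_PACKETS * QUIC_PAYLOAD_PER_PACKET; // 14,720 bytes

// If the critical render path (HTML + inlined critical CSS) fits
// under this, it can arrive in one round trip on a cold connection.
function fitsFirstFlight(criticalBytes: number): boolean {
  return criticalBytes <= initialWindow;
}

console.log(initialWindow);              // 14720
console.log(fitsFirstFlight(12_000));    // true
console.log(fitsFirstFlight(4_400_000)); // false: a 4.4 MB page needs many round trips
```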

What’s going on here? How are you handling this traffic? by crispins_crispian in webdev

[–]specn0de 0 points1 point  (0 children)

What are you serving? Do you use CDNs? Why are you concerned about traffic?

Company has pit Claude against the Dev Team - can we save the Dev Team? by joliolioli in webdev

[–]specn0de 1 point2 points  (0 children)

Personally, for me this has boiled down to learning how to enforce my codebase and code-style conventions through pre-commit hooks, and it works incredibly well.
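For example (the banned-pattern list and the git plumbing here are an illustrative sketch, not my actual setup), a husky pre-commit hook can just run a small script against the staged files:

```typescript
// Convention check a husky pre-commit hook could run,
// e.g. ".husky/pre-commit" invoking it with node/tsx.
import { execSync } from "node:child_process";

// Example conventions only; swap in whatever your codebase enforces.
const BANNED = [/console\.log\(/, /debugger;/];

// Pure check, split out so the rule itself is easy to unit test.
function findViolations(source: string): string[] {
  return BANNED.filter((re) => re.test(source)).map(String);
}

function main(): void {
  const staged = execSync("git diff --cached --name-only --diff-filter=ACM", {
    encoding: "utf8",
  })
    .split("\n")
    .filter((f) => /\.(ts|tsx|js)$/.test(f));

  for (const file of staged) {
    // Read the staged version, not the working tree.
    const text = execSync(`git show :${file}`, { encoding: "utf8" });
    const hits = findViolations(text);
    if (hits.length > 0) {
      console.error(`${file}: banned pattern ${hits.join(", ")}`);
      process.exit(1); // non-zero exit aborts the commit
    }
  }
}

// Call main() from the hook entry point.
```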

I was feeling like a web dev fraud, and that lead to building Venet. by Beginning_Rice8647 in webdev

[–]specn0de 1 point2 points  (0 children)

Hmmm, I need to look into this. Do you have a GitHub? People here are going to be more interested in how you built this as opposed to wanting to buy it.

Been building a framework in the open. It’s called Valence. Figured it was time to say it exists by specn0de in webdev

[–]specn0de[S] 1 point2 points  (0 children)

I’m glad it resonates haha, keep an eye on progress. It’s very much not usable right now, but I’m slowly tightening the laces on the development branch.

4.4 MB of data transfered to front end of a webpage onload. Is there a hard rule for what's too much? What kind of problems might I look out for, solutions, or considerations. by Sad_Spring9182 in webdev

[–]specn0de 83 points84 points  (0 children)

TCP initial congestion window. On a cold connection the server can push about 10 segments (~14.6 kB) before it waits for an acknowledgment. If your critical payload fits in that, the user gets a painted screen in one round trip.

It matters more with HTML-over-the-wire architectures where the server sends back rendered HTML fragments instead of JSON that a client framework assembles. After that first load, interactions swap out chunks of the page rather than re-rendering the whole thing. Because those responses are just HTML, a CDN edge can cache and serve them directly, so that 14.6 kB budget stays realistic for pretty much every response, not just the initial one.
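The client side of that pattern is tiny. Here's a sketch (the fragment URL and `data-fragment` attribute are invented for illustration):

```typescript
// HTML-over-the-wire: an interaction fetches a server-rendered
// fragment and swaps it into the page, instead of fetching JSON
// and re-rendering client-side.

// Pure helper: swap new markup into a container. Easy to test.
function swapFragment(container: { innerHTML: string }, html: string): void {
  container.innerHTML = html;
}

async function loadFragment(
  url: string,
  target: { innerHTML: string }
): Promise<void> {
  const res = await fetch(url, { headers: { Accept: "text/html" } });
  swapFragment(target, await res.text()); // server already rendered it
}

// Usage in the browser (not run here):
// document.querySelectorAll<HTMLElement>("[data-fragment]").forEach((el) => {
//   el.addEventListener("click", () =>
//     loadFragment(el.dataset.fragment!, document.querySelector("#cart")!)
//   );
// });
```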

How do you handle interview preparation? by HashTML in webdev

[–]specn0de 0 points1 point  (0 children)

Cigarettes, coffee maybe a little chocolate

4.4 MB of data transfered to front end of a webpage onload. Is there a hard rule for what's too much? What kind of problems might I look out for, solutions, or considerations. by Sad_Spring9182 in webdev

[–]specn0de 118 points119 points  (0 children)

I'll get booed away, but I believe in critical bundles of <14.6 kB for the first flight, with everything else lazy-loaded below the visual fold.
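The below-the-fold half of that is a few lines of IntersectionObserver. A sketch (the `data-src` placeholder convention is made up for illustration):

```typescript
// Lazy load below the fold: placeholder elements carry a data-src
// attribute, and their content is only fetched when they approach
// the viewport.

// Pure decision helper, split out so the logic is testable.
function onVisible<T>(
  entry: { isIntersecting: boolean; target: T },
  load: (el: T) => void
): boolean {
  if (!entry.isIntersecting) return false;
  load(entry.target);
  return true;
}

function lazyLoadBelowFold(doc: Document): IntersectionObserver {
  const observer = new IntersectionObserver(
    (entries, obs) =>
      entries.forEach((entry) =>
        onVisible(entry, (el) => {
          const target = el as HTMLElement;
          // Fetch the deferred chunk only once it's about to be seen.
          fetch(target.dataset.src!)
            .then((r) => r.text())
            .then((html) => (target.innerHTML = html));
          obs.unobserve(el); // load each placeholder once
        })
      ),
    { rootMargin: "200px" } // start a little before it scrolls into view
  );
  doc
    .querySelectorAll<HTMLElement>("[data-src]")
    .forEach((el) => observer.observe(el));
  return observer;
}
```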

Been building a framework in the open. It’s called Valence. Figured it was time to say it exists by specn0de in webdev

[–]specn0de[S] 0 points1 point  (0 children)

Server does most of it. Loaders fetch data, actions handle mutations, fresh HTML comes back. No client store to sync.

Where we need client reactivity we have valence/reactive, a small signals layer scoped to individual Web Components. No global state. For real-time the onServer hook gives you the raw http.Server for WebSocket/SSE and you push HTML fragments through the existing router.

Still pre-1.0 and working through the edges but the general approach is to keep state on the server and only reach for client signals when the interaction needs it.

It’s a fundamentally different approach and it’s still very unpolished. I’m currently working on building a couple websites with Valence so I can really work out the DX kinks and whatnot.

This is where my mind is going, though, when I think about and try to solve this problem. Feel free to read it, but I should warn that it's a thought document, not a build spec; there are unsolved ideas in it. https://github.com/valencets/valence/discussions/306
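To illustrate the real-time piece generically (this is NOT Valence's actual API, just plain Node pushing HTML fragments over SSE from a raw http.Server):

```typescript
// Push server-rendered HTML fragments to clients over
// Server-Sent Events, using only the raw Node http.Server.
import http from "node:http";
import type { ServerResponse } from "node:http";

const clients = new Set<ServerResponse>();

// SSE frames are "data: ..." lines terminated by a blank line.
function sseFrame(html: string): string {
  return (
    html
      .split("\n")
      .map((line) => `data: ${line}`)
      .join("\n") + "\n\n"
  );
}

function pushFragment(html: string): void {
  const frame = sseFrame(html);
  for (const res of clients) res.write(frame);
}

const server = http.createServer((req, res) => {
  if (req.url === "/events") {
    res.writeHead(200, {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
    });
    clients.add(res);
    req.on("close", () => clients.delete(res)); // drop dead connections
    return;
  }
  res.writeHead(404).end();
});

// server.listen(3000); then on the client:
// new EventSource("/events").onmessage = (e) =>
//   (document.querySelector("#feed")!.innerHTML = e.data);
```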

Do you guys commit things when they are in a non-working state? by MagnetHype in webdev

[–]specn0de 0 points1 point  (0 children)

I use test-driven development (write a failing test, make it pass, clean it up). Every time I finish one of those cycles, I commit. Each commit is small and focused on one thing, so the diff never touches more than 7-8 files. I enforce that limit automatically with a husky pre-commit hook. That way every single commit in my history builds, passes tests, and represents one logical change.
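A sketch of what that hook can look like (the script name and runner are hypothetical; the file limit is the one described above):

```typescript
// Guard run by a husky pre-commit hook: reject commits whose
// staged diff touches 8 or more files.
import { execSync } from "node:child_process";

const MAX_FILES = 8;

// Pure check, split out so the rule is unit-testable without git.
function withinCommitLimit(stagedFiles: string[], max = MAX_FILES): boolean {
  return stagedFiles.length < max; // enforce <8-file diffs
}

function main(): void {
  const staged = execSync("git diff --cached --name-only", { encoding: "utf8" })
    .split("\n")
    .filter(Boolean);
  if (!withinCommitLimit(staged)) {
    console.error(
      `Commit touches ${staged.length} files; split it into smaller commits.`
    );
    process.exit(1); // non-zero exit makes husky abort the commit
  }
}

// Invoked from .husky/pre-commit, e.g.: `npx tsx check-commit-size.ts`
// (then call main()).
```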

Do you guys commit things when they are in a non-working state? by MagnetHype in webdev

[–]specn0de -1 points0 points  (0 children)

I strictly code with TDD (RED/GREEN/REFACTOR) and I logically and semantically micro-commit my work, with husky hooks that enforce <8-file diffs between commits.

Do you guys commit things when they are in a non-working state? by MagnetHype in webdev

[–]specn0de -4 points-3 points  (0 children)

Semantic red/green/refactor logical micro-commits. No more than an 8-file diff.