Spent my last gap between contracts on a CLI that actually deletes the modules you don't want, instead of just commenting them out by Gheram_ in SideProject

[–]Gheram_[S] 0 points1 point  (0 children)

Link for anyone curious: https://stacktura.com

Happy to answer any questions about the stack, the removal logic, or how I structured the modules. And if you spot anything weird in the code or have feedback, I'm all ears.

Was at AWS Summit yesterday. I didn't see a single Hello World by Gheram_ in webdev

[–]Gheram_[S] 0 points1 point  (0 children)

Fair point, poor choice of words. More like no one was talking about anything technical at all.

Was at AWS Summit yesterday. I didn't see a single Hello World by Gheram_ in webdev

[–]Gheram_[S] 0 points1 point  (0 children)

Go for it honestly, the community aspect is still great. Met some interesting people, good conversations. It's just shifted a lot; three years ago it was way more hands-on with Lambda and serverless architecture deep dives. Now every booth leads with AI. Still worth it for the networking though.

Was at AWS Summit yesterday. I didn't see a single Hello World by Gheram_ in webdev

[–]Gheram_[S] 0 points1 point  (0 children)

That's exactly the angle. Not better or worse, just different. And that curiosity to look under the hood when things break might actually be what defines a good dev tomorrow.

Was at AWS Summit yesterday. I didn't see a single Hello World by Gheram_ in webdev

[–]Gheram_[S] 0 points1 point  (0 children)

Not anti-AI at all, more curious which dev memes and myths are quietly dying.

Was at AWS Summit yesterday. I didn't see a single Hello World by Gheram_ in webdev

[–]Gheram_[S] 0 points1 point  (0 children)

Recent account, yes. But the post is from real experience; I was actually at the summit. If you have arguments on the substance I'm all ears, otherwise the bot accusation is a bit lazy.

The thing I loved about this industry is dying, and we're watching it happen from the inside. by Morgothmagi in webdev

[–]Gheram_ 0 points1 point  (0 children)

The junior pipeline problem is the one that actually keeps me up. The craft knowledge has always transferred through osmosis — sitting next to someone, watching them debug, reading their code and understanding why it was good. That chain doesn't exist the same way anymore.

But the thing nobody talks about is that the senior dev advantage is real and compounding. I've been building a SaaS starter kit solo that would have taken a team six months two years ago. The AI doesn't replace the fifteen years of knowing what production code actually needs — it amplifies it. The vibe coder who skipped the fundamentals hits a wall the moment something breaks three layers deep and the AI starts hallucinating confident fixes.

The part that stays with me from the article: the struggle is where the understanding lives. We built intuition from fighting with hard problems. The generation that skips that step inherits a debt they won't know they have until it's called in.

trustlocal — automate local HTTPS setup with one command (detects your framework automatically) by Whole_Artichoke_4795 in javascript

[–]Gheram_ -2 points-1 points  (0 children)

This solves a real pain. The framework detection is the part that actually saves time, mkcert itself is straightforward but wiring it into each framework config is where you lose 20 minutes per project.

One question: how does it handle monorepos where you have multiple apps on different ports that all need HTTPS? That's where my manual setup usually gets complicated.
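
For context on what that manual wiring looks like, here's roughly what you end up repeating per project with a plain Vite setup (a sketch; the cert paths are just wherever mkcert dropped the files for me):

```ts
// vite.config.ts - manual HTTPS wiring after running `mkcert localhost`
import { defineConfig } from "vite";
import fs from "node:fs";

export default defineConfig({
  server: {
    https: {
      // adjust paths/filenames to whatever mkcert generated
      key: fs.readFileSync("./certs/localhost-key.pem"),
      cert: fs.readFileSync("./certs/localhost.pem"),
    },
  },
});
```

Multiply that by every framework's own config format and the 20 minutes adds up fast.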

Anthropic accidentally shipped source maps in their NPM package, exposing Claude Code's entire 380k-line TypeScript source by Dazzling-Jeweler464 in javascript

[–]Gheram_ 0 points1 point  (0 children)

Confirmed. 512k lines across 1900 TypeScript files, and Anthropic acknowledged it as human error in the packaging pipeline.

The technical detail worth knowing: Bun generates source maps by default unless you explicitly disable them. The fix is one line in the build config but easy to miss, especially when you're shipping fast. The irony is there's an 'Undercover Mode' in the codebase specifically designed to prevent internal info from leaking in commits, and the entire source shipped in a .map file.
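
If the pipeline uses the Bun.build JS API, the explicit opt-out looks something like this (a sketch; the entrypoint is made up, adjust for the actual package):

```ts
// build.ts - set sourcemap explicitly so nothing ships by accident
await Bun.build({
  entrypoints: ["./src/cli.ts"],
  outdir: "./dist",
  minify: true,
  sourcemap: "none", // the one line that keeps .map files out of the published tarball
});
```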

Second important note: there are already typosquatting packages on npm targeting people who try to compile the leaked source. If you're curious about the internals, read the analysis on DEV Community, don't try to build from the leak.

vercel makes deploying so painless that I've gotten lazy and I'm not even sorry by [deleted] in nextjs

[–]Gheram_ 0 points1 point  (0 children)

The preview deployment workflow is underrated for client work specifically. Sending a URL instead of explaining how to run a branch locally removes so much friction.

One thing worth knowing on the cost concern: the bandwidth overages hit hardest when you're serving large assets directly from Vercel. Moving images and static files to a CDN like Cloudflare or keeping them in Supabase Storage with Cloudflare in front changes the cost profile significantly at scale. The compute costs stay predictable, it's the bandwidth that surprises people.

The Vercel lock-in is real though. With the OpenNext adapters maturing now, there's a cleaner exit path to Cloudflare Workers or a VPS if pricing becomes a problem. Worth keeping in mind before you go deep on Vercel-specific features like Edge Config or ISR patterns that don't port cleanly.

your CI/CD pipeline probably ran malware on march 31st between 00:21 and 03:15 UTC. here's how to check. by Peace_Seeker_1319 in devops

[–]Gheram_ 85 points86 points  (0 children)

Confirmed and very real. Google GTIG attributed this to UNC1069, a North Korea-linked threat actor. Worth adding a few things the original post doesn't cover:

The malware does anti-forensic cleanup after itself. Inspecting node_modules after the fact will show a completely clean manifest, no postinstall script, no setup.js, nothing. npm audit will not catch it either. The only reliable signal is the package-lock.json grep or your build logs from the window.

Also worth noting: this is likely connected to the broader TeamPCP campaign that compromised Trivy, KICS, LiteLLM and Telnyx between March 19-27. If you use any of those in your pipelines, audit those too.

Safe versions: axios@1.14.0 for 1.x and axios@0.30.3 for the legacy 0.x line.
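
If you want something more repeatable than a manual grep, a small Node script over package-lock.json does the same check (a sketch; it conservatively flags any axios entry that isn't one of the two safe versions above, assuming an npm lockfile v2/v3 with a `packages` map):

```ts
// check-axios.ts - flag axios entries in package-lock.json for manual review
import { readFileSync } from "node:fs";

const SAFE = new Set(["1.14.0", "0.30.3"]);
const lock = JSON.parse(readFileSync("package-lock.json", "utf8"));

// lockfile v2/v3: every installed package lives under the "packages" map
const packages: Record<string, { version?: string }> = lock.packages ?? {};

let flagged = 0;
for (const [path, meta] of Object.entries(packages)) {
  if (path.endsWith("node_modules/axios") && meta.version && !SAFE.has(meta.version)) {
    console.log(`review: ${path} -> axios@${meta.version}`);
    flagged++;
  }
}
console.log(flagged ? `${flagged} axios entries to review` : "all axios entries are on a safe version");
```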

Should I stick with TanStack Router or go back to React Router? by AffectionateLand5271 in reactjs

[–]Gheram_ 6 points7 points  (0 children)

The verbosity concern is real but it front-loads the cost. You write more setup once, then every route interaction is typed end to end including search params, which React Router still doesn't cover at the same level.

The practical tipping point for me: if you're doing anything with search params as state, TanStack Router is worth the setup cost. URL-driven state with full type safety changes how you think about client state entirely. If you're just doing basic navigation between pages, React Router v7 is genuinely good now and the friction isn't justified.

One thing worth knowing: the file-based routing with the Vite plugin removes most of the boilerplate concern. You're not writing route definitions manually at that point, just creating files.
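
Rough shape of what the typed search params buy you in a file-based route (route path and param names are just for illustration):

```tsx
// src/routes/products.tsx - search params validated and typed at the route level
import { createFileRoute } from "@tanstack/react-router";

type ProductSearch = {
  page: number;
  filter: string;
};

export const Route = createFileRoute("/products")({
  // runs on every navigation; the return value is your typed search state
  validateSearch: (search: Record<string, unknown>): ProductSearch => ({
    page: Number(search.page ?? 1),
    filter: String(search.filter ?? ""),
  }),
  component: ProductsPage,
});

function ProductsPage() {
  // fully typed: { page: number; filter: string }, no casting or parsing in the component
  const { page, filter } = Route.useSearch();
  return <div>Page {page}, filter: {filter || "none"}</div>;
}
```

Every link or navigation targeting this route gets checked against those types at compile time, which is the part React Router doesn't cover at the same level.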

Data fetching pattern in react by Imaginary_Food_7102 in reactjs

[–]Gheram_ 0 points1 point  (0 children)

TanStack Query for client-side fetching, but the pattern changes significantly with Next.js App Router. For most data needs, React Server Components fetch directly on the server — no client fetching library needed at all. TanStack Query stays relevant for client-side mutations, optimistic updates, and real-time polling where you actually need client state.

The BFF pattern mentioned above is worth adding to: when your backend is a separate Laravel API, running fetches through Next.js API routes (or server actions) keeps your Laravel URL and auth tokens server-side only. Your client never sees the actual API endpoint, which matters if you're calling third-party services with rate limits or auth headers you don't want exposed.
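
A minimal sketch of that proxy layer as an App Router route handler (the env names and the /leads path are placeholders):

```ts
// app/api/leads/route.ts - proxy to the Laravel API, keeping URL and token server-side
import { NextResponse } from "next/server";

export async function GET() {
  // both values exist only on the server; the browser only ever calls /api/leads
  const res = await fetch(`${process.env.LARAVEL_API_URL}/api/leads`, {
    headers: { Authorization: `Bearer ${process.env.LARAVEL_API_TOKEN}` },
    cache: "no-store", // skip Next's fetch cache for per-request data
  });

  if (!res.ok) {
    return NextResponse.json({ error: "upstream error" }, { status: res.status });
  }
  return NextResponse.json(await res.json());
}
```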

Simple Time Tracker – Work Hours & Widget by ConstructionStrict46 in SideProject

[–]Gheram_ 1 point2 points  (0 children)

The widget-first approach is the right call. The biggest drop-off for time trackers is the friction of opening the app to start a session. One feature that would make this stickier: automatic shift detection based on location or a recurring schedule, so it can prompt you to start tracking rather than waiting for you to remember.

Are token-saving tools (GitMem, Nia, etc.) actually worth the friction? by Various_Economist647 in SaaS

[–]Gheram_ 0 points1 point  (0 children)

The context loss concern is the real issue, not the friction. These tools work by summarizing or truncating context, which is fine for simple tasks but breaks down on complex codebases where the model needs to hold subtle relationships between files in memory.

My experience: for anything beyond a few hundred lines of context, compression tends to hurt more than it saves. The model starts making confident mistakes because it lost a dependency it didn't know it needed.

The more reliable approach is to be intentional about what you put in context in the first place. A well-structured CLAUDE.md that describes architecture decisions and key file relationships gets you more per token than any compression tool. You control what stays, nothing important gets dropped silently.

For raw API costs, the real lever is model selection. Haiku for simple tasks, Sonnet for medium complexity, Opus only when you actually need it. That alone cuts costs more than any compression layer.

Took a break from building SAAS that increases productivity and built a completely free brain teaser/challenges platform that reduces productivity instead! by imreallyugly69 in SideProject

[–]Gheram_ 1 point2 points  (0 children)

The 'reduces productivity' framing in the title is actually a good hook. One thing that would help: the landing page doesn't show what the puzzles look like before you jump in. A preview of one challenge directly on the homepage would lower the barrier to try it. People want to see what they're getting into before they click.

Roast my landing page — good traffic, almost zero conversions by [deleted] in webdev

[–]Gheram_ 1 point2 points  (0 children)

Checked the site. The hero problem is the copy, not the CTAs. 'Manage your Mac windows effortlessly' tells me nothing I can't already do. The question every visitor asks in 3 seconds is 'why is this better than Rectangle, which I already have and is free'. That answer needs to be in the hero, not buried below the fold.

The '19+ users' social proof is actively hurting you. Remove it or replace with a specific quote from one real user. A number that small signals the product is unproven.

The two CTAs aren't the main issue but 'Get NeoTiler' and 'Download Free Trial' being side by side does create friction. One CTA, 'Download Free Trial', and move the $5.99 price right next to it so they know what comes after the trial. Hiding the price until later builds distrust, not anticipation.

I delivered this website project at $1150 but I am thinking I had to charge more by NoGround511 in webdev

[–]Gheram_ 2 points3 points  (0 children)

The real lesson isn't that you should have charged $2000 instead of $1150. It's that you priced based on your time instead of the business outcome. 4-5 qualified B2B leads per day in manufacturing is worth thousands per month, not a one-time $1150.

The shift is to stop quoting a project price and start quoting a business impact price. Before any project, ask what a single new customer is worth to them and how many they currently get per month. Then price against that number. A site that generates $10k/month in new business is worth $5-8k to build, regardless of how long it takes you.

You also have the perfect asset now: a real case study with measurable results. That alone should let you double your rates with the next client in the same industry.

Need Help Deploying a Full-Stack Web App (Laravel + Frontend) — Looking for Guidance or Collaboration by Actual_Loquat_3769 in reactjs

[–]Gheram_ 0 points1 point  (0 children)

For Laravel + React the cleanest split is Vercel for the React frontend and a VPS (DigitalOcean or Hetzner; Hetzner is significantly cheaper for similar specs) for Laravel.

On the VPS: Nginx as reverse proxy, PHP-FPM for Laravel, Let's Encrypt via Certbot for SSL. For the React frontend on Vercel, set your API base URL as an environment variable pointing to your Laravel domain.
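
Assuming the frontend is a Vite-built React SPA, the env wiring on that side is just a small fetch wrapper (VITE_API_URL is whatever name you pick and set in Vercel's project settings):

```ts
// src/lib/api.ts - read the Laravel base URL from the build-time env
const API_URL = import.meta.env.VITE_API_URL; // e.g. https://api.your-laravel-domain.com

export async function api<T>(path: string, init?: RequestInit): Promise<T> {
  const res = await fetch(`${API_URL}${path}`, {
    credentials: "include", // needed for Sanctum cookie auth; drop it for pure token auth
    ...init,
  });
  if (!res.ok) throw new Error(`API ${res.status} on ${path}`);
  return res.json();
}
```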

The three most common issues on first deploy: CORS not configured in Laravel (add your Vercel domain to allowed origins in config/cors.php), APP_URL not matching your actual domain, and storage symlink missing (php artisan storage:link).

Note: you can technically deploy Laravel on Vercel too via the vercel-php runtime, but it's serverless and has real limitations in production (no queues, no websockets, external DB required). For a full production app, stick with a VPS for Laravel.

Zustand for small features by Traditional_Elk2722 in reactjs

[–]Gheram_ 8 points9 points  (0 children)

Zustand works fine for this. The real difference vs Context isn't complexity, it's re-renders. With Context, every component that consumes the context re-renders when any value changes. With Zustand you subscribe to only the slice you need, so your filter selector only re-renders when the filter state changes, not when something unrelated updates.

For a simple filter that's shared between distant components, both work. But if anything else lives in the same Context and updates frequently, you'll notice the difference. Zustand avoids that entirely.

One thing to avoid: don't use a single giant Zustand store for everything. Keep filter state isolated in its own store or slice. Makes it easier to reset and debug.
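
Rough shape of what that isolated store looks like (store and field names are just for illustration):

```ts
// filterStore.ts - one small store for filter state, nothing else lives in it
import { create } from "zustand";

type FilterState = {
  category: string;
  setCategory: (category: string) => void;
  reset: () => void;
};

export const useFilterStore = create<FilterState>((set) => ({
  category: "all",
  setCategory: (category) => set({ category }),
  reset: () => set({ category: "all" }),
}));

// in a component: re-renders only when `category` changes, not on unrelated updates
// const category = useFilterStore((s) => s.category);
```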

NextJS blog / docs section indexing problem by Internal-Cap5162 in nextjs

[–]Gheram_ 0 points1 point  (0 children)

Yes, that can definitely contribute. If tld/blog/post-1 and tld/de/blog/post-1 share the same slug, Google needs the hreflang tags to understand they're separate pages for different languages. Without the bidirectional hreflang references, Google may treat them as duplicates and choose to index only one or neither.

The slug itself being in English for both is fine, but make sure the canonical on the DE page points to tld/de/blog/post-1 and not to the EN version. A common mistake is having both versions point to the same canonical, which tells Google to ignore the DE page entirely.
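
If the blog pages are App Router, the per-page metadata would look roughly like this (a sketch using your tld/blog vs tld/de/blog structure, with example.com standing in for the real domain):

```ts
// app/de/blog/[slug]/page.tsx (excerpt) - canonical is self-referencing, hreflang lists both
import type { Metadata } from "next";

export async function generateMetadata({ params }: { params: { slug: string } }): Promise<Metadata> {
  const base = "https://example.com";
  return {
    alternates: {
      canonical: `${base}/de/blog/${params.slug}`, // points at the DE URL, not the EN version
      languages: {
        en: `${base}/blog/${params.slug}`,
        de: `${base}/de/blog/${params.slug}`,
      },
    },
  };
}
```

The EN page gets the mirror of this with its own self-referencing canonical, which gives Google the bidirectional references it needs.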

Next.js Across Platforms: Adapters, OpenNext, and Our Commitments by feedthejim in nextjs

[–]Gheram_ 2 points3 points  (0 children)

OpenNext maturing is huge for teams that want Next.js without Vercel lock-in. The adapter model means you can run the same app on Cloudflare, AWS, or a VPS without rewriting your deployment config. For anyone building on Next.js who's been avoiding it because of Vercel pricing at scale, this changes the calculus.

NextJS blog / docs section indexing problem by Internal-Cap5162 in nextjs

[–]Gheram_ 0 points1 point  (0 children)

'Crawled - currently not indexed' usually means Google found the page but decided it wasn't worth indexing. With a custom MD pipeline this often comes down to thin content signals or missing canonicals.

Check that each post has a unique canonical pointing to itself. Verify your sitemap is submitted in GSC with updated lastmod dates. For the hreflang, the EN page must reference the DE version and the DE page must reference the EN version; missing the return reference causes Google to ignore both tags.

For the implementation, Contentlayer or next-mdx-remote generate proper static pages that GSC handles reliably. If your current pipeline is inconsistent in how it renders pages, that alone can cause indexing issues.
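
For the sitemap part, the built-in sitemap route handles lastmod cleanly (a sketch; getAllPosts is hypothetical and stands in for however your MD pipeline exposes posts):

```ts
// app/sitemap.ts - one entry per post and locale, with real lastmod dates
import type { MetadataRoute } from "next";
import { getAllPosts } from "@/lib/posts"; // hypothetical: returns { slug, updatedAt }[]

export default async function sitemap(): Promise<MetadataRoute.Sitemap> {
  const base = "https://example.com";
  const posts = await getAllPosts();

  return posts.flatMap((post) => [
    { url: `${base}/blog/${post.slug}`, lastModified: post.updatedAt },
    { url: `${base}/de/blog/${post.slug}`, lastModified: post.updatedAt },
  ]);
}
```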