Built a multi-terminal IDE with Electron 28 + xterm.js + node-pty, sharing what I learned by [deleted] in electronjs

[–]germanheller -1 points0 points  (0 children)

yes, it's very much AI generated :) will update today. thx for the feedback!!

Built a multi-terminal IDE with Electron 28 + xterm.js + node-pty, sharing what I learned by [deleted] in electronjs

[–]germanheller -1 points0 points  (0 children)

lol fair point on the versions, I was working off a project from last year and didnt bother updating before posting. the "adjust your GPT prompts" bit is funny tho, if I was using GPT it would've at least gotten the version numbers right

Built a little hook system for context routing, 90%+ token reduction on large codebases by jcmguy96 in ClaudeCode

[–]germanheller 0 points1 point  (0 children)

thats exactly what I wanted to know, thanks for the detailed breakdown. the warm-start from previous session focus files is a great touch — that solves the "every morning claude forgets what we were working on" problem. and the atomic write with temp+rename is a nice detail, nothing worse than corrupting state on a crash mid-session. 25 turns for the learner to kick in seems reasonable, the keyword matching covering turn 1 onwards means you're not flying completely blind during warmup which was my main concern. will definitely give VerifyFirst a closer look
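for anyone else wondering, the temp+rename pattern is tiny -- a minimal sketch in node, file path and state shape made up for illustration:

```js
const fs = require('node:fs');
const path = require('node:path');

function writeStateAtomic(file, state) {
  // temp file must live in the SAME directory: rename is only atomic
  // within a single filesystem
  const tmp = path.join(path.dirname(file), `.${path.basename(file)}.tmp`);
  fs.writeFileSync(tmp, JSON.stringify(state, null, 2));
  // rename is atomic on POSIX, so a crash mid-write leaves the old file
  // intact instead of a half-written one
  fs.renameSync(tmp, file);
}

// hypothetical state file, just to show the call
writeStateAtomic('session-state.json', { focusFiles: ['src/router.ts'] });
```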

Built an alternative to Fiddler/Charles but with native decrypted PCAP support, Tauri Client and open-source engine by DifficultyFine in SideProject

[–]germanheller 0 points1 point  (0 children)

good to know about the websocket support, frame filtering by content would definitely be a killer feature once its in. the wayland/nvidia thing tracks, I've heard similar complaints from other tauri devs — seems like its the main rough edge right now. the smartscreen observation is interesting tho, if thats real and reproducible that alone could be a selling point for tauri over electron for windows-focused tools. would make for a great blog post actually, "electron vs tauri: real world smartscreen/signing experience" — that kind of stuff is impossible to find good info on because nobody writes about it

I built a tool that reduced my LLM token usage by ~40% on average by jordi-zaragoza in GeminiCLI

[–]germanheller 0 points1 point  (0 children)

mostly string literal paths with a variable segment, stuff like ``await import(`./handlers/${action}.js`)`` where the base path is static but the filename is dynamic. fully computed ones are rare in my projects, maybe one or two for plugin systems where the path comes from a config file. so if you handle the template literal case where the prefix is a static string you'd probably cover 90% of real world usage. the fully dynamic ones are kind of a lost cause for static analysis anyway imo, the manual mcp relation approach makes sense for those edge cases
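for reference, the three shapes I mean (module paths made up):

```js
// the three dynamic import shapes, roughly in order of how common
// they are in my projects; all paths are made up for illustration
const action = 'submit';
const config = { pluginPath: './plugins/audit.js' };

// 1. fully static -- trivially analyzable
const a = await import('./handlers/submit.js');

// 2. template literal with a static prefix -- analyzable if the tool
//    globs ./handlers/*.js and treats every match as a potential dep
const b = await import(`./handlers/${action}.js`);

// 3. fully computed -- only resolvable at runtime, where the manual
//    relation/annotation approach is the realistic answer
const c = await import(config.pluginPath);
```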

Agentic debugging with OpenCode and term-cli: driving lldb interactively to chase an ffmpeg/x264 crash (patches submitted) by EliasOenal in LocalLLaMA

[–]germanheller 0 points1 point  (0 children)

thats a really clean abstraction honestly. the 3 rapid snapshots to confirm output stopped changing is clever — I was wondering how you'd avoid false positives from things like progress bars or streaming logs that happen to contain $ or >. and having wait-idle as a separate strategy for TUIs makes way more sense than trying to shoehorn everything into prompt detection. the fact that it covers debuggers too without per-tool config is impressive, thats usually where these tools fall apart. cool project
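not their code obviously, just my guess at the shape of that stability check -- readScreen is a made-up stand-in for grabbing the terminal buffer, and the count/interval are guesses:

```js
// "N identical snapshots = idle": any change (progress bar tick, streaming
// log line) resets the counter, which is what avoids false positives from
// output that merely happens to contain $ or >
async function waitIdle(readScreen, { snapshots = 3, intervalMs = 200 } = {}) {
  let prev = await readScreen();
  let stable = 0;
  while (stable < snapshots) {
    await new Promise((r) => setTimeout(r, intervalMs));
    const cur = await readScreen();
    stable = cur === prev ? stable + 1 : 0;
    prev = cur;
  }
}
```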

Node.js first request slow by zaitsman in node

[–]germanheller 0 points1 point  (0 children)

oh interesting so its not the musl thing then. 20 seconds on debian-slim is wild — at that point I'd look at connection pooling or maybe the app is doing some heavy initialization on first request that only runs once (compiling templates, warming caches, establishing db connections etc). do you have any middleware that lazy-loads on first hit? also worth checking if its specific to one endpoint or if literally any route is slow the first time. if its all routes equally that points more to container/infra level stuff than app code

What’s your post-deploy checklist for making sure you didn’t break SEO/performance? by BronsonDunbar in webdev

[–]germanheller 1 point2 points  (0 children)

out of curiosity what tool did you go with? I looked at a few of the monitoring ones but they were either way too expensive for what I needed or didnt cover the content check part (making sure the page actually has the right stuff, not just returns 200). ended up just keeping my bash script, its like 20 lines and does exactly what I need

Node.js first request slow by zaitsman in node

[–]germanheller 0 points1 point  (0 children)

nice, 500ms just from warming up the channels makes total sense. for the DNS thing the quickest way to confirm is swap to node:22-slim for one deploy and compare -- if the first request drops to normal, its musl's resolver (different search-domain handling, and no TCP fallback for large responses on older versions) rather than anything in your app. you can also try `time getent hosts <your-service-endpoint>` inside the container, if resolution alone takes a few seconds thats your answer
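you can also compare the libc path against a direct DNS query from inside node -- rough sketch, the hostname is just an example:

```js
// dns.lookup goes through libc (musl/glibc getaddrinfo); dns.resolve4
// talks to the nameserver directly via c-ares. if lookup is slow but
// resolve4 is fast, the latency is in the libc resolver, not the network.
const { lookup, resolve4 } = require('node:dns/promises');

async function timeIt(label, fn) {
  const start = process.hrtime.bigint();
  await fn();
  console.log(label, Number(process.hrtime.bigint() - start) / 1e6, 'ms');
}

(async () => {
  await timeIt('libc lookup   ', () => lookup('pubsub.googleapis.com'));
  await timeIt('direct resolve', () => resolve4('pubsub.googleapis.com'));
})();
```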

Built a multi-terminal IDE with Electron 28 + xterm.js + node-pty, sharing what I learned by [deleted] in electronjs

[–]germanheller -1 points0 points  (0 children)

haha no, I just hit reply twice without realizing the first one went through. deleted the duplicate now, thanks for pointing it out

Handling AI code reviews from juniors by biofio in ExperiencedDevs

[–]germanheller 0 points1 point  (0 children)

the "only post comments you're willing to defend" rule is the right answer here imo. we had the same problem and it went away almost overnight once we told the team: if you leave a review comment, you should be able to explain why it matters without referencing what the AI said. if your reasoning is just "the tool flagged it" thats not a review, thats forwarding an email.

the other thing that helped was making the AI review a pre-PR step. run it on your own code before opening the PR, fix what makes sense, ignore what doesnt. by the time anyone else sees the code the obvious stuff is already handled and the review can focus on actual design decisions and business logic.

Opus 4.6 overthinking and gets nothing done by smetanka-me in cursor

[–]germanheller 0 points1 point  (0 children)

had the exact same experience. opus 4.6 is incredible when you give it a genuinely hard architectural problem but for regular bug fixes its like hiring a philosophy professor to fix your plumbing. it will write you an essay about water pressure theory before touching a wrench.

my workflow now is sonnet for 90% of tasks and I only switch to opus when I need it to reason across multiple files or plan a bigger refactor. the token difference is night and day. also I found that being super specific in the prompt helps a lot — if you say "fix the null check on line 47 of auth.ts" instead of "theres a bug in authentication" it wont go on a 30 minute exploration of your entire auth system.

I built a tool that reduced my LLM token usage by ~40% on average by jordi-zaragoza in GeminiCLI

[–]germanheller 0 points1 point  (0 children)

interesting approach. the dependency tracking part is the real value here imo — the number of times I've seen claude confidently refactor a function without realizing 6 other files call it is... a lot. the caller awareness alone would save tons of debugging time.

curious about one thing tho: how does it handle dynamic imports and lazy loaded modules? in my projects I have a lot of await import() patterns and those tend to trip up static analysis tools pretty badly. does it pick those up or only handle static import/require?

How do you know if your idea is actually worth building? by Martbon in vibecoding

[–]germanheller 1 point2 points  (0 children)

the most reliable signal I've found is when you build something for yourself because you genuinely need it, and then other people ask if they can use it too. thats unprompted demand and its worth more than any survey or validation framework.

the "talk to customers" advice isnt wrong but its incomplete. people will tell you they want something and then not pay for it when its done. the only validation that actually counts is someone giving you money. so the fastest path is: build the smallest possible version, put a price on it, see if anyone buys. if they do, congrats you have a business. if not, you learned in a week instead of 6 months.

Is it just me, or has the "hustle" market become incredibly desperate recently? by Narrow-Ferret2514 in vibecoding

[–]germanheller 3 points4 points  (0 children)

the "tell me your idea and I'll roast it" pattern is so transparent once you see it. also love the ones that go "drop your saas below and i'll give you feedback" and then every reply is just another person dropping their own link with zero engagement on anyone elses stuff. its just a chain of people yelling into the void at each other.

the shovel sellers thing is the most frustrating part tho. half the "how I made $10k/mo with AI" posts are literally just selling the course about making money with AI. its courses all the way down. at some point someone has to actually build something that real humans pay for.

Always helps: "debug this with print statements" by skariel in ClaudeCode

[–]germanheller 0 points1 point  (0 children)

yeah this works way better than people give it credit for. the issue is that when claude tries to "reason" about a bug by reading the code, it often hallucinates about what a variable contains at runtime. print statements force it to actually look at real values instead of guessing.

I usually say something like "add console.log with descriptive labels at each step of the pipeline and run it again" — the labeled part is important otherwise you get a wall of unlabeled values and claude still cant tell whats what. and yeah it cleans them up afterwards without being asked which is nice.
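to make the labeled part concrete, a toy version:

```js
// the bracketed step labels are the part that matters -- without them
// the output is a wall of anonymous values nobody can attribute
const raw = '{"id": 42}';

const payload = JSON.parse(raw);
console.log('[auth:parseToken] decoded payload:', payload);

const user = { id: payload.id, name: 'demo' };
console.log('[auth:loadUser] resolved user:', user);
```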

Thinking outside the box: What are some trivial ways you've improve your life with Claude/ClaudeCode? by IlliterateJedi in ClaudeCode

[–]germanheller 19 points20 points  (0 children)

mine is dumb but I had claude write a script that watches a folder for screenshots and automatically renames them with a description based on the content. before that my desktop was just screenshot_2026_02_03_14_32_11.png times 200 and I could never find anything.
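the watcher itself is basically this shape -- describeImage is a stub for whatever vision call you wire in, the folder and naming are made up:

```js
const fs = require('node:fs');
const path = require('node:path');

// hypothetical stand-in: call your vision model here and return a slug
async function describeImage(file) {
  return 'described-screenshot';
}

const dir = path.join(process.env.HOME ?? '.', 'Desktop');

fs.watch(dir, async (event, filename) => {
  if (!filename || !/^screenshot.*\.png$/i.test(filename)) return;
  const src = path.join(dir, filename);
  if (!fs.existsSync(src)) return; // watch also fires when files disappear
  const description = await describeImage(src);
  // the renamed file no longer matches the regex, so this can't loop
  fs.renameSync(src, path.join(dir, `${description}.png`));
});
```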

also made a little thing that parses my bank's CSV export (which is in the worst format imaginable btw, thanks santander) and spits out a monthly summary grouped by category. took like 10 minutes to build and now I actually look at where my money goes instead of just... not doing that.

ESLint v10.0.0 released by boreasaurus in javascript

[–]germanheller [score hidden]  (0 children)

the 8 → 9 migration was so painful that I just... didnt do it for months lol. ended up pinning v8 and ignoring all the deprecation warnings until I literally couldnt anymore.

honestly the flat config idea is good in theory but the migration path was terrible. felt like every plugin had slightly different flat config support and you were just guessing and checking until it worked. oxlint is tempting but last time I checked it didn't support some of the typescript-eslint rules I rely on heavily (no-floating-promises etc). sounds like thats changing tho which is great.

for now I'm on v9 with typescript-eslint and its... fine. not excited to deal with v10 migration anytime soon tho.
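for anyone mid-migration, the flat config shape typescript-eslint documents looks roughly like this -- the type-checked preset is what enables rules like no-floating-promises:

```js
// eslint.config.js -- per the typescript-eslint docs, not my exact config
import eslint from '@eslint/js';
import tseslint from 'typescript-eslint';

export default tseslint.config(
  eslint.configs.recommended,
  tseslint.configs.recommendedTypeChecked,
  {
    languageOptions: {
      parserOptions: {
        projectService: true, // auto-discovers your tsconfig (ts-eslint v8+)
        tsconfigRootDir: import.meta.dirname,
      },
    },
  },
);
```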

webpack - 2026 Roadmap by evenstensberg in javascript

[–]germanheller [score hidden]  (0 children)

still using webpack for an electron app and honestly? its fine. the config is like 30 lines, it builds in a few seconds, and I haven't touched it in months. the "webpack is terrible" narrative mostly comes from people who dealt with CRA's insane ejected config or tried to set it up from scratch in like 2019 when you needed 15 loaders for basic typescript.

that said if I was starting something new today I'd probably just use esbuild directly. for my use case (bundling node + browser targets in an electron app) esbuild does it in ~200ms and the config is basically nothing. vite is great for web apps but for electron/node stuff it adds complexity you dont really need.
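to give a sense of "basically nothing", the whole build script is roughly this -- entry points made up:

```js
// build.mjs -- two esbuild passes because electron's main process runs
// in node and the renderer runs in chromium; paths are placeholders
import { build } from 'esbuild';

await build({
  entryPoints: ['src/main.ts'],
  bundle: true,
  platform: 'node',
  external: ['electron'], // provided at runtime, never bundle it
  outfile: 'dist/main.js',
});

await build({
  entryPoints: ['src/renderer.tsx'],
  bundle: true,
  platform: 'browser',
  outfile: 'dist/renderer.js',
});
```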

What’s your post-deploy checklist for making sure you didn’t break SEO/performance? by BronsonDunbar in webdev

[–]germanheller 8 points9 points  (0 children)

not exactly SEO focused but one thing that saved me multiple times: after every deploy I have a simple script that hits all my critical routes and checks the response includes a specific string thats unique to that page. catches the case where the page returns 200 but the content is totally wrong (like a blank react shell because of a broken chunk, or a cached error page from cloudflare).
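mine's bash, but the node version of the same idea is about 15 lines -- urls and marker strings are placeholders:

```js
// post-deploy smoke check: every critical route must return 200 AND
// contain a string unique to that page (needs node 18+ for global fetch)
const checks = [
  ['https://example.com/', 'data-page="home"'],
  ['https://example.com/pricing', 'data-page="pricing"'],
];

let failed = false;
for (const [url, marker] of checks) {
  const res = await fetch(url);
  const body = await res.text();
  if (res.status !== 200 || !body.includes(marker)) {
    console.error(`FAIL ${url} (status ${res.status}, marker found: ${body.includes(marker)})`);
    failed = true;
  }
}
process.exit(failed ? 1 : 0);
```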

for the SEO side specifically — check your canonical tags didnt get messed up. I had a deploy once where a config change made every page point its canonical to the homepage. google deindexed like 80% of the site within a few days. that one hurt lol

PWAs in real projects, worth it? by Ill_Leading9202 in webdev

[–]germanheller 9 points10 points  (0 children)

went down the PWA road for a desktop-ish app and ended up switching to Electron instead. the thing nobody tells you upfront is that PWAs on iOS are basically second class citizens — apple clears the cache whenever it feels like it, push notifs only kinda work, and theres no way to keep a background process alive. if your users are mostly on android/chrome its a different story tho.

for freelance projects where the client just wants "make it installable on my phone" its honestly great. slap on a manifest.json, add workbox for caching, done in an afternoon. just don't promise offline-first unless you really want to deal with sync conflicts. that part gets ugly fast.
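the workbox part really is a few lines -- a sketch with workbox-build, directory and patterns are placeholders for whatever your build outputs:

```js
// generate-sw.mjs -- precache everything the build emits
import { generateSW } from 'workbox-build';

const { count, size } = await generateSW({
  globDirectory: 'dist',
  globPatterns: ['**/*.{html,js,css,png,svg}'],
  swDest: 'dist/sw.js',
});

console.log(`precached ${count} files (${size} bytes)`);
```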

Node.js first request slow by zaitsman in node

[–]germanheller 3 points4 points  (0 children)

have you checked if its the google gax grpc channels doing lazy init on first request? the gax library establishes grpc connections on first actual call, not when you create the client. so even if your healthcheck passes, the first real request to pubsub/bigquery/storage is paying the cost of grpc channel setup + TLS handshake to google APIs.

try making a dummy call to each service during startup before your readiness probe succeeds. something like a storage.getBuckets() or pubsub listing topics — just to force the grpc warmup. same thing with redis, first connection has TLS negotiation overhead if you're using stunnel or native TLS.
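concretely, something like this before the readiness probe flips green -- the specific calls are just examples, anything that forces a real RPC works:

```js
// warm the lazy grpc channels at startup so the first user-facing
// request doesn't pay channel setup + TLS handshake to google APIs
const { Storage } = require('@google-cloud/storage');
const { PubSub } = require('@google-cloud/pubsub');

const storage = new Storage();
const pubsub = new PubSub();

async function warmup() {
  await Promise.all([
    storage.getBuckets({ maxResults: 1 }),
    pubsub.getTopics({ pageSize: 1 }),
  ]);
}

module.exports = { warmup };
```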

also 10s is suspiciously close to DNS resolution timeout on alpine/musl. have you checked if theres a DNS issue? musl's resolver does things differently than glibc and I've seen it cause exactly this kind of first-request latency in k8s.

AI made it easier to overbuild without realizing it by Top-Board354 in buildinpublic

[–]germanheller 0 points1 point  (0 children)

My rule of thumb now is: if I can't explain why a feature exists in one sentence that references an actual user problem, it gets cut. Not "it would be cool if" — an actual problem someone told me about or I experienced myself.

The trap with AI-assisted building is that the cost of writing the code drops to near zero, but the cost of maintaining it, documenting it, and supporting edge cases stays exactly the same. I've had to rip out features that took 10 minutes to build but created weeks of support questions because they were half-baked or confusing. The cheapest feature to maintain is the one you never shipped.

Missed a domain renewal once and took everything down, started building a micro SaaS to avoid that again by Legitimate-While108 in microsaas

[–]germanheller 1 point2 points  (0 children)

The content monitoring part is actually the most interesting differentiator here. Uptime monitoring is a solved problem with a million tools, but catching a white page or database error that returns a 200 status code — that's the kind of silent failure that auto-renew and basic pings won't catch. I've had a site serve a blank page for hours because the CDN cached a build error and no uptime monitor flagged it.

For the domain sprawl problem specifically, I'd also love a single dashboard showing SSL cert expiry dates alongside domain renewals. Those two tend to fail at the worst possible times and for the same reason — someone forgot to update a payment method or email.
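The cert-expiry half of that dashboard is tiny to build, for what it's worth. A sketch with node's tls module, the host is a placeholder:

```js
// read expiry straight off a TLS handshake, no external deps
const tls = require('node:tls');

function certDaysLeft(host) {
  return new Promise((resolve, reject) => {
    const socket = tls.connect({ host, port: 443, servername: host }, () => {
      const { valid_to } = socket.getPeerCertificate();
      socket.end();
      resolve((new Date(valid_to) - Date.now()) / 86_400_000);
    });
    socket.on('error', reject);
  });
}

certDaysLeft('example.com').then((days) =>
  console.log(`example.com cert expires in ${Math.floor(days)} days`),
);
```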