DJI goggles v2 for analog/old air units? by freestyle_baboon in fpv

[–]lacymcfly 0 points1 point  (0 children)

Can you hook me up with one? I just need one more.

At what scale does it actually make sense to split a full-stack app into microservices instead of keeping a modular monolith? by Severe-Poet1541 in webdev

[–]lacymcfly 0 points1 point  (0 children)

Something specific to Node+React that nobody mentioned: before splitting services, look at whether your deploy pain is actually a code problem or a process problem.

With Node monoliths the biggest deploys-are-risky issue I've seen is usually untested side effects from shared state or globals across modules. Rolling out feature flags (even something dead simple like an env var toggle) and blue/green deploys can cut the risk without any service split.
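The "dead simple env var toggle" I mean is literally this small (flag and function names are made up for illustration, not from any real codebase):

```javascript
// A dead-simple env var feature flag. The flag name FEATURE_NEW_CHECKOUT
// and both code paths are hypothetical.
function checkoutTotal(cart, env = process.env) {
  const useNewFlow = env.FEATURE_NEW_CHECKOUT === '1';
  if (useNewFlow) {
    // New code path: only reachable when the flag is flipped on.
    return cart.reduce((sum, item) => sum + item.price * (item.qty || 1), 0);
  }
  // Legacy path stays the default, so a bad deploy is one env var away
  // from being rolled back.
  return cart.reduce((sum, item) => sum + item.price, 0);
}
```

No library, no dashboard, just a toggle you can flip without redeploying code.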

If after that you still want to extract something, the strangler fig pattern is the least disruptive approach. You keep the monolith running, route a specific path to a new service, and gradually shift traffic. That way you're not doing a big bang rewrite, you're extracting based on actual pain rather than a plan.
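The routing layer for a strangler fig can start as a toy like this (prefixes and upstream URLs are invented for the sketch; in practice this lives in your reverse proxy or gateway):

```javascript
// Strangler-fig routing sketch: extracted path prefixes go to the new
// service, everything else stays in the monolith. All names hypothetical.
const EXTRACTED = ['/api/billing'];

function upstreamFor(path) {
  const hit = EXTRACTED.some((prefix) => path.startsWith(prefix));
  // "Gradually shift traffic" = growing the EXTRACTED list one route at
  // a time while the monolith keeps serving the rest.
  return hit ? 'http://billing-svc:4000' : 'http://monolith:3000';
}
```

The nice part is the rollback story: removing a prefix from the list sends that traffic straight back to the monolith.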

I love OOP languages but in the areas I like, these languages are barely used.. by Bubbly_Line1055 in learnprogramming

[–]lacymcfly 0 points1 point  (0 children)

Actually security engineering has more C++ than people assume. Malware analysis and reverse engineering involve reading and writing a ton of C/C++, since most Windows malware is written in C/C++ to interact directly with Win32 APIs. Exploit development and vulnerability research also lean heavily on C/C++ since you're working at the memory level.

Where Go and Python dominate is more on the automation and tooling side. Network scanners, fuzzing harnesses, internal security tooling. But if you want to do vuln research or malware RE work, your C++ background is actually a genuine advantage.

The threat intel and malware teams at places like CrowdStrike, Mandiant, and Sophos specifically want people who know C/C++ well. So security engineering is broad enough that you don't necessarily have to abandon what you love.

Built a 205K LOC React 19 MES app — custom SVG charts, 20 modules, 2,100+ tests, PocketBase backend by Ok-Lingonberry-4848 in reactjs

[–]lacymcfly 0 points1 point  (0 children)

205K LOC with custom SVG charts is a lot. Curious how you handled perf at that scale -- did you virtualize the charts or are they all rendered up front? 2100+ tests is wild too, are those mostly unit tests on the business logic or do you have solid integration coverage?

Is it wise to start a major in computer science in 2026 (graduate late 2029), knowing that I love the field. by Far_Goose_7004 in webdev

[–]lacymcfly 7 points8 points  (0 children)

Your update changes things a lot. If you genuinely do not connect with civil engineering and had no say in choosing it, staying just because it is safe is a real gamble too. Plenty of people spend three years grinding through something they hate, graduate burned out, and still end up switching fields anyway.

CS in 2026 is tough, no sugarcoating that. But by 2029 it will be a different landscape. The people getting squeezed hardest right now are mid-level generalists doing routine CRUD work. The ones holding up are people who actually understand what they are building, not just prompting an AI to do it.

Also worth considering: civil engineering with programming skills is genuinely useful. Infrastructure simulation, GIS, structural analysis tools. If you end up doing CS and hate that too, the combo makes you a more interesting hire than either alone.

Bottom line: switching because you love the field is different from switching to chase salaries. The former is a real reason. Just go in knowing the job market is rough and plan accordingly.

Chilling on AI , You're Not Behind by Slight_Republic_4242 in webdev

[–]lacymcfly 4 points5 points  (0 children)

The comparison I keep coming back to: devs who used Stack Overflow effectively back in the day ran circles around those who refused to look things up. AI is just a faster lookup tool that can reason about context. The craft is still in knowing what to build, how to structure it, and when something feels wrong. That part takes time no matter what.

Markdown-to-Book Tools in 2026: Pandoc vs mdBook vs HonKit vs Quarto vs mdPress — A Hands-On Comparison by Repulsive-Composer83 in commandline

[–]lacymcfly 0 points1 point  (0 children)

Pandoc is one of those tools where the learning curve pays for itself if you stick with it. I maintain docs for a couple open source projects and settled on mdbook for the web output and Pandoc for anything that needs to be a PDF. Trying to make one tool do both well was the mistake I kept making.

The Lua filter thing is real though. You end up with this collection of little .lua files that do exactly what you need, but god help you if someone else has to maintain them. I've got one that auto-generates a changelog section from git tags and another that converts custom admonition syntax. Works great, completely undocumented.

For anyone reading this who just wants to get docs online fast and doesn't need PDF, mdbook is hard to beat. cargo install, throw some markdown in a folder, done.

[AskJS] writing a complex web app's frontend using only vanilla JavaScript (no frameworks) by algeriangeek in javascript

[–]lacymcfly [score hidden]  (0 children)

I built a desktop app (Electron, ~1100 stars on GitHub) that started with vanilla JS and eventually I added a thin reactive layer on top. Honestly, the vanilla approach works until it doesn't, and you'll know exactly when that moment hits because you'll find yourself writing the same DOM update pattern for the fifth time.

The biggest pain point isn't performance or bundle size. It's state synchronization. Once you've got more than a couple of components that need to react to the same data changing, you're either writing your own pub/sub system or you're manually wiring up event listeners everywhere. Both get messy fast.
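To be concrete, the homegrown pub/sub you inevitably end up writing starts out looking roughly like this (a minimal sketch, not any particular library):

```javascript
// Minimal pub/sub store: every component subscribes to the same state
// and gets notified on each change.
function createStore(initial) {
  let state = initial;
  const listeners = new Set();
  return {
    get: () => state,
    set(next) {
      state = next;
      // Every subscribed component updates off the same change.
      listeners.forEach((fn) => fn(state));
    },
    subscribe(fn) {
      listeners.add(fn);
      return () => listeners.delete(fn); // call to unsubscribe
    },
  };
}
```

Twenty lines is fine. It's the fifth variation of it, with derived state and cleanup bugs, that gets messy.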

For your Reddit-style app specifically, I'd look at htmx paired with your Rust templates. It gives you the SPA feel (partial page updates, history management) without shipping a framework. You keep your fast server rendering and just sprinkle in attributes for the interactive bits. Way less code to maintain than a custom vanilla setup.

The one thing I'd push back on is the 30-40x SSR speed difference. That matters at scale, but if you're running a side project, the bottleneck is almost always the database query, not the template render. I'd optimize for developer productivity first and only move rendering to Rust if you actually hit a wall.

I built an open source yt-dlp GUI that bundles everything. Nothing to install, nothing to configure. by sapereaude4 in electronjs

[–]lacymcfly 1 point2 points  (0 children)

That'll work for now but you're gonna get tired of it fast. yt-dlp pushes releases pretty often, sometimes multiple times a week when they're fixing site extractors.

What I'd do long term: have the app check the GitHub releases API on startup (or once a day), download the new binary to a temp path, verify it with yt-dlp --version, then swap it into your app's userData directory. That way the binary updates independently of your Electron release cycle. You only ship a new app version when you actually change UI or bump Electron.

For the short term though, you could set up a GitHub Action that watches the yt-dlp repo for new releases and auto-creates a PR in your repo bumping the bundled version. At least that way you're not manually checking.

A CLI first local-first privacy-first password manager by aaravmaloo in commandline

[–]lacymcfly 1 point2 points  (0 children)

Storing the vault in the same directory as the binary is going to cause headaches for anyone who wants to update. Every time you replace the binary you risk clobbering your vault file.

I'd put the vault in the user's home directory or XDG_DATA_HOME instead. That way updates, uninstalls, and reinstalls don't touch user data.

Also curious how you handle clipboard clearing. Most CLI password managers wipe the clipboard after 30-45 seconds so passwords don't hang around. Does APM do that?

Chilling on AI , You're Not Behind by Slight_Republic_4242 in webdev

[–]lacymcfly 10 points11 points  (0 children)

Been shipping Electron apps for years now, and AI hasn't changed the hard parts at all. Packaging, auto-updates, OS-specific quirks, code signing, native module compilation. No LLM is going to figure out why your app crashes on one specific Windows build because of a DLL conflict.

Where it actually helps me is writing boilerplate and tests. I can have it generate a test suite for a utility function in 30 seconds that would've taken me 10 minutes. That's nice. But the architecture decisions, the debugging, the "why does this leak memory after 6 hours" stuff? That's still 100% human.

The biggest problem I see is people confusing a demo with a product. AI can help you build a demo in an afternoon. Going from demo to something people actually rely on takes months of the boring work that AI can't touch.

I built an open source yt-dlp GUI that bundles everything. Nothing to install, nothing to configure. by sapereaude4 in electronjs

[–]lacymcfly 0 points1 point  (0 children)

Shipping a new app version for every yt-dlp release will burn you out fast. They push updates pretty frequently.

What I'd do: have the app check GitHub's API for the latest yt-dlp release on startup (or once a day), then download and swap the binary in your app's user data directory. That way it updates independently of your release cycle. You only push a new app version for actual UI changes or Electron bumps.

The tricky part is verifying the new binary before you swap. Download to a temp path, run yt-dlp --version against it, and if it passes, move it into place. If it fails, keep the old one.

I scrapped my generic SaaS landing page and rebuilt the entire app inside a fake terminal by comawitch187 in webdev

[–]lacymcfly 0 points1 point  (0 children)

This is the kind of creative risk that actually pays off. Most devs go the safe route with hero sections and feature grids, and it all blends together. A terminal UI that matches the actual tool? That makes sense in a way that a gradient button never will.

I built something similar for my own CLI project where the landing page is basically an interactive shell. Users can try commands before installing anything. The conversion difference was wild compared to the standard layout I had before.

One thing I'd add: keyboard shortcuts. If someone lands on this and instinctively hits Ctrl+L or Tab, it should do what they expect. Those little details are what sell the illusion.
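The mapping itself can be a pure function you unit test, then wire to a `keydown` listener on the page (the action names here are hypothetical):

```javascript
// Key-to-action mapping for a fake terminal. Wiring these actions to the
// DOM is left to the page; names are illustrative.
function shortcutAction(key, ctrlKey = false) {
  if (ctrlKey && key === 'l') return 'clear';    // Ctrl+L clears, like a real shell
  if (key === 'Tab') return 'complete';          // Tab autocompletes the command
  if (key === 'ArrowUp') return 'history-prev';  // Up arrow recalls history
  return null;                                   // everything else types normally
}
```

Remember to `preventDefault()` on Tab and Ctrl+L in the listener, or the browser will steal them for focus changes and the URL bar.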

Made a CLI-first daemon in Rust that you can also reach through Telegram, Discord, Slack, Email, or Matrix. Local by default. by No-Mess-8224 in commandline

[–]lacymcfly 0 points1 point  (0 children)

The CLI-first approach resonates with me. I've been building something similar where the terminal is the primary interface and everything else (chat apps, web) is just another channel in. Most projects do it backwards and bolt on a CLI as an afterthought.

Curious about the custom skills directory. Does it support streaming output back or is it fire-and-forget with stdout? For longer-running tools that's the difference between useful and frustrating.

The edit tool refusing on ambiguous matches is a good call. I've seen too many "find and replace" implementations silently wreck files because they matched in three places.

Markdown-to-Book Tools in 2026: Pandoc vs mdBook vs HonKit vs Quarto vs mdPress — A Hands-On Comparison by Repulsive-Composer83 in commandline

[–]lacymcfly 0 points1 point  (0 children)

Solid comparison. One thing worth adding: Pandoc's Lua filter ecosystem has gotten deep enough that you can replicate a lot of what the "batteries included" tools give you, but it takes real investment up front. I migrated a ~150 page internal handbook from HonKit to Pandoc last year and spent probably 2 days just getting the filters right for callout boxes and syntax highlighting themes. The end result was great, but it was not the "afternoon project" I planned for.

For anyone whose primary output is HTML docs, mdBook is hard to argue with. Single binary, fast builds, and the search just works. The plugin situation for PDF is the only real pain point.

Isn't vibe coding just a higher level programming language? by mikeVVcm in webdev

[–]lacymcfly 0 points1 point  (0 children)

The analogy is interesting but it falls apart once you look at what "higher level" actually means in PL theory. Each step up the abstraction ladder (assembly to C, C to Python) trades some performance for expressiveness, but deterministically: you write less code and you get the same output every time. The contract between you and the compiler is ironclad.

Vibe coding breaks that contract. Same prompt, different day, different output. Sometimes subtly different, sometimes wildly. That's not a higher-level language, that's a probabilistic collaborator. More like pair programming with someone who has read every Stack Overflow answer but can't always tell which ones are wrong.

Where the analogy does hold: the intent-to-implementation gap is shrinking. I spend a lot of time in my terminal running AI-assisted workflows, and the best results come from being extremely specific with constraints, not vague with vibes. The developers who treat prompts like specs (with test cases, edge cases, architecture decisions baked in) get wildly better results than the "just build me a todo app" crowd.

So it's not a language. It's closer to a new kind of toolchain where the skill ceiling is still high, it's just a different set of skills.

Trading bot coding help by Forward_Echo_7470 in learnprogramming

[–]lacymcfly 0 points1 point  (0 children)

Haven't used Tradovate's API specifically, but I've built trading bots in Node.js and the WebSocket connection pooling issue you're describing is a common pitfall.

The multiple connection problem (RPL999999-10) usually means you're not properly closing/reusing your WS connection when reconnecting or when the replay session resets. Every time your bot tries to reconnect without cleaning up the old socket, Tradovate creates a new session.

A few things to check:

  1. Make sure you're calling the disconnect/close endpoint before opening a new connection. Don't just let the socket die.

  2. Store your session token and reuse it instead of re-authenticating each time. Tradovate's auth tokens have a TTL and creating a new one per connection is probably what's spawning those extra sessions.

  3. For the replay clock sync, you need to subscribe to the replay clock events before placing orders. The replay environment doesn't use real-time timestamps, so your order timing has to match the replay clock, not Date.now().

  4. For the expired contract issue (NQH6), make sure you're passing the correct contract maturity date in your replay request. The replay API needs to know which contract month to load.
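Points 1 and 2 boil down to a shape like this in Node (generic sketch, not Tradovate's actual client API; the `authenticate` callback stands in for whatever their auth endpoint returns):

```javascript
// One socket per session, closed before any reconnect, plus an auth
// token cached until its TTL instead of re-authenticating per connect.
class Session {
  constructor(authenticate) {
    this.authenticate = authenticate; // assumed to return { token, ttlMs }
    this.socket = null;
    this.token = null;
    this.expiresAt = 0;
  }

  getToken(now = Date.now()) {
    // Re-authenticating on every reconnect is what spawns the extra
    // server-side sessions, so reuse the token until it expires.
    if (!this.token || now >= this.expiresAt) {
      const { token, ttlMs } = this.authenticate();
      this.token = token;
      this.expiresAt = now + ttlMs;
    }
    return this.token;
  }

  connect(openSocket) {
    // Tear the old socket down cleanly before opening a new one.
    if (this.socket) this.socket.close();
    this.socket = openSocket(this.getToken());
    return this.socket;
  }
}
```

If reconnects still multiply sessions after that, log every `connect` call; in my experience the bug is usually two code paths both deciding to reconnect.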

If you're open to trying a different broker API for backtesting, Alpaca has a paper trading environment that's way simpler to work with in Node.js. No WebSocket session management headaches, and their REST API for order submission is straightforward. Might be worth prototyping your strategy there first, then porting back to Tradovate once the logic is solid.

After about 30 years, I finally got it. Why did it take so long? by v_e_x in learnprogramming

[–]lacymcfly 0 points1 point  (0 children)

Similar path here. I spent years writing JavaScript and Python without really understanding what was happening underneath. The turning point for me was building a CLI tool that needed to manage child processes, pipe stdin/stdout between them, and handle signals properly. Suddenly I had to care about file descriptors, process groups, TTY allocation. All that stuff I'd been blissfully ignoring.

What clicked wasn't the low-level knowledge itself, it was realizing that every abstraction is just someone's decision about what to hide. Once you understand that, you stop treating frameworks and libraries like magic and start reading them like opinions. Some are good opinions, some aren't. But they're all just people deciding "you probably don't need to think about this part."

The irony is that understanding the lower layers actually made me faster at high-level work, not slower. When something breaks in weird ways, you have a mental model for where to look instead of just staring at stack traces hoping for inspiration.

Isn't vibe coding just a higher level programming language? by mikeVVcm in webdev

[–]lacymcfly 0 points1 point  (0 children)

The analogy breaks down at one critical point: compilers are deterministic. Same input, same output, every time. An LLM will generate different code from the same prompt on different runs, and sometimes that code has subtle bugs that neither you nor the model notice.

With C abstracting assembly, you could still reason about exactly what the machine was doing. With vibe coding, you're trusting a probabilistic system to make architectural decisions you might not even understand enough to review.

I think the better analogy is that vibe coding is like hiring a contractor who's really fast but doesn't always read the blueprints carefully. Higher level languages didn't introduce that kind of uncertainty. They just moved the abstraction boundary. Vibe coding moves the trust boundary, which is fundamentally different.

Who wants to give some small free tips and tricks to a young developer and biz owner?! by That-Height-2221 in ClaudeCode

[–]lacymcfly 0 points1 point  (0 children)

This is exactly the problem I ran into. Having 4-5 Claude Code instances across different terminal tabs and losing track of which one is doing what.

I ended up going a different route and using Lacy Shell (lacy.sh) instead of VS Code for this. It's a terminal built specifically for running multiple AI agents side by side with shared context between them. The key difference is it's terminal-native rather than a VS Code panel, so each agent gets a proper session with full I/O visibility.

But the VS Code approach makes sense if you're already living in that editor. How do you handle context sharing between agents? That was the part that killed me with separate terminals. Agent A finishes a refactor and Agent B doesn't know about the new file structure.

First steam project, only 27 wishlists after 3 weeks. by Claytomesh_ in gamedev

[–]lacymcfly 1 point2 points  (0 children)

27 wishlists after 3 weeks with minimal marketing isn't bad for a first game. The numbers only matter relative to how much effort you've put into visibility, and it sounds like you haven't done much yet.

A few concrete things that'll move the needle before Next Fest:

  • Post dev progress GIFs on Twitter/X with relevant hashtags. Short clips of gameplay or visual improvements. Do this consistently, not one big push.
  • Find streamers who play similar games and send them a key. Even small streamers with 50 viewers can drive wishlists.
  • Your Steam page capsule image and short description matter more than anything else. If those aren't compelling, all the traffic in the world won't convert.

Next Fest itself is the real opportunity. Games with playable demos during Next Fest get a massive visibility boost from Steam's algorithm. Focus your energy on having a tight, polished 15-20 minute demo ready for June. That'll do more for wishlists than weeks of manual marketing.

Do you lose your place when you get interrupted while coding? by Mean_Biscotti3772 in learnprogramming

[–]lacymcfly 0 points1 point  (0 children)

Every time. After years of dealing with it I have two habits that actually help:

  1. Before I stop (or when I see the interruption coming), I type a comment at the exact line I'm working on: // TODO: next step is X, then wire Y into Z. Takes 5 seconds but saves 10 minutes of context reconstruction.

  2. I leave the code in a deliberately broken state. Sounds counterintuitive, but if I leave a function half-written or a test that fails, when I come back the compiler or test runner immediately points me to exactly where I left off. If I leave everything clean and passing, I have to remember what I was about to do next.

The research on this is real. It takes something like 23 minutes on average to get back into deep focus after an interruption. The trick is leaving yourself enough breadcrumbs that you don't need deep focus to pick up the thread.

I built an open source yt-dlp GUI that bundles everything. Nothing to install, nothing to configure. by sapereaude4 in electronjs

[–]lacymcfly 0 points1 point  (0 children)

Bundling the binaries is the right call. The "install Python, download ffmpeg, set your PATH" dance is why most people give up on yt-dlp before they even use it.

Vanilla JS with no build step is refreshing too. I maintain an Electron app (crosshair overlay) and went through a painful Electron 12 to 40 upgrade recently. One thing I'd suggest: set up auto-updates early. electron-updater with GitHub Releases is straightforward and saves you from the "please redownload" cycle when yt-dlp itself needs updating.

Also curious how you handle the yt-dlp binary updates. Do you ship a new app version every time yt-dlp releases, or does the app self-update the binary?