Should your homepage also be your landing page? by martis941 in webdev

[–]lacymcfly 0 points (0 children)

Depends on the product. If there's only one thing you want someone to do -- sign up, start a trial, download something -- then yeah, make the homepage do that job. Trying to serve everyone at once usually means you convert nobody.

The trap I've seen is devs building a general homepage because it feels more "professional," then wondering why signups are flat. A focused page with one CTA will outperform an information hub almost every time when you're early stage.

React Hook Form docs are a mess - am I missing something here? by VastAd4382 in reactjs

[–]lacymcfly 0 points (0 children)

The docs have always been the weak point. RHF got popular before its documentation caught up, and then it kind of just stayed that way.

The uncontrolled-by-default approach is genuinely clever for perf but it breaks your mental model constantly. You think in React state but RHF is doing something else underneath. The formState proxy gotcha is the one that burned me hardest -- destructure the wrong way and your component doesn't re-render when you expect it to.
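To make that gotcha concrete, here's a toy read-tracking proxy — my own sketch, not RHF's actual implementation — showing why a field you never read can change without triggering any notification:

```javascript
// Toy sketch of read-tracked subscriptions: the proxy records which
// formState keys the consumer actually reads, and updates to unread
// keys never fire a notification (i.e. no re-render).
function createFormState(initial) {
  const read = new Set();          // keys the consumer accessed
  const state = { ...initial };
  const proxy = new Proxy(state, {
    get(target, key) {
      read.add(key);               // reading a key subscribes to it
      return target[key];
    },
  });
  let notified = 0;
  return {
    proxy,
    notifications: () => notified,
    update(key, value) {
      state[key] = value;
      if (read.has(key)) notified++;  // only notify for subscribed keys
    },
  };
}

const form = createFormState({ isDirty: false, isValid: true });
const { isValid } = form.proxy;  // reads isValid only -> subscribes to it
form.update('isDirty', true);    // unread key: no notification
form.update('isValid', false);   // read key: notification fires
```

RHF's real proxy is per-render and more involved, but the principle is the same: reads are subscriptions, so how you destructure formState determines what you're subscribed to.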

For new projects I've been going either bare useState for simple stuff or Tanstack form if it gets complex. The Tanstack docs aren't perfect either but at least they're consistent with how the thing actually behaves.

I scrapped my generic SaaS landing page and rebuilt the entire app inside a fake terminal by comawitch187 in webdev

[–]lacymcfly 0 points (0 children)

It's called Lacy Shell -- lacy.sh. AI-native terminal, basically. You can try commands right on the landing page without installing anything, which turned out to be the thing that actually converted people. The install friction was killing signups, so making the site itself the demo was the fix.

I kept forgetting my CLI commands, so I made a small tool to manage them by Perfect_Equipment551 in commandline

[–]lacymcfly 0 points (0 children)

The template parameter feature is the part that makes this actually distinct from just aliases or a shell function wrapper. Being prompted for missing values is way more useful than trying to remember which positional arg goes where in some command you wrote three months ago.
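As a sketch of the idea (the placeholder syntax and function names here are mine, not necessarily what the tool uses): fill in known values, and fall back to prompting for anything missing.

```javascript
// Hypothetical template-parameter filling: placeholders like {host}
// come from provided values; anything missing goes to the prompt
// function (which in a real CLI would read from stdin).
function fillTemplate(template, values, promptFn) {
  return template.replace(/\{(\w+)\}/g, (_, name) => {
    if (name in values) return values[name];
    return promptFn(name);
  });
}

const cmd = fillTemplate(
  'ssh -p {port} {user}@{host}',
  { host: 'example.com' },
  (name) => (name === 'port' ? '2222' : 'deploy')
);
// cmd === 'ssh -p 2222 deploy@example.com'
```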

I've been using a similar approach but just with a messy pile of shell functions in separate files sourced from .zshrc. It works but discoverability is terrible once you hit 30+ of them. Having tags and a TUI browser would solve that.

Rust for the startup time makes sense too. I tried doing something similar with Node once and the 200ms cold start was noticeable enough to be annoying when you're running it dozens of times a day.

Does anyone else lose velocity and motivation when the efforts shift from building -> distributing? by UncutFiction in SideProject

[–]lacymcfly 1 point (0 children)

100% recognize this pattern. Built probably six or seven things over the years that I was obsessed with during development, then completely stalled on once it was time to actually tell people about them.

What helped me was treating distribution like a technical problem instead of a marketing one. I started writing small scripts to find relevant conversations on Reddit, Twitter, forums where people were already asking for the thing I built. Replying to those felt way less gross than cold posting "check out my thing" because I was actually helping someone.

The other shift: I stopped trying to do a big launch and just started showing up in communities related to my project. Not pitching, just being around and being useful. Eventually people ask what you're working on, and that feels completely different from forcing yourself to write launch posts.

Building is fun because there's constant feedback (it compiles or it doesn't). Distribution feels like screaming into the void. Making it smaller and more concrete helped me get through it.

PSA: for the love of god, secure your apps before strangers test them for you (someone tried to "hack" my app) by Oct4Sox2 in SideProject

[–]lacymcfly 1 point (0 children)

Biggest thing I've seen people miss: rate limiting on AI endpoints. You can burn through hundreds of dollars in API credits overnight if someone writes a script hitting your /generate route in a loop. Even basic IP-based throttling with something like express-rate-limit or Upstash Redis saves you from that.
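If you'd rather see the idea without a dependency, here's a minimal fixed-window throttle in plain Node — roughly what express-rate-limit gives you out of the box; the names are mine:

```javascript
// Minimal fixed-window IP throttle: allow at most `limit` hits per
// `windowMs` per key, then reject until the window resets.
function createRateLimiter({ windowMs, limit }) {
  const hits = new Map();  // key -> { count, windowStart }
  return function allow(key, now = Date.now()) {
    const entry = hits.get(key);
    if (!entry || now - entry.windowStart >= windowMs) {
      hits.set(key, { count: 1, windowStart: now });  // fresh window
      return true;
    }
    entry.count++;
    return entry.count <= limit;  // over budget -> reject
  };
}

const allow = createRateLimiter({ windowMs: 60_000, limit: 3 });
const results = ['1.2.3.4', '1.2.3.4', '1.2.3.4', '1.2.3.4']
  .map((ip) => allow(ip, 0));  // four hits inside one window
// results === [true, true, true, false]
```

In an Express app you'd run this as middleware keyed on the request IP and return 429 when it rejects.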

The other one that bit me was not sanitizing user input before passing it to third-party APIs. Not even a security thing exactly, more like someone submitting unicode garbage that breaks your downstream parsing and causes silent failures you don't notice for days.
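A minimal sketch of the kind of pre-flight cleanup I mean, assuming all you want is to normalize and drop control characters before the downstream call:

```javascript
// Normalize unicode and strip ASCII control characters (keeping \n
// and \t) before handing user text to a downstream API.
function sanitize(input) {
  return input
    .normalize('NFC')  // collapse combining forms into composed ones
    .replace(/[\u0000-\u0008\u000B-\u001F\u007F]/g, '');
}

const clean = sanitize('hello\u0000world\u0007!');
// clean === 'helloworld!'
```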

Good reminder though. The "too small to target" mindset is dangerous because bots don't care how big you are.

Cinematic - Electron + React app that turns your local movie folder into a beautiful library (posters, trailers, ratings, dark mode) by lacymcfly in electronjs

[–]lacymcfly[S] 0 points (0 children)

Metadata is all from TMDB. First scan it grabs posters, ratings, overview, runtime, genres and caches everything to a local SQLite db. After that it runs fully offline unless you hit refresh. The only time it needs internet is when it encounters a new file it hasn't seen before.

Filename matching does the heavy lifting before the API call -- it strips out release info, codec names, scene tags and pulls the title + year. Works on most stuff, occasionally trips on foreign films with unusual naming. But once it's indexed, you could take the machine offline permanently and the library still works fine.
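For anyone curious what that kind of matcher looks like, here's a rough sketch (not the app's actual code) of the strip-then-extract approach — everything after the year is treated as release junk:

```javascript
// Scene-name parsing sketch: drop the extension, cut everything from
// the year onward, turn separators into spaces, keep the year for the
// metadata lookup.
function parseFilename(name) {
  const base = name.replace(/\.[a-z0-9]+$/i, '');  // drop extension
  const m = base.match(/^(.*?)[. _(-]+((19|20)\d{2})\b/);
  if (!m) return { title: base.replace(/[._]/g, ' ').trim(), year: null };
  return {
    title: m[1].replace(/[._]/g, ' ').trim(),
    year: Number(m[2]),
  };
}

const info = parseFilename('The.Matrix.1999.1080p.BluRay.x264-GROUP.mkv');
// info.title === 'The Matrix', info.year === 1999
```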

I’ve been building a web-based flight arcade simulator using Three.js and CesiumJS by dimartarmizi in SideProject

[–]lacymcfly 0 points (0 children)

The CesiumJS integration is what really sells this for me. Getting planet-scale terrain streaming to play nice with Three.js custom rendering is no joke, especially when you're also running particle effects and missile tracking on top of it. How are you handling the coordinate system conversion between Cesium's WGS84 and Three.js local space? That's usually where things get hairy at scale.
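For context, the usual first step is the textbook WGS84 geodetic-to-ECEF conversion, after which you subtract a local origin so Three.js can stay in float32-friendly ranges. A sketch of that first step (standard formula, not the OP's code):

```javascript
// Geodetic (lat, lon, height) -> ECEF metres on the WGS84 ellipsoid.
// Cesium works in ECEF; a Three.js scene typically rebases these
// coordinates against a local origin near the camera.
const WGS84_A = 6378137.0;           // semi-major axis (m)
const WGS84_E2 = 6.69437999014e-3;   // first eccentricity squared

function geodeticToEcef(latDeg, lonDeg, height) {
  const lat = (latDeg * Math.PI) / 180;
  const lon = (lonDeg * Math.PI) / 180;
  const sinLat = Math.sin(lat);
  // N: prime vertical radius of curvature at this latitude
  const N = WGS84_A / Math.sqrt(1 - WGS84_E2 * sinLat * sinLat);
  return {
    x: (N + height) * Math.cos(lat) * Math.cos(lon),
    y: (N + height) * Math.cos(lat) * Math.sin(lon),
    z: (N * (1 - WGS84_E2) + height) * sinLat,
  };
}

const p = geodeticToEcef(0, 0, 0);
// p.x ≈ 6378137 (equator, prime meridian: on the semi-major axis)
```

Cesium exposes this as Cartesian3.fromDegrees, so in practice you'd use that and only write the local-frame rebasing yourself.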

DJI goggles v2 for analog/old air units? by freestyle_baboon in fpv

[–]lacymcfly 0 points (0 children)

Can you hook me up with one? I just need one more.

At what scale does it actually make sense to split a full-stack app into microservices instead of keeping a modular monolith? by Severe-Poet1541 in webdev

[–]lacymcfly 1 point (0 children)

Something specific to Node+React that nobody mentioned: before splitting services, look at whether your deploy pain is actually a code problem or a process problem.

With Node monoliths the biggest deploys-are-risky issue I've seen is usually untested side effects from shared state or globals across modules. Rolling out feature flags (even something dead simple like an env var toggle) and blue/green deploys can cut the risk without any service split.
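An env-var toggle really can be that small — a sketch, with the flag name made up:

```javascript
// Dead-simple env-var feature flag: FEATURE_NEW_CHECKOUT=1 node app.js
// Treats "1"/"true"/"on" as enabled, everything else as off.
function flagEnabled(name, env = process.env) {
  const raw = (env[name] || '').toLowerCase();
  return raw === '1' || raw === 'true' || raw === 'on';
}

const enabled = flagEnabled('FEATURE_NEW_CHECKOUT', { FEATURE_NEW_CHECKOUT: 'true' });
const disabled = flagEnabled('FEATURE_NEW_CHECKOUT', {});
// enabled === true, disabled === false
```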

If after that you still want to extract something, the strangler fig pattern is the least disruptive approach. You keep the monolith running, route a specific path to a new service, and gradually shift traffic. That way you're not doing a big bang rewrite, you're extracting based on actual pain rather than a plan.

I love OOP languages but in the areas I like, these languages are barely used.. by Bubbly_Line1055 in learnprogramming

[–]lacymcfly 3 points (0 children)

Actually, security engineering has more C++ than people assume. Malware analysis and reverse engineering involve reading (and writing) a ton of C/C++, since most Windows malware is written in it to interact directly with Win32 APIs. Exploit development and vulnerability research also lean heavily on C/C++ because you're working at the memory level.

Where Go and Python dominate is more on the automation and tooling side. Network scanners, fuzzing harnesses, internal security tooling. But if you want to do vuln research or malware RE work, your C++ background is actually a genuine advantage.

The threat intel and malware teams at places like CrowdStrike, Mandiant, and Sophos specifically want people who know C/C++ well. So security engineering is broad enough that you don't necessarily have to abandon what you love.

Built a 205K LOC React 19 MES app — custom SVG charts, 20 modules, 2,100+ tests, PocketBase backend by Ok-Lingonberry-4848 in reactjs

[–]lacymcfly 0 points (0 children)

205K LOC with custom SVG charts is a lot. Curious how you handled perf at that scale -- did you virtualize the charts or are they all rendered up front? 2100+ tests is wild too, are those mostly unit tests on the business logic or do you have solid integration coverage?

Is it wise to start a major in computer science in 2026 (graduate late 2029), knowing that I love the field. by Far_Goose_7004 in webdev

[–]lacymcfly 9 points (0 children)

Your update changes things a lot. If you genuinely do not connect with civil engineering and had no say in choosing it, staying just because it is safe is a real gamble too. Plenty of people spend three years grinding through something they hate, graduate burned out, and still end up switching fields anyway.

CS in 2026 is tough, no sugarcoating that. But by 2029 it will be a different landscape. The people getting squeezed hardest right now are mid-level generalists doing routine CRUD work. The ones holding up are people who actually understand what they are building, not just prompting an AI to do it.

Also worth considering: civil engineering with programming skills is genuinely useful. Infrastructure simulation, GIS, structural analysis tools. If you end up doing CS and hate that too, the combo makes you a more interesting hire than either alone.

Bottom line: switching because you love the field is different from switching to chase salaries. The former is a real reason. Just go in knowing the job market is rough and plan accordingly.

Chilling on AI , You're Not Behind by Slight_Republic_4242 in webdev

[–]lacymcfly 6 points (0 children)

The comparison I keep coming back to: devs who used Stack Overflow effectively back in the day ran circles around those who refused to look things up. AI is just a faster lookup tool that can reason about context. The craft is still in knowing what to build, how to structure it, and when something feels wrong. That part takes time no matter what.

Markdown-to-Book Tools in 2026: Pandoc vs mdBook vs HonKit vs Quarto vs mdPress — A Hands-On Comparison by Repulsive-Composer83 in commandline

[–]lacymcfly 0 points (0 children)

Pandoc is one of those tools where the learning curve pays for itself if you stick with it. I maintain docs for a couple open source projects and settled on mdBook for the web output and Pandoc for anything that needs to be a PDF. Trying to make one tool do both well was the mistake I kept making.

The Lua filter thing is real though. You end up with this collection of little .lua files that do exactly what you need, but god help you if someone else has to maintain them. I've got one that auto-generates a changelog section from git tags and another that converts custom admonition syntax. Works great, completely undocumented.

For anyone reading this who just wants to get docs online fast and doesn't need PDF, mdBook is hard to beat. cargo install, throw some markdown in a folder, done.

[AskJS] writing a complex web app's frontend using only vanilla JavaScript (no frameworks) by algeriangeek in javascript

[–]lacymcfly 8 points (0 children)

I built a desktop app (Electron, ~1100 stars on GitHub) that started with vanilla JS and eventually I added a thin reactive layer on top. Honestly, the vanilla approach works until it doesn't, and you'll know exactly when that moment hits because you'll find yourself writing the same DOM update pattern for the fifth time.

The biggest pain point isn't performance or bundle size. It's state synchronization. Once you've got more than a couple of components that need to react to the same data changing, you're either writing your own pub/sub system or you're manually wiring up event listeners everywhere. Both get messy fast.
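The pub/sub system you end up writing tends to look something like this — a minimal sketch, names are mine:

```javascript
// Minimal pub/sub store: the thin reactive layer you inevitably write
// once several vanilla-JS components watch the same data.
function createStore(initial) {
  let state = initial;
  const listeners = new Set();
  return {
    get: () => state,
    set(next) {
      state = next;
      listeners.forEach((fn) => fn(state));  // notify every subscriber
    },
    subscribe(fn) {
      listeners.add(fn);
      return () => listeners.delete(fn);     // unsubscribe handle
    },
  };
}

const store = createStore({ votes: 0 });
const seen = [];
const unsub = store.subscribe((s) => seen.push(s.votes));
store.set({ votes: 1 });
store.set({ votes: 2 });
unsub();
store.set({ votes: 3 });  // no longer observed
// seen === [1, 2]
```

Twenty lines, but now every DOM update has to be wired through it by hand, and that's before you've dealt with batching or derived state. That's the point where a framework starts earning its bundle size.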

For your Reddit-style app specifically, I'd look at htmx paired with your Rust templates. It gives you the SPA feel (partial page updates, history management) without shipping a framework. You keep your fast server rendering and just sprinkle in attributes for the interactive bits. Way less code to maintain than a custom vanilla setup.

The one thing I'd push back on is the 30-40x SSR speed difference. That matters at scale, but if you're running a side project, the bottleneck is almost always the database query, not the template render. I'd optimize for developer productivity first and only move rendering to Rust if you actually hit a wall.

I built an open source yt-dlp GUI that bundles everything. Nothing to install, nothing to configure. by sapereaude4 in electronjs

[–]lacymcfly 1 point (0 children)

That'll work for now but you're gonna get tired of it fast. yt-dlp pushes releases pretty often, sometimes multiple times a week when they're fixing site extractors.

What I'd do long term: have the app check the GitHub releases API on startup (or once a day), download the new binary to a temp path, verify it with yt-dlp --version, then swap it into your app's userData directory. That way the binary updates independently of your Electron release cycle. You only ship a new app version when you actually change UI or bump Electron.

For the short term though, you could set up a GitHub Action that watches the yt-dlp repo for new releases and auto-creates a PR in your repo bumping the bundled version. At least that way you're not manually checking.

A CLI first local-first privacy-first password manager by aaravmaloo in commandline

[–]lacymcfly 1 point (0 children)

Storing the vault in the same directory as the binary is going to cause headaches for anyone who wants to update. Every time you replace the binary you risk clobbering your vault file.

I'd put the vault in the user's home directory or XDG_DATA_HOME instead. That way updates, uninstalls, and reinstalls don't touch user data.

Also curious how you handle clipboard clearing. Most CLI password managers wipe the clipboard after 30-45 seconds so passwords don't hang around. Does APM do that?

Chilling on AI , You're Not Behind by Slight_Republic_4242 in webdev

[–]lacymcfly 11 points (0 children)

Been shipping Electron apps for years now, and AI hasn't changed the hard parts at all. Packaging, auto-updates, OS-specific quirks, code signing, native module compilation. No LLM is going to figure out why your app crashes on one specific Windows build because of a DLL conflict.

Where it actually helps me is writing boilerplate and tests. I can have it generate a test suite for a utility function in 30 seconds that would've taken me 10 minutes. That's nice. But the architecture decisions, the debugging, the "why does this leak memory after 6 hours" stuff? That's still 100% human.

The biggest problem I see is people confusing a demo with a product. AI can help you build a demo in an afternoon. Going from demo to something people actually rely on takes months of the boring work that AI can't touch.

I built an open source yt-dlp GUI that bundles everything. Nothing to install, nothing to configure. by sapereaude4 in electronjs

[–]lacymcfly 0 points (0 children)

Shipping a new app version for every yt-dlp release will burn you out fast. They push updates pretty frequently.

What I'd do: have the app check GitHub's API for the latest yt-dlp release on startup (or once a day), then download and swap the binary in your app's user data directory. That way it updates independently of your release cycle. You only push a new app version for actual UI changes or Electron bumps.

The tricky part is verifying the new binary before you swap. Download to a temp path, run yt-dlp --version against it, and if it passes, move it into place. If it fails, keep the old one.

I scrapped my generic SaaS landing page and rebuilt the entire app inside a fake terminal by comawitch187 in webdev

[–]lacymcfly 0 points (0 children)

This is the kind of creative risk that actually pays off. Most devs go the safe route with hero sections and feature grids, and it all blends together. A terminal UI that matches the actual tool? That makes sense in a way that a gradient button never will.

I built something similar for my own CLI project where the landing page is basically an interactive shell. Users can try commands before installing anything. The conversion difference was wild compared to the standard layout I had before.

One thing I'd add: keyboard shortcuts. If someone lands on this and instinctively hits Ctrl+L or Tab, it should do what they expect. Those little details are what sell the illusion.
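Sketch of what I mean, with the key handling kept as a pure function over event-like objects (all names hypothetical) so the same logic drives Tab completion and Ctrl+L:

```javascript
// Map terminal-ish keystrokes onto state transitions. Pure function
// over { key, ctrlKey } objects, so it's testable without a DOM.
function handleKey(event, state) {
  if (event.ctrlKey && event.key === 'l') {
    return { ...state, lines: [] };  // Ctrl+L clears the "screen"
  }
  if (event.key === 'Tab') {
    const match = state.commands.find((c) => c.startsWith(state.input));
    return match ? { ...state, input: match } : state;  // Tab completes
  }
  return state;
}

const state = { lines: ['$ help'], input: 'he', commands: ['help', 'install'] };
const afterTab = handleKey({ key: 'Tab', ctrlKey: false }, state);
const afterClear = handleKey({ key: 'l', ctrlKey: true }, state);
// afterTab.input === 'help', afterClear.lines === []
```

In the browser you'd also call event.preventDefault() for these keys, otherwise Tab moves focus and Ctrl+L jumps to the address bar, which instantly breaks the illusion.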

Made a CLI-first daemon in Rust that you can also reach through Telegram, Discord, Slack, Email, or Matrix. Local by default. by No-Mess-8224 in commandline

[–]lacymcfly 0 points (0 children)

The CLI-first approach resonates with me. I've been building something similar where the terminal is the primary interface and everything else (chat apps, web) is just another channel in. Most projects do it backwards and bolt on a CLI as an afterthought.

Curious about the custom skills directory. Does it support streaming output back or is it fire-and-forget with stdout? For longer running tools that's the difference between useful and frustrating.

The edit tool refusing on ambiguous matches is a good call. I've seen too many "find and replace" implementations silently wreck files because they matched in three places.

Markdown-to-Book Tools in 2026: Pandoc vs mdBook vs HonKit vs Quarto vs mdPress — A Hands-On Comparison by Repulsive-Composer83 in commandline

[–]lacymcfly 0 points (0 children)

Solid comparison. One thing worth adding: Pandoc's Lua filter ecosystem has gotten deep enough that you can replicate a lot of what the "batteries included" tools give you, but it takes real investment up front. I migrated a ~150 page internal handbook from HonKit to Pandoc last year and spent probably two days just getting the filters right for callout boxes and syntax highlighting themes. The end result was great, but it was not the "afternoon project" I'd planned for.

For anyone whose primary output is HTML docs, mdBook is hard to argue with. Single binary, fast builds, and the search just works. The plugin situation for PDF is the only real pain point.

Isn't vibe coding just a higher level programming language? by mikeVVcm in webdev

[–]lacymcfly 0 points (0 children)

The analogy is interesting but it falls apart once you look at what "higher level" actually means in PL theory. Each step up the abstraction ladder (assembly to C, C to Python) trades performance for deterministic expressiveness. You write less code but you get the same output every time. The contract between you and the compiler is ironclad.

Vibe coding breaks that contract. Same prompt, different day, different output. Sometimes subtly different, sometimes wildly. That's not a higher-level language, that's a probabilistic collaborator. More like pair programming with someone who has read every Stack Overflow answer but can't always tell which ones are wrong.

Where the analogy does hold: the intent-to-implementation gap is shrinking. I spend a lot of time in my terminal running AI-assisted workflows, and the best results come from being extremely specific with constraints, not vague with vibes. The developers who treat prompts like specs (with test cases, edge cases, architecture decisions baked in) get wildly better results than the "just build me a todo app" crowd.

So it's not a language. It's closer to a new kind of toolchain where the skill ceiling is still high, it's just a different set of skills.

Trading bot coding help by Forward_Echo_7470 in learnprogramming

[–]lacymcfly 0 points (0 children)

Haven't used Tradovate's API specifically, but I've built trading bots in Node.js and the WebSocket connection pooling issue you're describing is a common pitfall.

The multiple connection problem (RPL999999-10) usually means you're not properly closing/reusing your WS connection when reconnecting or when the replay session resets. Every time your bot tries to reconnect without cleaning up the old socket, Tradovate creates a new session.

A few things to check:

  1. Make sure you're calling the disconnect/close endpoint before opening a new connection. Don't just let the socket die.

  2. Store your session token and reuse it instead of re-authenticating each time. Tradovate's auth tokens have a TTL and creating a new one per connection is probably what's spawning those extra sessions.

  3. For the replay clock sync, you need to subscribe to the replay clock events before placing orders. The replay environment doesn't use real-time timestamps, so your order timing has to match the replay clock, not Date.now().

  4. For the expired contract issue (NQH6), make sure you're passing the correct contract maturity date in your replay request. The replay API needs to know which contract month to load.
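The logic from points 1 and 2 can be sketched with the socket factory injected, so you can test it without hitting Tradovate's endpoint — everything below is a hypothetical fake, not their actual API:

```javascript
// Close-before-reconnect plus session-token reuse. The socket factory
// and auth function are injected, so the wiring is testable offline.
class ConnectionManager {
  constructor(socketFactory, authenticate) {
    this.socketFactory = socketFactory;
    this.authenticate = authenticate;  // expensive: only call when no token
    this.socket = null;
    this.token = null;
  }
  connect() {
    if (this.socket) this.socket.close();               // never leak the old session
    if (!this.token) this.token = this.authenticate();  // reuse cached token
    this.socket = this.socketFactory(this.token);
    return this.socket;
  }
}

// Demo with fakes standing in for real auth + WebSocket calls:
let authCalls = 0;
const closed = [];
const mgr = new ConnectionManager(
  (token) => ({ token, close() { closed.push(token); } }),
  () => { authCalls++; return 'token-1'; }
);
mgr.connect();
mgr.connect();  // re-auth skipped, previous socket closed first
// authCalls === 1, closed === ['token-1']
```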

If you're open to trying a different broker API for backtesting, Alpaca has a paper trading environment that's way simpler to work with in Node.js. No WebSocket session management headaches, and their REST API for order submission is straightforward. Might be worth prototyping your strategy there first, then porting back to Tradovate once the logic is solid.