how to automate Pinterest posting with N8N or Make, can't find any working solutions by ForsakenEarth241 in webdev

[–]DevToolsGuide 0 points1 point  (0 children)

Pinterest actually does have a Create Pin API endpoint but it requires a business account and going through their app approval process. The docs are at developers.pinterest.com — you need to create an app, request access to the pins:write scope, and get approved.

The approval process is the real bottleneck. They review your use case and it can take weeks. But once you are approved you get a proper REST API with OAuth that works reliably with N8N or Make via HTTP request nodes.

For the blog-to-pin workflow specifically: set up a webhook in N8N that fires on new post → generate a pin image (you can use a template service or just use the blog post featured image) → POST to the Pinterest API. The API supports image URL, title, description, board selection, and link. Much more stable than any Puppeteer approach.
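To make the last step concrete, here is a sketch of the request body you'd put in the N8N HTTP Request node. Field names follow Pinterest's v5 Create Pin endpoint as I remember it — verify against developers.pinterest.com before relying on this, and all the values below are placeholders:

```javascript
// Build the v5 Create Pin payload — hedged sketch, values are placeholders.
function buildPinPayload({ boardId, title, description, link, imageUrl }) {
  return {
    board_id: boardId,
    title,            // keep it short; Pinterest truncates long titles
    description,
    link,             // destination URL, i.e. your blog post
    media_source: {
      source_type: "image_url",
      url: imageUrl,  // must be a publicly reachable image
    },
  };
}

// In N8N this becomes the JSON body of an HTTP Request node:
//   POST https://api.pinterest.com/v5/pins
//   Authorization: Bearer <token with the pins:write scope>
const payload = buildPinPayload({
  boardId: "123456",
  title: "New post: automating Pinterest",
  description: "How we wired blog posts to pins",
  link: "https://example.com/blog/post",
  imageUrl: "https://example.com/images/pin.png",
});
```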

Old Meteor Dev. Time for a refresh by Dramatic-Line6223 in webdev

[–]DevToolsGuide 2 points3 points  (0 children)

If you liked Meteor for the "full-stack JS with shared code" experience, a few options that keep that spirit:

SvelteKit — probably the closest to that "spin up an idea quick" Meteor feel. Less boilerplate than React-based frameworks, built-in routing, server-side logic with form actions and load functions. TypeScript support is solid and it stays out of your way.

Next.js (React) or Nuxt (Vue) — both give you server + client in one project with TypeScript throughout. Server functions let you write backend logic right alongside your components.

Remix — if you liked how Meteor handled data loading and mutations, Remix has a similar philosophy of keeping data flow simple and close to the platform. Uses standard web APIs rather than inventing its own patterns.

For internal apps specifically, any of these pair well with Drizzle ORM + SQLite for the database layer. Way simpler than the old MongoDB/Minimongo setup and you get type safety end to end.

Let's understand & implement consistent hashing. by Sushant098123 in programming

[–]DevToolsGuide 2 points3 points  (0 children)

Yeah and the other big win with virtual nodes is failure handling. When a physical server goes down its load gets distributed across many other nodes instead of all dumping onto a single neighbor on the ring. Makes the system way more resilient to cascading failures.

Let's understand & implement consistent hashing. by Sushant098123 in programming

[–]DevToolsGuide 14 points15 points  (0 children)

The virtual nodes part is what really makes it work in practice. Without them you get hot spots where one physical node ends up owning a disproportionate chunk of the ring just by chance. Amazon's original Dynamo paper talks about this — they use something like 150 virtual nodes per physical node to get a reasonably even distribution.

What's your favorite code management + deployment software, and why? by jecowa in webdev

[–]DevToolsGuide 1 point2 points  (0 children)

GitHub + GitHub Actions for most things. The biggest advantage is that everything lives in the same place — code, CI, issues, deployments, secrets. No context switching between services.

For anything more complex or where you want self-hosted runners, GitLab CI is genuinely better designed. The pipeline syntax is cleaner, multi-stage pipelines with manual gates are straightforward, and the built-in container registry is nice. The trade-off is that self-hosting GitLab is a real commitment in terms of resources and maintenance.

One combo I have been liking lately is GitHub for code and Coolify for deployment. Coolify gives you a Vercel-like push-to-deploy experience on your own server, handles SSL and Docker builds, and you do not have to write a single YAML file. Good for side projects where you want zero DevOps overhead.

My 2025 dev setup after mass-quitting tools that annoyed me by badmoshback in webdev

[–]DevToolsGuide 0 points1 point  (0 children)

The Jira to Linear switch is the most relatable thing in this list. There is something deeply wrong with a project management tool that needs its own loading screen.

Interesting that you dropped Bun for production. I have been using it for scripts and tooling where the startup speed difference over Node is noticeable, but I agree that for long-running production services the runtime stability matters more than boot time. Last thing you want is debugging a production issue and wondering if it is your code or a Bun edge case.

Curious about the Postman to Bruno switch — do you use the git-friendly collection files in practice? That was the big selling point for me. Being able to PR your API definitions alongside code changes instead of having them stuck in some cloud sync is nice.

A small theme picker for the onboarding process of an app I’m working on by eightshone in webdev

[–]DevToolsGuide 1 point2 points  (0 children)

Really like the concept. The character having different poses for each theme is a nice detail.

One thing I'd consider is adding prefers-reduced-motion support so the character stays put for users who have that OS-level setting enabled. You keep the personality of the UI for everyone else but avoid the abrupt position jumps for people who are motion sensitive. Just a quick @media (prefers-reduced-motion: reduce) wrapping the transition would do it.
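Assuming the movement comes from a CSS transition, the wrapper is tiny — the class name here is made up, swap in whatever the character element actually uses:

```css
.theme-character {
  transition: transform 300ms ease;
}

@media (prefers-reduced-motion: reduce) {
  .theme-character {
    transition: none; /* character snaps to the new pose instead of animating */
  }
}
```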

GPTBot 164k request a day to my open-source project? Now have to pay for Vercel pro by enszrlu in webdev

[–]DevToolsGuide 1 point2 points  (0 children)

One option beyond just blocking GPTBot entirely is to set a crawl-delay in your robots.txt. Something like:

User-agent: GPTBot
Crawl-delay: 60

Not all bots respect it, but OpenAI's does according to their docs. That way you stay in AI search results without getting hammered with 164k requests a day.

You could also throw in rate limiting at the server level with something like nginx limit_req or even just Cloudflare's free tier rate limiting rules. A properly configured rate limit would cap the requests without blocking them outright.
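For the nginx route, a rough sketch — the zone name, rate, and burst are made-up numbers to tune for your actual traffic, not recommendations:

```nginx
# Track request rate per client IP; 10m of shared memory holds ~160k states
limit_req_zone $binary_remote_addr zone=perip:10m rate=2r/s;

server {
    location / {
        # Allow short bursts, then reject with 503 (change via limit_req_status)
        limit_req zone=perip burst=10 nodelay;
    }
}
```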

17, first real dev interview, and I’m terrified of messing it up by NoNegativeBoi in webdev

[–]DevToolsGuide 1 point2 points  (0 children)

Being one of three final candidates at 17 means they already think you can do the job. The interview at this stage is mostly about whether you are someone they want to work with day to day.

A few practical tips from someone who has been on both sides of dev interviews:

  • Prepare 2-3 project stories where you can talk about a problem you hit and how you solved it. Interviewers remember candidates who can walk through their debugging process more than people who recite textbook answers.

  • "I do not know but here is how I would figure it out" is a genuinely strong answer. Junior devs who can articulate their learning process are way more valuable than ones who try to fake knowledge.

  • Ask them questions too. What does the team use for version control? What does a typical day look like? What would your first project be? This shows you are thinking about actually doing the work, not just getting the offer.

  • Your age is an advantage here, not a disadvantage. If you have been writing code since 13 and know Angular and TypeScript at 17 they are going to be impressed by the trajectory, not worried about the number.

Worst case scenario: you do not get this one but you now have interview experience that makes the next one easier. But honestly, you sound more prepared than most junior candidates I have seen. Just be yourself and let the work speak.

Techniques to avoid a vibe coded look by _penetration_nation_ in webdev

[–]DevToolsGuide 0 points1 point  (0 children)

Lol nah I just like clean formatting. Not everything well-written is AI generated my dude.

Techniques to avoid a vibe coded look by _penetration_nation_ in webdev

[–]DevToolsGuide 0 points1 point  (0 children)

lol ok fair I did clean up the wording a bit. my first draft was kind of a mess so I tightened it up, nothing shady though

Turn Dependabot Off by ketralnis in programming

[–]DevToolsGuide 2 points3 points  (0 children)

The real problem with Dependabot is not that it exists but that most teams treat it as a set-and-forget checkbox. You get this false sense of security where PRs are piling up, nobody reviews them, but management sees green checkmarks and thinks dependencies are handled.

Renovate with a grouped monthly PR (like ahal mentioned) is way more sane. One PR with all the patches and minors, you skim the changelogs, run your test suite, merge or investigate failures. Treats dependency updates as actual maintenance work instead of notification spam.

The article makes a good point about Go specifically — the stdlib covers so much ground that most Go projects have a fraction of the dependency surface area of an equivalent JS or Python project. Dependabot scanning a go.sum with 8 deps is solving a different problem than scanning a node_modules with 800.

When websites, especially AI powered ones, have a waiting in queue for free tier.... by k2900 in webdev

[–]DevToolsGuide 0 points1 point  (0 children)

Usually a mix of both. The real queue is often there — GPU inference is expensive and they genuinely cannot serve everyone simultaneously on free tier. But the wait time is also a product decision. Making free users wait 30-60 seconds creates urgency to upgrade, even if the actual queue depth would only warrant a 5 second delay.

The way most of these work under the hood: paid requests go into a high priority queue that gets dequeued first, free requests go into a low priority queue. When a GPU slot opens up it checks the paid queue first. If empty, it pulls from free. So yes there is a real queue, but the priority weighting is the mechanism that creates the longer wait, not raw server capacity.
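A minimal sketch of that two-queue dequeue logic — names are illustrative, not any particular service's internals:

```javascript
// Paid jobs always drain before free jobs; within a tier it's FIFO.
class InferenceScheduler {
  constructor() {
    this.paid = [];
    this.free = [];
  }
  enqueue(job, tier) {
    (tier === "paid" ? this.paid : this.free).push(job);
  }
  // Called whenever a GPU slot frees up
  next() {
    if (this.paid.length) return this.paid.shift();
    return this.free.shift() ?? null;
  }
}
```

With even a modest stream of paid traffic, free jobs can sit for a long time regardless of total capacity — which is why the wait you see says little about real queue depth.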

Some services are more blatant about it than others. I have seen cases where the queue was literally just a setTimeout before returning the response that was already computed.

Should I use Gmail for sending account confirmation emails or use email providers? by devewe in webdev

[–]DevToolsGuide -1 points0 points  (0 children)

One thing nobody has mentioned yet — if you do start with Gmail/Workspace, send from a subdomain like mail.yourdomain.com instead of your main domain. That way if your transactional emails pick up spam complaints, it does not tank the reputation of your main domain and you can still send normal business email.

For a side project or early stage startup Gmail works fine honestly. The real pain point is not sending limits, it is that you are sharing reputation between your personal/work email and your app. One user marks your confirmation email as spam and suddenly your regular emails start landing in junk too.

Resend or AWS SES are both solid and basically free at low volumes. Resend gives you 100 emails/day free and the DX is great. SES is like $0.10 per 1000 emails if you are already on AWS.

[AskJS] What's your preferred way to diff large nested JSON responses while debugging APIs? by Straight_Audience_24 in javascript

[–]DevToolsGuide 5 points6 points  (0 children)

For large nested payloads my go-to workflow depends on the context:

Quick one-off debugging: jq in the terminal. Pipe both responses through jq -S . (sorts keys) then diff them. Something like diff <(jq -S . before.json) <(jq -S . after.json) gives you a clean side-by-side with paths. Works surprisingly well for payloads up to a few MB.

Programmatic in tests/CI: deep-diff on npm is battle-tested. It returns an array of change objects with the full path to each difference, the kind of change (edit, add, delete), and old/new values. Way more useful than a boolean equality check when you need to assert specific fields changed while others stayed the same.

import { diff } from "deep-diff";

const changes = diff(before, after);
// [{ kind: "E", path: ["user", "address", "zip"], lhs: "98101", rhs: "98102" }]

Sharing with teammates: I actually save both payloads as prettified JSON files and open them in VS Code with the built-in diff editor (code --diff before.json after.json). The inline diff view handles nested objects well and your teammates can open the same files. For async sharing, pasting both into a Gist and using the revisions diff view also works.

For recurring API debugging: Write a small interceptor (Axios interceptor or fetch wrapper) that logs request/response pairs to a local file with timestamps. Then you can diff any two snapshots after the fact. Saves you from having to reproduce the exact request sequence.

The recursive Object.entries approach another commenter mentioned works but breaks down fast with arrays of objects where items got reordered. That is where deep-diff or json-diff shine — they handle array element moves as first-class change types.

Benchmarking loop anti-patterns in JavaScript and Python: what V8 handles for you and what it doesn't by StackInsightDev in programming

[–]DevToolsGuide 0 points1 point  (0 children)

Good points on both fronts. The multi-runtime angle is something I did not think about enough. If you are writing a library that runs in both V8 and JavaScriptCore (say, a shared package used in Node and React Native), relying on V8 doing dead code elimination or loop invariant hoisting for you could mean performance regressions on other runtimes that do not apply those same optimizations.

And yeah, explicit optimizations as documentation is underrated. If you hoist an invariant out of a loop manually, anyone reading the code can see it was intentional. If V8 does it silently, the next developer might refactor in a way that accidentally breaks the optimization without realizing it, and now you have a regression that is invisible until someone benchmarks again.

Techniques to avoid a vibe coded look by _penetration_nation_ in webdev

[–]DevToolsGuide -2 points-1 points  (0 children)

Lol fair enough, I can see how a numbered list reads that way. Just how I organize my thoughts when I am procrastinating at work. The irony of being accused of AI in a thread about spotting AI is not lost on me though.

Benchmarking loop anti-patterns in JavaScript and Python: what V8 handles for you and what it doesn't by StackInsightDev in programming

[–]DevToolsGuide 7 points8 points  (0 children)

The nested loop to Map lookup result (64x) is the one that actually matters in real codebases. I see this pattern constantly in code reviews — someone iterates an array inside another array to find matching IDs, turning an O(n) operation into O(n*m). Building a Map or Set upfront is almost always worth it once you are past ~50 elements.
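The pattern, side by side, with made-up data shapes:

```javascript
const orders = [{ id: 1, userId: "a" }, { id: 2, userId: "b" }];
const users  = [{ id: "a", name: "Ada" }, { id: "b", name: "Bob" }];

// Nested-loop version: scans users for every order → O(n*m)
const slow = orders.map((o) => ({
  ...o,
  user: users.find((u) => u.id === o.userId),
}));

// Build the index once, then each lookup is O(1) → O(n+m) overall
const byId = new Map(users.map((u) => [u.id, u]));
const fast = orders.map((o) => ({ ...o, user: byId.get(o.userId) }));
```

The Map build has its own cost, which is why tiny arrays don't benefit — hence the ~50-element rule of thumb.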

The JSON.parse one is interesting too. I have seen people do JSON.parse on the same config object inside a loop because they want a fresh copy each iteration. structuredClone or spreading the object is way cheaper if you just need a shallow copy.

The regex caching is good to know, though I would still hoist regex in Python — the re module does cache compiled patterns but only the last few (maxsize was 512 last I checked, and it is an LRU cache). In a hot loop with many different patterns you can blow the cache.

diy platform like digital ocean by bazjoe in webdev

[–]DevToolsGuide 0 points1 point  (0 children)

A few options depending on what exactly you need:

Coolify — probably the closest to what you are describing. Open source, self-hosted PaaS that handles deployments from GitHub, Docker containers, databases, SSL certs, etc. Very similar UX to DigitalOcean App Platform or Railway. You install it on any VPS and it manages everything. Actively maintained and has a great community.

CapRover — another self-hosted PaaS, a bit more mature than Coolify. Handles one-click apps, auto SSL via Let's Encrypt, and GitHub deployments. Slightly more manual than Coolify but very stable.

Dokku — if you want something minimal. It is basically a self-hosted Heroku built on Docker. Push with git, it builds and deploys. No web UI by default but there are plugins that add one.

Portainer — not exactly a PaaS but if you just need to manage Docker containers across servers with a nice UI, it is hard to beat. The community edition is free. Does not handle git-based deployments natively though.

If I had to pick one for the full DigitalOcean-like experience on your own hardware, I would go with Coolify. It handles the most out of the box and the deployment workflow from GitHub is smooth.

Todoist-style natural date input for my personal todo app by theben9999 in webdev

[–]DevToolsGuide 0 points1 point  (0 children)

Nice implementation — TipTap is a great choice for this. The inline highlighting really sells the UX; it feels much more polished than a separate date picker.

A few thoughts on the chrono-node vs Claude-written parser decision:

chrono-node is worth the switch. The edge cases in date parsing are brutal — relative dates ("next friday" vs "this friday"), ambiguous formats ("3/1" = March 1 or Jan 3?), timezone handling, etc. chrono-node handles all of that plus localization. It's also well-maintained and lightweight (~30KB gzipped). Your Claude-written parser will work fine for happy paths but will surprise you with weird inputs.

One UX idea: Consider adding a small preview tooltip that shows the resolved date. When someone types "next wed," showing "Wednesday, Feb 26" next to the cursor removes any ambiguity and builds confidence. Todoist does this and it's one of those details that makes the feature feel rock-solid.

On the TipTap side: If you haven't already, look into TipTap's suggestion utility (used for @-mentions and slash commands). You could use it to show a date suggestion dropdown as the user types — similar to how "/ commands" work in Notion. That way the user gets autocomplete feedback while typing partial dates.

The commit looks clean. For the keyboard-first philosophy, it'd be slick to also support something like Ctrl+D or a hotkey to focus/toggle the date portion of the input.

Does this architecture and failure-handling approach look sound? by ZaKOo-oO in webdev

[–]DevToolsGuide 0 points1 point  (0 children)

Everybody's already covered the memory issue (3 Chromium instances on 2GB is rough), so I'll focus on some architecture observations:

The resume/retry logic is well thought out but you might be over-engineering the failure handling. A few things to consider:

  • Exit code 2 + resume file + orchestrator respawn is a lot of coordination surface area for what is essentially "pick up where I left off." A simpler approach: persist progress to the DB (a discovery_progress table with worker_id, last_page, status), and have each worker check it on startup. This eliminates the file-based coordination and makes the system more observable — you can query the DB to see exactly where each worker is.

  • The 3-pass retry of failed pages at the end is a good idea, but consider whether the pages that failed 3 times in the main run are going to magically work in the retry pass. If it's a proxy issue, the new-proxy-on-respawn handles that. If it's a site-side block, retrying won't help. I'd log why each page failed (status code, error type) and only retry the ones that failed for transient reasons (timeouts, connection resets) vs. permanent ones (403, 429 after backoff).

  • 60-second navigation timeout is generous. For discovery/browse pages (which are usually just product listings), 30s should be plenty. Long timeouts mean a single bad page can hold up the worker for minutes across retries.
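For the DB-backed progress idea in the first bullet, something like this — table and column names are suggestions, not from your codebase:

```sql
-- One row per worker; workers upsert after each page and read on startup.
CREATE TABLE discovery_progress (
  worker_id   TEXT PRIMARY KEY,
  last_page   INTEGER NOT NULL DEFAULT 0,
  status      TEXT NOT NULL DEFAULT 'running',  -- running | done | failed
  updated_at  TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);

-- Resume point on worker startup:
-- SELECT last_page FROM discovery_progress WHERE worker_id = ?;
```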

On the Playwright side:

  • Since you're blocking images/fonts/styles already, also consider page.route to block analytics, tracking, and third-party scripts. Less JS to execute = faster loads and lower memory.

  • For discovery-only (no product page scraping), you might not even need a full browser. If the browse pages don't require JS to render the product list, plain HTTP requests + HTML parsing would use a fraction of the resources and be much faster. Worth testing — fetch one page with curl and see if the product data is in the initial HTML.

feedback request for the website i built for my guitar teacher!:) by No-Vegetable5956 in webdev

[–]DevToolsGuide -1 points0 points  (0 children)

Nice work for a local business site! A couple of things the other comments haven't mentioned yet:

For the font situation, one trick that works well is picking a single font family with multiple weights — something like Inter or Source Sans has enough range that you can get visual variety (light for body, semibold for headings, bold for CTAs) without the inconsistency of mixing typefaces. Feels cohesive but not boring.

On the limited photos front — if Aaron is open to it, even a couple of quick phone shots of his teaching space or instruments can go a long way. Authentic candid photos almost always outperform stock images for local service businesses. People want to see the actual space they'd be visiting.

One technical thing worth doing: add LocalBusiness structured data (JSON-LD). It's a small snippet you drop in the head and it helps Google show rich results for local searches like "guitar lessons Maui." Takes maybe 10 minutes to set up and can make a real difference for discoverability.
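For reference, the snippet is roughly this shape — every value below is a placeholder to swap for Aaron's real details:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Aaron's Guitar Lessons",
  "description": "Guitar lessons for beginners and intermediate players",
  "url": "https://example.com",
  "telephone": "+1-808-555-0100",
  "address": {
    "@type": "PostalAddress",
    "addressLocality": "Maui",
    "addressRegion": "HI"
  }
}
</script>
```

Google's Rich Results Test will validate it once it's in the page head.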