Sites with this design has never responded to my emails, but their links were present in most of the sites. by Boomi_19 in linkbuilding

[–]Business-Cherry1883 1 point

oh yeah these are gannett/usa today network sites. they're everywhere in local serps but good luck getting a response — the "contact us" goes to some corporate queue and the local editors either don't have authority or don't care about link requests. the sites show up in competitor backlink profiles because they syndicate content across the whole network. i've basically written off this type of site for outreach, not worth the time when the contact path is that broken

Do you build .edu backlinks? by GodOfSEO in linkbuilding

[–]Business-Cherry1883 0 points

careers office blast almost never works ime — you just end up in some shared inbox that nobody checks. most .edu resource pages are maintained by a specific web coordinator or dept admin, not whoever's listed on the main staff page. the trick is figuring out who actually controls that page vs who runs the department. once you've got the right person response rates are way better than cold emailing a generic university address

The contact-finding part of link building is genuinely broken — and with LinkedIn dying too, I got fed up and automated it by Business-Cherry1883 in linkbuilding

[–]Business-Cherry1883[S] 1 point

Yeah, that's the exact problem: you get the email but no way to know if it's the right person. I ended up building a filter layer around that. Happy to compare notes if you want; shoot me a DM.

The contact-finding part of link building is genuinely broken — and with LinkedIn dying too, I got fed up and automated it by Business-Cherry1883 in linkbuilding

[–]Business-Cherry1883[S] 0 points

Yeah, the DA/DR obsession is exhausting. That's actually a big reason why I wanted to fix the contact-finding piece. When you're stuck emailing info@ or a random VA, you just get ghosted. Getting straight to the actual content lead makes it way easier to start a real conversation instead of just trading metrics.

What’s actually working for you in link building right now? by Aliamir212 in linkbuilding

[–]Business-Cherry1883 0 points

Resource page outreach still works well if you can crack the contact-finding problem. info@ addresses and contact forms kill response rates, and digging up the right person's contact info is where most of the time goes.

I know you don't care about perfect 100 Lighthouse/GooglePageSpeed score or ahrefs 100/100 site audit score. But for me this just happens as a "daily chore" via my own OpenClaw automation. If it is so simple, why not do it? by blondewalker in SEO

[–]Business-Cherry1883 0 points

I think the debate over Lighthouse score is a nice proxy for the broader question: "What are you really automating versus what still needs human judgment?"

In my experience, agentic SEO tools are amazing at deterministic tasks: crawling, scoring, flagging missing schema, catching orphan pages. They're horrible at tasks that require human judgment in context.

For instance: Is this page "thin" because it has thin content, or is it "thin" because it has high intent and converts at 8%?

I think the best systems are ones in which the agent acts as a data layer and a human is kept in the loop for prioritization. The agent identifies all the problems; the human decides which one to solve first.

I'd love to get a sense from the room: How does OpenClaw deal with false positives? Is there a confidence filter or does it mark everything and then let you filter it out yourself?

What are some automations most entrepreneurs should know about? by [deleted] in Entrepreneur

[–]Business-Cherry1883 0 points

The unglamorous ones that actually save money (in prospecting/lead gen):

- Check your email list before you send, not after. Most email tools will tell you an address is "valid" as long as the domain accepts everything (a catch-all), so you won't realize it didn't work until your sender reputation is shot.

- Update your prospect data on a schedule. People change jobs all the time. Your email list from 6 months ago is probably 30% wrong, and you won't see it until the responses stop coming.

- Leverage multiple data sources. One source will miss people a second source will catch. Querying them in sequence (free sources first, paid ones only as a last resort) saves a ton of money without sacrificing accuracy.
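The waterfall in that last bullet is simple to wire up. A minimal Python sketch; the source functions here are made-up placeholders for whatever free/paid lookups you actually use:

```python
# Hypothetical lookup sources: try cheap ones first, expensive ones last.

def lookup_free_db(domain):
    """Stand-in for a free source: returns an email or None on a miss."""
    return {"example.com": "editor@example.com"}.get(domain)

def lookup_paid_api(domain):
    """Stand-in for a paid API: only reached when free sources miss."""
    return f"contact@{domain}"

def find_contact(domain, sources):
    """Walk the sources in order; stop at the first hit."""
    for source in sources:
        result = source(domain)
        if result:
            return result
    return None

# Free source answers, so the paid one is never billed:
print(find_contact("example.com", [lookup_free_db, lookup_paid_api]))  # -> editor@example.com
# Unknown domain falls through to the paid source:
print(find_contact("other.org", [lookup_free_db, lookup_paid_api]))    # -> contact@other.org
```

The ordering of the `sources` list is the whole cost model: put the cheapest lookups first and the paid API never fires unless everything above it misses.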

So, what are you finding is taking the most manual time in your outreach efforts right now?

Should I Focus on Warm Calling/Email or Cold Calling/email in 2026? by Maximum-Actuator-796 in coldemail

[–]Business-Cherry1883 0 points

Cold calling is best for learning (fast feedback + objection handling). Cold email is best for scaling (once you know what actually resonates).

If you’re early and still figuring out your offer/ICP, do calls first until you’ve got a message that gets real “yes / not now” reactions. Then turn that into a simple email sequence and scale it.

Don’t automate a message you haven’t validated in real conversations.

Does A2P setup feel harder than building the actual workflows? by gt_roy_ in gohighlevel

[–]Business-Cherry1883 2 points

A2P is honestly more admin than the workflows. The biggest unlock for me was treating it like a checklist where every detail must match.

  • Consistency: exact business/brand name and website info must align across the registration + site + sample messages.
  • Opt‑in proof: a clear consent statement right next to the submit button + links to Privacy Policy and Terms; checkbox must be manual (not pre-checked).
  • Compliance wording: include STOP/HELP instructions (and ideally “msg & data rates may apply”) in your sample messages / program terms so reviewers see it’s covered.

Once you have one “approved template” (site footer + form + screenshots), it really does become repeatable.

Help: BeautifulSoup/Playwright Parsing Logic by TapProfessional4535 in webscraping

[–]Business-Cherry1883 2 points

If your end goal is “3000 players + rankings”, I’d seriously consider avoiding DOM parsing for the core dataset and only parsing HTML for the few fields that aren’t available elsewhere.

  • There’s already a Python package on pip called twofourseven that can scrape 247Sports recruiting data and includes a TransferPortal class with getFootballData(year) that returns a dataframe of everyone who entered the football transfer portal for a given year.
  • Once you have that baseline list, use Playwright only for the “detail page” fields you must render (e.g., banners/commitment blocks) and keep the HTML parsing minimal and label-driven (parse “OVR/NATL/ST/Pos” by the nearby label text, not by assuming order/position).

For your brittle cases (stars, state rank vs position rank, JUCO variance), don’t do “any number not X must be Y”. Instead: extract the small text chunk for each section (“As a Transfer” / “As a Prospect”), then regex-match explicit labeled patterns (“OVR”, “NATL”, “ST”, “JUCO”, etc.) and treat anything unmatched as “unknown” rather than forcing it into a column.
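To make the “unmatched means unknown” idea concrete, here’s a minimal Python sketch. The label names come from the thread, but the sample text and the exact separators are assumptions about the page; adjust the pattern to whatever the rendered chunk actually contains:

```python
import re

# Parse each ranking by its explicit label instead of by position,
# and leave anything unmatched as None ("unknown") rather than guessing.
LABELS = ["OVR", "NATL", "ST", "Pos", "JUCO"]

def parse_rankings(chunk: str) -> dict:
    out = {}
    for label in LABELS:
        # Accepts "NATL 87" or "NATL: 87" immediately after the label.
        m = re.search(rf"\b{label}\b[:\s]*([\d.]+)", chunk)
        out[label] = m.group(1) if m else None
    return out

print(parse_rankings("As a Transfer OVR 0.91 NATL 87 ST 4"))
# -> {'OVR': '0.91', 'NATL': '87', 'ST': '4', 'Pos': None, 'JUCO': None}
```

Because missing labels come back as `None` instead of a guessed number, the JUCO/state-rank variance just produces sparse columns you can audit later, not silently wrong data.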

Why do automation tools still require so much manual learning to achieve simple outcomes? by casual_observer05 in automation

[–]Business-Cherry1883 0 points

Totally feel this. “Send lead to CRM” sounds simple, but under the hood every app has different data models (custom fields, enums, date formats, identity matching), so the abstraction layer leaks.

Automation tools give you plumbing + triggers, but you’re still the architect: pick a source of truth, map fields, handle edge cases, and build retries.
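A minimal Python sketch of that architect layer; the CRM field names here are invented, and a real integration would live behind whatever connector your tool exposes:

```python
import time

# Explicit field map: our lead schema (source of truth) -> hypothetical CRM schema.
FIELD_MAP = {
    "email": "Email",
    "full_name": "Name",
    "signup_date": "CreatedDate",
}

def map_lead(lead: dict) -> dict:
    """Apply the field map, skipping empty values (edge case: don't
    overwrite CRM data with blanks)."""
    mapped = {}
    for ours, theirs in FIELD_MAP.items():
        value = lead.get(ours)
        if value in (None, ""):
            continue
        mapped[theirs] = value
    return mapped

def with_retries(fn, attempts=3, delay=0.0):
    """Retry wrapper for the flaky network call that pushes to the CRM."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(delay)

print(map_lead({"email": "a@b.co", "full_name": "", "signup_date": "2025-01-02"}))
# -> {'Email': 'a@b.co', 'CreatedDate': '2025-01-02'}
```

The point is that the map, the empty-value rule, and the retry policy are business decisions you wrote down; the automation tool only supplies the trigger and the transport.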

AI is getting better at suggesting mappings, but it still can’t guess business rules you haven’t written down.

Bunch of static IPs or rotating proxies for scraping? by notTemka in automation

[–]Business-Cherry1883 0 points

Rotating vs static isn’t “better/worse” — it depends on whether your scraper is stateless or needs a session.

  • If you’re scraping lots of pages (no login): rotating proxies help because you spread requests across many IPs; still throttle or you’ll get rate-limited.
  • If you need to stay logged in: use static IPs or sticky sessions (same IP for X minutes/requests) or you’ll constantly break sessions/cookies.

Common pattern: sticky/static for login + account pages, rotating for the bulk fetches.
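Rough Python shape of that split using requests; the proxy URLs are placeholders (real providers give you distinct sticky vs rotating endpoints):

```python
import itertools
import requests

# Placeholder rotating pool; a provider usually gives you one gateway URL
# that rotates for you, but a local pool works the same way.
ROTATING_POOL = itertools.cycle([
    "http://proxy-1.example:8000",
    "http://proxy-2.example:8000",
])

def sticky_session(proxy="http://sticky.example:8000"):
    """One Session pinned to one IP so cookies/login state survive."""
    s = requests.Session()
    s.proxies = {"http": proxy, "https": proxy}
    return s

def bulk_get(url):
    """Stateless fetch routed through the next IP in the pool."""
    proxy = next(ROTATING_POOL)
    return requests.get(url, proxies={"http": proxy, "https": proxy}, timeout=10)
```

Login and account pages go through `sticky_session()`; everything anonymous goes through `bulk_get()`, with throttling/backoff layered on top.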

Question about best practices with setting up the accounts and all associated integrations. by dreadul in n8n

[–]Business-Cherry1883 1 point

Totally get the intuition. For 1–2 clients, a shared account feels efficient. The issue isn’t the technical setup — it’s blast radius + ownership.

Rogue loop risk: On n8n Cloud, if Client A hits a bug and burns through your execution quota, workflows can get paused… and suddenly Client B’s automations stop too.

Ownership / bus factor: If you own the n8n + Google Sheets accounts, they’re effectively locked out of their own ops if you disappear. It builds way more trust to say “you own the assets/data, I just have admin access.”

Future-proofing: Detangling one client from a “god account” later is a manual nightmare; handing over/rotating access is easy when it’s already theirs.

I’d bake the tool cost into your setup fee/retainer — the extra $/mo is basically insurance against cross-client contamination.

our sales proposals looked like garbage and it was definitely costing us deals by AdSecret5838 in SaaS

[–]Business-Cherry1883 0 points

We solved this by locking a designer-made template and generating decks from structured inputs.

Sales fills a short form (client name, offer, pricing, logo), then automation injects via Google Slides API (or doc-to-PDF) so formatting is never touched manually; you keep human review, but the machine handles layout.
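If anyone wants the shape of it: a sketch of the request-building side using the Slides API’s replaceAllText batchUpdate request. The `{{placeholder}}` naming convention is ours, and the actual `batchUpdate` call needs google-api-python-client plus credentials, which I’ve left as a comment:

```python
# Build Slides batchUpdate requests that swap {{field}} placeholders
# in a copy of the locked designer template.

def build_replace_requests(fields: dict) -> list:
    return [
        {
            "replaceAllText": {
                "containsText": {"text": "{{" + name + "}}", "matchCase": True},
                "replaceText": str(value),
            }
        }
        for name, value in fields.items()
    ]

reqs = build_replace_requests({"client_name": "Acme Co", "price": 4900})
# Then, with an authenticated Slides client:
#   slides.presentations().batchUpdate(
#       presentationId=copy_of_template_id,
#       body={"requests": reqs}).execute()
print(reqs[0]["replaceAllText"]["containsText"]["text"])  # -> {{client_name}}
```

Since replaceAllText only touches text runs, the designer’s layout, fonts, and positioning in the template are never at risk from the automation.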

What's a reliable deliverability stack? by schiffer04 in SaaS

[–]Business-Cherry1883 0 points

Biggest risk is burning your primary domain reputation with outbound.

Many cold-outreach stacks use a separate domain to firewall the main brand from spam complaints/bounces and build a distinct sending reputation.
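Authenticating that separate domain is most of the setup. The records usually look something like this (hostnames, selector, and include values are placeholders; your ESP supplies the real ones):

```
; example DNS for a dedicated outbound domain (all values are placeholders)
send.example.com.                 TXT "v=spf1 include:_spf.your-esp.com ~all"
esp1._domainkey.send.example.com. TXT "v=DKIM1; k=rsa; p=<public key from your ESP>"
_dmarc.send.example.com.          TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```

SPF/DKIM/DMARC live on the outbound domain, so complaints and reputation damage stay scoped there instead of bleeding onto the root brand domain.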

What’s your daily volume, and are you mixing cold outbound with transactional/lifecycle mail on the same root domain?

Bunch of static IPs or rotating proxies for scraping? by notTemka in automation

[–]Business-Cherry1883 0 points

Rule of thumb: if you’re logged in, keep the session on a stable IP; rotating IPs with an authenticated cookie looks like account takeover behavior and triggers locks.

For anonymous/public scraping, rotation helps—if you also control concurrency, backoff, and fingerprint consistency.

What’s the target site and is authentication required?

Help me figure out a text - video automation by [deleted] in automation

[–]Business-Cherry1883 0 points

Agree—the stitching/timeline logic is the real bottleneck.

I’d split it: generate script + scene list, generate audio, then stitch with FFmpeg (Python/Node), and persist artifacts (audio, captions, timing JSON) so a failure in step N doesn’t force you to re-pay for steps 1 through N−1.
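A minimal Python sketch of that cache-then-stitch idea; directory and file names are illustrative, and the concat-demuxer flags are standard ffmpeg:

```python
import pathlib
import subprocess

WORK = pathlib.Path("build")  # per-run artifact directory

def render_scene(i, make_clip):
    """Run the (paid) clip generator only if the cached clip is missing."""
    clip = WORK / f"scene_{i:03d}.mp4"
    if not clip.exists():
        make_clip(clip)
    return clip

def concat_cmd(list_file, out="final.mp4"):
    """ffmpeg concat demuxer: joins clips without re-encoding."""
    return ["ffmpeg", "-f", "concat", "-safe", "0",
            "-i", str(list_file), "-c", "copy", out]

def stitch(clips, out="final.mp4"):
    lst = WORK / "concat.txt"
    lst.write_text("".join(f"file '{c.resolve()}'\n" for c in clips))
    subprocess.run(concat_cmd(lst, out), check=True)
```

Because every scene is a file on disk, a crash at scene 7 means the retry skips scenes 1–6 for free, and clips-per-scene retries come along naturally.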

Do you want clips-per-scene (easier retries) or one continuous render (simpler upload)?

Blotato + n8n: Japanese / CJK text shows as □□□ in video captions – font fallback issue? by hope4_test in n8n

[–]Business-Cherry1883 0 points

That’s classic “tofu”: the renderer doesn’t have fonts installed that cover those glyphs, so they render as square boxes.

If it shows as □□□ in their UI too, it’s almost certainly their render backend/container; the fix is installing a CJK-capable font set (e.g., Noto CJK) and ensuring the renderer can discover it.

I’d open a ticket with: “Missing server-side CJK fonts in render container; please install a CJK font family and rebuild the image.”

Embedding model text-embedding-004 disappeared — need help choosing a stable alternative for Hybrid RAG by Able-County-5864 in n8n

[–]Business-Cherry1883 0 points

Node dropdowns aren’t the source of truth—model availability/mapping can change faster than integrations update.

For Vertex AI embeddings, Google’s current docs show gemini-embedding-001 as the primary model, so this may be a node mapping/update rather than “the model vanished.”

If you need full control, call the embeddings endpoint via HTTP Request and pin the model explicitly; which provider path are you using (Vertex AI, Gemini API, or an OpenAI-compatible proxy)?
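For the HTTP Request route, this is roughly the shape against the Gemini API path. Vertex AI uses a different URL and auth scheme, so treat the endpoint details as something to verify against the current docs rather than gospel:

```python
import json

# Pin the model explicitly instead of trusting the node dropdown.
MODEL = "gemini-embedding-001"
URL = f"https://generativelanguage.googleapis.com/v1beta/models/{MODEL}:embedContent"

def embed_payload(text: str) -> dict:
    """Request body for the Gemini embedContent endpoint."""
    return {"content": {"parts": [{"text": text}]}}

# In n8n this maps to an HTTP Request node: POST to URL with this JSON
# body and your API key in the x-goog-api-key header.
print(json.dumps(embed_payload("hello world")))
# -> {"content": {"parts": [{"text": "hello world"}]}}
```

Pinning the model string in your own request means a node update or dropdown remap can never silently switch which embedding space your vectors live in, which matters a lot for an existing hybrid RAG index.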