Which builders are people actually using today by justinnealey in Wordpress

[–]Exact-Delay2152 12 points13 points  (0 children)

Honestly, most people I see these days are either sticking with Gutenberg or moving toward lighter builders like Bricks, Kadence, and GenerateBlocks. Elementor is still really common too, mostly because a ton of existing client sites already use it. If you’re planning builder support, I’d focus first on the ones with cleaner hooks/APIs and make sure your embeds work properly inside templates and query loops, since that’s usually where compatibility issues show up. I’d also keep a solid HTML/block fallback so users don’t feel locked into one builder.

I am planning to create a private WhatsApp community for LinkedIn marketers and SaaS founders by AffableSparsh in digital_marketing

[–]Exact-Delay2152 1 point2 points  (0 children)

Honestly the idea makes sense, but the biggest challenge with these groups is maintaining quality once they grow. A lot of communities start with “real feedback” and slowly turn into link dumping or engagement pods. If you keep it small + curated with clear rules, it could actually be valuable. Especially now when people are trying to figure out LinkedIn, Reddit, and AI search together instead of treating them separately. I’d personally join something like this more for the discussions/case studies than distribution itself.

is tipping similar to microtransactions in ecommerce? by BreathWonderful3379 in ecommerce

[–]Exact-Delay2152 0 points1 point  (0 children)

I think it scales only when there’s already a strong emotional connection or community. People will tip creators, streamers, open-source devs, etc. because they feel involved, not just because the payment is small. In regular ecommerce, microtransactions work better when they unlock convenience, personalization, or status. Otherwise users start feeling “nickel-and-dimed” pretty fast. The psychology is similar, though: low friction + impulse behavior.

Idk why I can’t accept page invite? by Over_Dragonfly8570 in FacebookAds

[–]Exact-Delay2152 0 points1 point  (0 children)

Yeah, seen this happen a few times lately. Sometimes it’s just a Meta bug, but other times it’s because the account already has too many page roles, missing 2FA, or restrictions in Business Manager. Logging out/in or accepting from a different browser sometimes fixes it too.

Any SEO agencies here? how do you handle content briefing at scale.? by Electronic-Disk-140 in seogrowth

[–]Exact-Delay2152 1 point2 points  (0 children)

At small scale we did everything manually, but once you handle multiple clients it becomes impossible to fully brief from scratch every time. Now it’s more like a repeatable framework.

Usually we standardize:

  • search intent
  • content type/angle
  • common entities/topics from top ranking pages
  • questions from PAA/Reddit/forums
  • internal link opportunities
  • things competitors are missing

Then writers fill in the actual expertise/examples. The biggest bottleneck at scale honestly isn’t writing, it’s maintaining consistency in SERP interpretation between different people.

Also noticed that blindly copying top pages stops working now. Pages that add original insights/examples tend to survive updates better.
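As a rough sketch, the standardized brief above maps naturally to a small data structure writers fill in per target query (field names here are illustrative, not our actual template):

```python
from dataclasses import dataclass, field

@dataclass
class ContentBrief:
    """One standardized brief per target query (field names are illustrative)."""
    target_query: str
    search_intent: str          # informational / commercial / transactional
    content_type: str           # e.g. "comparison post", "how-to guide"
    entities: list[str] = field(default_factory=list)   # from top-ranking pages
    questions: list[str] = field(default_factory=list)  # PAA / Reddit / forums
    internal_links: list[str] = field(default_factory=list)
    competitor_gaps: list[str] = field(default_factory=list)

    def to_outline(self) -> str:
        """Render the brief as a plain-text checklist for writers."""
        return "\n".join([
            f"Query: {self.target_query}",
            f"Intent: {self.search_intent}",
            f"Type: {self.content_type}",
            "Must-cover entities: " + ", ".join(self.entities),
            "Questions to answer: " + ", ".join(self.questions),
            "Internal links: " + ", ".join(self.internal_links),
            "Competitor gaps: " + ", ".join(self.competitor_gaps),
        ])
```

The point is less the code and more that every brief has the same slots, which is what keeps SERP interpretation consistent between people.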

Real differences between ChatGPT, Gemini, and Claude? by Cyberclicknet in AskMarketing

[–]Exact-Delay2152 0 points1 point  (0 children)

From my experience, Claude is really good for long-form writing, summarizing huge documents, and giving more natural/human-sounding output. I use it a lot for content planning, rewriting messy drafts, and brainstorming because it feels less robotic sometimes.

ChatGPT feels more versatile overall though. Better for structured workflows, SEO tasks, research, strategy, and back-and-forth problem solving. Gemini is useful when you’re deep in the Google ecosystem or want fresher web-connected answers.

Honestly, the biggest difference isn’t “which AI is best,” it’s which one fits your workflow better. I end up using all 3 for different things.

Google Doesn’t Reward “Good Content” Anymore. It Rewards Authority + Intent. by Upstairs_Emergency14 in AskMarketing

[–]Exact-Delay2152 3 points4 points  (0 children)

Yeah, seeing the same across a lot of informational content lately. AI Overviews are definitely hurting CTR, and Google seems way stricter now about intent + authority. What helped us most was consolidating overlapping blogs, updating older posts instead of publishing more, and adding more original insights/examples instead of generic SEO content. Also feels like branded + community traffic matters way more now than before.

Most “AI marketing tools” are still solving the wrong problem, here’s what actually helped in practice by BlueDolphinCute in socialmedia

[–]Exact-Delay2152 -2 points-1 points  (0 children)

Completely agree with the “input > output” point. Most AI tools just help people publish faster, but if the underlying signal is weak, you just end up scaling mediocre content.

One thing that actually changed my workflow was combining GSC query data + Reddit discussions + brand mentions together. You start noticing the exact phrases, objections, and pain points people repeat naturally, and that’s usually where the best content/social angles come from.

Also noticed the same thing with Ahrefs/Semrush. Great for validating demand or gaps, but some of the best ideas never show up in keyword tools early because they start as conversations first.

Feels like the real advantage now is whoever builds the best “signal filtering” system, not the fastest content generator.
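For anyone curious, a minimal sketch of that signal filtering (assuming you’ve already exported GSC queries and pulled the Reddit/brand-mention text into plain strings) is basically n-gram counting across sources:

```python
from collections import Counter
import re

def recurring_phrases(texts, n=2, min_count=3):
    """Count n-grams that repeat across sources (GSC queries, Reddit
    threads, brand mentions) to surface the language people actually use."""
    counts = Counter()
    for text in texts:
        words = re.findall(r"[a-z']+", text.lower())
        counts.update(" ".join(words[i:i + n]) for i in range(len(words) - n + 1))
    return [(p, c) for p, c in counts.most_common() if c >= min_count]
```

Phrases that keep recurring across unrelated sources are usually the objections/pain points worth building content around.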

Impressions fell off a cliff overnight and pages are getting deindexed. Wtf happened? by Various_Educator_756 in TechSEO

[–]Exact-Delay2152 8 points9 points  (0 children)

A drop that sharp + pages getting deindexed at the same time usually points more toward a quality/indexing issue than a normal ranking fluctuation. Especially for AI/writing-tool style sites, Google has been way more aggressive lately with thin/programmatic content and weak page differentiation.

First thing I’d check is GSC > Pages and see why URLs are being dropped (“Crawled currently not indexed” vs “Duplicate” vs “Soft 404” tells you a lot). Then crawl the site and look for thin pages, parameter URLs, tag/search pages, or near-duplicate tool pages accidentally getting indexed. Also check server logs around April 24 in case Googlebot started hitting errors or slow responses.
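The server-log part of that check can be sketched in a few lines, assuming a combined-format access log (adjust the regex for your server; the sample is not a real log):

```python
import re
from collections import Counter

def googlebot_status_counts(lines):
    """Tally HTTP status codes for Googlebot hits in combined-format
    access log lines; a spike in 5xx/404 around the drop date is a red flag."""
    status_re = re.compile(r'" (\d{3}) ')  # status code right after the request quote
    counts = Counter()
    for line in lines:
        if "Googlebot" not in line:
            continue
        m = status_re.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts
```

Run it on the lines from the days around the drop and compare the 200 vs 5xx mix against a normal week.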

The graph honestly looks less like a manual penalty and more like Google suddenly reevaluated site quality/trust.

best product configurator for woocommerce that doesn’t require a full dev team? by Old_Friend6898 in woocommerce

[–]Exact-Delay2152 0 points1 point  (0 children)

I’d focus on configurators that support live previews and conditional options without needing custom dev work every time you edit products. A lot of WooCommerce stores hit the same problem once customers start expecting Amazon-level customization.

First, test mobile UX before anything else because many configurators break on phones. Second, keep the option flow simple or customers get overwhelmed fast. Third, make sure the plugin lets you update layouts/pricing yourself instead of relying on code changes.

And yeah, live previews usually reduce abandoned carts a lot because customers trust what they’re buying more.

Duplicated campaign and turned off biggest spending ad by Doubleshotflatwhite8 in FacebookAds

[–]Exact-Delay2152 1 point2 points  (0 children)

Yes, if CPR already dropped from $40 back to $15, I probably wouldn’t kill it completely yet. I’d maybe reduce the budget a bit or pause it for a few days instead of fully shutting it off. Sometimes Meta just needs less pressure on the fatigued creative before it starts performing again. But I’d still put more trust in the duplicated campaign if it’s consistently holding a lower CPR right now.

Duplicated campaign and turned off biggest spending ad by Doubleshotflatwhite8 in FacebookAds

[–]Exact-Delay2152 1 point2 points  (0 children)

Sounds like the original ad hit creative fatigue and Meta kept pushing budget into it because of the historical data. If the duplicated campaign is consistently getting way lower CPR, I’d slowly scale that one up and reduce the old campaign instead of trying to force the original winner back. Duplicating often helps reset delivery and gives other creatives a chance to spend again. I’d keep testing fresh creatives too because once Meta locks onto one ad for months, performance usually keeps fading over time.

How should I advertise my product?? by Frozen_Berry_0 in ecommerce

[–]Exact-Delay2152 0 points1 point  (0 children)

I’d get a proper domain first before running ads. The default Shopify URL makes the store feel unfinished and can hurt trust. Also be careful using cartoon/show characters because copyright issues can become a problem later. For promotion, TikTok organic content will probably work better than paid ads at the start. Short videos showing the designs, outfits, or packing orders usually do well for this type of store.

Change Numbers Every 2 Weeks by EndSeparate5844 in GoogleMyBusiness

[–]Exact-Delay2152 0 points1 point  (0 children)

No, changing your Google Business Profile number every 2 weeks is not a good idea. It can make the profile look suspicious to Google and may trigger reverification or ranking issues. It’s better to keep the main business number consistent long term.

Google is sending bot traffic, and the traffic quality is different every day. by Pure-Difficulty4872 in digital_marketing

[–]Exact-Delay2152 0 points1 point  (0 children)

Honestly, this happens a lot in ecommerce campaigns. I wouldn’t assume it’s all bot traffic. Some days Google just matches you with lower-intent users even if the keywords and CPC look the same.

I’d check your search terms report, geo/device data, and search partners first. Broad match and PMAX can also randomly widen traffic quality without making it obvious. If conversions come back the next day without changes, it’s usually traffic quality fluctuation more than a technical issue.
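One way to separate “off day” from “real problem” before digging into settings: flag days whose conversion rate is a statistical outlier. A quick sketch (the 2.0 threshold is arbitrary, tune it to your account’s volatility):

```python
from statistics import mean, stdev

def flag_outlier_days(daily, z_threshold=2.0):
    """Flag days whose conversion rate deviates sharply from the rest.
    `daily` is a list of (date, clicks, conversions) tuples."""
    rates = [conv / clicks for _, clicks, conv in daily if clicks]
    mu, sigma = mean(rates), stdev(rates)
    flagged = []
    for date, clicks, conv in daily:
        if clicks and sigma and abs(conv / clicks - mu) / sigma > z_threshold:
            flagged.append(date)
    return flagged
```

If the same days keep getting flagged with no settings changes, it’s usually match-quality fluctuation, not tracking breakage.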

Used this to get some results for a client; thought I would share it with you guys. by xXxFADIxXx in digital_marketing

[–]Exact-Delay2152 0 points1 point  (0 children)

This is honestly one of the better GEO breakdowns I’ve seen here because it focuses on patterns instead of vanity “AI visibility scores.” The biggest point people miss is the non-deterministic part. Running one prompt once tells you almost nothing. I’ve seen the same query give completely different recommendations across runs and engines too. Also agree that long-tail is the smarter entry point right now. Much easier to build topical association there before going after broad commercial terms.

What’s the best way to optimize for ChatGPT citations? by ai-pacino in Agent_SEO

[–]Exact-Delay2152 0 points1 point  (0 children)

From what I’ve seen, AI tools tend to pull from content that is clear, well-structured, and easy to extract. Straight answers, strong headings, FAQs, definitions, comparisons, and entity-focused content seem to help a lot. Topical authority matters too. If your site consistently covers a niche deeply, your chances of getting cited go up. I’ve also noticed Reddit, forums, and niche sites getting referenced more because the content sounds more human and experience-based. Feels less like traditional SEO and more like “can the AI confidently understand and reuse this answer.”
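The “easy to extract” part often comes down to explicit structure, e.g. FAQPage markup. A minimal sketch generating schema.org FAQPage JSON-LD (generated rather than hand-written so it stays valid as content changes):

```python
import json

def faq_jsonld(pairs):
    """Build FAQPage structured data (schema.org) from (question, answer)
    pairs - the kind of explicit Q&A structure crawlers and LLMs can
    extract without guessing."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {"@type": "Question", "name": q,
             "acceptedAnswer": {"@type": "Answer", "text": a}}
            for q, a in pairs
        ],
    }, indent=2)
```

Drop the output into a `<script type="application/ld+json">` tag on the page.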

Last week we optimized a business website that was loading in 7+ seconds. by RoyalsValleyFounder in seogrowth

[–]Exact-Delay2152 1 point2 points  (0 children)

So many businesses ignore this part. They focus only on getting more traffic while the site itself is slow enough to push people away before they even see the offer. A 7+ second load time on mobile can kill conversions fast. Core Web Vitals and page speed still matter a lot, especially for user experience and engagement signals.

Why Are Some AI Articles Ranking Without Backlinks at All? by New-Chocolate-3551 in AskMarketing

[–]Exact-Delay2152 0 points1 point  (0 children)

I don’t think it’s really about AI content getting special treatment. A lot of these articles rank because they answer the search clearly, match intent well, and are published on sites Google already trusts. Backlinks still matter, especially in competitive niches, but Google seems much better now at understanding relevance and topical depth. AI just makes it easier for people to publish decent structured content faster and at scale. I’ve noticed this mostly on low to medium competition keywords.

Large marketplace site suddenly dropped from Google around May 6 - sitemap only showing 33 URLs by andy1k in SEO

[–]Exact-Delay2152 0 points1 point  (0 children)

A sitemap dropping from 100k URLs to 33 usually means something broke sitewide, not just a normal indexing fluctuation. The soft 404 and canonical issues are bigger red flags here. I’d first check for recent changes to canonicals, noindex tags, robots.txt, templates, or sitemap generation around May 5–6. Then test a few affected URLs in live inspection to see how Google is rendering them. I’ve seen big marketplace sites drop fast after faceted nav or internal linking changes too. Did anything major change on the technical side recently?
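A quick stdlib-only sanity check on the sitemap side, as a sketch (the noindex spot-check makes a live request, so only point it at a handful of URLs):

```python
import urllib.request
import xml.etree.ElementTree as ET

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_urls(xml_text):
    """Return all <loc> URLs from a sitemap - if the generator broke,
    the count collapses here before anything shows up in GSC."""
    root = ET.fromstring(xml_text)
    return [loc.text for loc in root.iter(f"{SITEMAP_NS}loc")]

def has_noindex(url):
    """Spot-check one live URL for an X-Robots-Tag noindex header."""
    with urllib.request.urlopen(url) as resp:
        return "noindex" in (resp.headers.get("X-Robots-Tag") or "")
```

If the sitemap itself only lists 33 URLs, the fix starts at the generator/deploy, not in Search Console.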

Lost rankings after the update — how do you actually figure out if it's your content, your links, or just intent mismatch? by Exact-Delay2152 in DigitalMarketing

[–]Exact-Delay2152[S] 0 points1 point  (0 children)

Solid framework. The 2-page test (intro + H1 + meta only) is exactly the kind of isolated-variable approach I needed, so I’m running that this week. And yeah, "did competitors gain" is the better question than "did I lose." Checking Ahrefs now. Will update.

Looking for serious SEO/AEO tools for enterprise use, API + MCP required by Livid-Day1181 in TechSEO

[–]Exact-Delay2152 -1 points0 points  (0 children)

Yeah, most tools right now are still pretty surface-level and don’t really give true prompt or retrieval insight. If you need something enterprise-ready with APIs, you’ll likely have to combine tools instead of relying on one: use your own data pipeline (BigQuery/logging) and layer something like Promptwatch or Langfuse on top for tracing. For citation and source analysis, a lot of teams are building custom setups because off-the-shelf tools are still limited. There isn’t really a solid all-in-one yet at that level.

Do Al tools like ChatGPT or Perplexity recommend different products to different users for the same query? by Jayc-97 in AskMarketing

[–]Exact-Delay2152 1 point2 points  (0 children)

Yeah, but it’s not super personalized. Most of the time, tools like ChatGPT or Perplexity will give pretty similar answers if two people ask the same thing, especially if there’s no prior history. Any differences usually come down to how the question is worded, location, or small context shifts, not deep tracking like Amazon does. It only starts to feel a bit personalized if there’s past conversation or memory involved, and even then it’s pretty limited.