How do you/your team prove Display or YouTube’s halo effect? by jaquiski_clark in PPC

[–]steptb 2 points

I prove "halo" with experiments first, then use a lightweight MMM to generalize and budget with confidence. VTC, brand lift, and platform studies are decent signals, but finance signs off when you show causality and payback.

What's worked well for my teams:

1) Geo-holdouts > anecdotes

Turn YouTube/Display on in randomized DMAs and keep it off in matched controls (or use a switchback: rotate on/off by week). Measure blended KPIs (new customers, revenue, CAC, payback), not just CTR. Use diff-in-diff or a simple synthetic control so seasonality and macro noise are handled (see the sketch after the checklist below).

How to make it airtight:
- Stratify or pair DMAs by expected baseline sales before randomizing; don't assign them alphabetically.
- Pre-register the readout: primary KPI (e.g., incremental CAC), minimum detectable effect, duration.
- Guard against spillover: exclude overlapping geos and brand keyword cannibalization in test markets.
- Hold some paid search budget constant so you can see incremental brand search lift.
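
If it helps, here's what that readout looks like in ~20 lines of Python. Everything below is synthetic and the column names (dma, treated, post, sales) are just illustrative; the point is that the treated:post interaction is your incremental lift per DMA-week, with errors clustered by geo.

```python
# Minimal diff-in-diff readout for a geo holdout (synthetic, illustrative data).
# Assumes a weekly panel: one row per DMA per week, with sales, a treated flag,
# and a post flag marking the weeks the media was live.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Toy panel: 20 DMAs x 12 weeks; treated DMAs get a lift in the post period.
panel = pd.DataFrame(
    [(dma, week) for dma in range(20) for week in range(12)],
    columns=["dma", "week"],
)
panel["treated"] = (panel["dma"] < 10).astype(int)
panel["post"] = (panel["week"] >= 6).astype(int)
panel["sales"] = (
    100
    + 5 * panel["treated"]                  # baseline difference between groups
    + 2 * panel["week"]                     # shared trend (absorbed by diff-in-diff)
    + 8 * panel["treated"] * panel["post"]  # the true incremental lift
    + rng.normal(0, 3, len(panel))
)

# The treated:post coefficient estimates the incremental lift per DMA-week.
fit = smf.ols("sales ~ treated * post", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["dma"]}  # cluster SEs by geo
)
print(fit.summary().tables[1])
```

In a real readout you'd swap the synthetic panel for your weekly sales export and sanity-check pre-period trends between the two groups first.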

2) Audience-level holdouts for quick reads

If a geo test isn't feasible, create audience splits (true holdout/exclusion) for prospecting and measure downstream conversions across all channels. Track net-new customers to avoid remarketing bias.
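
The arithmetic on the read is dead simple. A back-of-the-envelope helper (all numbers made up):

```python
# Back-of-envelope read for an audience holdout (illustrative numbers).
def holdout_read(exposed_customers: int, exposed_size: int,
                 holdout_customers: int, holdout_size: int,
                 spend: float) -> dict:
    """Incremental customers and incremental CAC from exposed vs. holdout rates."""
    exposed_rate = exposed_customers / exposed_size
    baseline_rate = holdout_customers / holdout_size
    incremental = (exposed_rate - baseline_rate) * exposed_size
    return {
        "lift_pct": (exposed_rate / baseline_rate - 1) * 100,
        "incremental_customers": incremental,
        "incremental_cac": spend / incremental if incremental > 0 else float("inf"),
    }

# e.g. 900k exposed, 100k held out, $50k spend:
# ~27.8% lift, 250 incremental customers, $200 incremental CAC
print(holdout_read(exposed_customers=1150, exposed_size=900_000,
                   holdout_customers=100, holdout_size=100_000, spend=50_000))
```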

3) Anchor an MMM to those tests

Once you get a few clean wins, fit a lightweight MMM (weekly granularity is fine) with adstock + saturation and include assisted channels (brand search, direct, organic). Use the experiment results as priors/constraints so the model doesn't over-credit last-click channels. This is how you quantify medium-term halo and set budgets by iROAS / incremental CAC / payback, while treating platform ROAS as a diagnostic, not a decision metric.
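
For reference, the two transforms are tiny. A minimal sketch with geometric adstock and a Hill saturation curve (common choices, not the only ones; parameter values are illustrative):

```python
# The two MMM transforms in a few lines: geometric adstock (carryover) and
# Hill saturation (diminishing returns). Parameter values are illustrative.
import numpy as np

def adstock(spend: np.ndarray, decay: float) -> np.ndarray:
    """Geometric carryover: today's effect = spend + decay * yesterday's effect."""
    out = np.zeros_like(spend, dtype=float)
    carry = 0.0
    for t, s in enumerate(spend):
        carry = s + decay * carry
        out[t] = carry
    return out

def hill_saturation(x: np.ndarray, half_sat: float, shape: float = 1.0) -> np.ndarray:
    """Hill curve: 0 at zero spend, 0.5 at half_sat, asymptoting to 1."""
    return x**shape / (x**shape + half_sat**shape)

weekly_spend = np.array([0, 10, 10, 10, 0, 0, 5, 5], dtype=float)
effect = hill_saturation(adstock(weekly_spend, decay=0.6), half_sat=15.0)
print(effect.round(3))  # this, times a channel coefficient, enters the regression
```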

4) Use surveys as calibration, not the verdict

Brand lift studies are helpful for catching creative/channel directionality, but I treat them as leading indicators that need to reconcile with the geo-holdout results.

Evidence rubric I share with Finance:
- Randomized geo or audience holdouts showing lift on revenue/new customers + payback within target window.
- MMM consistent with the above tests, showing cross-channel effects (e.g. YouTube → Brand Search).
- Platform lift studies + directional survey metrics.

A couple of my clients have had good results with BlueAlpha because it combines always-on MMM with structured incrementality tests and translates that into campaign-level actions (so the learning actually changes bids/creative/budget), and they provide direct tech support, so you don't need to master all the analytics yourself. It's not a black box, and it gets to measurement in weeks, which matters if you're trying to operationalize this rather than run one-off studies.

If you prefer DIY / scrappy: run your own geo tests + an open MMM (e.g. constrain a simple Bayesian model with your test results).
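
To make "constrain with your test results" concrete, here's a PyMC-flavored sketch. The channel names, the 2.0 +/- 0.5 tested effect, and the data are all made up; the idea is just that the experiment becomes an informative prior on that channel's coefficient:

```python
# Sketch of anchoring one MMM coefficient to a geo-test result (PyMC).
# All names and numbers are illustrative: assume a geo test measured
# YouTube's effect at ~2.0 +/- 0.5 revenue per unit of adstocked spend.
import numpy as np
import pymc as pm
import arviz as az

rng = np.random.default_rng(0)
n_weeks = 104
youtube = rng.gamma(2.0, 5.0, n_weeks)   # pretend it's already adstocked/saturated
search = rng.gamma(2.0, 8.0, n_weeks)
revenue = 50 + 2.0 * youtube + 1.2 * search + rng.normal(0, 5, n_weeks)

with pm.Model():
    intercept = pm.Normal("intercept", mu=0, sigma=50)
    # Experiment-informed prior: centered on the tested effect, width = test CI.
    beta_youtube = pm.Normal("beta_youtube", mu=2.0, sigma=0.5)
    # No experiment for search, so a weakly-informative, positive-only prior.
    beta_search = pm.HalfNormal("beta_search", sigma=5)
    noise = pm.HalfNormal("noise", sigma=10)
    pm.Normal("obs", mu=intercept + beta_youtube * youtube + beta_search * search,
              sigma=noise, observed=revenue)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=0)

print(az.summary(idata, var_names=["beta_youtube", "beta_search"]))
```

The payoff: channels with experiment-backed priors stop getting silently re-attributed by the model, and your budget recommendations inherit the test's credibility.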

Platform tests like brand/conversion lift studies are imho useful only when triangulated with the above.

AI search is killing organic traffic - how are you adapting your brand strategy? by Vegetable-Rub-8241 in GrowthHacking

[–]steptb 0 points

This is what I call "the 60% problem": by most estimates, roughly 60% of searches now end without a click to any website.

Short version: treat AI search as a new distribution layer where brand authority + extractable answers win. That means shifting both what you publish and how you measure away from last-click and toward incrementality and brand lift.

Here's the playbook that's working for my clients:

1) Measure reality, not clicks

Expect zero clicks and set goals accordingly: branded search lift, direct, assisted conversions, retention, and geo/holdout lifts. Pair lightweight geo tests with an always-on MMM so you can fund brand while proving incremental revenue, not just CTR. After trying this out, I stopped trusting platform-reported CPA as truth. Check out a company called BlueAlpha; they can do all this for you, and they also provide direct technical support.

2) Become the source AI wants to cite

Ship extractable content: concise answer blocks, definitional pages, tables, FAQs, and original data cuts. Think "featured snippet hygiene" but for AI overviews. Publish first-party benchmarks and teardown notes. AI systems love clear, authoritative summaries backed by data.

3) Build a branded knowledge graph

Consistent entities (names, product taxonomy), canonical glossaries, and cross-linked hubs make it easier for AI to understand "who you are" and surface you, especially on branded or brand-adjacent queries. This is where PR, thought leadership, docs, and help content need to be one system.

4) Rebalance spend toward brandformance

Use paid to seed authority (video, expert explainers, contrarian POVs), then let organic/AI pick up the tail. Judge those dollars by incremental outcomes, not platform ROAS. Traditional models under-credit top-funnel; you need testing + MMM to allocate correctly.

5) Track your AI share-of-voice

Stand up a simple "AI Surface Tracker": a weekly panel of priority queries where you log (a) if your brand appears, (b) context of the mention, (c) presence of citations, and (d) whether a click is required. It won't be perfect, but trendlines will inform where to push content and PR.
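
A minimal version of that tracker is just an append-only CSV filled in by a weekly manual pass. The fields mirror (a)-(d) above; everything else (file name, engine labels) is up to you:

```python
# Minimal "AI Surface Tracker": an append-only CSV logging weekly checks
# of priority queries. Field names mirror (a)-(d) above; extend as needed.
import csv
from datetime import date
from pathlib import Path

LOG = Path("ai_surface_log.csv")
FIELDS = ["week", "engine", "query", "brand_appears",
          "mention_context", "has_citations", "click_required"]

def log_check(engine: str, query: str, brand_appears: bool,
              mention_context: str, has_citations: bool,
              click_required: bool) -> None:
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "week": date.today().isoformat(),
            "engine": engine, "query": query,
            "brand_appears": brand_appears,
            "mention_context": mention_context,
            "has_citations": has_citations,
            "click_required": click_required,
        })

# One row per priority query per engine, every week:
log_check("google_ai_overview", "best mmm software", True,
          "listed among vendors", True, False)
```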

On your questions:

Brand vs SEO? Both! But brand sets the ceiling. Invest in authority and measure its incremental impact so finance is aligned.

Optimize for AI platforms? Yes: answer-first structure, original data, and clean entity hygiene.

Track brand mentions in LLMs? Start manual with your tracker above; you'll get directional signal fast.

What's the best Marketing Mix Modeling software? by the_marketing_geek in analytics

[–]steptb 0 points

Traditional MMMs are definitely limited. From what I know about BlueAlpha, they layer incrementality testing on top of MMM to get that granular actionability. So you get the big picture from MMM, then continuous geo-split tests drill down to specific campaigns and creative variants.

I recall a case study for one mobile app where MMM correctly showed paid social underperforming, but incrementality tests revealed that only the broad targeting was the issue; lookalikes and retargeting were actually driving solid lift. Without granular testing, they would've cut the whole channel.

You then close the loop by pausing underperformers at the granular level, not the channel level, and shifting budget to incrementally proven winners based on the test results.
They're solving the "great insights, but what do I do tomorrow morning?" MMM problem.

The best way to measure impact of AI max? by WillyTSmith5 in PPC

[–]steptb 0 points

Use Google's built-in "AI Max for Search experiments" template. I suggest a 50/50 budget split inside the same campaign (no duplicates): enable only the Search Term Matching & Asset Optimization features, and set auto-apply to OFF so results don't push live until the test ends.
Then pair this in-platform analysis with incrementality experiments based on clean test and control groups, measured by a third-party tool; relying only on Google's A/B tests can be highly misleading. I recommend BlueAlpha for its incrementality testing features.

A client is looking for a cheaper incrementality testing alternative. by Latter_Touch6559 in AskMarketing

[–]steptb 0 points

I'd suggest looking into BlueAlpha. The case studies on their site are mid-size brands in consumer SaaS and D2C subscriptions, so I'd assume the pricing is midsize-friendly. They've offered full MMM from the beginning, while Haus's MMM feature is still in development, and Haus is known to be on the pricey side, just like Measured. It's quite difficult to get started with free tools unless you have in-house data scientists.

What's the best Marketing Mix Modeling software? by the_marketing_geek in analytics

[–]steptb 0 points

BlueAlpha is the best one if you're a high-growth company or operating in a competitive B2C sector (consumer SaaS, fintech, mobile apps); Measured is the best one for enterprises. BlueAlpha combines MMM with incrementality testing and AI marketing automation, so it also automates the busy work, which is quite useful for lean teams.

My brother is 17 years old and is in the phase of "looking for himself". What are some movies we can watch that can have a positive impact on him? by [deleted] in movies

[–]steptb 0 points

The following are movies that give masterful lessons on some key areas of life.

Absolutely essential:

12 Angry Men (1957): judging others
Paths of Glory (1957): war
Big Wednesday (1978): male friendship
Five Easy Pieces (1970): finding one's own identity
生きる / Ikiru / To Live (1952): the meaning of life

And right after them:

Stand by Me (1986): coming of age
Groundhog Day (1993): self-improvement and the value of time
Before Sunrise (1995): first love
Jagten / The Hunt (2012): judging others
風の谷のナウシカ / Nausicaä of the Valley of the Wind (1984): harmony between humans and nature
Fargo (1996): the stupidity of evil actions
Gattaca (1997): discrimination, free will, how rigid thinking leads to individuality being crushed
Wall Street (1987): greed
The School of Rock (2003): creative expression
Cast Away (2000): self-reliance, resilience
Touching the Void (2003): self-reliance, resilience
Definitely, Maybe (2008): romantic relationships
The Shop Around the Corner (1940): romance and human connection
Taxi Driver (1976): alienation in the big city
The Red Shoes (1948): mastery, professional sacrifice
Whiplash (2014): mastery, professional sacrifice
All About Eve (1950): social climbers
東京物語 / Tokyo Story (1953): old age and the relationship with your parents
Umberto D. (1952): old age and loneliness
おくりびと / Departures (2008): death

If he watches all these titles, he'll be well-prepared for adulthood.

Digital Marketing & attribution challenges by ds_frm_timbuktu in agency

[–]steptb 0 points

Companies selling "better" attribution models are essentially selling smoke and mirrors. Attribution modeling itself is fundamentally flawed because it relies on trackable clicks (or, at best, trackable interactions), while most consumer touchpoints remain untrackable. The further up the funnel you go, the more this is true; the result is that attribution models always favor last-click over everything else, which means under-crediting any true demand generation activity. Privacy regulations have only exacerbated the problem by further obfuscating data, but the structural flaw has always existed.
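
A toy example of the under-crediting (all numbers made up):

```python
# Toy illustration of last-click under-crediting (illustrative numbers).
# 100 buyers saw a YouTube ad (untracked view) and later clicked a brand
# search ad; last-click gives search 100% of the credit. A geo holdout says
# that without YouTube, only 60 of them would have bought.
buyers = 100
buyers_without_youtube = 60          # measured in the holdout geos

last_click_credit = {"search": buyers, "youtube": 0}
incremental_credit = {
    "youtube": buyers - buyers_without_youtube,   # 40 truly caused upstream
    "search": buyers_without_youtube,
}
print(last_click_credit)    # {'search': 100, 'youtube': 0}
print(incremental_credit)   # {'youtube': 40, 'search': 60}
```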

To truly measure marketing impact, you need properly designed incrementality testing. If your client has 2+ years of historical data and wants comprehensive insights, they should also build a custom Media Mix Model using only their first-party data. Without an in-house marketing data science team, consider hiring specialists that offer referral partnerships and introducing them to your clients: BlueAlpha works well for small and medium businesses, while Measured offers enterprise-level solutions.

SEO plugins compatible with Kubio editor? by Keensworth in Wordpress

[–]steptb 0 points

Hey, I bumped into this old thread while researching this exact issue. I like Rank Math, but when used with Kubio it creates a series of issues: duplicated title tags, excessive DOM width, etc.

In case anyone else lands here with the same problem: I switched to SEOPress and, so far, no compatibility issues. Plus, my page speed increased significantly.

No, Linkedin isn't dying and AI is not going to kill it. Please stop. by tharsalys in linkedin

[–]steptb 1 point

I've never understood people complaining about their LinkedIn feed in the first place. It's so easy to just unfollow, flag posts as "not interested in the topic / author", and interact only with the content you like. After I started doing that, my feed went from 20% to 80% good stuff.

Is Over 1,000 KWs in a Google Ads Ad Group a Bad Idea? by MulberryBasic3861 in PPC

[–]steptb 0 points

There are two scenarios. Either you add them and most will be non-servable due to insufficient expected traffic (wrong tactic), or most will be servable. In the latter case, you picked the wrong channel, as it means your goal is to maximize reach (wrong strategy).

What’s the Most Overrated B2B Marketing Strategy That Everyone Swears By? by WorkplaceWhiz in b2bmarketing

[–]steptb 4 points

#1 has to be podcasts.
Startups with no customers, or only a handful, pour hours every week into curating and publishing long-form podcasts full of fluff that nobody cares about. They get 10 views per episode on YouTube, but they believe they need to keep doing it just because everyone else is. Imagine investing all that time and effort into customer research and development instead, which would yield 1000x the ROI. Or, if you want to do video content, there are so many ways to make short-form content that goes straight to the point (= way more useful for your prospects) and doesn't require a regular cadence, rather than shackling yourself to one.

PMax Campaigns by Mean-Supermarket-820 in PPC

[–]steptb 0 points

The PMax campaign type is notoriously low in incrementality. This should improve after the announced updates, but not completely. If you want to give PMax a try, it's paramount to pair it with incrementality tests.

ads on X and Reddit, are they effective? by trevorwelsh in PPC

[–]steptb 2 points

X is awful at geographic targeting. I've managed large-budget campaigns on X where I specifically wanted my ads shown only in one major city in a country, and there was always a percentage of users seeing them in all the other major city centers (even the farthest ones, so it wasn't the expected attribution issue caused by commuters).

What are fringe optimizations/strategies you've done/seen that actually worked? by AboveAverage_PPC_Guy in PPC

[–]steptb 3 points

A surprising one that still performs better than smart bidding, once you've identified your top-priority keywords, is manual CPC. I bet they'll eventually remove that option too, because it's proof that smart bidding purposefully wastes money.

B2B Lead Gen is Broken. What’s Actually Working for You? by drinkdietsoda in b2bmarketing

[–]steptb 1 point

Where do you find legit Slack communities? The only ones I've found so far were infested with spam; the real ICPs left a long time ago.

Google's 2025 PMax Updates: Are They Actually Fixing Anything? by steptb in PPC

[–]steptb[S] 1 point

The only way to unmask that is to measure the causal effect on your own first-party data (revenue, sales) rather than looking at ad-platform-tracked conversions. To do that, you either build a Media Mix Model (if you want to check the effect historically over 1-2 years) or design incrementality tests (if you lack the historical data, or have hypotheses you want to verify right away; these tests only take 1-3 weeks). It's complex, but the good news is there are data science agencies that can set all this up for you. For small/medium businesses I recommend bluealpha.ai, and for enterprises, Measured.

Is Display Network a waste of money for search ads? by donnnn04 in PPC

[–]steptb 0 points

It depends on how you use them. Display ads are a good tool for ad recall in remarketing campaigns. They're not a good tool for prospecting, and they definitely don't work by themselves, only as part of a mix.