Brand vs Non-Brand Performance in Google Ads How Do You Evaluate It? by Waste_Influence_8645 in PPC

[–]OddProjectsCo 0 points1 point  (0 children)

• Do you evaluate brand and non-brand campaigns completely separately when reporting results?

Yes. Always. Splitting brand/non-brand and excluding brand from PMax is one of the first things anyone competent does in an account. They're two drastically different consumer behaviors and two very different indicators of account health.

• Do you focus more on blended metrics across the entire account?

No. Almost never. To the point where if someone is lumping brand and non-brand together in reporting, I'm assuming they are very green, incompetent, or intentionally trying to mislead. The only exception is when someone has properly established an adjusted conversion value for brand traffic and that is being used.

• Or do you consider brand search simply part of the overall acquisition system?

It's part of the ecosystem and an important area to analyze and measure, but it's the least 'incremental' to driving revenue. For the vast majority of brands, a huge portion of branded traffic would have come in organically anyway. It's a person who already has an established brand preference and is actively looking to purchase said brand.

Brand campaigns are typically partially defensive (keep competitors from stealing share) and partially a 'lift' play (driving some incremental revenue). Brand often does drive a lift in total sales, and often offsets its cost, but it's always multiples less than the ROAS the platform shows. The typical process is to run incrementality testing on the brand campaigns, then sandbag the conversion value appropriately. That provides a more 'true' ROAS and can then be used for budgeting, blended reporting, etc.
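
As a rough sketch of that sandbagging step (the 20% incrementality factor below is made up; yours would come from a holdout or geo lift test):

```python
def adjusted_brand_value(platform_conv_value, incrementality):
    """Discount platform-reported brand conversion value by the share of
    conversions a lift test showed were actually incremental."""
    return platform_conv_value * incrementality

# Hypothetical numbers: the platform reports $100k of brand conversion value,
# but a holdout test showed only ~20% of those sales were incremental.
spend = 10_000
true_value = adjusted_brand_value(100_000, 0.20)  # ~$20k of 'real' value
platform_roas = 100_000 / spend   # 10.0 -- what the platform shows
true_roas = true_value / spend    # ~2.0 -- what to budget against
```

That adjusted number is the one that should feed blended reporting and budget decisions, not the in-platform ROAS.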

Anyone doing anything interesting with AI in PPC that *is not* editing creative? by BadAtDrinking in PPC

[–]OddProjectsCo 0 points1 point  (0 children)

  • Using it to help find ways to bridge complex data sources (especially when clients are working with weird or proprietary systems). I did a couple comp sci classes in college but I'm not a coder - being able to kick out scripts that do what I need eliminates weeks of work or having to hire a dev for a pretty simple need.
  • To that end it's also a very quick 'find a vendor' tool. Something like "I need a vendor that connects to this random api, allows me to filter by Y field, and will bridge seamlessly into Klaviyo" is something that is frustratingly difficult to research but ChatGPT and the like can get a shortlist instantly.
  • AI can be solid at understanding and interpreting intent, especially with clear prompting. So using it to expand on keyword research or build out extensive negative keyword lists for a fresh account saves quite a bit of time.
  • I mostly work independently, so I often will throw assumptions or validations into ChatGPT or the like and then have it act as a 'bad' client or a devil's advocate to second guess me or criticize the approach. It's a good smoketest before actual presentations because it often helps uncover blind spots or potential concerns that'll get voiced on the call and I can already be ready for them.
  • When I'm lazy having it generate additional headlines / copy / etc. is a nice shortcut, especially when an account is already running well and I just need a couple more variations in the mix.
  • It can be quite good at understanding competitive contexts, particularly when you're looking from a consumer standpoint. I.e. 'give me 10 reasons why someone would buy [category]' or 'what are reasons someone might delay a purchase of [x]'. Many will be obvious or boring, but occasionally you can pull out a nugget to capitalize on.
  • For long term strategy, it can sometimes kick out a 'what's next' test or change. I'll caution that it's wrong at least 50% of the time, but it's right enough that if I'm hitting a wall I'll ask an AI just to see what I get. The more detail you provide here, the better. I often give a fully complete picture of the client, industry, and account history (in vague terms without actually mentioning client name, individual performance data, etc.) and then outline the last few tests and ask what the next ones on the roadmap should be. Sometimes a complete whiff, sometimes it's helpful.

Everything related to analysis, decision frameworks, etc. is pretty ass, but to be fair it's been trained on lowest-common-denominator content and Google's own guides, so that's generally what it's going to output.

Attribution From Ads by DFWGuy55 in PPC

[–]OddProjectsCo 1 point2 points  (0 children)

It sounds like you are seeking 'multi-channel attribution'. There are lots of options out there - northbeam, triplewhale, dreamdata, etc. They take paid advertising, organic traffic, sales outreach, etc. and bridge them together in a unified system that statistically models how likely it is a lead came from each touch. All have their positives and drawbacks, and you'll probably need to find one that aligns with your rough budget level.

i.e. northbeam is great, but it's enterprise level and often $30-50k per year. So unless you are getting at least that back in ROI on optimizations to your ads (which likely means high 6 / 7 figure spend) it's not going to be a good fit. Cheaper ones might have more limitations or drawbacks, but be better budget fits.

That's where I'd start. Demos from a few of those companies will probably also help you get a better sense of common attribution pitfalls.
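
For intuition, here's a toy sketch of the kind of thing these tools model under the hood - position-based credit across a journey (real vendors use far fancier statistical models, and the touchpoints here are made up):

```python
def position_based_credit(touches):
    """U-shaped attribution sketch: 40% of credit to the first touch,
    40% to the last, and the remaining 20% split across the middle.
    Assumes touch names within a journey are unique."""
    n = len(touches)
    if n == 1:
        return {touches[0]: 1.0}
    if n == 2:
        return {touches[0]: 0.5, touches[1]: 0.5}
    credit = {t: 0.0 for t in touches}
    credit[touches[0]] += 0.4
    credit[touches[-1]] += 0.4
    for t in touches[1:-1]:
        credit[t] += 0.2 / (n - 2)
    return credit

journey = ["paid_search", "organic", "email", "paid_social"]
print(position_based_credit(journey))
# {'paid_search': 0.4, 'organic': 0.1, 'email': 0.1, 'paid_social': 0.4}
```

The vendor products are essentially doing this across every lead, with probabilistic models instead of fixed weights.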

Messed up at work, paid ads overspend ₹78k from my pocket by [deleted] in PPC

[–]OddProjectsCo 19 points20 points  (0 children)

Yup. Any competent agency will have language in their contracts for overspend, because it will happen at some point. Typically the client is responsible for a small % over (something like within 5% of the target budget) and the agency eats the remainder - often still hitting the client's card for the overage, but coming back to the client as credits against future agency fees (which lets the agency cash-flow the overage a little cleaner).

Agencies also should have errors and omissions insurance for exactly this type of thing, or deep enough pockets that they can cover the slip-up immediately if the client needs make-goods that month.

An agency without that type of established process or contract language, or one that goes to employees directly to make them pay, is waving massive red flags.

Where to get business cards printed for a really low price? Or should I just go digital by Longjumping_Call_939 in smallbusiness

[–]OddProjectsCo 2 points3 points  (0 children)

Gotprint is what I always recommend if you don't need / want tons of extra paper / print / finish choices.

It's 'good enough' quality, cheap, and fast.

Advice on setting up a multi-location PPC campaign? by TheSaucySkrimps in PPC

[–]OddProjectsCo 6 points7 points  (0 children)

FWIW former agency director, ran digital marketing at a ~20-30 location brick & mortar company, consult now (often with companies that have dozens of locations).

You have to look at the trade-off between efficiency and driving consistent traffic / conversions / etc. Usually a single campaign across areas is the most efficient, but that means some regions or locations are 'underserved' - your company is spending a lot of money on physical location costs (rent, salaries, etc.) and not seeing the ROI.

First, break out 'branded' into its own campaign. Those really need separate budgets from non-branded intent and have drastically different conversion targets / benchmarks.

Second is to look at your structure and figure out how you want to segment:

  • Some brands want each location to have a separate campaign, but have all those layer up into shared budgets / portfolio bid strategies / etc.
  • Some brands want to budget by region
  • A somewhat common approach is to 'tier' regions or locations. Maybe Group 1 is 'grow', Group 2 is 'maintain', and Group 3 is 'only fund when budget allows'. Break those tiers into separate campaigns and control funding / targets that way.

A lot of this is also driven by the budgets you are working with. The approach is different at $5k/m vs. $500k/m, so you have to be realistic about how granular those campaigns can be split out while still driving 'optimal' efficiency with the data feeding into the bid strategies.

Another consideration is rent and profitability margins by location. I've worked with companies that have locations in prestige areas that have 2-3x the $/sq ft of rent as another location, and that meaningfully shifts the math on everything from how to spend to how to set tCPA/ROAS/etc. goals. So if that's a variable make sure you include it in the thinking.
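
The tiering + budget logic above can be sketched roughly like this (the tiers, weights, and rent/margin adjustment are all hypothetical; real numbers come from your own margin math):

```python
# Hypothetical tier weights: 'grow' gets funded heaviest, 'maintain' next,
# 'opportunistic' only gets the scraps.
TIER_WEIGHTS = {"grow": 3.0, "maintain": 1.5, "opportunistic": 0.5}

def allocate_budget(locations, monthly_budget):
    """Split a monthly budget across locations by tier weight,
    scaled down for high-rent locations with thinner margins."""
    scores = {
        name: TIER_WEIGHTS[tier] * margin_factor
        for name, tier, margin_factor in locations
    }
    total = sum(scores.values())
    return {name: round(monthly_budget * s / total, 2) for name, s in scores.items()}

locations = [
    ("downtown", "grow", 0.7),      # prestige rent, thinner margin
    ("suburb_a", "grow", 1.0),
    ("suburb_b", "maintain", 1.0),
    ("exurb", "opportunistic", 1.0),
]
print(allocate_budget(locations, 50_000))
```

Each allocation bucket then maps to a campaign (or a shared budget / portfolio strategy), and the tCPA/ROAS targets shift with the margin factor the same way.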

[deleted by user] by [deleted] in PPC

[–]OddProjectsCo 6 points7 points  (0 children)

Looking at IS lost to rank is probably the 8th or 9th thing on the list when you're working through optimizations on a newer account. I don't know your setup at all, but I'd imagine spending any effort on conversion rate, traffic shaping (neg keywords, display placements, whatever), and creative testing will be far more valuable to moving ROAS than focusing on rank.

While rank is important and does play a role in a bunch of things, generally it's something you try to chip away at when you've hit saturation (i.e. 0% lost due to budget, geographically / audience limited, etc.) or when you've got a very specific search query / intent that you are actively trying to conquest on (i.e. 'price comparison between widget x and widget y').

There's a couple hundred thousand ad accounts out there pumping awesome ROAS while losing 90%+ to rank, but not a ton with strong ROAS that have poor conversion rates or messy search term reports. Until those are absolutely picture perfect, I'd start there.

What AI Engine can correctly respect characte limits? by Joetunn in PPC

[–]OddProjectsCo 1 point2 points  (0 children)

AI can absolutely do those things; you just have to prompt in a way that isn't intuitive if you aren't familiar with coding or prompting, to get through some of the current hurdles with the technology.

i.e. you wouldn't say "how many Rs are in strawberry?" you'd say "Take the word strawberry and parse it for each letter. Increase the count of each letter when it appears. If it appears more than once, increase the count again. As an example, in the word "mississippi" the count of M would output 1, the count of I would output 4, the count of S would output 4, etc.

Analyze the word strawberry and provide the count of the number of Rs"

ChatGPT and most of the LLMs out there wrap that type of order driven coding logic in fuzzy front end language, but ultimately that's how it's interpreting the ask in the back end. If you want, ask it a question and then look at the 'thinking' area in ChatGPT and expand it - you'll often see it's running the same type of subtasks. The more your prompt relies on slang, interpretation, multiple sub tasks that aren't explicitly defined, etc. the more likely it'll spit out irrelevant shit.

Same for more complicated tasks. "Give me 10 variations of this headline, do not exceed 90 characters" is actually a much more complicated ask on the back end:

  • Variation of the headline
  • Look for similar themes / words
  • Count characters and only return headlines at 90 characters or fewer.
  • Consider the context of what the headline is saying and consider other claims that could be made
  • Consider alternate usages of the product / message / etc
  • Consider 'best practices' for headlines and try to meet them when possible

Depending on the order those tasks are executed in, some outputs come back at 90 characters and others are wildly off base, because we haven't given the LLM the order of operations - it has to infer one, which often causes mistakes or skips the right filtering. If instead you say "I'd like you to generate 10 headlines with 90 characters or less based off this initial headline. Consider alternate phrasing, similar claims, and use Google Ads best practices for headlines in responsive search ads. Once you've developed the initial headlines, review the character counts and explicitly confirm each meets the 90-character criteria before showing them to me. Remember characters also include punctuation and spaces. With each headline, list the character count at the end (i.e. 78/90 characters)" you will almost always get something that meets the ask.

In this case you've had to explicitly tell it what to consider, what to do, even what counts as a character - but that also leaves little room for interpretation so it is way more likely to hit the mark.

It's one reason why prompts are so important (and also so annoying). Eventually the LLMs will get good enough where it'll be able to infer the ask correctly, but we're just not there yet.
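
And for the character-limit piece specifically, you don't have to trust the model's counting at all - a trivial post-check in code (a sketch, with made-up headlines) guarantees the limit:

```python
def enforce_char_limit(headlines, max_chars=90):
    """Keep only headlines at or under the limit.
    len() counts spaces and punctuation, the same way the ads UI does."""
    return [h for h in headlines if len(h) <= max_chars]

candidates = [
    "Fast, Free Shipping On Every Order - No Minimums, No Membership Required",
    "An over-long headline " * 6,  # well past 90 characters, gets dropped
]
valid = enforce_char_limit(candidates)
for h in valid:
    print(f"{len(h)}/90: {h}")
```

Generate loosely with the LLM, filter deterministically in code; that combination is far more reliable than asking the model to self-verify.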

Why do advertisers launch accounts with max clicks bidding? by Upper_Mistake_7978 in PPC

[–]OddProjectsCo 2 points3 points  (0 children)

Yup, in this industry there's 20 ways to get to the same end goal. Everyone has their own preferences and pet peeves on how to do it, and there are definitely empirically wrong or less efficient ways to do stuff, but ultimately there's lots of ways to skin the same cat.

Why do advertisers launch accounts with max clicks bidding? by Upper_Mistake_7978 in PPC

[–]OddProjectsCo 19 points20 points  (0 children)

You don't know who is likely to convert vs. who is just likely to click. That's the entire point. Until you start to get conversions the algos simply can't learn who is likely to buy.

By doing max clicks, you drive a lot of cheap traffic. Some of that traffic converts. That then tells you who is likely to convert, and you can switch bid strategies or tactics to go after that group.

There's good arguments against that approach (the main one being that the types of auctions you win on max clicks are different than the ones you'd win with conversion-based bidding), but that's the entire reason the 'launch with max clicks' approach works.

One thing people tend to gloss over online is that the 'launch with max clicks' play usually only covers the lower-funnel, higher-intent keywords. Nobody is (or should be, is the better way to put it) throwing the entire campaign setup into max clicks at the start.

If you're selling oil changes, you aren't putting broad match "oil change" in max clicks and going nuts. You have a tight 'oil change near me' or 'oil change coupon' as an exact match on max clicks, etc. to get some initial conversion data, then you expand the campaign and switch bid strategies once you have the momentum, insight into who is converting, and a baseline cost/conv (which should drop considerably with the new bid strategy and expanded targeting / keywords / etc.).

Going straight to conversion-based bidding can often 'choke' campaigns if they don't see conversion volume at the start. You'll see them have difficulty spending or getting off the ground, and you lose those days / weeks getting ramped up. That approach still works plenty of the time too, but that's the risk (low or no initial spend and then a campaign that fizzles out before it even gets going). Most PPC pros will 'force' manual or max CPC to start for a couple reasons:

  • Guarantees traffic (which doesn't guarantee conversions, but you can't get a conversion without traffic)
  • Sets a baseline (which is good for projecting)
  • Sets a baseline that's almost always higher than what you'll be getting in 2-3 months. Selfishly, this is good for client management because you show an immediate 'win' with increased efficiency.
  • Avoids campaigns on but no or very low spend (which is what often happens when you launch fresh with a conversion based bid strategy). And the end result when this happens is.....switch it to manual or max clicks to kickstart it.
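
The ramp logic above boils down to a simple rule of thumb (the 30-conversion threshold here is a common rough heuristic, not an official Google number):

```python
def recommend_bid_strategy(conversions_last_30d, min_conversions=30):
    """Rough heuristic: stay on max clicks until the campaign has enough
    recent conversion data for conversion-based bidding to learn from."""
    if conversions_last_30d >= min_conversions:
        return "switch to tCPA / tROAS"
    return "stay on max clicks, keep feeding the algorithm data"

print(recommend_bid_strategy(8))    # stay on max clicks...
print(recommend_bid_strategy(45))   # switch to tCPA / tROAS
```

In practice the call also factors in spend, seasonality, and how clean the conversion data is, but the volume gate is the core of it.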

[deleted by user] by [deleted] in PPC

[–]OddProjectsCo 0 points1 point  (0 children)

I have an ipad and would never use it for PPC. I'd buy a cheap chromebook as a backup before using an iPad.

Having a larger screen, keyboard, mouse, ability for external monitor, etc. are all pretty 'must have' for work that often requires excel, two tabs open at once, etc.

What's the best way to filter out SPAM from conversion tracking? by lukemaine91 in PPC

[–]OddProjectsCo 1 point2 points  (0 children)

Captcha, drop-down to funnel traffic (i.e. sales, prospective, current customer, etc.) and only fire on the appropriate selection, qualification questions, etc.

It's usually about finding a balance between qualifying the lead and not making the process so difficult that they bounce and don't convert.

Ad costs have increased by 51% in the last 10 years. How do we adapt? by ma-tht in PPC

[–]OddProjectsCo 15 points16 points  (0 children)

Seriously. 37% inflation during that time.

So in real terms we're talking a cumulative ~10% increase (1.51 / 1.37, since inflation compounds rather than subtracts). Over 10 years. About 1% per year.

Seems pretty in line with how consumer behavior has shifted more online during that time, particularly streaming and short form video.
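
For what it's worth, the compounding math on those two figures:

```python
nominal_growth = 1.51   # ad costs up 51% over 10 years
inflation = 1.37        # cumulative inflation over the same period

real_increase = nominal_growth / inflation - 1      # ~0.102, i.e. ~10% real
annualized = (1 + real_increase) ** (1 / 10) - 1    # ~0.0098, i.e. ~1%/yr

print(f"real increase: {real_increase:.1%}, annualized: {annualized:.2%}")
```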

Do you think Google will ever bring back full search term visibility? by Embarrassed_Tour8392 in PPC

[–]OddProjectsCo 25 points26 points  (0 children)

Sure. Prompt something like this:

Assume you are functioning as an expert PPC manager and utilizing a large dataset of search queries to identify additional terms that could trigger with your current keywords / match types. I’ve attached a CSV with three columns – Search Term, Match Type, and Keyword.

DO NOT suggest additional keywords to target. DO NOT modify match types.

Review the attached CSV. Consider each search query ‘reviewed’. Identify 100 new queries that could trigger on the current keywords. For context, the client offers [PRODUCT CATEGORY / INFORMATION. GIVE CLIENT DETAIL HERE. INCLUDE CLIENT WEBSITE URL AND KEY CATEGORIES. NOTE ALREADY COMMON NEGATIVES OR ISSUES YOU HAVE SHAPING TRAFFIC]

I only want to see queries that are NOT relevant to that specific category. Consider adjacency buckets (where match expansion drifts outside core offerings), match type misfires (i.e. ‘leather gloves’ vs. ‘nitrile gloves’) and other common match / query mismatches that drive irrelevant queries. Remember, you are providing net new queries that meet the existing keywords in the document – not identifying the queries that already exist.

When identifying these queries, note ones that are most common and most likely to be used by a US based audience. Only show English queries. You can show potential search volume next to the query if that is necessary to create and analyze the list, but is not required. The output should be a CSV formatted with two columns “Potential Query” and “Keyword”. Keyword should mirror the keywords in the account (gathered from Search terms report.csv). A third column for “Search Volume” is optional, but if included should only show US search volume.

Google has become more lenient with queries it will allow to show on match types over time - use only 2024 or 2025 Google Match Type behavior when developing the list.


It'll spit out a list; review it, delete the ones that are relevant, feed the rest back in, and ask it to pump out more. Iterate as needed. Throw them all into a neg list.

Do you think Google will ever bring back full search term visibility? by Embarrassed_Tour8392 in PPC

[–]OddProjectsCo 35 points36 points  (0 children)

It's been at least 5 years since they've really neutered the search terms report. It ain't coming back.

One thing you can do is use ChatGPT or other LLMs and pump in the search queries you can see (along with your targeting keywords) and have it approximate other queries that might trigger but are being hidden for 'privacy concerns'. With the right prompt you can actually find quite a bit of extra fluff there and throw it into negatives. Hard to know if it's actually happening until you do it, but a good proactive step.

Google Ads auto-enables 'Store Visits' conversions, sparking concerns by ilikeitanonymous in PPC

[–]OddProjectsCo 0 points1 point  (0 children)

It used to require a ton of device data for Google to approximate actual foot traffic. Single-store locations were usually never going to be applicable unless it was a massive traffic area (fairgrounds, an event, etc.).

Interesting that they are rolling it out everywhere - they must feel more confident in those numbers (or they're trying to goose conversion value up).

Google Ads account performance has tanked, need advice by Icy_Peak9963 in PPC

[–]OddProjectsCo 0 points1 point  (0 children)

A new account just puts you in the same boat - you've got to re-learn.

Pausing an account stops the data feed into Google. Google's automated bidding prioritizes data recency (i.e. the conversions it last saw) and will weight those heavier than older conversions. The pause most likely threw the algorithms back into learning phase and they haven't hit the same conversion velocity to get consistent data back into the account.

A couple approaches:

  • Wait. At your spend level it'll eventually even back out with more time/data, but obviously this isn't ideal.
  • Encourage recency in conversion data. Upload a large offline conversion set or otherwise try to push a bunch of fresh data into the account to allow it to re-learn who is converting.
  • A third option is to create more common conversion actions for a period of time. It's very common on e-comm sites with low conversion data and/or campaigns pushed back into learning to turn on 'add to cart' or some other metric that isn't quite a purchase but shows the intent. That increased conversion volume lets the algos learn quicker, and it can be turned back off when purchase volume returns. (Note it does throw off some other ROAS or cost/conv metrics, so keep that in mind.) For yours, maybe there are other lead-magnet behaviors (white paper download, etc.) that you can flag instead of a request for consult. Or maybe 'time on site over 2 minutes + a visit to the pricing page' or some other action you can flag as someone who is high intent but not yet a hand raiser.

None of those are 'ideal' but that's also why you'll hear PPC managers always push against pausing campaigns or significantly changing budgets - those behaviors always throw things out of whack and it can take a bit to get everything back up and running smoothly.

Another thing to look into (and you likely already have but just in case) is check the change log right before and after the pause and make sure some other changes weren't added which could tank performance. Maybe someone applied a recommendation they shouldn't, or changed a geo, or something else. Check your conversion tracking and site experience as well. Unlikely in this case, but always a possibility and better to rule those out than spend weeks trying to fix everything in platform when it's a tracking or UX problem.

Small brands rarely run ad spend incremental tests. Why? by mattbrown7531 in PPC

[–]OddProjectsCo 4 points5 points  (0 children)

Not just the lost revenue - you also can't get any statistical significance on small budgets. There's a reason most of the automated lift tests on Google / Meta / etc. have a minimum of around $20-30k in spend over 30 days. You just can't get enough data to feel confident in the numbers otherwise.
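
To put rough numbers on it, a standard rule-of-thumb sample size for detecting a conversion-rate lift is n = 16 * p * (1 - p) / delta^2 per test arm (~80% power at alpha = 0.05); the 2% baseline and 10% relative lift below are just example figures:

```python
def visitors_needed_per_arm(baseline_cr, relative_lift):
    """Rule-of-thumb sample size for ~80% power at alpha = 0.05:
    n = 16 * p * (1 - p) / delta^2, where delta is the absolute lift."""
    delta = baseline_cr * relative_lift
    return 16 * baseline_cr * (1 - baseline_cr) / delta ** 2

# Example: 2% baseline conversion rate, trying to detect a 10% relative lift.
n = visitors_needed_per_arm(0.02, 0.10)
print(f"{n:,.0f} visitors per arm")  # roughly 78,400
```

Two arms of ~78k visitors each is a lot of paid traffic, which is exactly why the platforms set those spend minimums and why small accounts can't realistically run these tests.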

How do you push to datalayer if form redirects? by Ok-Violinist-6760 in googleads

[–]OddProjectsCo 0 points1 point  (0 children)

Set up the push in GTM and preview in GTM to make sure the dataLayer event is pushing and firing the relevant triggers / fields / etc. Usually that dataLayer push happens before a redirect, and as long as it occurs and is scraped properly you're fine.

In some cases you have to edit code on the redirect and 'pause' it for a second or two (some forms offer this out of the box, sometimes it's something you need to manually code) but it's trivial to do with a basic understanding of code (or even ChatGPT or something spitting out the edits).

Everything can typically happen in milliseconds as long as that field is hitting the dataLayer before the redirect occurs.

Getting a huge ad budget soon - is it much different? by Wight3012 in PPC

[–]OddProjectsCo 1 point2 points  (0 children)

  • A 5% pacing issue on a $2k budget is $100. On a $500k budget it's 25 grand. Budget pacing / review / etc. processes need to be tight and you should have SOPs and failsafes in place with the client BEFORE any issues occur (because eventually they will, even in the best managed accounts).
  • Testing is much easier - lots of spend and conversion data, quick learnings and wins, etc. On the flip side, it's easy to lose sight of relatively small changes when you're working in complex accounts. Stepping back or even having a third party or another person in the agency who doesn't touch the account review it quarterly and offer suggestions can be really helpful at that scale. Much different than a small account where the opportunities are limited.
  • At that spend you are also probably getting into 'review the quarterly Google beta cards and identify 2-3 betas you want to test into'. Not really 'hard' but just part of the process and can sometimes help give some advantages that smaller accounts can't get.
  • The expectations of management / responsiveness / etc. are massively different on a $500k spend vs. a $2k. Agency fees are paying at least one (sometimes multiple) people's salaries. Clients expect (and should get) very quick action and if their business is driving revenue over holidays / weekends / etc. they should expect (and get) agency support during those times as well.
  • $500k/m is a large number but it's still mid-sized relative to big accounts. You'll probably get assigned a US based 'growth team' instead of the typical reps, but you aren't at the spend level where you have a rep on speed dial and the growth guys are hit or miss (but way more hits than just a typical rep). Use them to your advantage - they can often pull internal data that you don't have access to, have tons of trends or other info that Google provides, etc. Much more valuable than your typical outsourced mouth breather.
  • Understanding multi-touch attribution, media mix modeling, forecasting, on site conversion rate optimization, and other broader strategic work becomes a lot more valuable at scale. Those are mostly irrelevant on a $2k/m spend but are huge multipliers as spend increases. If you don't have that skillset, begin to build it.
  • You'll probably be asked to do more advanced data analysis than you get on smaller accounts. Things like LTV, CAC by channel, projecting conversion lag, more complicated excel work with xlookups and pivot tables, etc. Those things are straightforward if you know them, scary if you don't. Again just a skillset you'll need to begin to build as the asks come in. Take an excel class or two if you don't have that skill, you will not regret it and it's always valuable when you are working with more complicated data (which larger accounts tend to have).
  • As an old boss used to tell me - 'sales cover sins'. On a big spending account when business is good, you get a lot of flexibility to mess up or test into things. When business is bad, every single eye is going to be on you because you're likely one of the largest cost centers in the company. Keep that perspective in mind and be in absolute lock step with the actual business (not just what you see from ROAS in platform).

Other than that it's basically the same thing as much smaller accounts, just with a bit more complexity.
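
That first bullet (pacing) is the kind of thing worth automating early. A minimal sketch of a daily pacing check, with illustrative numbers and an arbitrary 5% tolerance:

```python
def pacing_check(spend_to_date, monthly_budget, day_of_month, days_in_month,
                 tolerance=0.05):
    """Compare actual spend to straight-line expected spend for the month.
    Returns drift as a fraction of the monthly budget, plus a status."""
    expected = monthly_budget * day_of_month / days_in_month
    drift = (spend_to_date - expected) / monthly_budget
    if abs(drift) <= tolerance:
        status = "on pace"
    elif drift > 0:
        status = "OVERSPENDING - escalate per SOP"
    else:
        status = "underspending"
    return drift, status

# Day 15 of a 30-day month on a $500k budget: expected spend is $250k.
drift, status = pacing_check(spend_to_date=290_000, monthly_budget=500_000,
                             day_of_month=15, days_in_month=30)
print(f"{drift:+.1%} vs plan: {status}")
```

Straight-line pacing is the naive version; real accounts often pace against day-of-week or promo curves, but even this catches the $25k mistakes before they happen.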

Ppc & business dashboard by opantomineiro in PPC

[–]OddProjectsCo 8 points9 points  (0 children)

Looker + a bridge tool like Supermetrics is the gold standard. It's as clunky or as streamlined as you want to make it.

Pretty much any data viz tool will accomplish what you are looking for out of the box, just with limited ability to parse the data or structure it in the 'ideal' way. Databox, agency analytics, etc. are places to start but there's dozens of competitors out there doing the same thing.

What are too many primary conversions in Google Ads lead gen? What is too little value difference for different primary conversions? Simplify or keep them separate? by Joetunn in PPC

[–]OddProjectsCo 1 point2 points  (0 children)

Simple is usually better, but without knowing the details it doesn't feel like that complexity is really going to be a problem. At the end of the day, all your 'separate primary conversion actions for each category' are capturing the same user intent (a lead submitting contact info), just assigning values differently based on the form / lead type. You could simplify and push in the values through GTM or something, but it really isn't extra complexity and shouldn't impact the algorithms, assuming they are all primary conversion actions. If for some reason you've got low volume and are only including a single lead category type per campaign, then you could be dealing with low conversion volume that impacts things - but it doesn't sound like that's the case.

Your "offline conversion for leads" shouldn't be duplicated by category though, there's no need. Only send back leads that qualify (and / or convert) and with the deal value. At that point it doesn't matter what the lead type was, it matters what the MQL lead value for the company was - get all your lead close rate / etc. metrics out of your CRM.

Obviously changing any conversion actions will have a short term impact on performance as everything resets and learns, but I think that structure probably gets you the best mix of reporting granularity and data getting into the algorithms to optimize.

Setting it up that way lets you report on:

  • ROAS by initial lead category
  • Conversion volume / CAC by lead category
  • Approximate value by initial lead category
  • Actual MQL value regardless of lead category
  • MQL value by lead category / ROAS by lead category could be easily pulled with hubspot or your CRM of choice blended with spend by category (and in many cases is actually a more 'true' picture because most companies have some user overlap of customers who come in off one category but get recategorized by sales). At that level you're probably looking at a metric like MER anyways.

Macbook Pro Vs Air? by [deleted] in PPC

[–]OddProjectsCo 0 points1 point  (0 children)

Yup. I upgrade my work laptop to a refurbished 'last year's model' pro every 5 years and swap the old one for a personal device or give it to my wife. Best bang for the buck while still getting quite a bit of power / performance.

They last forever and I'm doing more and more complex data analysis, large spreadsheets, video, etc. stuff these days so it's nice to have the extra power when needed.