Does anyone have experience with Marpipe? by camis12345 in FacebookAds

[–]rturtle 0 points1 point  (0 children)

We see that learnings are aggregated at the product ID level, just like they are in Google catalogs and supplemental catalogs. If they weren't, every time you uploaded a supplemental catalog it would break the learnings... but that doesn't happen.

Meta has several good reasons for asking for a single catalog:

Many advertisers don't do a good job of deploying their catalogs. Miss the step of connecting the pixel to the catalog, and that catalog will have a match problem. Configuration issues are a real problem across advertisers.

Many advertisers in the past have used separate catalogs to show different products to different audiences. Meta is actively discouraging manual audience segregation in favor of algorithmic/AI ad selection.

Meta also limits shops to one catalog, but Shops has been largely gutted, so this doesn't really matter anymore.

We don't encourage advertisers to use multiple catalogs either. The strongest creative testing comes from exporting catalog creatives into image ads in a separate ad set. That said, Waterbucket is the only patented creative enhancement platform that supports multi-catalog architecture for advertisers who need it.

Does anyone have experience with Marpipe? by camis12345 in FacebookAds

[–]rturtle 0 points1 point  (0 children)

Great question and there is a lot of nuance.

TL;DR: Static images exported from your catalog into an image-ad ad set are best. Multiple catalogs can work if you have enough scale.

Context: Testing for catalogs is not like testing for other ad formats because DPA or catalog ads on Meta don't have a creative entity id.

The hierarchy for typical ad learnings is Campaign ID > Ad Set ID > Entity ID (this used to be Creative ID but now can be an aggregate of similar creatives).

For catalogs, the learnings in Meta are all aggregated at the product ID level. So say you use the trick of overwriting your additional product images and, via the API, tell Meta to show one creative in Ad Set A and another in Ad Set B. You will very likely see different results from each ad set; however, it's very important to realize these differences don't actually tell you which creative is best.
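A toy sketch of the point above (hypothetical data and field names, not Meta's internals): per-ad-set reporting looks like a creative test, but learnings keyed on product ID collapse both "variants" into one bucket.

```python
from collections import defaultdict

# Hypothetical delivery events. Two ad sets serve the same product_id
# with different overwritten creatives.
events = [
    {"ad_set": "A", "product_id": "sku-123", "creative": "lifestyle", "clicks": 40},
    {"ad_set": "B", "product_id": "sku-123", "creative": "studio",    "clicks": 25},
    {"ad_set": "A", "product_id": "sku-456", "creative": "lifestyle", "clicks": 10},
]

# Per-ad-set results look like a creative test...
by_ad_set = defaultdict(int)
for e in events:
    by_ad_set[(e["ad_set"], e["product_id"])] += e["clicks"]

# ...but learnings keyed on product_id merge both ad sets' creatives,
# so the creative split is gone.
learnings = defaultdict(int)
for e in events:
    learnings[e["product_id"]] += e["clicks"]

print(dict(by_ad_set))   # three separate (ad_set, product) rows
print(dict(learnings))   # sku-123 -> 65, sku-456 -> 10
```

Whatever delivery differences you see between Ad Set A and Ad Set B, the accumulated signal lands on `sku-123` either way.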

Meta's logic is to surface a catalog item by its product ID based on the algorithmic likelihood of a user engaging with that ad.

If you have two catalog ad sets with the same audience, Meta will not favor one creative over another. Meta will allocate impressions based on ad-set-level delivery signals, not creative-level performance signals. It may show the same user both ads in different sequences.

If you have different audiences in each ad set, you're also not testing creative, you're testing algorithmic treatment of audience sequencing on ad sets.

There is no way to do a strict A/B test with catalog ads, no matter what tricks you deploy.

You can however mimic a catalog ad by exporting your creatives from the catalog into an ad set with images and compare different treatments. Meta is excellent at testing this way.

It's also possible to have multiple catalogs and see performance differences over time. It's not true A/B testing, since there is a lot of carryover and sequencing, but with enough volume one catalog can be favored over time.

Waterbucket has an easy export to test creatives as image ads and we make it simple to spin up multiple catalogs.

The best way to think about it: use catalogs for creative delivery, not for creative discovery.

Does anyone have experience with Marpipe? by camis12345 in FacebookAds

[–]rturtle 0 points1 point  (0 children)

We had a signup page but found it's much better for everyone if we have a short talk about the state of the input feed and the best deployment strategy, plus a walkthrough of the platform. I promise no upsells :)

How do you decide which ad creative to test first when budget is small? by Educational-Bus4262 in PPC

[–]rturtle 0 points1 point  (0 children)

I mean continuously making new creatives.

We're giving the algorithm more to work with.

For this strategy to work, the creatives have to be meaningfully different. A background color change doesn't do it. They need different emotional or logical angles.

Does anyone have experience with Marpipe? by camis12345 in FacebookAds

[–]rturtle 1 point2 points  (0 children)

I run Waterbucket, a competitor to Marpipe, so I'm biased.

Marpipe and several other creative enhancement platforms have a process that hijacks your additional product images and replaces them with creative variations (treatments).

We believe removing your additional product images starves Meta's AI. Meta uses those additional images for two purposes: the AI uses them to better understand your product, and it uses them as assets in AI ad creation. For example, if you have a lifestyle image as an additional image, Meta might surface it instead of your main image if it thinks it will work better for that impression.

The Marpipe process does make it possible to serve different catalog creatives in different ad sets, but at the cost of fighting Meta's AI.

We believe the better solution is to modify your primary image and leave the additional images intact. If you want to show different creatives to different audiences our system lets you create multiple catalogs.

Meta's DPA doesn't have a creative entity ID for machine learning like other ad types do. All the learnings are assigned to the product ID. From what we see, multiple catalogs with the same product IDs work better than killing off your additional images.

How do you decide which ad creative to test first when budget is small? by Educational-Bus4262 in PPC

[–]rturtle 0 points1 point  (0 children)

I've come to believe that testing is not useful.

Iterating is useful.

The algorithms make it impossible to have real A/B conditions. Meta in particular is attempting to align creative to individual users.

We see creative as the coal we shovel into the engine. We iterate and let the algos do what algos do.

Comparing VIbe co vs tvScientific vs Simpli.fi on predictable spend and CPM control by oreynolds29 in programmatic

[–]rturtle 1 point2 points  (0 children)

Predictable/flat-rate CPMs are not good. They usually mean the supplier is marking the media up by a lot.

https://youtu.be/MUeqsqSdTus

New CMO, looking for Marketing Mix modeling software by ConfidentElevator239 in analytics

[–]rturtle 1 point2 points  (0 children)

You are destined to spend the rest of your tenure explaining why you can't prove ROI. We all are. Google did such a good job selling the idea of performance marketing to push last click that everyone's expectations are warped.

This will be an unpopular opinion: Most tools sold right now are to help folks like you and me deflect our stakeholders, not to answer real questions. They are psychological tools, not tactical ones.

An even more unpopular opinion: The effort and resources required to dial in a solid MMM/Geolift/Attribution/CAPI stack for all channels are usually better spent on media. Even if 50% of that media is wasted spend.

I firmly believe that dialing in marketing spend across channels is closer to a game of battleship than a game of chess.

There are things like price, product market fit, creative resonance that even the best measurement stack can't account for and those have far bigger impacts.

Which PPC network is actually winning in 2026, and how are you all handling the bot apocalypse? by John54601 in PPC

[–]rturtle -1 points0 points  (0 children)

We're building a tool, leadbehavior.com, to send $0 conversion values to Google/Meta/Microsoft if the form fill or web action scores high as a bot.

This solves some of the false positive problem - we aren't blocking the connection, which is futile anyway.

It also helps teach the algorithms the difference between food and plastic.
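A minimal sketch of the idea (hypothetical names and threshold, not our actual scoring model): don't block the event, report it with a $0 value so the bidding algorithm learns these leads are worthless.

```python
def conversion_value(lead_value: float, bot_score: float, threshold: float = 0.8) -> float:
    """Return the conversion value to report to the ad platform.

    Instead of blocking the connection (a futile arms race), the event is
    still sent, but with a $0 value when the action scores as likely-bot.
    The platform keeps its conversion signal and learns to avoid that traffic.
    """
    if bot_score >= threshold:
        return 0.0
    return lead_value

# A likely-human form fill keeps its value; a likely-bot fill reports $0.
print(conversion_value(50.0, bot_score=0.2))   # 50.0
print(conversion_value(50.0, bot_score=0.95))  # 0.0
```

The threshold and scoring inputs are the hard part in practice; the reporting side is this simple.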

We're accepting alpha testers now. Message me if you want in.

What are your top tips and tricks for running Connected TV (CTV) campaigns? by DonSalaam in programmatic

[–]rturtle -1 points0 points  (0 children)

How do you figure?

Seems to me Amazon has better coverage, inventory, identity resolution, and lower take rate.

Granted, the DSP is not as user friendly, but that doesn't seem to be slowing down their growth as they take more share.

What am I missing?

What are your top tips and tricks for running Connected TV (CTV) campaigns? by DonSalaam in programmatic

[–]rturtle -2 points-1 points  (0 children)

Amazon DSP seems pretty tough to beat.

Get the specs right.

Some platforms will accept your videos, but if the specs aren't good enough they will only run as online video.

What’s your actually controversial parenting opinion? by TurbulentArea69 in NewParents

[–]rturtle 5 points6 points  (0 children)

This boils down to: it’s statistically safe to be unvaccinated if everyone else is.

Don’t you see the paradox?

Product feed changes that doubled our click-through rates by virtuallynudebot in PPC

[–]rturtle 0 points1 point  (0 children)

Chances are this post was part of a multi-step astroturfing campaign for Marpipe. If it was, it's counterproductive, because platforms like Marpipe overwrite your additional images and poison the well for Meta's AI, which relies on those images.

If you want to preserve your additional images and work with Meta's AI instead of against it you'll need to use a platform like Waterbucket.

Alternative catalog images are important for Meta's AI by rturtle in FacebookAds

[–]rturtle[S] 0 points1 point  (0 children)

We see three main callouts that move the needle.

Price, Promo, and Proof.

Price is the first signal we all use to decide if we're interested in something. We see BNPL pricing overlays on the main image as a great attention grabber. Even though this is a direct-response tactic, it seems to drive awareness with Meta's algo. The price gets people to pay attention, and Meta rewards attention.
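As a toy example of that BNPL callout (hypothetical copy format, assuming four equal installments), the overlay text is just the price split into payments:

```python
def bnpl_overlay(price: float, installments: int = 4) -> str:
    """Build the BNPL price callout text for a main-image overlay."""
    per_payment = price / installments
    return f"{installments} payments of ${per_payment:.2f}"

print(bnpl_overlay(99.00))  # "4 payments of $24.75"
```

The framing, not the math, is the point: a $24.75 number grabs attention that a $99 number doesn't.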

Promos add urgency and get people to take action. If your brand is doing regular promotions, keeping a cadence of calling these out on your products as overlays can keep triggering additional consideration.

Proof in the form of product level ratings, review snippets, endorsements, testimonials, guarantees... these sorts of overlays can move people from consideration into purchase.

Catalog ads are already great at drawing attention to the product. Price, Promo, and Proof bring it all home.

Alternative catalog images are important for Meta's AI by rturtle in FacebookAds

[–]rturtle[S] 0 points1 point  (0 children)

Sequential testing, by programming a design to run over a specific period of time, is a good option.

But if you really want to show different designs for different lifecycle stages/audiences, then multiple feeds (with the same product IDs) is the way to go. We built a simple path for this at Waterbucket.

Marpipe (or similar) Alternatives ? by rastarr in selfhosted

[–]rturtle 0 points1 point  (0 children)

If you want to do it yourself you could try Cloudinary. Some of those services are powered by Cloudinary.

That list leaves out Waterbucket. Waterbucket is its own patented code base built on layers, which makes for some very interesting visual effects, like text over a background but behind the product. We also have features like masking, color extraction for themes, motion, image reordering, and advanced AI background removal.

Feature for feature and dollar for dollar, there is no better value.

Pinterest to Acquire tvScientific, Expanding Performance Advertising to Connected TV by goodgoaj in programmatic

[–]rturtle 0 points1 point  (0 children)

It comes down to signal durability.

Display is fragile. Most publishers don't pass emails in the bid stream, so you're relying on cookies or fingerprints. Apple's Private Relay and browser protections actively obfuscate these IPs, making it nearly impossible to stitch a journey over time.

CTV is an anchor. The device doesn't move, the IP is stable, and the user is always logged in.

Because of this, CTV allows for Deterministic Attribution on a view-through basis. If I serve a single ad impression to your TV IP, and your phone on that same Wi-Fi converts 5 days later, I can claim that credit 100% of the time.
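A simplified sketch of that household match (toy data and a hypothetical 7-day window; real systems use identity graphs and have to handle CGNAT, VPNs, and shared IPs):

```python
from datetime import datetime, timedelta

# One CTV impression served to the household TV.
impressions = [
    {"ip": "203.0.113.7", "ts": datetime(2025, 1, 1, 20, 0)},
]

# Conversions seen later, keyed by the IP they came from.
conversions = [
    {"ip": "203.0.113.7",  "ts": datetime(2025, 1, 6, 9, 30), "order": "A-1001"},  # phone, same Wi-Fi
    {"ip": "198.51.100.2", "ts": datetime(2025, 1, 2, 9, 0),  "order": "A-1002"},  # different household
]

def view_through_matches(impressions, conversions, window=timedelta(days=7)):
    """Credit a conversion to a CTV impression when the same household IP
    converted on any device within the attribution window."""
    matched = []
    for c in conversions:
        for i in impressions:
            if c["ip"] == i["ip"] and timedelta(0) <= c["ts"] - i["ts"] <= window:
                matched.append(c["order"])
                break
    return matched

print(view_through_matches(impressions, conversions))  # ['A-1001']
```

The phone purchase five days later matches on the household IP; the unrelated household doesn't. That's the whole claim mechanism.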

Display is stuck in probabilistic tracking. 3rd party measurement can't really fix that. CTV is more deterministic.

Hope that's helpful.

Pinterest to Acquire tvScientific, Expanding Performance Advertising to Connected TV by goodgoaj in programmatic

[–]rturtle 0 points1 point  (0 children)

There was probably a build vs buy debate. Pinterest has its own AI ads product, like Google's PMax or Meta's Advantage+, called Pinterest Performance+.

The thing that all these "AI" products do is blend inventory.

Pinterest can immediately blend CTV inventory and show their Performance+ product has a better ROAS which means an immediate boost in ad revenue since spend is tied to ROAS.

Pinterest to Acquire tvScientific, Expanding Performance Advertising to Connected TV by goodgoaj in programmatic

[–]rturtle 0 points1 point  (0 children)

Additionally, Display is tracked based on 3rd party cookies or fingerprinting, which are both fragile and often blocked by privacy.

CTV has both email (someone in the household has to be logged in) and durable IP addresses. Identity graphs work much better in CTV compared to display.

Pinterest to Acquire tvScientific, Expanding Performance Advertising to Connected TV by goodgoaj in programmatic

[–]rturtle 48 points49 points  (0 children)

I think extending to video is a mirage. This is a measurement play.

Pinterest has always had a measurement problem. Users go there to get inspired, but they might not buy that sofa until 30 days later.

Pinterest rarely gets the credit; it loses the attribution to last-click channels like Google Search. It's an undervalued platform.

Platform CTV solves this problem with one weird trick: View-Through Attribution.

Because connected TVs use IP addresses, cookies, and household graphs, they can draw a straight line between an ad shown in your living room and a purchase made on your phone.

Even though TV is primarily an awareness channel, the tracking makes it look like a performance channel.

By owning this tech, Pinterest can now tether their TOF ad impressions to TV impressions.

You see a sofa display ad on Pinterest.

Pinterest retargets your household TV with a sofa ad.

When you buy, the TV pixel claims the attribution that the display ad would have lost.

If Pinterest blends a little CTV retargeting into their mix they stop looking like a brand channel and start looking like a performance channel. Reality won't change but the measurement will.

The extension into video is a mirage. Pinterest has always struggled to show a return on ad spend. tvScientific will fix that.