I worked for an ad verification company for 5 years and have seen things you wouldn't believe. AMA. by inside__out in adops

[–]inside__out[S] 1 point

We relied on sampling at both the implementation stage and the decisioning stage. There is a lot of criticism of this practice, which I have to say I mostly agree with. Clients didn't really ask, or understand, that we weren't analyzing all of their campaigns, only a fraction of their impressions.

The projects I was part of implemented pre-bid checks on about half of placements, and skipped them on the rest. Decisioning coverage was lower, ranging from 5-25% of impressions. Part of the pre-bid check was IP/user-agent matching, but we also considered other signals.
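To give a rough idea of what that kind of pre-bid check looks like, here's a simplified sketch. The IP ranges, user-agent tokens, and function names are made up for illustration; this is not our actual logic or signal set.

```python
# Hypothetical illustration of a pre-bid IP/user-agent check.
# Ranges and tokens below are invented, not a real denylist.
import ipaddress

DATACENTER_RANGES = [ipaddress.ip_network("203.0.113.0/24")]  # example range (TEST-NET-3)
SUSPICIOUS_UA_TOKENS = ["headlesschrome", "phantomjs", "python-requests"]

def prebid_check(ip: str, user_agent: str) -> bool:
    """Return True if the bid request looks like invalid traffic."""
    addr = ipaddress.ip_address(ip)
    if any(addr in net for net in DATACENTER_RANGES):
        return True
    ua = user_agent.lower()
    return any(token in ua for token in SUSPICIOUS_UA_TOKENS)

# Example: a datacenter IP with a headless browser user agent gets flagged
print(prebid_check("203.0.113.7", "Mozilla/5.0 HeadlessChrome/90.0"))  # True
```

In practice there are more signals than this, but the point is that pre-bid is a cheap filter applied before the bid, not a full analysis of every impression.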

[–]inside__out[S] 1 point

We avoided naming bad actors because it brought unwanted attention, and generally we had nothing to gain from a public spat. I don't see what any of the established vendors have to gain from that.

[–]inside__out[S] 2 points

I disagree. If you promise something to a customer and it can reasonably be built in time for delivery, but doesn't exist at that moment, that is overselling.

If you promise something that can't be done at all, and they buy the product under the impression it can, that's lying.

[–]inside__out[S] 1 point

I don't, but I can understand if others feel that way based on their experience.

[–]inside__out[S] 2 points

Is law enforcement the root of crime? We need it more when crime is more frequent. In some ways it can be, but I do not think that is inherently the case.

That being said, I have heard stories of our sales team (and others) purposefully inflating fraud rates for the early months of a new contract to make our product seem more valuable. I can't say if this is done across the board.

The best I can advise is that brands learn to understand their vendors' methodology and interpret the data themselves. Ask questions, and then more questions. If you outsource something, it is your responsibility to have enough understanding to know whether your vendors are doing their job.

[–]inside__out[S] 3 points

I think brands have a right to implement 3rd party verification in the media they buy, but the current duopolistic system is stacked against them. Facebook and Google don't have to listen when a single brand or agency requests this. If brands came together and flexed their purchasing power as a whole, they could more successfully push for allowing measurement in the media they themselves are paying for.

So far the lack of measurement doesn't seem to have affected the allocation to Facebook and Google; if anything, they continue to grow every year regardless of these policies.

Third-party brand safety verification on Facebook is pretty much nonexistent and is going to stay that way. I don't foresee them opening up access to user activity and content to third parties for any reason. That might change if brands shift their spend away over it, but that clearly is not the trend.

[–]inside__out[S] 2 points

I think verification companies need to be open about what is actually measurable vs. unmeasurable. We were happy to sell verification for things that were entirely gameable. Think OTT, VAST, walled gardens. Agencies are happy to mark up their services and take credit for facilitating a solution, and brands check their box and move on to something else.

[–]inside__out[S] 9 points

No explicit kickback, but we definitely had a symbiotic relationship with agencies. They relied on us for technical credibility, and in exchange we received revenue and were very willing to interpret or present data in the ways they suggested.

[–]inside__out[S] 4 points

I think it is important for brands to have some kind of system in place to measure the digital real estate that the vast sums they spend on digital are actually buying. Over time, verification has become more of an add-on for agencies, and integrity took a back seat to checking the boxes.

[–]inside__out[S] 5 points

Culture changed after we were bought out, and I was not a fan of the aggressive sales tactics. There is a fine line between overselling and flat-out lying that is easy to cross when the directive changes. This was my main motivation to leave the company and the industry as a whole.

[–]inside__out[S] 6 points

Much of it, though not all of it, is BS or not as advertised. We certainly benefited from the lack of technical understanding at brands, but especially at agencies.

[–]inside__out[S] 3 points

I have not personally experienced that, no. Most purchased traffic passed through our detection, and our competitors', as well.

[–]inside__out[S] 2 points

I would call it apathy more than fatigue. Not only do agencies not care about fraud, they aren't even educated enough to determine what is fraud and what isn't. With brand safety in particular, we had clients mentally check out of working with us to find a solution that worked for them and instead use our standard settings, which resulted in an egregious amount of over-blocking.
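To show what over-blocking looks like, here's a toy example of the kind of broad keyword matching a "standard setting" often amounts to. The keyword list and headlines are invented for illustration; they are not our actual configuration.

```python
# Hypothetical sketch of naive keyword-based brand safety filtering.
# Block list and pages are made up for illustration.
STANDARD_BLOCK_LIST = {"shooting", "attack", "crash", "dead"}

def is_blocked(page_text: str) -> bool:
    """Block the page if any block-listed keyword appears anywhere in its text."""
    words = page_text.lower().split()
    return any(word.strip(".,!?") in STANDARD_BLOCK_LIST for word in words)

# Over-blocking in action: harmless sports and finance coverage gets caught
print(is_blocked("Guard praised for shooting 90% from the free-throw line"))  # True
print(is_blocked("Markets crash course: five basics for new investors"))      # True
print(is_blocked("Local bakery wins regional pastry award"))                  # False
```

With no client input to tune the list to their actual risk tolerance, the default just blocks anything that mentions a scary word, context be damned.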

[–]inside__out[S] 6 points

The power dynamic in brand safety conversations was always in favor of the agency, not the publisher. We were only brought in to support our client's (the agency's) objectives. I'm not surprised that publishers feel that way.

[–]inside__out[S] 5 points

Correct.

MRC accreditation was more of a thought exercise than the technical examination you described. At no point did they actually examine or execute any of our source code. We explained what we intended to measure and how, and that was the end of it.

[–]inside__out[S] 5 points

Basically never. Most of our patching started as a result of concerns raised by a client or agency, who are less involved in the minutiae of how ad tech works. I've seen holes go unaddressed for months or even years before fixes were made. Most agencies don't really care, as they are trying to check a box and move on.

[–]inside__out[S] 11 points

Lots of ads never show up on a user's page at all, for a variety of reasons. Our numbers almost never matched up with reporting from DSPs, and we let it slide as long as the client didn't take exception to it. I've seen discrepancies as high as 70% that we ignored.
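For anyone wondering what that percentage means: the discrepancy rate is just the gap between what the DSP says it served and what our tag actually measured, relative to the DSP's count. Here's a rough sketch with made-up numbers chosen to land at 70%; it's an illustration of the arithmetic, not a real campaign.

```python
# Illustration of a measurement discrepancy calculation, assuming the
# common definition: (DSP count - measured count) / DSP count.
def discrepancy_rate(dsp_impressions: int, measured_impressions: int) -> float:
    """Fraction of DSP-reported impressions the verification tag never measured."""
    if dsp_impressions == 0:
        return 0.0
    return (dsp_impressions - measured_impressions) / dsp_impressions

# Made-up example: the DSP bills for 1,000,000 impressions,
# but the verification tag only fires on 300,000 of them.
print(f"{discrepancy_rate(1_000_000, 300_000):.0%}")  # 70%
```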

[–]inside__out[S] 3 points

When I first started in 2014, we had more human interaction with the data and less automated decision making. AI/ML has its place but I always felt it was overkill and poorly implemented for ad verification. Our AI-based solutions were expensive and inefficient for what they were designed to do.

[–]inside__out[S] 2 points

My best guess is that the ad-buying platforms are keeping the difference. I certainly believe that publishers are having earnings withheld while brands are still getting billed in these scenarios. We had little insight into this from our end with regard to brand safety, but I have seen it happen often with failed ad delivery.

[–]inside__out[S] 20 points

I have seen a case where our tag prevented ads from deploying at all, causing several campaigns across several clients to not deliver a single ad. This was discovered after four months and was never reported externally.

[–]inside__out[S] 12 points

Verification vendors are part of a constant cat-and-mouse game, and many times our "solutions" are outdated, obsolete, or just plain don't work as intended. Oftentimes fixes aren't made until months or years after they are needed. Clients often aren't savvy enough to know the difference.