Inside The Ad Account: How To Do Creative Testing In 2026

Host Brad flies solo to walk through exactly how his team structures Meta creative testing across eight figures a year in ad spend. The episode is a tactical follow-up to a previous episode with Phil, prompted by a viral Twitter thread Brad posted on the same topic.

The core philosophy: stop trying to outthink Meta's algorithm on budget allocation. Instead, run a single CBO campaign per product or offer, treat ad sets as simple folders (not audience segments), and let Meta decide where the money goes. Brad backs this up with real ad account screenshots, shows why a higher-spending campaign with a lower ROAS can actually be healthier than it looks, and closes out with answers to the most common follow-up questions he received from the community.

Key Takeaways

  • Why Meta's algorithm is smarter than you at deciding which of your ads deserves more budget and what that means for how you should structure your campaigns.

  • The single reason you need to stop creating more ad sets and start stacking all your ads in one.

  • Your top-spending ad set has a lower ROAS than the one barely getting any spend. Does that mean you're burning money, or is Meta showing you something you're missing?

  • The truth behind what actually happens when you pull a losing ad into an ABO to force-test it.

  • How to build a creative testing system that generates consistent weekly output without needing a huge production budget or team.

  • What "spend is a signal" actually means for your creatives and how you should decide when to turn an ad off (You might actually be turning them off too early)

  • Is splitting your account into a testing campaign and a scaling campaign helping you, or just adding complexity that slows down Meta's learning?

  • How new visitor percentage and frequency metrics should change the way you interpret your campaign's ROAS.

  • The actual risk of running ads to two completely different customer personas inside the same ad set on broad targeting.

  • How to tell when it's time to question your creative quality versus just staying patient with your launch cadence.

To watch Phil’s episode, head here

This episode of the Scalability School podcast is sponsored by Northbeam, and they just launched Northbeam Incrementality. Northbeam Incrementality gives you easy, automated, self-service incrementality tests, while protecting you from the major mistakes so many people make while running incrementality tests. Your MTA handles the daily tactics, your MMM guides the long-term planning, and Incrementality provides the causal truth. It’s a closed loop that allows you to scale what works and cut what doesn't. Right now, when you head over to www.northbeam.io/incrementality, they’re offering Scalability School listeners 50% off unlimited tests for a year when you join. Just tell them we sent you!

To connect with Andrew Foxwell send an email to Andrew@foxwelldigital.com

 To connect with Brad Ploch send him a DM at https://x.com/brad_ploch

 To connect with Zach Stuck send him a DM at https://x.com/zachmstuck

 Learn More about the Foxwell Founders Community at https://foxwellfounders.com/

 Learn More about The Hive Haus Creators Community at http://HiveHausUGC.com


Full Transcript

I'm not reading it verbatim, but it's part of a broader discussion. But he said more substantial concept differentiation, assets with a high degree of visual conceptual contrast, distinct messages. This is creative diversity. What he is talking about is creative diversity back in 2023. And I know it feels like it's been a long time since 2023, but we've been talking about creative, not me and we, but like the internet, e-commerce Twitter, has been talking about this for years now. And I think that's really easy to forget.

The bet that we're making is that meta is going to be right on the allocation more often than we are. And I know there's conversations from people I respect very heavily on Twitter and in this space that say, well, if you have a high hit rate, then it doesn't matter. And I just, I'm not willing to make that bet. And I'm not just, we're not just throwing creative unintentionally up and saying, well, if it spends, it spends. But it is freeing in a way to know that you can, you can push the volume and know that if it's not going to deliver, it just won't deliver. Now, it doesn't mean be lazy on your ads and your creative strategy.

But what it does mean is you can know that you don't have to try to force spend into something. And that's frustrating. It can be a very frustrating thing. I've got one campaign and I've got two ad sets. And if you look at the top ad set, there's one thing that stands out. It has spent, what is that, like four times more budget than the ad set below it.

And it has a ROAS of a whole point lower. So top ad set, if you're listening, spent $197,000. Bottom ad set spent $55,000. Top ad set performing at a 2.63. Bottom ad set performing at a 3.4. Oftentimes people see that and they say, Brad, what are you doing?

Like why are you not scaling more into that specific ad set? Like you're leaving money on the table. It's performing 50% more efficient than the other one. And my answer is, well, Meta knows something that we don't about why that can't spend more. And what we've found is that there are usually some signals that give us context into what that is. Now, we can't say for sure because Meta is a little bit of a black box, but we can look at some other metrics to tell us, well, why is Meta making this decision that they're making?

And now, let's take a listen to the Scalability School podcast. Hey, everybody. Welcome back to another episode of the Scalability School podcast. I am flying solo today to talk about all things creative testing in 2026 through Meta. I'll move my face out of the way and jump into the presentation in a second. Just want to say, glad you're here and excited to go through this.

I know Andrew is off in, I believe, Lisbon hosting a Foxwell Founders event. He texted me this morning saying it's going amazingly well. And so really excited for him. And I'm sure Zach is off somewhere in the world slaying in socks. So I'm going to hold down the floor today by myself and jump into this creative testing deep dive. So we did a podcast episode a couple of weeks back with Phil.

It was fantastic. I've got it open here. If you're on YouTube, you can see this. If you haven't watched it, I would recommend pausing this, going back and watching that because that's kind of the setup to this. This episode is going to be a bit more tactical. I'm actually going to show you screenshots from ad accounts and show you how we do the setup and just be a little bit more specific with here's actually exactly how we do this.

Whereas in that episode, we talked a little bit more and kind of Phil and I agree, but we debated a bit more about what the options are and what people do for creative testing. And in a follow-up to that episode, this episode you're listening to right now is a follow-up to a follow-up because I made a follow-up post on X Twitter. I don't call it X. I don't like saying that. But I made a follow-up post and an article on Twitter talking about exactly this and kind of how we do this. So I'm taking this, but I'm adding a little bit more sauce to the podcast episode.

And I'm going to go into this. Both that episode and my article were some of the most popular things that we've put out. And so people are very interested in this information, it seems. And so I wanted to make a follow-up to that. I will say before we jump in, and then I'm going to get into this, and I'm going to rip through and try to make this the most tactical second-by-second piece possible. But I do notice that a lot of people kind of get really hung up on small tactical things.

I wouldn't say this is small necessarily. And I know that there are different opinions about how to do this. I think the most important thing is not to be dogmatic about what the right option is. And at the same time, you should have an opinion about how creative testing should be done, because then you can focus on the things that matter as opposed to being stuck into, you know, constantly trying to change up your ad account structure to see if you're missing out on what's next for creative testing. So pick a lane and stick with it and reevaluate every maybe couple of months or quarterly or once a year. This is not something that you probably need to be changing, certainly not weekly or even monthly.

So this is how we do meta creative testing in 2026 across over eight figures a year in meta ads. So let's go ahead and jump into this thing. So the two camps that people fall in, and we talked about this in the previous episode. Also, if you're not watching this on YouTube, I'm going to be showing some visuals and kind of giving a presentation. So I would suggest popping over to that. I'll do my best to explain things that are on screen, but just for clarity's sake.

There's two camps that people fall into. People generally fall into the camps of either you want to force spend into creative tests because you invested in creative and you want to know, is it going to work or not? And the other camp is letting meta allocate budget between the creatives that you're launching and the existing ads that you already have in the ad account. That's kind of fundamentally what it boils down to. Now, there are different ways that that shows up in the ad account. And the way that we like to do this is a single CBO campaign per product or offer.

Right. And so let's say we're selling pants. We will set up a single campaign for that pair of pants. Let's say we were selling a pant bundle. We would make a new campaign for that pant bundle. Right.

And the reason that we do that is because there's a different value assigned to that purchase event. And so we want to hold those things separate, give them space to breathe and test, and set targets specific to the offers and the product. And so there are definitely cases for expanding out from one campaign. But the way that you can think about this is if you are a single SKU brand, you can get away most times with a single campaign, with some caveats. The other caveats, and there's more, but one of the more common caveats is if you're running cost-based bidding. So whether that's cost-based bidding (a cost cap or bid cap) or value-based bidding (a min ROAS or highest value), you can have two campaigns that are basically the exact same structure.

The bidding type is just the thing that's different. So that's another way that shows up. So here's an example of what it looks like in one of our ad accounts. This is a little bit small, but this is basically the entirety of the ad account. Right. It's a single CBO that spent a little over 400 grand in this 30-day stretch.

And it's a single campaign, right? And so it's running on broad targeting, and it's a single SKU or offer. Right. So the offer is the same throughout this entire campaign on that specific product.

We're not running, you know, $50 off and 20% off or something; it's consistent. We do exclude lifetime customers. So there's a couple of ways that we do that. This isn't on my screenshot here, and it's not listed from worst to best, but I'll kind of explain them from worst to best. First, pixel purchases, 180 days.

So that's using Meta's pixel-based audience. It's fine, but it's not great. Klaviyo lists are also nice because they can sync up and be automated, and they're included in your Klaviyo plan if you have one. And then Shopify audiences. So Shopify audiences and Klaviyo audiences are similar in that they're both first-party data.

Shopify audiences just tend to be better, in my experience, from an exclusion perspective. There's a time and a place to target repeat customers, but we're not going to talk about that in this video. Probably. I may come back to it at the end. But for the most part, we are trying to reach net new people to the brand and, more precisely, net new customers.
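
To make that setup concrete, here's a minimal sketch of the structure using Meta's Marketing API Python SDK (facebook_business). This isn't the exact setup from the episode; the token, account ID, audience ID, budget, and ROAS floor are placeholders, and field values can shift between API versions, so treat it as an illustration of the shape: one CBO campaign per offer with the budget and bid strategy as the governor, and one broad ad set acting as a folder with the lifetime-customer exclusion.

```python
# Minimal sketch, assuming the facebook_business SDK (pip install facebook_business).
# All IDs, the budget, and the ROAS floor below are placeholders.
from facebook_business.api import FacebookAdsApi
from facebook_business.adobjects.adaccount import AdAccount

FacebookAdsApi.init(access_token="YOUR_ACCESS_TOKEN")
account = AdAccount("act_<AD_ACCOUNT_ID>")

# One CBO campaign per product/offer: budget and bid strategy live on the campaign.
campaign = account.create_campaign(params={
    "name": "Pants - Evergreen - CBO",
    "objective": "OUTCOME_SALES",
    "daily_budget": 50000,                          # in cents: $500/day
    "bid_strategy": "LOWEST_COST_WITH_MIN_ROAS",    # the value-based "governor"
    "special_ad_categories": [],
    "status": "PAUSED",
})

# One broad ad set acting as a "folder": no interest stacking, just geo plus
# the lifetime-customer exclusion (a synced Klaviyo/Shopify custom audience).
ad_set = account.create_ad_set(params={
    "name": "Broad - Folder 01",
    "campaign_id": campaign["id"],
    "optimization_goal": "OFFSITE_CONVERSIONS",
    "billing_event": "IMPRESSIONS",
    "promoted_object": {"pixel_id": "<PIXEL_ID>", "custom_event_type": "PURCHASE"},
    "bid_constraints": {"roas_average_floor": 20000},   # 2.0 min ROAS (value is ROAS x 10,000)
    "targeting": {
        "geo_locations": {"countries": ["US"]},
        "excluded_custom_audiences": [{"id": "<LIFETIME_CUSTOMERS_AUDIENCE_ID>"}],
    },
    "status": "PAUSED",
})
```

New ads then just get launched into that ad set until it fills up.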

I don't know that I'll talk about retargeting directly in this video, but I may touch on it a bit. And so that's how we set that up. And the bet that we're making, right? The bet that we're making with this campaign setup is that Meta is going to more often, and that's an important point, make a better allocation of the dollars than the human deciding to run creative tests. And again, that's the bet that we're making here, as opposed to running an ABO setup where we're forcing ad spend, or even a CBO with min budgets, or even separate campaigns for separate types of creative. The bet that we're making is that Meta is just going to be right more often than we are, assuming that we give them clear expectations of what our goals are.

So that's the premise. So what that means is that ad sets just functionally become folders, and folders is actually even generous given the way that we set this up, which you'll see in a second in my screenshot. But unless you're using ad sets to define audiences, like maybe you are using lookalike audiences or interest-based audiences, or you're trying to reach a specific cohort or age demographic or even geographic demo, right? That's a caveat that I would set aside for a different point. But if you're not using that, then ad sets functionally just become folders. Although you can dictate the targeting on the ad set level, if you just have two broad ad sets that are set up with the same exclusions, they functionally just become folders.

They're just containers for that. Now, some people will launch new creative batches into their own ad sets or unique customer personas into their own ad sets. The reason I push back on the idea of doing that is because, again, assuming you're going broad, which we would suggest doing, launching a new ad set doesn't tell Meta to go target anyone in particular on broad. That is done on the creative level. So you could have two ads that speak to completely different customers in the same ad set, and they will reach the person that they're supposed to reach after a little bit of time, right? Like, it'll take a little bit of time for Meta to pick up and get some conversions and start to go down the right route.

And a lot of people hesitate, and they say, well, learning is stored at the ad set level, and that's how meta is going to make decisions. That is not true. So you can take solace. I don't know if that's the right word here, but you can take comfort in the fact that if you have an ad targeting a 65-plus-year-old female and an 18-year-old dude in the same ad set, they will reach the person that they are supposed to reach, especially true if you have, like, landing pages that are super dialed for them, which is beyond the topic of this video. So ad sets functionally become folders. And so my face is going to be in the way of some of this, but that's okay.

So why do we have multiple ad sets then, right? So the simplest version of this, if you were setting this up today from scratch, you would have a single campaign with a single ad set, and you would continue to add ads in there until you hit the max limit. Sometimes that's 50 ads. Sometimes it's 150. This seems like it differs by ad accounts, and they slowly have been rolling this out since the introduction of ASC back in the day. But once you hit the max limit, then you add a new ad set.
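
As a sketch of that launch-time decision, here's roughly what it looks like in plain Python. The 50-ad cap and the folder names are assumptions for illustration; your account's actual per-ad-set limit may be 150, so treat the constant as a placeholder.

```python
# Hypothetical helper: decide whether a new batch fits in the newest ad set
# ("folder") or whether it's time to create the next one.
ADS_PER_AD_SET_LIMIT = 50  # some accounts allow 150; check yours

def pick_folder(folders: list[dict], batch_size: int) -> dict | None:
    """Return the newest folder if the whole batch fits, else None
    (meaning: create the next numbered folder and launch there)."""
    newest = folders[-1]
    if newest["active_ad_count"] + batch_size <= ADS_PER_AD_SET_LIMIT:
        return newest
    return None

folders = [
    {"name": "Folder 03", "active_ad_count": 50},   # already full
    {"name": "Folder 04", "active_ad_count": 46},   # newest
]
target = pick_folder(folders, batch_size=8)
print(target["name"] if target else "Create Folder 05")  # -> Create Folder 05
```

Older folders stay live; you never backfill them, you just keep stacking new launches into the latest one.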

And, you know, I think people get a little bit stressed out by this because what that means is, if you're looking at my screenshot here, I have five active ad sets with spend to varying degrees and different numbers of orders. The numbers in this case are just the order in which we launched them. And so you see that ad set four has the most ad spend, and that's because the ads in there are the best. We don't have a min spend forcing budget in that direction. Meta has allocated budget between these in the way that you see them showing up in here. And so I think that that is very helpful context.

And so the question that I had before we committed to doing this, and we've been running this setup for a little over three years now, is whether it matters. We used to segment the ad sets by creative batches or by persona, which I talked about previously. But I had a conversation with a Meta rep on Twitter and then separately about: does it matter? Like, do we need to be splitting creative by these different themes, especially if we're just running broad targeting? And they said, no, these are just functioning as folders. And so what I want to do is show you that this is from 2023, and it hasn't changed since then, because I had a conversation more recently with our direct Meta rep. And so this tweet from Yoni, who I believe is still at Meta, he said, more substantial concept differentiation is one of the ways to get this.

I'm not reading it verbatim, but it's part of a broader discussion. But he said, more substantial concept differentiation, assets with a high degree of visual conceptual contrast, distinct messages. This is creative diversity. What he is talking about is creative diversity back in 2023. And I know it feels like it's been a long time since 2023, but we've been talking about creative, not me and we, but like the internet, Twitter, e-com Twitter has been talking about this for years now. And I think it's really easy to forget.

And I asked the follow up. I said, so different messages to different people causes no issues with the delivery of the ad set, even if those people are widely different demos. So what I'm asking is, can I put my two different ads speaking to two different people in the same ad set? And he said, yes, that's the point. Fuel the system with more ad variety to reach a broader and more diverse audience of prospects with a propensity to convert. Give the platform more flexibility to decide where the next best dollar is spent across a wider range of ads.

Right. And so we lean into that. And that's how we decided to set up the ad accounts. Now, there are some objections that come with this. And one of the ones that we hear all the time is: if I don't force spend with ABO, how do I know that my ads will get a fair shot? Well, what we know is that spend is signal as a result of this.

And again, as I started this episode by saying, the bet that we're making is not even that Meta is always going to be right. The bet that we're making is that Meta is going to be right on the allocation more often than we are. And I know there's conversations from people I respect very heavily on Twitter and in this space that say, well, if you have a high hit rate, then it doesn't matter. And I'm just not willing to make that bet. And we're not just throwing creative up unintentionally and saying, well, if it spends, it spends. But it is freeing in a way to know that you can push the volume and know that if it's not going to deliver, it just won't deliver.

Now, it doesn't mean be lazy on your ads and your creative strategy. But what it does mean is you can know that you don't have to try to force spend into something. And that's frustrating. It can be a very frustrating thing to spend a bunch of time on a creative concept or a bunch of money on a creative concept. Again, my two cents is that you don't want to fall victim to the sunk cost fallacy and spend yourself deeper into the hole because Meta doesn't believe in the concept.

Because even if you're right one time for every ten times that Meta is right, you could still be in a massive hole of creative testing. And so that's kind of why we lean into the spend-is-signal philosophy. And what's great about that is you can actually see this playing out in real time. So this is a screenshot from Motion. Motion has this awesome view in their creative reports. No, no.

Yeah, this is not sponsored by Motion. But in Motion's creative reports, they have a little like graph, a line graph view where you can see how spend goes and trends over time. And you can do this in a bunch of different ways. You can do this on an ad level. You can do it on a landing page level. You can do this on the AI tag level now, which is amazing.

But I love this graph. I think it's one of the most valuable graphs that people just don't think about, because spend is signal. And you can actually see in real time how Meta is making decisions between all of your different ads. And so I think Shireen Albert coined the term tornado. Like she basically said, and hopefully I'm phrasing this correctly, you don't want to make funnels, you want to make tornadoes, because what you don't know is, at any particular time, what ad somebody is going to respond to that's going to get them to convert or get them interested and at least down the right path.

So what you want to do, and this I think is the real point of creative diversity, is try to have ads that speak to all of your potential customers at all the different stages of buying that they might be in, and answer all the questions they might possibly need answered in order to make a purchase. And that's both from a creative perspective, but also from a landing page perspective, which is probably time for a different podcast, because I think landing page diversity is widely slept on. And I think that's going to be one of the key themes for the back half of this year that people will finally catch on to: you might have an ad that is very top of funnel that runs to, you know, maybe a PDP. But maybe you have one that runs to a quiz page, and that's actually driving more net new traffic to the website. It lowers your cost per 1,000 reached, which I know is a big topic of discussion. But then you have a bunch of images that talk about HSA or promos or all of these different pieces, which we would consider, right, more bottom-of-funnel messaging.

But who is to say that you know exactly what the customer needs to see at this moment in time in order to convert. And so the point is, and the way that we think about setting up ad accounts more and more is: give Meta all of the messaging and all the resources. Right. And I think this is why you see, with Andromeda and their AI usage, they are trying to push towards letting them run more AI features, because they are trying to say, we actually know better than you to a degree that you don't even understand yet. And that's why we want to customize this.

Now, is it there yet? To be determined, and it's hard to say. But more and more we're thinking about how do we just set up the ad accounts to give it all the ads, give it a governor with cost controls and our goals, which is the way that we prefer to do things, and just give Meta the ecosystem of ads that somebody might need to see at varying points of the purchase journey and let them figure it out. And you can see that very clearly. Right. We've had ads that pop off really quickly and then fall off.

Right. And this did really well for a short moment of time, but then it kind of fell by the wayside. But it still makes up a decent chunk of spend. And so that's a little bit, I'm kind of going overkill on this graph. But I just think spend velocity is a really important signal to lean into.

Usually when winners hit, they hit fast. Now, there can be times where seasonality, you know, you've had an ad live for months and all of a sudden something happens seasonally and it shifts. And all of a sudden that becomes a winning ad again. Spend velocity in the short term is something that we pay a lot of attention to because spend is signal. OK, friends, quick break. This episode of Scalability School is brought to you by North Beam Incrementality, which honestly is so awesome.

Let me just tell you a little bit about it. If you're a marketer, you probably agree. Incrementality testing is broken. It's insane. You got to do all this crazy stuff. But with North Beam, you do it right in the platform.

It builds tests using your own MTA data, launches them directly in the ad platform and automatically monitors them throughout the test. And if something threatens validity, you're alerted to it within North Beam, before it ruins the results. The really nice thing is it solves all these problems you hear about incrementality testing all the time, like labor, spreadsheets, statistical inference. Designing tests is normally so slow, right? And error prone.

So with North Beam Incrementality, it's automated and accurate and your team is able to spend with confidence. Give it a shot with North Beam Incrementality. Really, the future of incrementality is here and North Beam has figured out a way to make it easy. OK. Well, what we don't like to do is turn off a ton of ads very frequently, right? Just because an ad isn't spending today doesn't mean you need to kill it today.

And just because it doesn't spend for the next three days or the next seven days doesn't mean you need to kill it over the course of that time. Now, is there a time where it's worth going through and cleaning up ads that are getting no spend? Sure. I think that that's a fair thing to say, especially if you're running up on page limits or something like that. If an ad has been live for two to three weeks and it's spending a dollar or less, then sure.

You know, you can probably feel mostly confident in turning that off. But just because it's not spending, there's not a lot of downside in leaving it on either. I think a lot of people get worried that they have too many active ads and that Meta can't distinguish between them. And I would just say that we've not ever really seen that to be the issue. There are some caveats to this, right?

I've got some caveats around when we turn off ads. If you've got an ad that all of a sudden is super clickbaity and your click-through rate is three times the average of your ad account, Meta is going to overspend on it because it has built up an understanding of your historical action rates. Which is basically: if you have a 1% click-through rate consistently and your conversion rate is 3%, and all of a sudden you have a 3% click-through rate, Meta still thinks your conversion rate is going to be 3%, at least in the short term on that ad. It's using historicals to make a bit of a bet about what will happen in the future. So over the course of two to three days, your spammy ad is going to rip on delivery because Meta still hasn't caught up. They will catch up. But if you watch that ad and you understand that it's very clickbaity, then it's totally fine to go ahead and turn that off.
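
To put rough numbers on that (this is illustrative arithmetic, not Meta's actual delivery model), here's why a CTR spike can temporarily inflate how valuable an ad looks:

```python
# Illustrative only: Meta's delivery system is a black box; this just shows how a
# CTR spike inflates estimated value per impression while the assumed CVR lags.
hist_ctr, hist_cvr = 0.01, 0.03               # account history: 1% CTR, 3% CVR
expected_rate = hist_ctr * hist_cvr           # ~0.0003 purchases per impression

clickbait_ctr = 0.03                          # new ad pulls 3x the usual CTR
assumed_rate = clickbait_ctr * hist_cvr       # 0.0009 -- system still assumes 3% CVR
actual_cvr = 0.01                             # but the clicks convert much worse
actual_rate = clickbait_ctr * actual_cvr      # 0.0003 -- no better than baseline

print(f"Overestimate until Meta catches up: {assumed_rate / expected_rate:.1f}x")  # 3.0x
```

Once enough of those clicks fail to convert, the estimate corrects and delivery falls back off, which is the "they will catch up" part.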

Okay. So let's keep going. So the real system, creative cohorts, and this is where you can actually find some rhythm and some cadence to it. And so we've kind of gone through, okay, what does the actual ad account structure look like? It's a single campaign with ad sets as folders. When you max an ad set out, you add a new one, and you just keep stacking ads until you find your winners, right?

But it's important to still look at your batches. So you can launch them on a weekly basis. You can launch them more frequently than that. I would just make it a goal to launch something every single week. And if all you can do for now is two ads a week, fine. Just do two ads a week.

But it's important to figure out the system to get scale in the first place. And then you can refine and get better over time. But creative strategy really thrives when it has more to work off of. So if you've got a bunch of cohorts of weekly launches, right, like you can see on the screenshot, and this is the new creative report, I believe, from Motion. It's fantastic. You can see how much creative was launched over the course of a week.

You can click the little dropdown, and you can actually start to see how much spend this batch of creative got over time. And what you'll see most times is that new batches aren't getting a ton of spend. Every once in a while, you're going to hit a banger. And that is just the nature of outlier ads: they're outliers. And it takes a lot to get those.
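
If you're not a Motion user, you can approximate that weekly-cohort view from a raw ad-level export with a few lines of pandas. The CSV name and column names here are made up; the only assumption is that you have one row per ad per day with its launch date, the reporting date, and spend.

```python
import pandas as pd

# Hypothetical export: one row per ad per day, with columns
# ad_name, launch_date, report_date, spend.
df = pd.read_csv("ad_level_daily_export.csv", parse_dates=["launch_date", "report_date"])

# Bucket ads into weekly launch cohorts, then sum spend per cohort per calendar week.
df["launch_week"] = df["launch_date"].dt.to_period("W").dt.start_time
df["report_week"] = df["report_date"].dt.to_period("W").dt.start_time
cohort_spend = (
    df.groupby(["launch_week", "report_week"])["spend"]
      .sum()
      .unstack(fill_value=0)   # rows: launch cohort, columns: week the spend happened
)
print(cohort_spend.round(0))
```

Reading across a row tells you how much Meta kept feeding a given week's batch as it aged, which is exactly the spend-is-signal view described above.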

But you can still stack up small wins on a weekly basis. I've seen ad accounts where one ad makes up 90% of their spend, and I've seen ad accounts where the top ad makes up 10% of their spend and nothing is actually over that. But what you're doing is you are stacking up the baseline. If you can just squeeze out $100 a day of spend across a couple of new ads here and a couple of new ads there, you just continue to stack the volume.

And that's the goal. Like I said, the goal is accumulation over time and not fixating on just one week of performance. Because sometimes it just takes time for things to actually pop off and spend. And you just want to consistently build a portfolio of winners, which is what you can see here. I'm recording this just a couple weeks after I took this screenshot.

But what you can see is we had a lot of winners over this creative cohort. This is ad set number four, like the screenshot you were seeing earlier. We have a ton of winners in this ad set. But you're not always going to hit a bunch of winners like this. The goal is across a bunch of weeks, you are hitting winners every once in a while. That just adds some amount of volume and scale to your ad account.

And if you keep making those bets, eventually you will crack the one that can spend 90% of the ad account's budget while you still have a bunch below it. Now, what I wanted to do was provide some screenshots and some context to different ad accounts. So you can see this in real time. And when we start to deviate from the single campaign structure, you can see why. And I can provide some context as to why.

So in this case, pretty straightforward. Single campaign spent $300,000 over the course of 30 days. Here's a different one. What you'll notice is there's a couple of differences in here. The top campaign is just run on bid cap. The one below that is a min ROAS bidding.

So the same content in both of those, just bid cap versus min ROAS. Then we've got a New Year's campaign, which was a promo-specific campaign. And then we've got an additional one below that, which was a different product. So the bid cap on top and the bid cap on bottom were just two different products. Now, the sourcing, interestingly, is actually combining the products. And because, while they're different price points, they operate on a similar target ROAS, we could combine those into one min ROAS campaign and feel more confident about that.

But the top and the bottom campaign, two different products, sourcing combined, but is running on min ROAS. We feel comfortable combining them. And then the New Year's campaign, which was a promo campaign, which we also split out as a separate campaign. In this case, we've got three different campaigns. This is for three different products. For this one, two different campaigns, two different products.

This one, three different campaigns, three different products. Two campaigns, two different products, right? Pretty straightforward on how it's actually set up. Now, there are some tools that are helpful in the process of doing this. Here's kind of how we think about creative. This is just like some nice stuff at the end.

Again, no proper affiliation with any of these tools, just things that we really like that help with the creative testing stack. Obviously, you've got Meta Ads Manager; appreciate it for what it is. But then we use Ad Managed AI. There's a ton of great ad launching tools out there. There's new ones launching every single day that everybody's vibe coding. But we use a tool like Ad Managed AI to launch all of our ads because we're constantly launching a ton.

We use Motion for performance review, cohort tracking, like I showed you. It's baked in. And then they have AI tags, right? So when I was showing the graph over time of how things spend and trend over time, that's baked into Motion. And it's, again, one of my favorite tools. So you can see not only how individual ads are trending over time, but different types of messaging, right?

We had a client last year where I did a six-month performance analysis halfway through the year. And what we looked at was: how did messaging trend by month? And they sell a product that can be used year-round, and they have some seasonality, but for the most part they do really well year-round. But every two months, the top-performing messaging changed. And so it's really important that we looked at the AI tags that Motion has. And, I'm going to have to anonymize this a bit, but when you look at how it works for winter versus how it works in spring versus how it works in summer, the messaging changed substantially.

Now, it was more specific than that, but there was always a new theme of something that worked better every two months. And if you don't track that, you don't know that, because seasonality can impact messaging very easily, as you can imagine. And therefore, you might have winners that look like losers all of a sudden because you're forcing ad spend into them, but it just might not be right for the time of year. But wait two months and it could pop off. So that's why I love those cohort views in Motion. Okay, I had a bunch of different questions that came up as I posted on X, and some of the replies that we got following the YouTube video or the previous podcast that we did with Phil.

And so I kind of chunked up the most popular questions that I got and just wanted to address those here as quickly as I can. So first was: how are you adjusting budgets when you launch these? Well, the good news is that the campaign is set at what I'm willing to spend on a daily basis. And because we run cost controls, it's usually inflated by 30 to 50%, which means if we are hitting our budget, it's because we're on target, which means we should open up our budget. The degree to which you open up your budget is relative to how efficient you are. You know, if you're super efficient, you're 2x your efficiency goal, then you can be more aggressive with 100%, 200% increases.

If you're right on your target, then maybe you want to be a little bit more cautious and keep it to no more than a 50% increase. And if you're not running on cost controls, how do you think about scaling it? Again, it depends on the degree to which you are above your targets. But if you're above your targets, 20 to 30% is safe, and I think you can feel confident increasing by 50 to 100% if you are smashing your target. I personally don't love running lowest cost or highest volume, whatever they're calling it these days.
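
Written out as a small function, that heuristic looks roughly like this. The thresholds are just the rough numbers from above, so treat them as a starting point, not a rule.

```python
def suggested_budget_increase(roas: float, target_roas: float, cost_controls: bool) -> float:
    """Suggested daily-budget increase as a fraction (0.5 == +50%), loosely
    encoding the heuristic described above. Rough numbers, not a rule."""
    if roas < target_roas:
        return 0.0                            # below target: don't scale yet
    smashing = roas / target_roas >= 2.0
    if cost_controls:
        # A cost cap / min ROAS acts as the governor, so you can be more aggressive.
        return 2.0 if smashing else 0.5       # +100-200% vs. up to +50%
    return 1.0 if smashing else 0.25          # no cost controls: +50-100% vs. +20-30%

print(suggested_budget_increase(roas=4.2, target_roas=2.0, cost_controls=True))   # 2.0
print(suggested_budget_increase(roas=2.3, target_roas=2.0, cost_controls=False))  # 0.25
```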

But that's how I would think about budget adjustments in that case. Are you running ads to your exclusions list in a different campaign? Yes. So I said I wasn't going to talk about this earlier, but there was a question about it. I got this a few times. We run specific retention campaigns.

Simply, our retention campaign setup is: we target lifetime customers, but we exclude people who are kind of active, right? Like, they have a chance of coming back; there's like an 80% or less chance that they do come back. And so we're targeting basically lapsed people, people who without seeing an ad would be unlikely to come back. Now, if you have solid volume, you know, if you have hundreds of thousands or millions of customers, you can run holdout tests on that fairly easily to see if Meta is actually incremental when you target those people. We ran this recently and we found that Meta was incremental when doing this for one of our clients who's got hundreds of thousands of customers.
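
A holdout test like that boils down to comparing purchase rates between the customers who could see the ads and a randomly held-out group who couldn't. Here's the basic arithmetic with made-up numbers; this is not Northbeam's or Meta's methodology, just a sketch of why in-platform ROAS and incremental ROAS can differ.

```python
# Made-up numbers: lapsed-customer list split into an exposed group and a holdout.
exposed = {"customers": 200_000, "purchases": 4_000, "revenue": 320_000, "ad_spend": 60_000}
holdout = {"customers": 50_000, "purchases": 250}   # saw no ads

exposed_rate = exposed["purchases"] / exposed["customers"]    # 2.0% bought
baseline_rate = holdout["purchases"] / holdout["customers"]   # 0.5% bought anyway

incremental_purchases = (exposed_rate - baseline_rate) * exposed["customers"]  # 3,000
aov = exposed["revenue"] / exposed["purchases"]               # $80 average order value
incremental_roas = incremental_purchases * aov / exposed["ad_spend"]

platform_roas = exposed["revenue"] / exposed["ad_spend"]      # what Ads Manager shows
print(round(platform_roas, 2), round(incremental_roas, 2))    # 5.33 vs 4.0
```

In this made-up case most of the reported revenue really is incremental, which is the kind of result described next; when the holdout's baseline purchase rate is high, the gap between the two numbers gets much wider.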

In platform, they run at like a 5X, and it came out to be like a 4X in the Meta conversion lift study that we ran. So it was definitely incremental. Another concern that people have is that new ads don't get spend. So I have a little bit of a side anecdote. I actually saw a tweet from, I believe, and I might butcher his last name.

So apologies, Nick, if you hear this, but Nick Therio, he's on Twitter, and I think he also makes some great YouTube videos, reminded me of an experience that I had with one of our clients. We've had multiple clients who are spending 10 to 20K a day and they are spending time and attention on making new creative assets, and we're also helping them with creative assets, right? And we were having issues getting spend into some new creative assets. We would have winners, and that has allowed the account to scale, but we were making so much new creative and there were obviously tons of batches that just weren't getting spend.

And so the hesitancy becomes, well, we're spending all this money and all this time on this creative. Like, we should try to force some spend into it. And so if the ad didn't get spend in the CBO, we would put it into an ABO and force spend into it. So it was a loser in the CBO and we put it into the ABO to give it a chance. Every single one of them failed. 100% of them. And this has happened every time I've done this.

And I would love, if anybody has had an opposite experience, I would love to see that, honestly, because I think it'd be very interesting to see. But my experience has been that Meta is right way more often than I think people give them credit for, at least among the people that have the ABO opinion. And again, there's plenty of people who have scaled with ABO. I'm not saying it doesn't work. I'm just saying this is my preference of what I've seen to work well. A lot of people ask, do you just keep adding creative to the latest ad set until it's full?

Yes, very simply. That answers that. The one clarification that kept coming up is testing campaign versus scaling campaign. We do not split testing versus scaling, just for quick context on that. And then I've had this thing come up several times, which is about your top-spending ads having a lower ROAS than other ads. And so what I wanted to do was pull an example from an account recently and just kind of talk through this.

So these are on an ad set level, but it still tells the same story. And what you can see, there's a couple of things going on here. So I've got one campaign and I've got two ad sets. And if you look at the top ad set, there's one thing that stands out. It has spent, what is that, like four times more budget than the ad set below it. And it has a ROAS of a whole point lower.

So top ad set, if you're listening, spent $197,000. Bottom ad set spent $55,000. Top ad set performing at a 2.63. Bottom ad set performing at a 3.4. So oftentimes people see that and they say, Brad, what are you doing? Like, why are you not scaling more into that specific ad set?

Like you're leaving money on the table. It's performing 50% more efficient than the other one. And my answer is, well, Meta knows something that we don't about why that can't spend more. And what we've found is that there are usually some signals that give us context into what that is. Now, we can't say for sure because Meta is a little bit of a black box, but we can look at some other metrics to tell us, well, why is Meta making this decision that they're making? And so things that I always like to look at, you can find this in North Beam, Triple Whale, Google Analytics.

Any one of those analytics tools will give you this insight: new visitors as a percentage of traffic from that campaign. And so on ad set number one, which is the one that has worse efficiency but way more spend, it has a 61% new visitor percentage. So out of the people that clicked the ads over this time period, 61% of them have never been to the website before and 39% of them have. Now that campaign as a whole is delivering a much better new visitor percentage, but this is just one example.

The ad set below that, the one that is way more efficient, is sending half as many new people to the website at only 33%. So, as you can tell, what I'm alluding to here is the top ad set is fueling the rest of the funnel. And that starts to become evident as you look at new visitor percentage. If you don't have access to one of those tools, or you're not comfortable in Google Analytics since they torched it, you can look inside of Meta and there are some signals that can lead you down this direction. And so the example that I'm showing in this screenshot is, again, the top-spending ad set performing worse on an efficiency level, but the frequency is a 2.59, whereas the lower-spending one has an even higher frequency at a 3.64. So it's a higher frequency on substantially less spend.

And what you can see also next to that is the cost per 1,000 accounts reached. So that's not CPM, that's per 1,000 reached, what people refer to as CPMr. And it is half of what the top ad set's is. And so the top ad set is just fueling the rest of the funnel. And that's why, again, it's one of the other reasons why we generally try not to tell Meta how much they should spend into specific things across the board, right?
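
If you want to compute those diagnostics yourself from an export, the arithmetic is simple. The numbers below are placeholders, not the actual values from the screenshot; the point is just how new visitor percentage, cost per 1,000 accounts reached, and frequency are derived.

```python
# Placeholder numbers only, shaped like the two ad sets discussed above.
ad_sets = {
    "top_spender":   {"spend": 197_000, "reach": 1_000_000, "frequency": 2.59,
                      "sessions": 260_000, "new_visitor_sessions": 158_600},
    "efficient_one": {"spend": 55_000, "reach": 560_000, "frequency": 3.64,
                      "sessions": 70_000, "new_visitor_sessions": 23_100},
}

for name, m in ad_sets.items():
    new_visitor_pct = m["new_visitor_sessions"] / m["sessions"]   # share of traffic that is net new
    cpmr = m["spend"] / m["reach"] * 1_000                        # cost per 1,000 accounts reached
    impressions = m["reach"] * m["frequency"]                     # frequency = impressions / reach
    print(f"{name}: {new_visitor_pct:.0%} new visitors, "
          f"${cpmr:.0f} per 1,000 reached, {impressions/1e6:.1f}M impressions")
```

New visitor share usually comes from your analytics tool rather than Meta, while reach and frequency come straight from Ads Manager's columns.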

Specific promo campaigns. This happens on Black Friday every year. You know, people have this debate on, are my Black Friday assets going to spend more than my Evergreen assets? Well, if you run them in campaigns that are on cost controls, you don't have to make that decision. You just leave room for them to spend into it and Meta will make that allocation. Now, again, there are super talented media buyers who maybe watch CPMR like a hawk.

They watch new visitor percentage like a hawk, they watch frequency like a hawk, and they can analyze these decisions. I just personally think it's not a great use of time when Meta can do it very well. So I think that helps kind of answer some of that question. So here's like a simple version recap of everything.

So if you want to pause and watch this, long story short: one CBO campaign per product or offer. Ad sets are functionally just folders; make new ones when you hit the limit. Build the system for making consistent weekly content and just focus on getting volume first, and then you can optimize for quality later, because I think quality is a bit subjective. Let Meta allocate the spend. Don't force it.

And then track your cohorts over time. If you go weeks and weeks without hitting any winners, then it's time to think about quality a little bit more. Like, what are you missing in your creative strategy that is leading you to launching a bunch of losers? And let spend dictate that. But the good news is, as you review those losers, you can feel a lot better knowing you haven't just blown thousands of dollars testing creative that wasn't going to work.

So if you have any questions about any of this, feel free to leave it in the comments. You can hit me up on Twitter. You can probably email Andrew. His email will probably be somewhere in the description. Happy to help, but wanted to show some more tactical follow-ups to how we actually do creative testing. Thanks so much for tuning in this week and we'll see you next time.

And yeah, sorry I missed beefs. I don't have any beefs this week. Just trying to make the world a better place one CBO at a time. This episode is brought to you by Brad's company, Work Marketing. If you need a D2C marketing agency, let me tell you, Homestead is great, but Work Marketing is also fantastic. And let me tell you, you aren't going to find friendlier people out there in the e-commerce space.

So we decided to do these little ads for each other's companies. So hopefully you find it interesting. But seriously, great team at Work Marketing. Very smart. Brad and Jordan are incredibly dialed in. I just gave them a lead.

They already made this brand, I don't even know, double the revenue that they had the previous month or something. So, you know, it's very exciting to be connected with Brad. And if you need a great agency, there's really no one better. Zach, anything to comment on Work Marketing? Yeah, I mean, if you want an agency that cares about your business much more than they care about their own website, I just tried to load workmarketing.com and it was broken.

So they're definitely going to give more of a shit about your business than their own. So I highly recommend Brad and the team over at work. They've been incredible. We've referred a lot of business over to them as well. Really, really good as far as like cracking funnels and figuring out like rapid growth for brands. So I highly recommend these guys.

Yeah, please go check us out on YouTube. Rack up those views for us. We'd love to see it. And then subscribe. Make sure to subscribe on YouTube as well. And I relentlessly refresh the YouTube comments because it dictates my mental health for the day.

So please say something nice about all of us. Thank you, everyone. Thanks for listening. Honestly.
