How To Test Creatives On Meta In 2026

In this episode of Scalability School, host Andrew Foxwell is joined by Phil Kiel of Taikun Digital and co-host Brad Ploch to tackle the single most-asked question in the Foxwell Founders community: how do you actually test creative on Meta in 2026? The question was asked over 160 times in just 45 days, making it the undisputed #1 topic among media buyers and brand operators in the community.

The conversation digs into two distinct camps of creative testing: forced spend (ABO) versus letting ads earn spend (CBO), and unpacks the real-world pros, cons, and trade-offs of each. Phil, who runs accounts at significant scale, makes a compelling case for the CBO-first, content-centric approach, arguing that the role of Meta's algorithm is to allocate spend to the best creative, not for buyers to manually dictate it.

The episode also confronts the volume trap: the pressure media buyers feel to launch massive quantities of ads every month. Phil's take is refreshing and practical: 8 to 10 strong concepts per month (roughly 40–50 assets) is a sensible starting point for most brands, built around strong creative briefs rather than raw volume. The hosts challenge the Twitter/X culture of flex-posting ad launch numbers, arguing the real metric that matters is creative quality and concept diversity.
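For anyone turning the concept-versus-asset distinction into a planning sheet, the arithmetic behind that guideline looks roughly like this. This is a sketch; the variations-per-concept figure is an assumption inferred from the 40–50 asset range, not a number stated in the episode.

```python
# Rough planning math behind the "8-10 concepts -> 40-50 assets" guideline.
# The variations-per-concept figure is an assumed illustration.

def monthly_assets(concepts: int, variations_per_concept: int) -> int:
    """A concept is the idea (angle, message, format); an asset is one
    deliverable cut of it (hook swap, aspect ratio, length, etc.)."""
    return concepts * variations_per_concept

# 8-10 concepts at ~5 cuts each lands in the 40-50 asset range
print(monthly_assets(8, 5))   # 40
print(monthly_assets(10, 5))  # 50
```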

Key Takeaways:

  • Are you forcing spend into creative tests and accidentally funding your worst ads?

  • Is the ad set structure in your Meta account actually doing anything, or is it just an expensive folder?

  • How many new creative concepts should you be launching per month?

  • CBO vs. ABO for creative testing: which one is actually right for your account right now?

  • Why launching too many ads may be training your account to spend on losers. 

  • What does it actually mean to 'let ads earn spend' and how to know when a creative concept has really failed versus just not getting a fair shot.

  • Why creative volume is the wrong metric to optimize and what you should be tracking instead.

  • What's the difference between a creative concept and an ad asset?

  • The one thing you should do today if you're a brand owner paralyzed by creative output pressure.

This episode of the Scalability School podcast is sponsored by Northbeam, and they just launched Northbeam Incrementality. Northbeam Incrementality gives you easy, automated, self-service incrementality tests while protecting you from the major mistakes so many people make when running incrementality tests. Your MTA handles the daily tactics, your MMM guides the long-term planning, and Incrementality provides the causal truth. It’s a closed loop that allows you to scale what works and cut what doesn't. Right now, when you head over to northbeam.io/incrementality, they’re offering Scalability School listeners 50% off unlimited tests for a year when you join. Just tell them we sent you!

To learn more about Phil Kiel and the Taikun Digital team you can follow him on X https://x.com/PhilKiel or head to https://www.taikundigital.com/

To connect with Andrew Foxwell send an email Andrew@foxwelldigital.com

To connect with Brad Ploch send him a DM at https://x.com/brad_ploch

To connect with Zach Stuck send him a DM at https://x.com/zachmstuck

Learn More about the Foxwell Founders Community at https://foxwellfounders.com/

Learn More about the Hive Haus Creators Community at http://HiveHausUGC.com


Full Transcript

(00:01) And then you're launching these ads into those ad sets and you're allowing Meta to either spend or not spend. And there is a whole host of pros and a whole host of cons. The whole host of pros mainly falls under: you're probably going to lose less often, lose fewer tests, also lose less money, cash, profit. The con is you're probably going to make a lot of ads that don't ever receive any spend.

(00:24) A bit more like how I like to think of apparel and fashion, which is you walk into a department store for a brand. Each section of the shop is a different campaign. So you've got like jackets and you've got men's wear and you've got women's wear, rather than a testing area of the shop and a scaling area of the shop.

(00:42) I like that. I'm not a Zuck apologist here. So like, I'm not dying on any hills, but I think for most of the time it's like, no, you don't need to make room. If your budget is set at the campaign or ad set level where you can get out of learning, then you should trust that Meta is, 90% of the time or more, going to make the right allocation.

(00:57) Brand owners or founders over the last couple of weeks where they felt really guilty because they're not hitting this volume of ads that's going live. Because I've got to work on something creative. I've got to work on something creative. And instead of launching any, they're just launching none. And my recommendation was always the same.

(01:12) It's just like, forget about trying to launch a certain number. Create one piece of content as if it was going to be posted organically. And instead of posting it to organic, just upload it to the ad account. All right. Welcome to Scalability School, an episode about how to do creative testing in 2026 and beyond, really, hopefully, but you know, how to do creative testing today.

(01:42) Insane guest for this. We have Phil, who is an absolute legend in the DTC space. It's Phil Kiel, of course, from Taikun Digital. He's somebody that people follow all over the world and is an absolute legend. And so Phil is here to help talk about this at 9 p.m. his time. So thank you, Phil, for joining.

(02:04) I would for sure not have said yes to this. So thank you. Well, I've slept all day and I've awoken like Dracula. And of course, we have the always handsome Brad here with us as my co-host. Rumor is that could be coming back soon. We'll have to see. He is currently, you know, doing a million things and raising a child.

(02:23) Still on dad leave a little bit. So we look forward to that. One thing I want to mention at the top of the episode is we do drop in the newsletters on scalabilityschool.com. You can sign up for our email newsletter, worksheets, and extra perks and things. So you may want to go over there and sign up for the email at scalabilityschool.com.

(02:44) Make sure that you have, you know, kind of all the resources that you're getting or that we talk about on here. Those sometimes get sent out via email. So, you know, a little plug there at the beginning. So going into the episode, talking about how to do creative testing. Number one thing to say is this is the number one question in the Foxwell Founders community.

(03:00) So I got a report on our bot, which is within the Foxwell Founders community. You can basically ask a question to the AI bot, which sources answers from, you know, from Slack, basically from our community. And this is interesting because it came back and said that the number one question that people are asking it over the last six months... it was asked one hundred and... oh, yeah, forty-five days.

(03:28) It was not six months. It was asked over one hundred and sixty times in forty-five days: how do I do creative testing? And this is something I asked Brad, and he's like, yeah, you just like rip it. I'm like, but what does that mean? So we need to get into talking about this. And Phil is a perfect guest to get into it.

(03:47) So number one on creative testing, two camps. I think there's the forced spend under ABO creative testing. And then there's the second camp, which is let ads earn spend. Brad, you want to give a little more color on this? Yeah, I can. Before I say that, Phil's Phil. I go to Phil for two reasons. One, incredible Twitter content, amazing learnings.

(04:10) I really enjoy what you put out on the Internet. Two, when I'm trying to convince my wife to let me get my hands tattooed, I'm just like, look, look at how sick this guy looks. You're making the first mistake where you're asking the wife. That's true. Yeah, that is a mistake. I just need to show up. I just need to show up.

(04:29) OK, let's talk about the two camps. Phil, would you prefer to take the CBO side or the ABO side? Not because we're picking one as better, but we can explain it. I'll let you explain whichever one you prefer. I mean, it's a tough one. So I'm more of a CBO fan. I am. We are running both.

(04:46) Definitely. We are definitely running both. And I'm much more of the mindset of what's right for the business right now. Yeah. Yeah. Makes sense. OK, so let me tee up the ABO camp. You tee up the CBO camp. And we'll talk about pros and cons, just kind of go back and forth. So on the ABO side, to Andrew's point, there's really two camps that people kind of fall into.

(05:06) And there's like mini versions of both of them that kind of exist across ad accounts. But it's basically: we force spend, or we let Meta decide. And there's a little bit of a middle ground in the middle there where you can do the mid budget, things like that. But I'll talk about the forced spend, which generally looks like the ABO creative testing side.

(05:22) And I'll just present a couple of ways that you can do it. We can talk about pros and cons after we present them both. So on the forced spend side, basically what this looks like is, when I say ABO, if you're not familiar with that, ad set budget optimization, each new creative test that you're launching gets its own ad set with its own dedicated budget.

(05:41) And you're telling Meta, I want to spend these dollars no matter what the efficiency is, whatever, like I want to spend because I'm going to learn against this creative. That's what I'm trying to do. There's a couple of ways that people split their ad sets then on top of that. So you can batch ad sets based on like creative batches.

(05:55) So maybe you have like a weekly cadence for producing creative and your team submits a batch, and you get the batch and you can launch like that. There are people who, I laugh, but I've seen it and I'm sure it's fine in many cases, will set up a single ad set per ad, even to a degree where they're running literally one at a time.

(06:14) And they're just like trying to say, okay, does this work? And it's a single ad with its own budget. There's a couple of mini variations of that, similar to batch, where maybe you have a video concept, you have three hooks for the video concept, and you launch that into an ad set.

(06:27) The last way that I generally see these run is some kind of grouping. So it's an ad set budget, but I have a theme behind the way that I'm grouping my ad sets. So maybe it's handsome men with glasses and a beautiful beard, like that's who I'm targeting, and then the mustache group, like those are the two groups.

(06:48) And your creative tests go into those ad sets. But the big takeaway from camp number one is you are setting up mainly ad sets where you're forcing spend into creative tests and saying, Meta, I don't really care what the results are. I'm framing that maybe a little bit poorly, but like, I'm going to spend these dollars no matter what, because I'm going to use the learnings of that to dictate what I want to do, as opposed to this other camp, which Phil, as you jump into that, which is the letting

(07:14) ads earn spend. And I wonder if you agree with the framing of that camp, which is forcing versus not forcing. Yeah, I think you've explained it beautifully there, Brad. And yeah, the CBO approach is where you're controlling more of the activity outside of the ad account.

(07:33) And then when it goes into the ad account, you're essentially letting the babies leave the nest and having them fly or fall. And we can keep going on that metaphor or we can leave it there. But yeah, you're essentially having a CBO campaign, campaign budget optimization, ad sets per whatever. There's lots of options there and we can talk through that.

(07:57) And then you're launching these ads into those ad sets and you're allowing Meta to either spend or not spend. And there is a whole host of pros and a whole host of cons. The whole host of pros mainly falls under: you're probably going to lose less often, lose fewer tests, also lose less money, cash, profit. The con is you're probably going to make a lot of ads that don't ever receive any spend.

(08:24) The counter argument to that is, if they don't receive spend, how often, when you then force spend into those ads, is Meta's decision just confirmed as, you know, the right thing to have happened? And it's quite often that that is the case. We have, you know, zombie campaigns, which we can talk to, which is sort of like round two, and which is always a good thing to do.

(08:50) I think the main thing that I think about is what's right for your ad account right now. And there's so much content out there on the internet around brands that are doing amazing things and the structures that they're using. And so often I audit ad accounts, and we're getting into the why rather than maybe we should just talk about how first, but so often I audit ad accounts and I'm like, when was the last time you tested an ad that won, and you're still doing ABO testing? It's like, you keep banging your head against

(09:21) this brick wall. And yeah, I think that's usually where we then make the suggestion to move to CBO. I mean, so much of it is like, I think a lot of the ABO method was built from what people thought was the right thing to do and where a lot of the education was focused for a long time, which was the graduation method.

(09:40) You put it in there, it's separate, it's clean, right? And then you don't do that. And I think the reason that CB... well, let me wrap that up. So you, you know, take an ad, and if it works well, you put it into a scaling campaign. Well, this is not something that I think the majority of us are doing anymore.

(09:57) You know, it just kills the performance of it. And I think if you're still doing this, you're putting yourself at a disadvantage. And so, you know, now what we're talking about is putting it into a CBO campaign, which goes on the idea of, you know, consolidation. But also the second option of the CBO campaign in that camp, which, as I think you'll learn through this episode, we're certainly more in, is you have to have a certain amount of budget in that CBO campaign to really, like, be able to do this.

(10:28) Is there a way that you calculate that, right? Like, let's say that you have a CBO that's running and you're going to launch a new creative into it. And the budget, hypothetically, is a thousand dollars a day. Okay. If you're going to add 10 new creatives into it, are you going to increase that to $1,500 a day as you're testing it? Or are you going to keep it the same and just say, we're going to let it compete against these other ones, because spend is the metric, right? And I was talking to Harry, Harry D, shout out Harry Demelish.

(11:01) He's just an incredible advertiser and creative strategist, and I was talking to him about this. And he really is like, look, spend's like the only thing. And we've talked about that on this podcast as well, right? You can talk about creative metrics. Somebody, I noticed this morning, asked you in some of your social content about how you track, Phil, your win rate, which I'm interested in hearing about too.

(11:20) But I think, is there a budget there that you have to think about? Or can you just keep tossing stuff in the same CBO? In the agency, we scale based on performance, not based on launching new ads. And if the performance isn't there, we're launching new ads to try to improve the performance, rather than: we're in a hole,

(11:41) let's get a bigger shovel, so we dig ourselves into an even bigger one. We're going to improve performance by launching different ads. And there's a very big chance, especially if the performance isn't there, there's a very big chance that new ads will get spend. And that's a good place to be in.

(11:58) The bad place to be in is when your highest spending ads aren't good enough, because they can become a bit of a crutch where, you know, you've got to keep them live to drive that performance. But when you launch anything new, it doesn't get spent. And then you have to ask, okay, why aren't those new ads getting spent? That then, I think, is a question around everything else other than testing method: offer, message, angle, you know, the creator, product-market fit, all of those sorts of things.

(12:26) And I think that's a much bigger lever than ABO versus CBO. I think if performance isn't there, moving from CBO to ABO will rarely improve performance in the short term. So we should probably end the podcast there. Just like that's, all right, lesson learned. I did write it down as well. If you're asking the question, ABO or CBO, and this isn't to put anyone down, but I think if you're asking the question, ABO or CBO, you should probably choose CBO.

(12:53) And that goes back to, you know, a lot of the content that we see online and, you know, all these huge brands that are growing massively year on year. They're so aggressive. Their margins are so aggressive. Their performance targets are like 0.8x ROAS, for example, from an acquisition point. They're in a very good place to do ABO.

(13:11) Yeah. The budget point is an interesting one, Andrew, because it's like, Phil, you answered it exactly how I would have. So it's like, I'm going to say it's brilliant, but it's because I totally agree. It's like, yeah, you shouldn't need to make room. And like, we will have people, I'm sure, Phil, you've experienced this, where it's like every once in a while, a client will send you an audit.

(13:34) And it's like, hey, did you see that they said you have too many active ads in your campaigns? And it's like, well, what does too many ads mean, and how do you define it? Because the assumption you're making when you say that is that Meta is treating all these ads equally and spending against them equally.

(13:51) And I think it really is like, should you increase budget? I mean, the 80% answer, I think, is no for most people. Where it can get weird is if you have a really high AOV product and you launch a ton of creative all the time. And, to your point, Phil, it's like, when you launch something, it's going to get some spend.

(14:10) It might not be a ton, but if you launch a hundred ads and you have a $500 AOV, Meta is going to spend against all of that. And so it's going to try to use a good chunk of your existing budget to go towards stuff, because it wants to give it a chance, at least at the beginning. And so that can screw up efficiency in the short term. But otherwise, it's like, no, you don't need to increase the budget, because Meta is allocating.

(14:29) And I'm not a Zuck apologist here. So like, I'm not dying on any hills, but I think most of the time it's like, no, you don't need to make room. If your budget is set on the campaign or ad set level where you can get out of learning, then you should trust that Meta is, 90% of the time or more, going to make the right allocation decision.

(14:48) You know, let's talk about bidding as it relates to this and doing creative testing. So, you know, sometimes I've heard people saying, look, we do all of our testing, or, you know, we run it all on lowest cost, or we run it all on min ROAS, or I've even heard things about, you know, minimum spend with new creatives.

(15:07) Number one, do you put in minimum spend? In your opinion, is that a good thing to do? And number two, what do you think about the bids as it relates to testing within CBOs? So I want to just real quick say, okay, if you're coming to Brad and Phil to make a recommendation on what you should do in your ad account, we're both collectively leaning towards, hey, the CBO approach is probably the right one.

(15:31) So the reason to stay away from ABO, to your points, is to try to avoid this, because you're going to be forcing spend. And if you don't have a good hit rate or success rate with your creative, or you've been, you know, bad at making ads for a while, or your offer sucks, or whatever it is, you're picking up a bigger shovel and digging yourself a bigger hole.

(15:47) So if we're leaning towards the CBO camp, what does the CBO camp look like? Which I think is the path that Andrew is starting to go down. And what does that actually look like? So we're saying you should probably go CBO. So within CBO, there's now a bunch of different ways to set that up.

(16:01) The way that we generally do it, and I think this might be a little bit different from how you guys do it, and I think you'll be able to better answer Andrew's actual question as a result of that, is we generally have a single campaign. Like, let's say, we'll use Zach 'cause he's not here.

(16:15) We're selling socks and we have one specific sock product. We're setting up a single campaign for that SKU of socks, with one ad set in there for the socks. We're not launching new ad sets in most cases. When we have new creative, we're launching everything into that existing campaign, regardless of who it's speaking to.

(16:29) Because on an ad level, and I think I have a point saved on this somewhere, Meta is still making a decision around how much to spend on each individual ad based on your total budget. It's not looking at who it's delivering to. Like this ad set learning thing, I think is- Yeah, you're giving it the freedom and you're segmenting it by product.

(16:48) Yes, exactly. And by product is not because we're trying to force creative learning to a specific product. It's because those products might have a certain margin profile. It's because that's where you're making your money. Yeah, right. Yeah, exactly. I'm just printing socks, you know? Yeah, exactly.

(17:00) But yeah. Yeah. It's like, I'm trying to build a funnel around socks because they have a certain margin or a certain offer profile or whatever it is. And so what we're doing, I guess, to wrap that up, is a single ad set in most cases. And we're just launching new ads into it until we max it out.

(17:14) Very rarely are we going through and pausing a ton of ads. Again, not that we don't do it. It's just rare that we do it. So that leads to the min spends and those variations. I guess I'm curious if you are doing the budget and bid adjustments, or how you're looking at that. We don't use minimum spend very often, but we have done in the past, and I like the idea of it.

(17:35) And they did bring out, so the first iteration of that product or feature was cash-based. So we've got the campaign budget at a thousand dollars. We can say ad set one at a hundred dollars per day minimum, ad set two at $200 per day minimum. And then, you know, that's $700 remaining. That will be spread wherever Meta wants.

(17:58) The latest development of that, which I think was a couple of months ago, was percent-based. So you could essentially say 10% of spend. And then as you scaled the total campaign budget, that would mean, you know, that 10% stays relative. And I think there's a logic to why that is attractive, but we don't use it whatsoever.
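To make the two minimum-spend modes Phil describes concrete, here's a small sketch of the arithmetic. This is illustrative only: it mirrors the budget math as described on the show, not Meta's actual delivery system.

```python
# Budget math for the two ad set minimum-spend modes described above.
# Illustrative only; this is the arithmetic, not Meta's delivery logic.

def free_budget_after_cash_minimums(campaign_budget: float, minimums: list) -> float:
    """Cash-based minimums: fixed dollar floors are reserved per ad set;
    whatever remains is spread wherever Meta wants."""
    return campaign_budget - sum(minimums)

def percent_minimum_in_dollars(campaign_budget: float, percent: float) -> float:
    """Percent-based minimums: the floor scales with the campaign budget,
    so it stays relative as you scale."""
    return campaign_budget * percent / 100

# $1,000/day campaign: ad set one at $100/day minimum, ad set two at $200/day
print(free_budget_after_cash_minimums(1000, [100, 200]))  # 700

# A 10% floor doubles in dollar terms when the campaign budget doubles
print(percent_minimum_in_dollars(1000, 10))  # 100.0
print(percent_minimum_in_dollars(2000, 10))  # 200.0
```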

(18:19) And the main, yeah, we don't use it. The main reason is because quite often we'll launch ads and they'll get spend in some format. If we launch five ads, two of them will get spend. And if they don't, then that's a pretty strong indication. The bids, I think, is an interesting topic and a good argument.

(18:38) And this moves away from sort of how to test creative and more into sort of lowest cost and bid caps and stuff. We don't do any manual bidding on creative testing. And we do do some of it on sort of scaling. I think it's just like, you know, you've got all these... Yeah.

(18:55) I mean, what's the difference for you from creative testing versus scaling? The reason I ask this question is, by the way, Taylor Holiday a hundred percent listens to this. So if we talk about cost caps the wrong way, he's going to flame us all on X. Okay. So actually, I'm sure he doesn't listen to this.

(19:13) All right, friends, quick break. This episode is brought to you by Northbeam, the marketing attribution platform that we love over here at the pod, and good news: Northbeam is launching Northbeam Incrementality. Northbeam Incrementality gives you automated, easy, self-service incrementality tests while protecting you from the major mistakes.

(19:31) So many people make while running these incrementality tests. 'Cause honestly, most of these incrementality tests are kind of a mess, right? So what I love about it is it gives you a causal truth you can act on every day, not one-off siloed reports. And it does it end to end. So it automates lift testing.

(19:49) It connects with your MTA and MMM so you can scale what works and cut what doesn't, all in one dashboard. That's the real game changer. Now that Northbeam has Incrementality, it means you can run the absolute trifecta of marketing measurement in one single platform, all using the same data. Northbeam Incrementality launches this quarter.

(20:08) Get over to northbeam.io/incrementality and be the first on the list. And they're offering 50% off unlimited tests for a year to advertisers who join now. Tell them we sent you from Scalability School. Can't wait. Actually, I'm sure he doesn't listen to this, by the way. But you do, listener, and you are freaking awesome.

(20:29) So good job, because you're smarter than most people on the internet by listening to this podcast. So in that sense, what's the difference between those two, scaling versus creative testing? The only place where that will be really dramatically different is when you're using something like inflated budgets, where you're setting the budget at like $20,000 per day, but it's only spending 2K, 3K.

(20:57) And structurally, this would be you taking banger ads that have worked and keep working, and they're in a CBO that's been running for a while or something. Yeah. And you're going to jack it up to let it spend when it's really good. And that's like a scaling thing.

(21:20) So these are a group of ads that have started as babies, that have grown up, that are now in their own scaling thing. And you have a new CBO that you're testing other stuff in. Is that correct? Yeah, correct. We don't do that very often. And I don't know how you would test creative if you were using an inflated budget structure, because you're essentially saying you can spend as much as you want here when the performance is good, but I'm going to force spend into these assets over here.

(21:47) And it feels like you're trying to do two things at the same time. And there's also going to be a hell of a lot of overlap. So you've been asking, you just asked, how would we do that? And I've answered the question, whether or not we would do that. No, no, no. I mean, I'm not talking about that. Like, I'm not talking about you inflating on a testing one.

(22:07) I'm talking about on a scaling one. So like, how many ads have to get in? Like, one of the things I've heard within the challenge of, okay, it's a CBO testing structure, is: won't I have too many ads that are live? Like, I have too many ads that are live, you know? To answer that question, if you've got too many ads live and your performance is good, then you would increase the budget.

(22:31) And that increased budget would therefore go back to your original question, you know, earlier on, which was, when you launch more ads, do you increase the budget? So if performance is good, you can double the budget. In theory, that gives more room for assets to go live. For assets, yeah. The way I'm sort of thinking about it now is more from like an organic marketing structure.

(22:56) And it's less about, it's less about scientific test and scale and duplicate, which I think is where we were sort of five, six years ago. And it's more around make the best possible content that we can for the brand, try and learn along the way and have a piece of strategy and then scale it most efficiently.

(23:16) Yeah. And so that's kind of your mentality with it, which is not trying to do much manual stuff, but letting Meta decide. I think that, I mean, I completely agree. It's content centric versus, you know, logistics centric and lever-pulling centric. So what I wonder is, let's say that you are scaling, it's doing well, and you want to launch a... well, in what instance would you launch a brand new CBO then? Brad, do you want to answer that?

(23:49) Or do you never launch a brand new CBO? I mean, like, I've heard both. Like, some people will say, look, I'm going to do a new CBO every quarter, 'cause it becomes too much. I don't want to have everything dependent on this one. You know, some of it's obviously product or offer or funnel centric, right? They're going to launch a new thing.

(24:08) And it's like, that's what we're going to do. What's your opinion on this, Brad? So, generally speaking, we have campaigns by product or by offer. So like, let's say we sell the same product, but we're splitting it up into different offers. We'll generally make a new campaign for the new offer because we... Taylor is listening.

(24:27) We do run a lot of bids, so please don't yell at me. But no, because we're setting bids differently based on the goals of that campaign, we want them to be separate. You could do it within a CBO, but okay, so let's go back to our socks example.

(24:42) If I have a sock campaign and it is, let's say, you know, lowest cost or cost cap, 'cause it's bidding on the same thing... well, functionally, depending on if you're putting a limit on it or not. So you've got a sock campaign on lowest cost or cost cap, and you've got your single ad set in there.

(24:58) We will just keep launching ads in there. And the only time we'll make a new ad set is if we ran out of room. Thankfully, in most cases now they've increased the limit on that to 150 from 50, but there are still accounts that are limited at 50. So what it might look like is we have a campaign with a bunch of ad sets in it that are all in service of the same thing.

(25:17) They have the same bids, but it's all for that same product. So when do we introduce a new campaign? There's a couple of cases: new product, new offer, or promo; or we want to switch to a different type of bidding. So maybe we're going to go from cost-based bidding to value-based bidding, in which case we'll do that.

(25:35) Um, and those are really like, that's really like the only scenario that we're adding a new campaign. Like we're not doing any like crazy horizontal scaling thing or like, you know, trying to do it that way or we're forcing budgets. Um, and the main reason from that, like I've, the consolidation point is like, it'd been an interesting topic on my mind lately.

(25:50) I saw, I saw a thread recently where, um, I can't remember who it was, but he was talking about his bid caps and his lowest costs. And he's like, you know, I split out interest-based testing on top of that. And all of a sudden the interest-based testing looked, looked good, but it made my broad not perform as well.

(26:05) And then it's like, I think what we're trying to do is like give meta the meta, the, the, the parameters on what success looks like, which is purchase optimized campaigns for a lot of people for a very long time. Um, maybe use cost controls, maybe you don't, but then as much as I can allow them to make decisions within those guardrails of, of that, like they're going to do the best job allocating things.

(26:24) Uh, so long winded way of answering your question. It's like they really, the only time we change and add a new campaign is we've added a, uh, a different way we're doing value-based bidding instead of cost-based bidding or alongside or running both. We, um, we've run an ad account, which is for a, a single supplement and we have a CBO campaign per, uh, segment and that's, and it's a liver product.

(26:46) And that segment is the types of people that would buy this liver product. So it's, uh, people who are overweight, people who drink too much, people who have fatty liver disease, you know, X, Y, Z, and you can continue down that route. And there's logic that those campaigns could work together. Or there's a, you know, there's a sort of romantic idea that each one of those different campaigns is going to be targeted at a different audience.

(27:11) There will always be overlap, but it supports our input, which is creative. So if we're going to 10 X the number of assets for over drinkers, where are they going to go? We have to have somewhere to put them. Um, and that's the main reason is for, it's just folders on a computer essentially. And to ensure that we are acquiring customers across all of those different segments, because obviously we can control the budget at the campaign level.

(27:40) Whereas if we didn't, and we had them all grouped together, maybe one CBO with all the ad sets being those different segments, then in theory that could actually lead to risk. We've moved away from ABO into CBO because we want to limit risk. But in the example I had, where all those different segments were different ad sets in the one CBO, we would over-focus on one audience.

(28:04) And that would, um, you know, put us in a potentially a tough situation after a couple of years, you know, we're over focused on one product, one audience that then disappears. That's a business risk. Our competitors are over focusing somewhere else, you know, when we're not taking market share there. So, yeah, but I think there's a good, if you've got a single product, I think there's, there's, there will be reasons why you could launch more CBO campaigns.

(28:29) I would probably recommend focusing on persona, angle, uh, creative type, potentially. And we could talk about that in partnership ads going into CBOs. Um, Brad's example of like a CBO per product is a bit more like how I like to think of apparel and fashion, which is you walk into a department store for a brand.

(28:52) Each section of the shop is a different campaign. So you've got like jackets and you've got menswear and you've got women's wear. You don't have a testing area of the shop and a scaling area of the shop; it's not set up like that. I like that. But in the event, so like, on the supplement example you gave, those personas, like, the functional benefit inside of Meta to you guys is it's folders, it's organization.

(29:18) We can see how performance looks on each of these things. And like, maybe it's not perfectly delivering to that exact segment, but like the intent behind it. And that drives what you guys do from a decision making basis and how your creative strategy comes together, which I think is really valuable. The argument that people make in favor of ABO testing, which I think is a totally fair one is like, it's, I think it's, I think it's fair in theory, but I think it makes a wrong assumption about people's intent, which is like, we're forcing it.

(29:41) So it means we need to care more about the, the input here. Um, and we're going to pay more attention to it. Cause we know if we need to turn it off, we know if it's not working, we know we can learn from it as a result of that. Um, but so anyways, the, the functional benefit of the CBO splitting that you're talking about is we can talk to these different personas.

(29:57) We're going to be accountable to making creative for these different personas actually in meta's delivery. There's maybe not a massive like functional difference because of they're really just folders to your point, but like it drives how you guys take action. Is that fair? Yeah, exactly. And then, and then there's benefits from a business level.

(30:15) We can see which customers come from those campaigns. We can track them in Clavio. We can track their LTV. We can, we can, we can pause one. We can double down on the other. We can raise credit to support one campaign cause the LTV is so strong. But you wouldn't do that if you only had a hundred bucks a day or if you had 500 bucks a day, like you're, you're, you're getting new campaign when you feel confident that this is going to be efficient.

(30:35) I'm using the term learning now cause I think there's something magical about learning, but like, it's just a good practice to have, which is like, we shouldn't be splitting out new campaigns if we don't think that we're going to be efficient enough. One really quick example. We have a apparel client who like ran out of inventory through the end of the year, basically through most of January, cause things just went really well in November.

(30:53) And so, yes, they have a bunch of different products across a bunch of different price points, but if we start to split those campaigns out, it's like we have seven campaigns and there's no budget going through each one where that could possibly be efficient. So then we did consolidate everything back down and we made one campaign: here's all the products.

(31:07) It's value-based bidding to control for the ROAS. We're just going to let it rip because otherwise there's no way where this is going to be efficient enough cause we just can't spend enough. The second inventory is back and we can push spend back up. Then we're going to split those back out. But like, it all kind of comes back to the whole point.

(31:21) Andrew, you brought up earlier consolidation. It's like, it's context dependent on what the program is spending. So Phil, I guess what I'm saying is like you, you wouldn't be doing that if you had a hundred, 500 bucks a day. It's like you're doing that cause you feel confident that each of those are going to be efficient on their own.

(31:35) Yeah. Yeah. And they take months, sometimes years, to build up to. Yeah. Yeah. I mean, spend is a huge function of this, I think. Right. In terms of the testing. Let me talk about the, you know, metrics that we look at when we're testing. So I'm going to do creative testing.

(31:55) I'm a brand. I haven't done it very efficiently. I've spent all this money on creative. I'm hesitant to put it all into the same CBO. Right. Because what if I don't get clean metrics? And I think that there's two things to say here. One, I think to a degree advertisers and brands have been sold the idea that metrics, creative metrics specifically, are the holy grail.

(32:22) Right. And I think that actually what we're learning about really scaling brands is that spend within an efficient CBO is like the most efficient thing and is the best metric to look at. Because, you know, many times these, there's not necessarily a correlation between an ad having a good creative metric or a good creative hit rate and actually selling something.

(32:51) So how do you feel about creative metrics now, Phil? And how are you judging this if something isn't getting the spend? I think because like I know this is a common thing that we hear from clients like, you know, I spent all this money in creative. You're not even going to look at the, you know, how are you going to say like, well, actually, we have some early indications here based on this rate that this is doing pretty well.

(33:15) What is that and what are you measuring and how are you explaining that to clients? We use the leading indicators other than spend. So, and we can, and we can talk through those, but we use those other metrics, everything that's available, CPM, CTR, cost per click, thumb stop, everything up to when someone buys the product, add to cart rate, cost per add to cart.

(33:39) We use those to justify why something got spent or why it didn't get spent. So 90% of the spend went to ad A and we had five ads. Why do we think that happened? The click-through rate was 3% and all the other ads had a click-through rate of 0.5%. Okay. That tells us something. What were the other metrics? Thumb stop, add to cart rate, CPA and ROAS.

(34:06) Were they good? Yes. Okay. That's good. That 3% click-through rate was logically relevant. People were clicking on it for the right reason. Or the same thing again: all the spend went into ad A. It had a 3% click-through rate and the performance was awful. Okay. Why did it have that click-through rate? Oh, the first three seconds, something happened.

(34:27) You know, there was a dog. There was something in it that looked phallic. I've seen that before. Okay. We should probably pause that ad now. And the other ads, two through five, will then get spend. So we'll use those to justify why something got spent or it didn't, because clients do not want to hear it.

(34:48) You spent five grand on these assets, and this one ad, it had a click-through rate of 5% and it spent $40. That doesn't communicate trust from us as an agency. Right. Yeah. Yeah. That's not going to go well. I have a slightly different take, which is: we can force spend into it if you want, but we're not going to, we wouldn't recommend doing that.

(35:13) And it's just like, yes, you spend five grand on the assets. Would you like to spend an additional five grand finding out they still don't work? So it's like, I'm, I'm the arbiter of your money. I'm not making an opinion about whether I like or don't like the creative. I'm making an opinion about, will this make you more money? And I, if Meta doesn't think it will, I'm inclined to think that they're right most of the time.
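Phil's justification workflow, why did 90% of the spend go to one ad, and was its click-through rate "logically relevant" given downstream performance, can be sketched as a short script. Everything here is an illustrative assumption: the metric names, the hypothetical numbers, and the 50%-of-spend and CPA thresholds are not Meta API fields or recommended targets.

```python
# Post-hoc read on why one ad in a CBO soaked up the spend, in the spirit of
# Phil's example. All metric names, numbers, and thresholds are illustrative
# assumptions for this sketch, not Meta API fields or recommended targets.

def explain_spend(ads, share_threshold=0.5, cpa_target=150.0):
    """Return a note for any ad that took most of the budget, sanity-checking
    its CTR against downstream performance (CPA)."""
    total = sum(ad["spend"] for ad in ads)
    notes = []
    for ad in ads:
        share = ad["spend"] / total
        if share < share_threshold:
            continue  # the ads that didn't earn spend
        cpa = ad["spend"] / ad["purchases"] if ad["purchases"] else float("inf")
        verdict = "winner" if cpa <= cpa_target else "high CTR, poor CPA: pause it"
        notes.append(
            f"{ad['name']}: {share:.0%} of spend, CTR {ad['ctr']:.1%}, "
            f"CPA ${cpa:.0f} -> {verdict}"
        )
    return notes

# Five hypothetical ads, with 90% of spend concentrated in ad A.
ads = [
    {"name": "Ad A", "spend": 4500.0, "ctr": 0.030, "purchases": 45},
    {"name": "Ad B", "spend": 200.0, "ctr": 0.005, "purchases": 1},
    {"name": "Ad C", "spend": 150.0, "ctr": 0.004, "purchases": 0},
    {"name": "Ad D", "spend": 100.0, "ctr": 0.006, "purchases": 1},
    {"name": "Ad E", "spend": 50.0, "ctr": 0.005, "purchases": 0},
]
for note in explain_spend(ads):
    print(note)
```

The point of the sketch is the shape of the analysis, not the thresholds: explain the spend allocation, then decide whether the CTR that drove it was earned for the right reason.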

(35:32) But I, but what you said there is really important is like, you can't just get us to have all this money spent on the creative and then do nothing with it. I think back to the point about what people who prefer ABO, their argument is, well, it forces us to look at it and reconcile why something did or didn't work.

(35:47) Fine. Um, what I am saying, what you said, is like, we're going to look at the creative and we're going to look at the metrics and we're going to try to come up with reasons why it didn't spend, and justification for it. Cause that's super valuable. Cause you can take the analysis and do something with that information.

(36:04) So what I'm not saying is, nope, sorry, it didn't work. See ya. Let me know what you got to do creative. But we're saying is like, okay, it didn't get spent. If you want to force spend, sure, we can do it. We probably wouldn't recommend doing that, but we can do it. But here's some of the reasons why it might not work.

(36:17) If you compare it against other ads in the ad account, um, we, I will say we shy away from the soft metrics. We try not to lean into soft metrics too much. Cause I think that I personally just think that they cause more confusion than anything, um, in the way that we communicate at least. Um, but I think it's like the broader point is like you were trying to give reasons to why something didn't work.

(36:34) So there's actionable takeaways with that, um, whether or not it's, it's spent or it didn't spend. So I think we're, I think we're getting to the same conclusion just in that slightly different paths. I really liked the idea of benchmarks when I've been training paid social execs over the past couple of years, what I've been trying to train them on is like, get a, get a daily sense of what the benchmarks are in your ad account.

(36:54) And that just takes, that just takes looking at the ad account every day, even though you're not potentially doing anything every day, you're looking at the ad account every day. So you understand the benchmark of every metric. And then if every video that goes live has a 20% thumb stop rate and you launch some new ads with a different creator and some different hooks and it's, it's high production versus low production.

(37:15) You've never done that before. And then all of a sudden the thumb stop becomes 40% rather than 20%. It's like, hold on, let's see what happens here. Because, you know, it's not very often that performance dramatically improves. It's not very often that you go from like a $200 CPA down to a $50 CPA. You know, doubling your thumb stop could be a sign of something really big about to happen.

(37:41) Especially if you're making a big jump when it comes to the type of assets that are going live. We're not, we're not, we're not looking at thumb stop every day, but again, it's those benchmarks that, okay, you know, what's a benchmark click through rate in this ad account? Okay. It's about, it's about 1% normally.

(37:57) Okay. Let's just make sure it stays around there. I mean, so let's talk about this. So we're talking about, you know, to do creative testing properly, recapping your, you know, your, you're putting assets in a CBO, potentially by product, if you're able to do that or by funnel or by offer, maybe if you've expanded that way.

(38:15) There also, I think is this, you know, thought about, look, what, but, but I love an ABO because, you know, it's a container that it keeps it in. And, you know, I think it was Brad, your meta rep, right? Said something like outside of different bids, targeting or exclusions. It's a, it's just organization.

(38:39) That's all that an ad set is, right? We had two clients over a short window of time, basically say like, we want to try forced spend into this. And I said, I wouldn't recommend it. And I gave, gave the logic, but it's like, look, I'm comfortable doing it. But I'm, I'm going to tell you that, like, I don't, I don't think it's the move, but if you want to do it, let's do it.

(38:59) So we did it. And all we found was losers. It's like, and like, we were all making creative. Like, it wasn't just like, it wasn't like we were making shit ads. Like they weren't, it's like, we were all making a ton of ads. And what, what happened is we just ended up spending more into, into the worst ads when we, and then you look in the CBO and they just, they weren't getting, they weren't getting spent.

(39:18) And that happened, that happened twice where I tried the forced spend method. And all I found was just forced inefficiency across the board. Or when we had something that looked like a winner and we dropped it in the CBO, it just still didn't get spend. Cause like, it was so contained in that small, that small window.

(39:33) And so, um, I messaged our rep cause I was just like, I need to know what's going on here. Cause I think the, the, the thought for a long time was if I use, because targeting is, is, is tinkered on, on an ad set level, because I can put an interest, because I can exclude people, because I can do this on an ad set level.

(39:48) And that has implications for learning. The ad set is where the learning is stored. I think that's what the thought was for a long time. I thought this for a long time until I saw a tweet from, from Yoni, who I don't think he's even on Twitter anymore. I don't even know where that guy went. He's, he's, he's in, he's in Canada and he's awesome.

(40:00) He's super helpful for meta. Um, what happened is he probably got a bunch of DMs asking for people to get their accounts unbanned. And he's like, I'm out. But I saw him say like, no, the, the actual ads, it doesn't matter. So I messaged our rep. I said, does the ad set do anything? Is there actual functional learning? If I put two ads that are speaking to completely different people in the same ad set, is that going to mess up my delivery on an ad set level? And his, his answer was no.

(40:21) Like it makes no difference. If you exclude people in a different way, or you set value-based rules on that ad set level, you can influence the way that delivery happens. But the ads themselves are delivering independent of each other within that, to a degree. Like, even an extreme case works: if you have a video and an image, the video might drive more new, um, visitors to the website.

(40:41) But if you have completely different personas and an extreme example of this, we have a client, we had a client that sold this like body wash product and they had a, they had a collab and we had, um, creative for the girlfriend and the creative for the mom. And they delivered properly to those segments within the same ad set.

(40:59) And so I guess like the confirmation is like the ad set is functionally a folder outside of what you can do on the ad set level to influence targeting and bids and things like that. Now there's this feeling, you know, look, you look at an account and every brand owners here, you need to be testing more creative.

(41:16) You need to be trying more creative. That's, you know, a huge narrative that we're pushing as advertising agencies, because it's true and it's needed in most cases. So how do you think about type, Phil? You've mentioned it previously a little bit, but, you know, is it creative diversity? Like, the way that you're doing it: you need to develop personas, and then you need to be testing video, statics, need to be testing partnership,

(41:48) whitelisting, like, cause it's a diverse set of ideas. And, you know, within the same CBO, is it kind of that mentality or like, how do you determine a number or how much you should test creatively in terms of type? In terms of creative diversity across type of asset, so formats. Yeah. I mean, like, how does it, how do you determine that? Or is it just like, I fricking don't know.

(42:10) We should have five videos, five statics and five partnership ads. You know, like, what do you, like, what's the number and like, how do you determine how much you should be testing by type? Because I think we've all seen that we're testing down persona, but like what, what by type is the, is the number that makes sense? Yeah.

(42:29) Yeah. I mean, if you were starting from scratch, so you didn't have any data to start off with to tell you what type of asset you should be launching, I would be trying to be as equal as possible within reason. You're probably gonna lean more into images just because they're easier, cheaper, and quicker to deliver.

(42:48) So it could go 50/50, for example, or 70% statics, 30% video is probably a good starting point. That's not going to cause too many issues for most brands. So where we try to strategically do that is when we look at percent of spend per format. And the only time we'll ever do that really is when there's a reason to invest into a format that we're not currently spending on.

(43:19) Like, uh, okay, video shows some promise. We should be doing more. We should be creating more videos. If the data is showing this ad account never works with videos and statics absolutely crushing it, then we're not really going to be revisiting that. I'd say for a good year, you know, circle back at Q4, for example.

(43:39) Uh, so that's sort of like diversifying across formats. The, the second piece is, okay, so when do you go into it? Well, how do you go into videos? And a, and a logical way is you test an asset in statics, find some copy. Okay. How do we make that into a video? You know, we've got a headline, we've got a couple of bullet points on a static image ad.

(43:59) Let's just get the founder ripping a video, face to camera, talking through those points. And that would be a very straightforward way to start that. Uh, you've got justification from the static ad: it got spend and it performed as a static. Let's turn it into a video. Does it get spend again? Um, and then we can get busy.

(44:20) We can get nerdy. Those different formats spend in different placements. That would be great. You know, the static goes to more Facebook feed, desktop. The video goes to more Instagram reels and stories. Great. We're, we're active across all placements where audiences are. The partnership ad is really, is really just like all chips on black, to be honest, because every time we test them, it's all anyone, I mean, it's all anyone's ever banging on about.

(44:44) Very rarely. I don't think we've ever actually seen a brand who's really invested in partnership ads from a volume and time point of view where it hasn't turned out to be fruitful. So it's worth it to you at this point in time. As part of how you do creative testing. Yeah. But it does fail if you just try it once and it doesn't work.

(45:06) And then you stop. I think that that's what you're saying is like the volume. It's like the time that they put in. So they, they figured out who is worth partnering with, take the time to make those relationships, get the content, brief them thoughtfully, and then get it in the ad account as a result of that.

(45:18) All right. Yeah. Yeah. Yeah. So the destination that we want to get to is where we've got new partner content that we can pull live every week. And that could just be three or four ads. It doesn't need to be any more than that, but we're doing it every week.

(45:34) Like clockwork. And it'll really compound over time. I think it's good to bring us to this segment in the show where we call, we call this beefs. And this is a time to sort of take out the trash. If you have any, you know, I think my beef with creative testing, you know, really is that there's this idea that it has to be scientifically done.

(45:55) And you kind of mentioned this previously, right? And I think that's how we as marketers were taught to think and how we were brought up of, okay, it has to make sense. It has to be a clean test. It has to be statistically significant, et cetera, et cetera. And the reality of it is meta is now serving, whether it's an ABO or a CBO to Brad's earlier point, they're serving it to the proper audience, looking at the content of the ad and the landing page that it's being sent to.

(46:21) So the more that you can consolidate and test alongside things that are getting a lot of spend, it doesn't feel like that's the right thing to do. But that ultimately is actually the right thing to do in 2026, I think, for creative testing and also diversifying on persona and the things we've talked about, creative type.

(46:42) But, you know, that's my beef is that it has to be this like clean scientific thing. And it's like it doesn't. It's actually like a lot simpler. It's what Brad told me via DM last week. Like, just fucking rip it. That's like, you know, so like, I don't know. What do you guys any beefs on creative testing or anything you guys want to get into? I would just say it feels like when people start to talk about creative testing, they are in these camps.

(47:07) And like, I'm cool with you being in either camp. But like most of the people end up talking past each other. Because like what we're talking about is being thoughtful about the types of creative that's going into the ad account. And then Phil, to your earlier points, like spending more time on the things that actually influence the ad account performance, which is like the analysis of the creative, the partnership focus and ads.

(47:26) And how do we get more of that? How do we get more content? How do we set up these systems? How are we thinking about the landing pages? Is the ad to landing page to offer consistent with the person? It's like, if you set up creative testing in a way where it's like you have a system for getting it live and paying attention to it and analyzing it, reviewing it and doing something with it, then it shouldn't be some like complex, crazy system that, you know, is like so confusing that you just don't do it.

(47:49) But I guess my beef is like most people are just talking past each other when they argue about creative testing structures. Because like what we're all saying is, and this is true of the, this is true of cost caps, bid caps, lowest costs argument as well. It's like, well, all of all, most of us are saying that generally know what they're talking about is like, you should focus on the things that actually move the business forward, which is probably not the nuance between a cost cap, a bid cap, a lowest cost and all that dumb shit that we argue about.

(48:14) I've got tons of beefs. I'm quite a mild-mannered, happy person, but yeah, I've got tons of beefs. Specifically on creative testing, I think my beef is: we've moved away from how much spend you manage as the measure of your, you know, sort of, that's the wrong thing to say, but you know, like your talent level being how much spend you manage.

(48:39) And now the thing to measure on is how many assets you launch. And there's always going to be someone with a faster car. There's always going to be someone who's launching more ads than you. You know, it's like, I launched 5,000 ads last month. It's like, someone's going to say a bigger number, but we've, we've got, we've got a brand at the moment.

(48:58) I don't think we've launched an ad this year. I don't think we launched any in January, sorry, or in December. They're growing massively, like 50% year over year. I think they did 20 million last year. They're going to try and double again this year.

(49:13) They're spending like six, 7k a day in January. They're a massive Q4 tax season brand, which is obviously coming up. And it's nothing, it's nothing that, that gets the tactics and nothing that gets engagement on Twitter. It's everything but that. That's not fun. I mean, obviously the model is built, of course, as we've talked about that, it's like shock and awe, try to make people, you know, feel like an idiot and feel like they have FOMO and then like sell your product and that you have all the answers.

(49:45) You know, I think, yeah, it, it doesn't give enough credit to the fact that most of us are so busy on a daily basis running around trying to just like keep the ship running on the brand or the agency or with clients and bringing them like good creative. That's going to scale on a regular basis that like this other stuff on X is like kind of a joke.

(50:07) It's like, here's my seven step process of how I, you know, and you're like, all right, like nobody's, it's not real. It's just fake. I have spoken to a couple of brand owners or founders over the last couple of weeks where they felt really guilty because they're not hitting this volume of ads that's going live.

(50:27) Because, you know, I've got to work on some new creative. I've got to work on some new creative and instead of launching any, they're just launching none. And my recommendation was always the same. It's just like, forget about trying to launch a certain number and just work on one ad. Yeah. Don't, don't try and do four with different hooks.

(50:44) You know, we're going to test the hook. And, like Andrew was saying, there's this scientific test. Just create one piece of content as if it was going to be posted organically. And instead of posting it to organic, just upload it to the ad account. Yeah. I think it's great advice. You just got to set up the system that you'll stick to. It's like diet advice.

(51:03) It's like pick the diet that you'll adhere to, at least at first; you can optimize from there. But you've got to at least build the muscle. Yeah. Yeah. The system is interesting. Okay, one more question this makes me think about. With most of the clients that you guys have on creative testing, like you said with the system, is there an amount, or is it just about like, we need to get net new ideas into the pipeline.

(51:33) And if we're doing that on a weekly basis, as long as it's not more than zero, we are okay. How do you approach telling clients about that? So we've, we have a default starting point, which is eight to 10 concepts a month. And we certainly focus on the number of concepts. It's like the briefs. Start with the number of briefs that we've got to write.

(51:53) And then the clients say, well, how many assets is that going to be? And I try not to answer that, but it's usually around five to six assets a brief, so call that 40 to 50 assets a month. And I think if you're being really intentional about the briefs and the variation between those assets in each brief, then that should be enough for most brands.
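As trivial arithmetic, Phil's starting point works out like this. The 8 to 10 concepts and 5 to 6 assets per brief are his numbers from the episode; the rounding to "40 to 50 a month" is his, and the literal upper bound of the multiplication is a bit higher.

```python
# Phil's default starting point: 8-10 concepts (briefs) a month,
# 5-6 assets per brief. These counts are his; nothing else is implied.
concepts = range(8, 11)         # 8-10 briefs
assets_per_brief = range(5, 7)  # 5-6 assets each

low = min(concepts) * min(assets_per_brief)
high = max(concepts) * max(assets_per_brief)
print(f"{low}-{high} assets/month")  # Phil rounds this to roughly 40-50
```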

(52:20) We could, we could talk about spend figure where you might need to start doing more. I think in most cases, if you're not doing that, then that's a good starting point. Yeah. Was that, is that like the situation you find yourself in? It's like when you're doing these audits and you're talking to these brands, it's like, okay, you've launched three things in the last month.

(52:36) Like you don't, you just haven't, you don't have the baseline. So like our baseline is taking you way above where you are today. No, for most of them, they're launching too many. Really? Yeah. Like I audited an account the other day where, and I've got the transcript, so maybe I can turn the transcript into like an AI piece and then drop it in Foxwell.

(52:54) Well, that's quite helpful, usually, to get the sort of points across. But they were launching a new ad set every day. Sure. With new assets in. And I'm like, when was the last time you launched an ad set that worked? And they're like, I don't know. It's like, well, why are you still doing it? So.

(53:12) Yeah. They're trying to, I mean, I think to some degree, right? Like people in their brain want a system. And I think that their ad sets are a clean way to do that. And that's what's also was the predominant knowledge and the prevailing knowledge for a long time. So you can't really like blame people for, for thinking this way.

(53:28) Um, because it's what makes sense to them and it's what they're trying to do. We're all trying to build a system. The system is, yeah, I think more of what you said. It's around different concepts with briefs and then, or, you know, different sort of pitch towards different personas and whatever. And then we develop the ads around that.

(53:46) And that makes sense. So look, great episode, almost an hour. Thank you to those of you who listened all the way through. And Phil, thank you for joining us late at night. Hopefully you can get some good rest tonight, though now you're so stoked after this that you're just going to have to go do a workout. Brad, as always, a pleasure.

(54:04) And, uh, thank you for listening. If you like what you hear, please rate us on all of the platforms, because we are lonely and need to know that we are validated by your votes. So like, comment, share, all that shit. We appreciate it, to the tens of listeners out there. And, uh, we look forward to next time.

(54:27) Adios. This episode is brought to you by Brad's company, Work Marketing. If you need a D2C marketing agency, let me tell you, Homestead is great, but Work Marketing is also fantastic. And you aren't going to find friendlier people out there in the e-commerce space. So we decided to do these little ads for each other's companies.

(54:54) So hopefully you find it interesting, but seriously, great team at Work Marketing. Very smart. Brad and Jordan are incredibly dialed in. I just gave them a lead, and they've already made that brand, I don't even know, double the revenue they had the previous month or something. So it's very exciting to be connected with Brad.

(55:13) And if you need a great agency, there's really no one better. Uh, Zach, anything to comment on Work Marketing? Yeah. I mean, if you want an agency that cares about your business much more than they care about their own website, I just tried to load workmarketing.com and it was broken. So they're definitely going to give more of a shit about your business than their own.

(55:31) So I highly recommend Brad and the team over at Work. They've been incredible. We've referred a lot of business over to them as well. They're really, really good at cracking funnels and figuring out rapid growth for brands. So I highly recommend these guys. The only way we grow this podcast is by you sharing it with your friends.

(55:52) Honestly, reviews don't really do much anymore. They're meaningful to us, but they don't do a lot for the growth of the podcast. So sharing YouTube links, sharing Spotify links, sharing Apple, whatever we call the podcast app now: anything you can share, the better off we're going to be.

(56:09) Guys, anything else you want to say on this? Yeah, please go check us out on YouTube. Rack up those views for us. We'd love to see it. And make sure to subscribe on YouTube as well. I relentlessly refresh the YouTube comments because they dictate my mental health for the day, so please say something nice about all of us.

(56:25) Thank you, everyone. Thanks for listening. Honestly.
