On AI and the Shape of Marketing Work

AI changed what can be automated. It also changed what humans should be doing instead. The line between the two is most of the job now.

Matt Benter · April 2026

A few years ago, I was running an information business that had a serious research habit. Not the kind of research you do for a single decision — the kind you do every month, as a standing practice, because the business lived or died on understanding the market better than the competition did.

We used agencies for some of it. In-house for some of it. Freelancers for the rest. Different kinds of research — customer interviews, competitor analysis, market briefs, positioning audits, opportunity work for new verticals we were considering. If I added it all up, the research function was a meaningful line item. Five figures a month, sometimes more, depending on what we had going.

I remember the feeling — not any single moment — of watching that function collapse in cost over about eighteen months. There was no dramatic day. No aha moment. Just a gradual recognition that the thing we had been paying for was now something we could do ourselves, in hours instead of weeks, for the price of a subscription instead of an invoice.

We ended up doing more research after that shift, not less. But the shape of it changed completely. And the change was not what I expected.

Here is what I thought would happen.

I thought we would save the money, run leaner, and spend the freed-up capacity on other things. That is how every productivity upgrade in the history of business had worked. A machine replaces a job, the labor moves somewhere else, the company gets more efficient. Classic.

That is not what happened. What happened is that the thing I thought was expensive — producing the research — turned out not to be the bottleneck. The bottleneck was deciding what research to do, interpreting the results, and acting on them. We had been hiding this from ourselves for years because the production work was so time-consuming that nobody had energy left to notice the thinking was what mattered.

When the production got cheap, the thinking got exposed.

And the thinking was worse than I realized. Not because my team was bad — they were good — but because nobody had the time to do the thinking properly when they were also doing the production. The briefs were approved quickly because the research had taken three weeks and we needed it to be done. The results were skimmed because there was already another project moving through the pipeline. The insights rarely made it into the next creative test or the next positioning conversation, because by the time they arrived, we had already made the decision without them.

None of this was anyone's fault. It was the shape of the function. Production had eaten the job.

This is what I mean when I say AI changed the shape of marketing work. It did not replace the marketer. It stripped away the layer of work that was hiding whether the marketer had ideas.

Everyone talks about the productivity gain. Everyone talks about the cost savings. Almost nobody talks about the honesty problem that comes with it — which is that for the first time in most operators' careers, the quality of their thinking is no longer protected by how hard their team is working on production.

For most of the last twenty years, marketing was a production problem. Everyone knew strategy mattered more than execution. Everyone said strategy mattered more than execution. But in practice, strategy was a two-day offsite in Q1 and then three hundred days a year of executing. The production work was most of the time, most of the budget, and most of the team.

The incentives lined up. Agencies got paid for production. Internal teams grew around production. Marketing leaders managed production. When strategy came up, the honest answer was usually that there was not enough time for it, because the production pipeline was always behind.

The production problem is now gone. Or, more accurately, it is so cheap that it no longer organizes the work around itself. You can generate a month of social posts in an afternoon. You can produce a landing page variant in ten minutes. You can test twenty ad creatives by lunch. You can run a competitor analysis in a morning that used to take an advisor two weeks.

This is a good thing. It also breaks the shape of the job.

Here is the thing I want you to sit with.

You already know AI is getting more common. You already know your customers are starting to recognize it when they see it. You already know that in eighteen months, using AI will be the default — the way "using software" became the default twenty years ago. Nobody will get credit for using AI. It will be the floor.

Which means the question is no longer whether you use AI. That question is already answered for you.

The question is where you use it, where you don't, and how you allocate the humans on your team against the parts AI cannot do well.

This is the question almost nobody is answering carefully, because most operators are still in the productivity-celebration phase. They are excited about how much they can produce. They are proud of the cost savings. They have not yet noticed that the businesses that get flattened in the next three years will not be the ones that used too little AI. They will be the ones that used it in the wrong places.

Let me tell you what I think the right places are.

AI is very good at a small set of things, and those things are the ones you should push it to the limit on. It is good at running more loops than a human team could hold. It is good at processing volumes of information that would drown a person. It is good at producing first drafts, variants, and iterations at a speed that makes testing genuinely continuous instead of quarterly. It is good at holding context about your business — the customer notes, the prior decisions, the test results, the playbooks — in a way that survives turnover, survives distraction, and survives the calendar year.

These are all infrastructure-shaped tasks. They are most of what a well-run marketing function spends its time on. And they are all places where the quality of the work is proportional to the quantity of information the system can hold, which means AI, done well, is strictly better than humans. Not because AI is smarter. Because a system with twenty thousand pages of context in its working memory will outperform a human with three.

AI is also very bad at a different set of things, and relative to what it does well, those weaknesses are becoming more pronounced, not less. It is bad at deciding what matters. It is bad at telling you when the obvious answer is wrong. It is bad at holding a point of view that makes someone trust you in a way that closes a deal. It is bad at reading a room. It is bad at knowing when to kill a project and take the loss. It is bad, most of all, at creating the unfakeable signal that tells a buyer this specific company is thinking about me, not at me.

This is what I mean when I say AI is missing the thing that makes things convert. It is not sentiment. It is not emotion. It is not warmth. It is specificity of judgment under real stakes. AI has read everything. It has lived nothing. The difference shows up in the quiet moments — a subject line that sounds like it was written by someone who has met you, a sales call that pivots on an observation the salesperson made in the first thirty seconds, a creative choice that breaks the pattern in a way the pattern demanded. These moments are tiny in any individual transaction. They are most of why one business beats another over ten years.

So here is the design problem.

The operators who win the next five years are not the ones who use AI the most. They are the ones who draw the line in the right place between AI work and human work, and then ruthlessly put their humans on the side of the line that only humans can do.

That second half of the sentence is the one everyone misses. The productivity gain from AI is not supposed to reduce your human headcount. It is supposed to reallocate what your humans are doing. If your AI stack is running at the top of its game, you should have more humans doing fewer things — specifically, the things that only humans can do — and those humans should be spending most of their time on that, not on anything that could be automated.

What I see instead is mostly the opposite. Operators are shrinking the team, keeping the same allocation, and assuming the remaining humans will absorb the thinking on top of their existing workload. This does not work. The thinking does not get absorbed. It gets skipped. The AI keeps running. The business produces more. And the thing that actually moves the business — the judgment, the specificity, the unfakeable signal — does not happen, because nobody has time for it and nobody has been reassigned to protect it.

A year later, the numbers look fine. Two years later, the acquisition cost has drifted up. Three years later, the business looks indistinguishable from every other business in the category, because every other business is running the same AI stack with the same allocation, and nobody is doing the thinking.

This is what I mean by AI-brittle. It is not fragility in the technical sense. The tools still work. The workflows still run. The content still ships. The brittleness is in the business's capacity to stay distinctive. The more you automate without reallocating, the more you blur into the median. And once you blur into the median, you have no pricing power, no brand preference, no retention edge. You are a commodity that happens to be running good software.

The alternative is what I think of as AI-strong.

In an AI-strong business, AI does all of the infrastructure-shaped work I listed above. It handles the research. It holds the context. It produces the first drafts. It runs the testing at a volume no human team could manage. It personalizes the touchpoints. It watches for patterns in the data that a human would miss.

The humans, meanwhile, are freed to do the work only they can do. They talk to customers. They decide what the business believes. They write the one piece of copy on the whole site that has to sound like a person. They have the uncomfortable conversation that resets a stuck account. They make the call to kill the campaign that is technically working but is dragging the brand in the wrong direction. They hold the room in the board meeting and explain, in their own words, what they are building and why.

The AI handles the volume. The humans handle the judgment. Both are running at full tilt. Neither is covering for the other.

This is the split that separates the businesses that compound from the ones that flatten. It is not about how much AI you use. It is about where the line is drawn, and whether you are honest about what goes on each side of it.

The line is not always obvious. Sometimes things that feel like they should be on the human side are actually infrastructure tasks in disguise. Sometimes things that feel like they should be automated turn out to require the one kind of judgment AI cannot replicate. Drawing the line well is most of the work. Getting it wrong is most of the failure mode.

But there is a test I use, and it is useful enough that I will name it.

When you look at a piece of marketing work, ask: if a customer read this and knew it was produced by AI, would their opinion of the brand change? If the answer is no, that work belongs on the AI side of the line. If the answer is yes — if knowing the truth would make the customer feel differently about the business — that work belongs on the human side, no matter how tempting it is to automate.

This test is unforgiving, and it is supposed to be. It will tell you that most of your content can be AI. It will also tell you that your founder email, your sales-to-close conversation, your brand voice in the moments that matter, and your hero creative for the campaigns you will remember in five years — none of those can. Those are the protected assets. They are what the humans on your team should be spending their time on. And they are the reason a well-allocated team of three can now outcompete a badly allocated team of fifteen.

So here is the choice, because an essay like this one should end with a choice, and this one will.

You can take the path most businesses are taking. Add AI to the stack. Shrink the team. Celebrate the productivity gain. Keep the allocation the same. Produce more. Spend less. Watch the numbers look good for a year or two, and watch them drift toward the median after that, because the median is what you designed yourself to become.

Or you can take the harder path. Add AI to the stack, yes. But also rebuild the allocation. Put your humans on the work that only humans can do, and protect their time on it the way you would protect a non-renewable resource. Build the AI infrastructure underneath them, not beside them. Use AI to raise the floor. Use humans to raise the ceiling.

Both paths use the same tools. They just use them in different shapes. One produces a business that looks good on a dashboard and blurs into the category. The other produces a business that looks quiet from the outside and owns the category in five years.

The tools will not tell you which path you are on. They will help you get better at whichever one you choose.

Pick carefully. The decision you make in the next twelve months is the one you will live with for the next decade.

— Matt Benter