When AI Creates Margin, Who Gets It?


AI is being sold to businesses as a way to improve margin. Many employees are embracing it for the same reason. The problem is that businesses usually mean profit margin, while employees mean margin in their lives.

Same word. Different dream.

That may become one of the biggest workplace tensions of the next few years.

I learned this long before AI.

Back when I worked in the high-pressure world of advertising as a copywriter and creative director, my art director and I would sometimes leave the building and go to Starbucks. Not to waste time. To create margin.

We knew that if we stayed in the office, with people constantly checking on us, asking for things, and wanting updates, we would not have the mental room to come up with the big ideas everyone wanted from us.

That coffee shop time was not a break from productivity. It was productivity.

Some of our most creative moments were not spent typing at a computer. They were spent leaning back in a chair, getting some distance, and talking and sketching our way toward a better idea. That’s something I think many organizations still miss, especially now in the rush to adopt AI.

Empty Beach
Where this post started. Room to pause. Room to reflect. Room to dream.

When AI creates efficiency, who keeps it?

For businesses, margin means more output from the same people. Faster turnaround. Lower labor costs. Less slack in the system.

For employees, margin means less drudgery. Fewer late nights. More breathing room to think, recover, and have some life left at the end of the day.

Once AI creates efficiency, somebody decides where it goes. Back to the human being? Or right back into the machine of work?

Recently, I was able to get away. I had time to enjoy nature, spend time with family, and read something not related to work. I got caught up in the characters and story of a novel. When I came back, I felt refreshed and inspired.

And sadly, I also felt the need to justify that time by telling myself it gave me some really good ideas for work.

Somehow even breathing room can start to feel like something you have to justify.

Efficiency can quietly consume margin

If AI removes low-value tasks and gives people room for better judgment and deeper focus, that’s progress. But that margin can disappear in two ways. Management can fill the gap with more tasks, tighter deadlines, and leaner staffing. Employees can fill it themselves, because many of us have been conditioned to treat freed time as space for more work.

That may look like progress on paper, but in practice it can become just another way work expands to fill every available space. Parkinson’s Law applies to AI. When tools create margin, the instinct is to fill it.

Evidence is already mounting. A UC Berkeley study tracking AI adoption inside a real company found that even without management pressure, workers filled every hour AI freed up with more work. Deep-focus time fell and cognitive fatigue rose. There’s even a name for it: “AI brain fry.” Whatever you call it, it’s another example of a tool promising margin but quietly consuming it instead.

We’ve seen this before. Email was supposed to make communication easier. Smartphones were supposed to make work more flexible. They did both, but they also made work more constant and harder to leave behind. AI could easily follow the same path.

The real opportunity of AI isn’t just to use it to do work faster. It’s to decide what kind of margin is worth protecting.

Human margin is not waste

In my classes, we use Steven Johnson’s Where Good Ideas Come From. His study of innovators in history points to conditions that produce breakthrough ideas: liquid networks, the adjacent possible, error, serendipity, and the slow hunch. None of those happen easily when every minute is scheduled, measured, and filled. They need margin. Time for reflection.

I’ve seen this in my own life too. Some of my best ideas have come in places that don’t always look productive: the break room, between conference sessions, at a social hour, in the casual conversation between one thing and the next. That’s often where ideas connect.

Organizations say they want creativity, insight, and innovation. Then they build systems that leave no room for the very conditions that make those things possible.

You can’t squeeze people into breakthrough thinking.

Margin is not always waste. Sometimes it’s the condition that makes better work possible.

A better question for leaders

Some push back on this. They say pressure is the point. Constraint forces creativity. Urgency eliminates mediocrity. There’s evidence for it. Companies built on relentless intensity have produced breakthroughs that more relaxed organizations never did.

While AI is being sold as a tool to give time back, some of the very companies building AI post job descriptions glamorizing 70-plus-hour weeks or a 996 schedule. The technology that promises margin is arriving with a culture that demands you surrender it.

But that model tends to work in specific conditions: mission-driven people who opted in, often early in their careers, working on outsized problems they personally find worth the sacrifice. It also has real costs: attrition, burnout, and the quiet departure of experienced people who have other options.

Importantly, it misses what AI actually changes. A high-pressure model squeezes harder to get more. AI removes the need to squeeze people just to get routine work done.

The question isn’t whether to demand high performance. It is whether human qualities AI cannot replicate, such as judgment, creativity, and strategic thinking, flourish under constant pressure or require something different.

The companies that benefit most from AI over time may not be chasing maximum short-term output. I’d bet on the ones that use part of the gain to create better conditions for human performance: more focus, less drudgery, better decisions, more sustainable energy.

A healthier AI model could look like this: define the work clearly, define what done well looks like, and let people keep some of the margin they create, for better work and for more life.

Who gets the margin

The deeper issue isn’t that employers and employees want opposite things. Often, they both want better results and a sense that work is making a meaningful difference. Tension comes from a misunderstanding about how those outcomes are produced. Work culture often treats margin as waste to eliminate rather than the space needed to think, care, recover, and do meaningful work well.

This can lead to a loss of motivation. Worker morale matters. When people lose heart, productivity erodes. Eventually, the best people leave. AI didn’t create that misunderstanding.

The real negotiation happening around AI at work isn’t just about efficiency or adoption.

It’s about margin. Who captures it. Who benefits from it. Who gets the breathing room.

At the agency, I used to run during my lunch hour. It relieved stress, helped keep me healthy, and didn’t take away from family time. Anyone familiar with the creative process knows downtime matters. My subconscious mind kept working on client problems and projects. More often than not, I came back from those runs with new ideas for the work I was doing.

That doesn’t mean I never worked long hours. Big pitches and tight deadlines sometimes meant late nights, work after the kids were in bed, and Saturdays in the office. That came with the business. But there’s a difference between working hard when the work truly calls for it and treating constant overwork as proof of commitment.

After several years, my boss called me into his office. He said that my art director and I had the best work in the agency. Our work won creative awards, produced profit for our clients, and we always met deadlines while handling more clients and projects than the other teams.

Then he said, “But… you run at lunch and go home at night.”

He didn’t understand that the margin was part of what produced the results he was getting.

Shortly after that meeting, my art director and I both left for other opportunities.

The future of AI at work may not come down to the technology itself. It may come down to who gets the margin.

This post was drafted with the assistance of ChatGPT and Claude. The ideas, experiences, and opinions are my own.

The Token Trap: Why “Tokenmaxing” in AI is the New Klout Score


Around 2012, an FSU professor made headlines for including Klout scores as part of his students’ course grades (10% of the total). It wasn’t an unreasonable experiment. If the market cares about this number, students should understand how it works. Klout scores shaped influencer partnerships, opened doors to brand deals, and established who counted as a thought leader. The number had real weight.

I remember first encountering Klout at an IMC conference during Mark Schaefer’s keynote. He had recently published Return on Influence. Right after that talk, I checked my score and started thinking more intentionally about my online presence. That led to this blog, job opportunities, contributions to respected publications, and eventually my two books.

The number was motivating, but I never got obsessed to the point of doing things just to watch it climb.

What’s interesting in retrospect isn’t that the professor was wrong to take it seriously. It’s how quickly the controversy revealed the problem with legitimizing a metric that could be gamed. When a score carries institutional weight, a market grows around engineering it. Soon you could buy fake followers and engagement. The score became more important than what it was supposed to represent until it stopped representing much at all. Klout shut down in 2018.

I find myself thinking about that moment again as I watch a new metric start to carry similar weight in AI-driven circles. My prediction: it won’t be long before we hear about a professor basing an AI course grade on token usage. Or a manager rewarding the employee who ran the most prompts last quarter. We may already be there.

“When a measure becomes a target, it ceases to be a good measure.” — Marilyn Strathern (popularizing Goodhart’s Law)

Amazon mandated that 80% of its engineers use AI coding tools. Within months, several major incidents followed, including a 6-hour outage with 6.3 million lost orders. Amazon attributed the failures to user error. That may be true, but when usage is both the mandate and the metric, conditions for error become easier to create. The 80% target can become more important than the safeguards surrounding it.

We may be entering the era of what’s called Tokenmaxing. It’s like Klout in a more expensive suit. You’ve probably already seen a version of it on LinkedIn. People sharing screenshots of a dozen AI agent tabs open on their monitor as a signal of how AI-forward they are. It’s the AI equivalent of watching that little orange Klout score climb.

MIDE Studio Brainstorming With Post-its
Should we feel guilty for thinking with our hands and allowing time for inspiration and the slow hunch? This was from a brainstorming session on Post-it notes that my students had in the Markets, Innovation & Design studio.

What is Tokenmaxing?

In AI, a token is the unit of text a model reads and writes, roughly 0.75 English words on average. For developers, token usage is a meaningful metric. It affects cost, context window management, and efficiency. Paying attention to it makes sense in that context.
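As a rough illustration of that words-to-tokens heuristic, here is a minimal sketch. Real tokenizers (such as OpenAI’s tiktoken) split text into subwords, so actual counts vary by model; treat this as the back-of-the-envelope version, not a billing-accurate count.

```python
# Back-of-the-envelope token estimate using the ~0.75 words-per-token
# heuristic. Real tokenizers split on subwords, so this is a rough
# planning number only.
def estimate_tokens(text: str) -> int:
    words = len(text.split())
    return round(words / 0.75)

print(estimate_tokens("Margin is not always waste."))  # 5 words -> about 7 tokens
```

Even at this level of precision, the point stands: tokens measure the volume of text processed, not the quality of the thinking behind it.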

The problem arises when that same logic migrates into other fields like marketing as a proxy for effort or value. Tokenmaxing is what happens when the volume of AI interactions becomes the goal in itself. People burning through a monthly subscription budget, climbing a usage leaderboard, generating thousands of prompt variations to prove they’re being “AI-forward.”

If you aren’t maxing out your tokens, you aren’t really using the tool.

It’s a “use it or lose it” fallacy. Like Klout, the number starts to feel meaningful, even if it might not be measuring anything that matters.

What Gets Lost Upstream

The concern isn’t just wasted budget. It’s what happens to the thinking process when output volume becomes the metric. Real creative work has an upstream phase. The slow, often uncomfortable part where you sit with a problem before you know what to do about it. Where you ask why before you ask what’s next.

That phase doesn’t generate tokens. It doesn’t show up in a usage dashboard. But it’s often where the most useful thinking happens, and it’s the phase that gets compressed when we start measuring activity instead of impact.

Not all important thinking happens in the ones and zeros of the digital world.

Mark Schaefer has written about AI turning marketing into “a pandemic of dull.” Everyone converging toward the same outputs, faster. Tokenmaxing feeds that pattern. When we optimize for volume, we tend to get incrementally better at doing the same thing as everyone else, until the work becomes difficult to distinguish from anyone else’s.

Reclaiming the Tiny Experiment

I’ve been spending time with Anne-Laure Le Cunff’s book Tiny Experiments. It’s been a useful counterweight to this kind of thinking. In a design-thinking context, a tiny experiment isn’t a step toward a usage milestone. It’s more like a probe into uncertain territory. A way to follow a question you can’t fully answer yet, and learn something from the attempt.

The difference in practice is meaningful. A tokenmaxing mindset measures success by what it produces: content, variations, volume. A design-thinking mindset measures success by what it discovers and the difference it makes.

Sitting in a coffee shop and overhearing how someone actually describes a problem you thought you understood, or visiting a store and watching how a customer navigates a decision in real time, and then designing a small test around what you observed, that’s a tiny experiment in the spirit Le Cunff intends.

The output isn’t content. It’s a new understanding of a real human that no prompt, and no amount of online data, could have surfaced on its own.

When we set goals around usage, we quietly change what we’re optimizing for, and it stops being an experiment. A tool built for exploration becomes a treadmill for output. More than a decade ago, Lisa Earle McLeod found that salespeople who focused on quotas were outsold by those who focused instead on helping people with a “noble purpose.”

Protecting the Thinking That Happens Before the Prompt

The answer to this isn’t less AI. It’s being more intentional about where the human thinking lives in the process. Klout scores didn’t make anyone a better marketer, but they did shape how influence was perceived and rewarded, until the market for gaming them undermined the whole thing.

Token counts carry a similar risk. They can start to feel like a proxy for strategic thinking without doing any of the work that strategic thinking actually requires.

The most valuable part of the design process is the part that doesn’t cost a cent in API fees. It’s the conjecture, the what-if, the question you sit with before you ever write a prompt. Le Cunff’s framing is most useful when it’s pointed at expanding your thinking rather than refining the path you’re already on.

Experiment to learn, not just to optimize. We didn’t fully absorb that lesson from Klout. It’s worth keeping in mind as token usage starts to feel like a measure of something meaningful.

I feel it. There’s a guilt that creeps in when I’m not reading AI articles, listening to podcasts, or running a deep research report to stay current. AI is moving quickly, and falling behind feels like a real risk. Then I open LinkedIn and see someone’s screenshot of a dozen AI agent tabs and the pressure compounds.

It’s just one more reason to feel like offline, in-person time is something I need to justify rather than protect.

But if I’m sitting in that coffee shop with my laptop open, catching up on AI news or counting other people’s agent tabs, I’m missing the thing that makes the work better: the overheard conversation, the moment of watching someone struggle with a decision, the small human observation that no LLM could have thought to collect.

That upstream time isn’t wasted time. It may be the most innovative thing a marketer can do. It’s just the hardest to justify in a dashboard.

About This Post’s Creation

This post was developed in partnership with Claude. I had initial ideas from observation, articles, podcasts, and reading Le Cunff. Claude helped organize, research further, and refine.