The Border Collie Problem: What AI Leaders Miss About Human Work

If you read the manifestos of the leaders of the AI revolution, you’ll notice a common theme: the elimination of effort. We’re told that we are on the doorstep of an “Intelligence Age” where AI will handle the logistics, the drudgery, and the “boring” parts of being human.

In this vision, our purpose is to be “High-Level Orchestrators.” We move from being makers to being directors. It’s a vision of a frictionless life where the “What” is always available and the “How” is handled by a machine.

I see three contradictions in this vision that we aren’t talking about enough. And they go deeper than job displacement.

Is AI doing all the hard work really a utopia?

The Border Collie Problem: Work as Biological Necessity

The tech elite often speak about work as if it is a tax we pay to stay alive. Work is a burden to be automated away so we can finally “relax.” But there is a biological truth they seem to overlook. Humans are born to work.

Think about a Border Collie. These are incredibly smart, capable dogs designed for challenge. If you take a Border Collie and put it in an apartment with a self-filling food bowl and no “work” to do, the dog doesn’t feel liberated. It feels miserable. It becomes restless. It starts chewing the furniture because its brain needs a problem to solve.

We’re not much different. What does this look like for humans? Consider a software engineer who automates away their own job, finally “free” to pursue their passions, only to discover that the challenge and sense of accomplishment of work was one of them.

Or a manager whose department is replaced by AI agents, who suddenly has time for all the hobbies they never got to, but instead falls into endless scrolling, trying to fill the void without people to develop and problems to solve.

For most of us, our “Why” is found in the “How.” We derive satisfaction, identity, and a sense of worth from the struggle of a difficult task. The carpenter doesn’t just want a finished house. They want the feeling of a precise cut. The teacher draws meaning from the work of reaching a struggling student and watching understanding dawn on their face.

When we automate the challenge, we don’t just “save time.” We risk creating a society of bored, restless humans with nothing to herd.

The Summit Paradox: Education as Achievement

Some AI leaders also suggest that traditional education is becoming a relic. Their logic? Why spend years learning to write, code, or analyze when a machine will produce perfect results in seconds? This treats education as just a data transfer. Information going from book to brain.

But education isn’t only about the output. It’s about transformation.

When you take a helicopter to the summit, you get the view, but you didn’t climb the mountain. You’re not the same person you would be if you had made the ascent on foot. The “climb” of education is the long focused work, the failed drafts, and the intellectual challenge. That’s what builds character, resilience, and the ability to think critically.

The reality is that using AI just to get the answer doesn’t simply remove the work in learning. It removes the learning.

And this leads to a frightening question: How can we be “high-level orchestrators” if a whole generation opts out of learning?

To orchestrate, you must have judgment. To have judgment, you need a deep, foundational understanding of the craft you are directing.

A person who has never wrestled with a sentence cannot edit a masterpiece. A person who has never solved a complex logic problem cannot direct a technical team. If we skip the foundations, we don’t graduate to a higher level. We become shallow supervisors of a system we no longer understand.

The $11.7 Trillion Reality

Economic stakes make this more than a thought experiment. The U.S. labor market is valued at $11.7 trillion, and the AI companies are going after it all. This is how OpenAI can command a staggering valuation of $500 billion. Companies like Mechanize openly state a mission to train AI to fully automate all jobs.

Here’s what makes this more troubling. The same leaders promoting this vision casually mention “universal basic income” as if it’s a footnote, a minor detail to be worked out later.

But no government officials are drafting legislation. No economists are modeling how it would actually function at scale. No political coalition is building support for it. No experts are studying what it will do to us mentally. It’s just a term thrown out vaguely, a hand-wave toward a solution that doesn’t exist.

We’re left with a stark reality. AI companies are methodically dismantling the economic system that sustains hundreds of millions of people, and the “solution” is a placeholder phrase. People won’t have money without jobs. Families won’t have stability. Communities won’t have purpose. And the architects of this transformation seem content to build first and figure out the consequences later.

The Missing Piece of the Vision

The leaders of the AI age are building a world of results. But human meaning is found in the process. They’re racing toward a finish line, but what happens when we get there?

A life without friction isn’t a utopia. It’s a void. If we lose the “How,” we lose our connection to our own capabilities. We become spectators of our own lives, watching a machine do the things we were built to do.

Honestly, the question is no longer whether AI can do the work. The question is what happens to the human spirit when there’s nothing left for us to do but watch?

This isn’t a problem with a simple solution or a “top five tips” list. It’s the fundamental question of our era. Right now, the people building the future don’t seem to have an answer for it.

As educators and professionals, we need to be asking these questions with our students and colleagues. Not because we have the answers, but because the conversation itself is urgent. What does a meaningful life look like when work is optional? How do we preserve the transformative power of struggle in an age of instant results? Who decides what gets automated and what remains human?

These aren’t abstract philosophical puzzles. They are practical concerns that will reshape every institution we’re part of. And if we’re not talking about them now, we may not be able to rebuild what we’ve lost.

Primary Sources:

  • Vision: Sam Altman’s The Intelligence Age and Dario Amodei’s Machines of Loving Grace.
  • Future of Skills: Jensen Huang’s recent NVIDIA keynote, which explains why the “syntax” of coding and traditional learning is shifting.
  • Economics: U.S. labor market data via the Bureau of Economic Analysis and the mission of Mechanize regarding the replacement of labor.

This was developed with Gemini for research and Claude as an editorial thought partner. The argument, perspective, and insights are mine. Image created by Nano Banana.

The Dark Side of AI: What Market Volatility & Super Bowl Ads Reveal About the Future of Strategy

I recently wrote about a biological advantage of your Narrative Brain: our unique human ability to use conjecture (imagining a future) rather than just correlation (analyzing the past).

But as we head into Super Bowl weekend, a tension is emerging. It’s a conflict between the comfort of data-driven certainty and the messy, unpredictable nature of human creativity. And it’s making the market nervous.

The Market’s “Dark Side” of AI

Friday’s New York Times was direct: “The Dark Side of AI Weighs on the Stock Market.” After a year of AI euphoria, we’ve entered a phase of market volatility.

The index was down for the year, wiping out large gains from the last 12 months.

The anxiety isn’t just about AI taking jobs. It’s that AI might render business models obsolete by doing what those models were designed to do: optimize the known. If we only train human employees and students to act like algorithms (sorting data and following “best practices”), we make them replaceable by definition.

As a recent op-ed argued, AI is unparalleled at pattern recognition but lacks human judgment: our ability to decide whether an analysis is wise. For too long, we’ve been training leaders to be exactly what the market is now devaluing.

In Defense of the Map

To be clear, this isn’t an argument against data. I have many colleagues who have built their careers on the mastery of spreadsheets and analytics. Their work is vital. As S.I. Hayakawa might say, they are the Map-makers.

They provide clear, rigorous data that tells us where we are standing. Without them, we are flying blind. You can’t make an imaginative leap to the future if you don’t have a grounded understanding of the present. The “Map” (the spreadsheet) is the essential foundation.

Problems arise when we confuse the Map for the Territory. For decades, business schools and C-suites have suffered from Physics Envy: the desire to turn strategy into a “hard science” with universal, immutable laws. We want to believe that if we input enough data, the “correct” strategy will be revealed.

Business is a human science, not a natural one. In physics, if you drop a ball, it falls. In marketing, if you drop a product or an ad, the result depends on culture, timing, and narrative. There are no universal laws of marketing success hidden in a spreadsheet.

When we demand every move be “statistically significant,” we create a ceiling. Statistical significance requires a large sample of the past, but innovation leaps ahead. It’s often a statistical outlier that only hindsight confirms.

The Snickers Paradox: Why AI “Fails” at the Super Bowl

Nothing illustrates this better than a masterclass in human insight: the Snickers “Betty White” spot. It’s recognized as one of the best Super Bowl ads of the last 25 years.

My former agency, BBDO, created this campaign (though I didn’t work on this specific account). This week, predictive AI tools like Neurons Inc. released an analysis claiming the ad is “imperfect” because the branding and logo appear too late, in just the last 11 seconds.

Despite the Snickers Super Bowl ad’s marketing success, AI says it focuses too much on the people and not enough on the brand.

Using AI analysis, the company explains there is too much attention on the people and not enough on the product and brand, with the word “Snickers” only mentioned at 19 seconds.

From a “best-practice probability” standpoint, AI is right. The “Map” says you should brand early. But the “Territory” of human emotion tells a different story. The ad worked because it used conjecture to build tension.

You’re hooked by a 90-year-old woman being tackled in a mud pit, and the product is the “Aha!” solution. If you give away the ending in the beginning, you remove the interest. You create an ad that shows the brand early to follow a “best practice” but is ignored.

Neurons Inc. and their AI say there’s too much attention on the people. Viewers may enjoy the story, but according to the model, this wildly successful ad has a lot of room for improvement.

Math, Meet Magic

Data did play a role in the campaign. As David Lubars, BBDO’s Global Chief Creative Officer, has noted, qualitative research identified a globally consistent “code of conduct” for how people interact within a group.

The creative team synthesized this into a profound human and product truth: When you’re hungry, you’re not on your game. Snickers is substance that “sorts you out.”

They didn’t just find a data point. They designed a narrative. And the world responded:

  • Recognition: The ad topped the USA Today Ad Meter as the #1 Super Bowl spot.
  • Buzz: It generated over 91 days of media coverage from a single 30-second spot and 400 million unpaid media impressions—a value of $28.6 million, or 11.4 times the initial investment.
  • Effectiveness: It won an EFFIE for both creativity and marketing effectiveness, helping Snickers sales increase by $376 million during the two-year period from 2010 to 2012.
  • Bottom Line: Sales grew by 15.9% in the first year, increasing market share in 56 of its 58 global markets.
Number one in the Super Bowl ad ratings and still engaging today.

Math, Confirm Magic

After my advertising creative career, I entered academia as a professor. One of the first academic studies I did was on this very phenomenon. My research partner Michael Coolsen and I found that “For Brands, A Little Drama Is a Good Thing.”

Our research showed that telling complete stories following a five-act form increases Super Bowl ad ratings and YouTube shares and views. We used the “Math” of academic research to confirm the “Magic” of storytelling. A Narrative Brain isn’t just a creative preference. It’s a measurable business driver.

For years, people searched for the magic bullet that would make a Super Bowl ad a hit or a YouTube video go viral. Was it celebrities, or animals, humor or … ? We found that none of that matters, not even placing the brand in the last 10 seconds of the spot or video. What matters is telling a complete story.

This aligns with Angus Fletcher’s research on the Narrative Brain which shows the human mind isn’t a logic processor. It’s a story processor. While AI is stuck in the world of probability (what usually happens), the human brain is built for narrative intuition (imagining what could happen) – our “Primal Intelligence.”

Our Human Edge

This is the approach we’re exploring in the Markets, Innovation & Design program at Bucknell University. We’re not “anti-data.” We’re “pro-human integration.” The spreadsheet is the starting line, not the finish line. We also integrate with the liberal arts, so our students don’t just learn “business.” They pull from broad disciplines such as psychology, sociology, and literature to understand the human “Territory.”

  1. Analyze the Map: Use data and analytics to see where the market is.
  2. Enter the Territory: Use empathy and observation to find the human insight.
  3. Make the Creative Leap: Move from Correlation (Map) to Conjecture (Driver) to design a future the data hasn’t seen.

Unpredictability as a Competitive Advantage

The market is currently punishing companies that look like they can be replaced by an algorithm. The antidote to that fear is a more balanced approach to business.

We need the Map-makers to show us where we are, but we need the Narrative Designers to decide where we’re going. The most successful strategies, the ones that win Super Bowl ad ratings and dominate markets, aren’t found in a “best-practice” playbook. They’re designed by people who look at the Map and then choose to drive somewhere the Map never saw coming.

About This Post’s Creation: It was developed in partnership with Gemini and Claude. AI helped bridge the market news with my last post; the core perspective, firsthand experience, and research insights remain my own.