The AI Agents Are Coming! So Are The Reasoning Models. Will They Take Our Jobs And How Should We Prepare?

AI image generated using Google ImageFX from a prompt “Create a digital painting depicting Paul Revere on his midnight ride, but instead of a person riding the horse it is a futuristic robotic AI agent yelling 'The AI Agents are coming for your jobs!'"

Last fall I traveled to MIT to watch my daughter play in the NCAA volleyball tournament. On the way, we passed signs for Lexington and Concord, and AI agents were on my mind. There was a sudden buzz about AI agents and how they're coming for our jobs, and the image of Paul Revere came to mind.

Instead of warning that the Redcoats were coming to seize munitions at Concord, the Reveres of today warn of AI agents stealing our jobs. Then new AI reasoning models were released, causing another surge in discussion. As at Lexington Green, have the first shots at our jobs been fired by reasoning AI agents?

AI image generated using Google ImageFX from the  prompt “Create a painting depicting Paul Revere on his midnight ride, but instead of a person it is a robotic AI agent yelling ‘The AI Agents are coming for your jobs!’.” https://labs.google/fx/tools/image-fx

What is an AI agent?

Search interest in AI agents spiked in January. Search "AI agents" and Google returns 216 results; read through many of them and you will find probably half as many definitions. For simplicity, I will begin by quoting the Marketing AI Institute's Paul Roetzer: "An AI agent takes action to achieve goals."

That doesn't sound scary. What's driving interest and fear is adding the word autonomous. Roetzer and co-founder Mike Kaput have created a helpful Human-to-Machine Scale that depicts five levels (0–4) of AI autonomous action.

Marketing AI Institute’s Human-to-Machine Scale:

  • Level 0 is all human.
  • Level 1 is mostly human.
  • Level 2 is half and half.
  • Level 3 is mostly machine.
  • Level 4 is all machine or full autonomy.

Full autonomy over complete jobs is certainly fear inducing! Large language model companies like OpenAI and Google, and SaaS companies integrating AI, are promising increased autonomous action. Salesforce has even named its AI products Agentforce, which sounds like an army coming to take over our jobs! Put some red coats on them and my Paul Revere analogy really comes to life.

Every player in AI is going deep.

In September, Google released a white paper, "Agents," to little attention. Now, after the release of reasoning models, everyone, including VentureBeat, is analyzing it. In the paper, Google predicts AI agents will reason, plan, and take action, including interacting with external systems, making decisions, and completing tasks. In other words, AI agents acting on their own with deeper understanding.

OpenAI claims its new tool Deep Research can complete a detailed research report with references in "tens of minutes," something that might take a human many hours. Google's Gemini also has Deep Research, Perplexity has launched Deep Research, Copilot now has Think Deeper, Grok 3 has a Deep Search tool, and there's the new Chinese company DeepSeek. Anthropic has released what it is calling the first hybrid reasoning model: Claude 3.7 Sonnet can produce near-instant responses or extended, step-by-step thinking that is made visible to the user. The Redcoats are coming, and they're all in on deep thinking.

Graphs of Google Trends search data showing an increase in search for AI Agents and Reasoning Models.
Interest in and discussion about AI Agents and AI Reasoning Models has risen sharply. Graphs from https://trends.google.com/trends/

What is a reasoning model?

Google explains that Gemini 2.0 Flash Thinking is "our enhanced reasoning model, capable of showing its thoughts to improve performance and explainability." A definition for reasoning models may be even more difficult and contested than one for AI agents. The term returns 163 results in a Google search and perhaps just as many definitions.

For my definition of a reasoning model, I turn to Christopher Penn. In his "Introduction to Reasoning AI Models," Penn explains, "AI – language models in particular – perform better the more they talk … The statistical nature of a language model is that the more talking there is, the more relevant words there are to correctly guess the next word." Reasoning models slow LLMs down, making them work through a process that considers more words.

LLMs and reasoning models are not magic.

Penn further explains that good prompt engineering includes chain of thought, reflection, and reward functions. Yet most people don't use them, so reasoning models make the LLM do it automatically. I went back to MIT, not for volleyball, but for further help with this definition. The MIT Technology Review explains that these new models use chain-of-thought prompting and reinforcement learning across multiple steps.

An AI prompt framework, such as the one I created, will improve your results without reasoning models. You also may not need a reasoning model for many tasks; reasoning models cost more and use more energy. Experts like Trust Insights recommend slightly different prompting for reasoning models, such as Problem, Relevant Information, and Success Measures. Brooke Sellas of B Squared Media shared OpenAI President Greg Brockman's reasoning prompt of Goal, Return Format, Warnings, and Context Dump.
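To make that structure concrete, here is a minimal sketch in Python of what a prompt assembled from those four sections might look like. The function name and the example field contents are my own invented illustrations, not Brockman's; only the four section labels come from the framework described above.

```python
# Minimal sketch of a structured reasoning-model prompt following the
# Goal / Return Format / Warnings / Context Dump outline. The helper
# function and all example text below are hypothetical illustrations.

def build_reasoning_prompt(goal: str, return_format: str,
                           warnings: str, context: str) -> str:
    """Assemble the four labeled sections into a single prompt string."""
    return "\n\n".join([
        f"Goal: {goal}",
        f"Return Format: {return_format}",
        f"Warnings: {warnings}",
        f"Context Dump: {context}",
    ])

prompt = build_reasoning_prompt(
    goal="Summarize the competitive landscape for AI agent platforms.",
    return_format="Five bullet points followed by a one-paragraph recommendation.",
    warnings="Cite only sources you are given; flag any uncertainty explicitly.",
    context="We advise mid-market B2B clients evaluating tools like Agentforce.",
)
```

The point is not the code itself but the discipline it encodes: stating the goal, the expected output shape, the guardrails, and the background up front gives a reasoning model a clear process to work through.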

Many want a magical AI tool that does everything. In reality, different AI tools are better for different tasks. Penn explains that generative AI is good with language, but for other tasks, traditional forms of AI like regression or classification, or even non-AI statistical models, can be a better solution.

How we talk about AI matters.

Humans are attracted to the magical capabilities of AI. Folk tales like The Sorcerer's Apprentice, which you may know from Disney's Fantasia, are about objects coming to life to do tasks for us. Reasoning models are said to have agentic behavior, the ability to make independent decisions in pursuit of a goal. Intentional or not, "agentic" sounds like "angelic," bringing up mystical thoughts of angels and the supernatural.

Since the first post in my AI series, I've argued for maintaining human agency and keeping humans in the loop. Therefore, I want to be careful in how I talk about these new "reasoning" models that show us their "thinking." I agree with Marc Watkins's recent Substack post "AI's Illusion of Reason" that the way we talk about these AI models matters.

An AI model that pauses before answering and shows the process it followed isn't necessarily thinking. It's still a mathematical prediction machine. It doesn't comprehend or understand what it is saying. Referring to ChatGPT or Gemini as "it" rather than "he" or "she" (no matter the voice) matters.

Google Gemini 2.0 Flash Thinking
I asked Google's reasoning model Gemini 2.0 Flash Thinking the difference between human thinking and AI "thinking." From https://aistudio.google.com/

What’s the difference between human and AI thinking?

I asked Google's reasoning model Gemini 2.0 Flash Thinking the difference between human thinking and AI thinking. It said, "AI can perform tasks without truly understanding the underlying concepts or the implications of its actions. It operates based on learned patterns, not genuine comprehension." Does this raise any concerns for you as we move toward fully autonomous AI agents?

Humans need to stay in the loop. Even then, you need a human who truly understands the subject, context, field, and/or discipline. AI presents its answers in a convincing, well-written manner, even when it's wrong. Human expertise and discernment are needed. Power without understanding can lead to Sorcerer's Apprentice syndrome: a small mistake by an unchecked autonomous agent could escalate quickly.

In a Guardian article, Andrew Rogoyski, a director at the Institute for People-Centred AI, warns of people using AI deep research responses verbatim without performing checks on what was produced. Rogoyski says, "There's a fundamental problem with knowledge-intensive AIs and that is it'll take a human many hours and a lot of work to check whether the machine's analysis is good."

Let’s make sure 2025 is not like 1984.

I recently got the 75th anniversary edition of George Orwell’s 1984. I hadn’t read it since high school. It was the inspiration behind Apple’s 1984 Super Bowl ad – an example of the right message at the right time. It may be a message we need again.

AI isn’t right all the time and right for everything. It’s confident and convincing even when it’s wrong. No matter how magical AI’s “thinking” seems, we must think on our own. As AI agents and reasoning models advance, discernment is needed, not unthinking acceptance.

The 250th anniversary of Paul Revere's ride and the "shot heard 'round the world" is this April. Will AI agents and reasoning models bring a revolution in jobs in 2025? In my next post, "How Will AI Agents Impact Marketing Communications Jobs & Education? See Google's AI Reasoning Model's 'Thoughts' And My Own," I take a deep dive into how AI may impact marketing and communications jobs and education. What's your excitement or fear about AI agents and reasoning models?

This Was Human-Created Content!

Marketing Communications: The Language That Drives Business Revenue

An article I tweeted discusses the increasing emphasis on content creation in marketing: "@Kquesen: The tables turn – in social media marketers must think & act like publishers: 4 tips for building brand & audience … http://t.co/MQiGxDFe"

But that is just one small example. BtoB magazine reports that marketers such as Nick Panayi of the IT services company Computer Sciences Corp. have gone all-in on content creation, with an in-house department of former journalists who create branded content for the company's website and social media channels.

Marketers are becoming bloggers, tweeting, creating videos, and filling Facebook pages. They are creating a lot of content: content with value that delivers knowledge or entertainment, something people will choose to engage with the way they do newspapers, magazines, and TV. Forbes agrees, saying brands such as Virgin Mobile, American Express, Marriott, L'Oreal, and Vanguard are becoming publishers and that this is a vital part of their overall strategy.

You may call this content marketing, but it made me think about the overall importance of communications in business. We live in an age of customer-driven capitalism where the customer is in charge. As Steve Denning, author of Radical Management, points out in one example, "… focus on customers first doesn't hurt Whole Foods's bottom line. The ten year share price of Whole Foods is up 330%, compared +30 percent for the S&P 500, and minus 40% for a traditionally managed supermarket chain like Safeway." That's consumer-focused communications increasing revenue.

How many e-books and white papers are you invited to download? They are generating valuable sales leads. As I highlighted in an earlier post, Forrester Research reports in the book Groundswell a case study in which a corporate blog is credited with generating five contacts a week, contacts that represent early leads worth millions of dollars to this B-to-B company's salespeople. That's consumer-focused communications increasing revenue.

A Bloomberg Businessweek article credits carefully worded and tested fundraising e-mails as the main source of the $690 million raised online for the Obama campaign. Of hundreds of tested subject lines, "Hey" was the most successful, bringing in millions of dollars on its own. That's consumer-focused communications increasing revenue.

Those are positive examples, but poor communications can cost corporations revenue. Poor communication contributed a great deal to Merck losing $253 million in the Vioxx trial; the jury was confused by the company's scientific explanations. The Wall Street Journal reports juror John Ostrom as saying, "Whenever Merck was up there, it was like wah, wah, wah. We didn't know what the heck they were talking about." That's poor consumer communications losing revenue.

An Accenture study reports that American and European consumers returned over $25 billion worth of electronics in 2007. Between 60% and 85% of those returns had nothing wrong with them ($15.2 to $21.5 billion worth). Why? Confusing interfaces, hard-to-access features, no customer education, and weak documentation. That's poor consumer communications losing revenue.

In 2009, a disgruntled customer, musician Dave Carroll, used YouTube and Twitter to spread a music video about United Airlines' mishandling of his $3,500 guitar. Within a week the video received 3 million views (12.5 million by 2012) and coverage on CNN, The Wall Street Journal, the BBC, and the CBS Morning Show. Fast Company reported that Carroll had contacted United for nine months with calls and emails, but only after the video's success and a 10% ($180 million) drop in United's stock price did the company try to make things right. That's poor consumer communications losing revenue.

Then there is the tweet that sent the Dow down 145 points in the spring of 2013. Hackers used the AP's @AP account to spread a rumor that two explosions had gone off at the White House, injuring the president. This caused a two-minute selling spree in which stocks briefly lost $200 billion in value, emphasizing the power of social media content over the financial industry.

Of course, no article about communications would be complete without a reference to Apple, the world's most powerful brand, valued at $87.1 billion. In the Entrepreneur article "Steve Jobs and the Seven Rules of Success," six of the seven rules are communications-oriented: have passion, deliver a vision, make connections, create experiences, master messages, and sell dreams.

Jeffrey Rohrs takes this concept to the next level in his book Audience: Marketing in the Age of Subscribers, Fans & Followers. Communications, through publishing content, is how you build an audience, and a proprietary audience is a valuable business asset. Whether you are a CEO, CMO, marketer, or entrepreneur, communications can be a competitive advantage. Do you believe that what we say and how we say it matters to the bottom line?