The AI Agents Are Coming! So Are The Reasoning Models. Will They Take Our Jobs And How Should We Prepare?

AI image generated using Google ImageFX from a prompt “Create a digital painting depicting Paul Revere on his midnight ride, but instead of a person riding the horse it is a futuristic robotic AI agent yelling 'The AI Agents are coming for your jobs!'"

Last fall I traveled to MIT to watch my daughter play in the NCAA volleyball tournament. On the way, we passed signs for Lexington and Concord. AI agents were on my mind; there was a sudden buzz about how they're coming for our jobs, and the image of Paul Revere sprang to mind.

Instead of warning that the Redcoats were coming to seize munitions at Concord, the Paul Reveres of today warn that AI agents are coming to steal our jobs. Then new AI reasoning models were released, causing another surge in the discussion. As at Lexington Green, have the first shots been fired at our jobs by reasoning AI agents?

AI image generated using Google ImageFX from the prompt “Create a painting depicting Paul Revere on his midnight ride, but instead of a person it is a robotic AI agent yelling ‘The AI Agents are coming for your jobs!’.” https://labs.google/fx/tools/image-fx

What is an AI agent?

Search interest in AI agents spiked in January. A Google search for “AI agents” returns 216 results, and reading through many of them you'll find probably half as many definitions. For simplicity, I will begin by quoting the Marketing AI Institute's Paul Roetzer: “An AI agent takes action to achieve goals.”

That doesn’t sound scary. What’s driving interest and fear is adding the word autonomous. Roetzer and co-founder Mike Kaput have created a helpful Human-to-Machine Scale that depicts five levels of autonomous AI action, from 0 to 4.

Marketing AI Institute’s Human-to-Machine Scale:

  • Level 0 is all human.
  • Level 1 is mostly human.
  • Level 2 is half and half.
  • Level 3 is mostly machine.
  • Level 4 is all machine or full autonomy.

Full autonomy over complete jobs is certainly fear-inducing! Large language model companies like OpenAI and Google, along with SaaS companies integrating AI, are promising increased autonomous action. Salesforce has even named its AI product line Agentforce, which literally sounds like an army coming to take over our jobs! Put some red coats on them and my Paul Revere analogy really comes to life.

Every player in AI is going deep.

In September, Google released a white paper, “Agents,” to little attention. Now, after the release of reasoning models, everyone including VentureBeat is analyzing it. In the paper, Google predicts AI agents will reason, plan, and take action, including interacting with external systems, making decisions, and completing tasks – AI agents acting on their own with deeper understanding.

OpenAI claims its new tool Deep Research can complete a detailed research report with references in “tens of minutes” – something that might take a human many hours. Google’s Gemini also has Deep Research, Perplexity has launched Deep Research, Copilot now has Think Deeper, Grok 3 has a Deep Search tool, and there’s the new Chinese company DeepSeek. Anthropic has released what it calls the first hybrid reasoning model: Claude 3.7 Sonnet can produce near-instant responses or extended, visible step-by-step thinking. The Redcoats are coming, and they’re all in on deep thinking.

Graphs of Google Trends search data showing an increase in search for AI Agents and Reasoning Models.
Interest in and discussion about AI Agents and AI Reasoning Models has risen sharply. Graphs from https://trends.google.com/trends/

What is a reasoning model?

Google explains that Gemini 2.0 Flash Thinking is “our enhanced reasoning model, capable of showing its thoughts to improve performance and explainability.” A definition for reasoning models may be even more difficult and contested than one for AI agents. The term returns 163 results in a Google search and perhaps just as many definitions.

For my definition of a reasoning model, I turn to Christopher Penn. In his “Introduction to Reasoning AI Models,” Penn explains, “AI – language models in particular – perform better the more they talk … The statistical nature of a language model is that the more talking there is, the more relevant words there are to correctly guess the next word.” Reasoning models slow LLMs down so they consider more words through a structured process.

LLMs and reasoning models are not magic.

Penn further explains that good prompt engineering includes chain of thought, reflection, and reward functions. Yet most people don’t use them, so reasoning models make the LLM do it automatically. I went back to MIT, not for volleyball, but for further help with this definition. The MIT Technology Review explains that these new models use chain of thought and reinforcement learning across multiple steps.

An AI prompt framework, such as the one I created, will improve your results without reasoning models. You also may not need a reasoning model for many tasks; reasoning models cost more and use more energy. Experts like Trust Insights recommend slightly different prompting for reasoning models, such as Problem, Relevant Information, and Success Measures. Brooke Sellas of B Squared Media shared OpenAI President Greg Brockman’s reasoning prompt of Goal, Return Format, Warnings, and Context Dump.
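As a minimal illustrative sketch (not any vendor's official API or tool), the two reasoning-prompt structures mentioned above can be captured as simple labeled-section templates; the helper function and example content here are my own assumptions for demonstration:

```python
# Illustrative sketch: assembling reasoning-model prompts from the two
# frameworks mentioned above. Field names follow the frameworks; the
# helper function itself is hypothetical, not a vendor API.

def build_prompt(sections: dict[str, str]) -> str:
    """Join labeled sections into a single prompt string."""
    return "\n\n".join(f"{label}:\n{text}" for label, text in sections.items())

# Trust Insights-style structure: Problem, Relevant Information, Success Measures
trust_insights_prompt = build_prompt({
    "Problem": "Our organic social engagement dropped 20% last quarter.",
    "Relevant Information": "We post daily on three platforms; budget is fixed.",
    "Success Measures": "A prioritized list of three testable changes.",
})

# Greg Brockman-style structure: Goal, Return Format, Warnings, Context Dump
brockman_prompt = build_prompt({
    "Goal": "Recommend a posting cadence for a small nonprofit.",
    "Return Format": "A short table: platform, frequency, rationale.",
    "Warnings": "Do not invent engagement statistics.",
    "Context Dump": "Audience skews 45+; strongest channel is Facebook.",
})

print(trust_insights_prompt.splitlines()[0])  # prints "Problem:"
```

Either structure gives the model its goal, its constraints, and its context up front, which is exactly the work a reasoning model otherwise tries to do for you.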

Many want a magical AI tool that does everything. In reality, different AI is better for different things. Penn explains generative AI is good with language, but for other tasks, traditional forms of AI like regression, classification, or even non-AI statistical models can be a better solution.

How we talk about AI matters.

Humans are attracted to the magical capabilities of AI. Folk tales like The Sorcerer’s Apprentice, which you may know from Disney’s Fantasia, are about objects coming to life to do tasks for us. Reasoning models are said to have agentic behavior – the ability to make independent decisions in pursuit of a goal. Intentional or not, “agentic” sounds like “angelic,” conjuring mystical thoughts of angels and the supernatural.

Since the first post in my AI series, I’ve argued for maintaining human agency and keeping humans in the loop. Therefore, I want to be careful in how I talk about these new “reasoning” models that show us their “thinking.” I agree with Marc Watkins’s recent Substack “AI’s Illusion of Reason” that the way we talk about these AI models matters.

An AI model that pauses before answering and shows the process it followed doesn’t mean it is thinking. It’s still a mathematical prediction machine. It doesn’t comprehend or understand what it is saying. Referring to ChatGPT or Gemini as “it” versus “he” or “she” (no matter the voice) matters.

Google Gemini 2.0 Flash Thinking
I asked Google’s reasoning model Gemini 2.0 Flash the difference between human thinking and AI “thinking.” From https://aistudio.google.com/

What’s the difference between human and AI thinking?

I asked Google’s reasoning model Gemini 2.0 Flash the difference between human thinking and AI thinking. It said, “AI can perform tasks without truly understanding the underlying concepts or the implications of its actions. It operates based on learned patterns, not genuine comprehension.” Does this raise any concerns for you as we move toward fully autonomous AI agents?

Humans need to stay in the loop. Even then, you need a human who truly understands the subject, context, field, and/or discipline. AI presents its answers in a convincing, well-written manner – even when it’s wrong. Human expertise and discernment are needed. Power without understanding can lead to Sorcerer’s Apprentice syndrome. A small mistake with an unchecked autonomous agent could escalate quickly.

In a Guardian article, Andrew Rogoyski, a director at the Institute for People-Centred AI, warns of people using AI deep-research responses verbatim without checking what was produced. Rogoyski says, “There’s a fundamental problem with knowledge-intensive AIs and that is it’ll take a human many hours and a lot of work to check whether the machine’s analysis is good.”

Let’s make sure 2025 is not like 1984.

I recently got the 75th anniversary edition of George Orwell’s 1984. I hadn’t read it since high school. It was the inspiration behind Apple’s 1984 Super Bowl ad – an example of the right message at the right time. It may be a message we need again.

AI isn’t right all the time and right for everything. It’s confident and convincing even when it’s wrong. No matter how magical AI’s “thinking” seems, we must think on our own. As AI agents and reasoning models advance, discernment is needed, not unthinking acceptance.

The 250th anniversary of Paul Revere’s ride and the “shot heard ’round the world” is in April this year. Will AI agents and reasoning models be a revolution in jobs in 2025? In my next post, “How Will AI Agents Impact Marketing Communications Jobs & Education? See Google’s AI Reasoning Model’s ‘Thoughts’ And My Own,” I take a deep dive into how AI may impact marketing and communications jobs and education. What’s your excitement or fear about AI agents and reasoning models?

This Was Human Created Content!

Beyond AI Bans: An End of Year AI Integration Pep Talk for Educators.

AI image showing a university professor burning AI inspired by the book Fahrenheit 451.

In December 2022, my first experience with AI was using ChatGPT to write a blog article about social media marketing. I’d been practicing and teaching social media for over a decade, yet ChatGPT wrote an impressive, scary-good article in less than a minute – something that might have taken me hours!

How did you feel after your first use of ChatGPT? Since then I’ve had ups and downs with Generative AI. From full embrace and cautious integration to dystopian fear and overt avoidance. It’s been a long journey, but I’ve learned much along the way.

The end of the year is a time for reflection.

What I find I need at the end of a long hard year is a pep talk. Anyone else? December alone gifted us “12 days of OpenAI” and major updates from most AI companies like Google, Anthropic, Perplexity, Meta, Apple, Microsoft, IBM, and xAI. I’m still processing what happened in Fall classes and have just two weeks to update courses for Spring.

I can relate to what AI expert Marc Watkins says in his latest Substack,

“I need a reset. Truly, we all do. For the past two years, educators have been asked to reevaluate their teaching and assessments in the wake of ChatGPT, adopt or refuse it, develop policies, and become AI literate. Except generative AI isn’t a normal or novel development within our field of study we can attend some conferences or webinars to understand its impact to keep up with it. None of this has been normal…”

University faculty are woefully behind.

I’ve accomplished much since Fall 2022: Two books, four research articles, three conference presentations, a top teaching paper award, and multiple AI presentations to professionals and faculty. Yet, negatives have me losing sight of the positives.

This fall my LinkedIn feed felt full of posts and comments about how far behind university professors are on AI. I know some critiques are valid. In my first adjunct appointment in 2009, a media professor still didn’t teach the Internet because “it was a fad.” Like any profession, ours has its dinosaurs.

University faculty are leading AI adoption.

However, the profs I mostly interact with are working hard to learn and keep up. For every head-in-the-sand professor, there are plenty trying to keep their heads above water with the pace of AI change. My workload has increased with AI, not decreased.

So it’s hard to read comments that generalize us all as behind and advocate for replacing us with AI teaching agents. The profs I follow, like Ethan Mollick and Marc Watkins, aren’t just teaching but innovating with AI in education and their professional disciplines.

Professors are old and boring.

Despite many more positive comments and evidence of grads excelling, human tendency is to focus on the negative. Years ago, I got a student comment,

“I can’t believe someone old enough to be my dad is teaching social media.”

Another student once told me I need to update my headshot because I don’t look like the website photo anymore. Then there’s the student who said my voice is monotone and boring. Ouch! Despite being in the minority, those comments still hurt and I have trouble forgetting them years later.

Professors have wisdom from experience.

Does age equate to being behind? I have a much bigger picture of the world and have lived through many waves of tech advancements. I’ve also spent nearly two decades practicing marketing and now a decade researching and teaching it. A week ago I received this comment from a student’s internship report,

“My academic background in marketing, particularly courses in social media marketing and digital, laid a solid foundation for this internship. Concepts learned in these courses proved instrumental in creating effective social media posts. Without these courses, my social content would have not been as effective or efficient.”

Great, right? Yes, but I still struggle to get the negative out of my head. I know I’m not auditioning for America’s Got Talent – I’m an educator, not an entertainer – so why can’t I let it go? Human brains have a negativity bias. We all tend to engage with, emphasize, and focus on the negative – something social media algorithms take advantage of to keep us scrolling.

So thanks to the grad from two years ago who recently gave me a LinkedIn shout-out for my project management software and HubSpot certificate integrations preparing him well. I also appreciate the student graduating this Spring who has had two internships and has already been hired into her dream sports marketing job. She thanked me for what she learned in my digital marketing and other classes to get her there.

We need grace, humility, and confidence.

Constructive criticism is key to learning and advancement, but you also can’t take it too much to heart. You’ll either be so discouraged you give up or you’ll become too timid to experiment for fear of the negative. I am in that moment right now.

I apologize to students and professionals in my field for the ways I was behind in AI advancement or the days I wasn’t fully engaging. Hopefully, there is room for grace. I’m also humble enough to take the things I can improve upon and implement them in this short window before next semester. To do that, I need a boost of confidence.

So this is a pep talk for those profs and professionals who don’t have their heads in the sand but are trying to keep their heads above water. I’m striving for humility to learn from critiques, grace for my failings, and confidence to head into the Spring semester – with the audacity to teach digital and social media marketing in my early 50s.

AI image generated using Google ImageFX from a prompt to show a university professor burning AI inspired by the book Fahrenheit 451. https://labs.google/fx/tools/image-fx

We need to be more human, more bold.

Speaking of audacity – it’s the motivation for my main article image generated by Google’s ImageFX. My prompt? Show a university professor burning AI, inspired by Fahrenheit 451. My human fireworks is refusing to be replaced by AI teaching agents or young YouTubers selling top-10 strategies for social media success. Marketing thought leader Mark Schaefer inspired the image, saying,

“AI has helped create a marketing pandemic of dull. It’s not your fault. Your company probably rewards you for being boring. You’re Google-sufficient and optimized. They’re trying to keep you in their box. But the AI bots are coming. You need to do something, and you need to do it now. It’s time to unleash the HUMAN fireworks in your content. There is no choice. You need to be audacious.”

Thanks for leading us to the future, Mark (someone older than me). This is my audacious post that couldn’t be written by AI. AI can’t explain what it feels like to be a professor at this moment or a professional fearing job loss. AI can’t know what it is to fear its own adoption, or what it is to have grace, humility, and confidence. Google’s AI Overview did give me a nice definition, though:

“A state of being confident in one’s abilities while also acknowledging limitations and approaching situations with kindness and respect.”

In bold confidence we also need caution.

While we have no choice in adopting AI, we have a choice in how. Human agency still exists. I don’t want to make the mistakes we made with social media. Have you read Haidt’s book, The Anxious Generation?

Between my period of AI avoidance (pushing off meetings with faculty development) and AI embrace (agreeing to a five-part AI integration workshop), I created a framework and process to strategically apply AI.

“Move fast and break things” may have helped develop AI, but I’d rather not. A benefit of academia I didn’t have in the fast-paced ad agency world is time for reflection. Marketing success is based on frameworks and processes. I needed that for integrating AI. The result was my summer AI blog series:

  1. Artificial Intelligence Use: A Framework For Determining What Tasks To Outsource To AI [Template]
  2. AI Task Framework: Examples of What I’d Outsource To AI And What I Wouldn’t.
  3. AI Prompt Framework: Improve Results With This Framework And Your Expertise [Template].
  4. More Than Prompt Engineers: Careers With AI Require Subject Matter Expertise [Infographic].
  5. Joy Interrupted: AI Can Distract From Opportunities For Learning And Human Connection.

How I integrated AI in Fall classes.

Coming out of summer, I went through every class and assignment looking specifically for places where I felt AI would be helpful for student learning and where it would not. I tried AI on tasks in my assignments and shared what I found with students.

Example of how I gave students specific ways to use AI for one assignment.

Each assignment had an AI section telling students which specific aspects of the assignment to use AI for, and how. There was no general ban, but also no green light for all-out use. Using AI for everything shortchanges the learning process, as the infographic below illustrates.

This graphic shows the stages of learning: attention, encoding, storage, and retrieval. Your brain needs to go through this process to learn – you can’t just hand the process to AI.
Click the image for a downloadable PDF of this graphic.

I also had a consistent general AI statement on my syllabi (see below). I directed students on when and how to cite AI, and what AI to use with links and directions to use it. I sent them to Copilot for convenience and financial considerations as all students had access to GPT-4 and DALL-E 3 free with their university Microsoft 365 account.

Beyond AI-specific uses in assignments, I had a general AI use policy.

I cautioned about AI copyright issues. I also didn’t want students using AI to complete an entire assignment, which is why I use Turnitin’s AI checker. I never relied on it alone, and academia isn’t the only field using AI detection. A digital marketing professional who guest-spoke last term told students their agency uses AI in many ways but runs AI detectors on their writers’ work. If a client is paying for human-created content, they want to ensure they get it.

Student uses of AI in assignments.

AI helped students brainstorm and express their ideas. Groups in Integrated Marketing Communications created campaigns for brands like Qdoba. In a class with few graphic design or art students, DALL-E through Copilot enabled them to create customized storyboards of their TV ads and YouTube bumper ads.

A custom storyboard for the Qdoba student team's IMC campaign using DALL-E via Copilot.

We talked about AI content being great for selling ideas, but there may be copyright issues in publishing it. There’s also a potential consumer backlash, as highlighted in recent Ad Age articles and Harris Polls.

Example Copilot prompt to find social media influencers.
Students used Copilot to find influencers for their brand social media projects following the prompt framework below.

In social media marketing, students used AI to generate variations of social content captions. Our social media simulation requires many organic posts that must vary for engagement and reach (as with real social posts). Students wrote the main message but let AI create versions tailored to each platform’s word counts. For a brand’s social strategies, they used AI to research influencers, get hashtag ideas, and create images to mock up brand social media posts.
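The per-platform length check in that workflow can be sketched in a few lines. This is a hypothetical illustration, not the tool my students used; the limits shown are commonly cited values that platforms can change, so verify them against current documentation:

```python
# Illustrative sketch: checking AI-generated caption variants against
# per-platform character limits. Limits are commonly cited values and
# may change; treat them as assumptions, not authoritative figures.

PLATFORM_LIMITS = {
    "x": 280,           # X (Twitter) standard post
    "instagram": 2200,  # Instagram caption
    "linkedin": 3000,   # LinkedIn post
}

def fits_platform(caption: str, platform: str) -> bool:
    """Return True if the caption fits within the platform's character limit."""
    return len(caption) <= PLATFORM_LIMITS[platform]

caption = "Join us this weekend for our campus pop-up - free samples all day!"
for platform in PLATFORM_LIMITS:
    print(platform, fits_platform(caption, platform))
```

A check like this is why students wrote one main message first: the human sets the idea, and the AI (or a script) only handles reshaping it to each platform's constraints.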

I also taught them prompting to get better results. Using the prompt framework below got my students and me much better results, and I’ve heard from colleagues at other universities who are using this framework with their students and seeing better results as well.

AI Prompt Framework Template with 1. Task/Goal 2. AI Persona 3. AI Audience 4. AI Task 5. AI Data 6. Evaluate Results.
Click the image to download a PDF of this AI Prompt Framework Template.
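To make the six steps of the template concrete, here is a minimal sketch of the framework as a reusable structure. The field names mirror the template above; the class itself and the example content are my own hypothetical illustration, not part of any AI vendor's library:

```python
# Illustrative sketch of the six-step AI Prompt Framework as a reusable
# template. Field names mirror the framework; the class is a hypothetical
# helper for demonstration only.
from dataclasses import dataclass

@dataclass
class PromptFramework:
    task_goal: str    # 1. Task/Goal - what you want accomplished
    ai_persona: str   # 2. AI Persona - the role the AI should adopt
    ai_audience: str  # 3. AI Audience - who the output is for
    ai_task: str      # 4. AI Task - the specific instruction
    ai_data: str      # 5. AI Data - context or source material

    def render(self) -> str:
        # 6. Evaluate Results happens after the response: a human reviews
        # the output against the goal before using it.
        return (
            f"Goal: {self.task_goal}\n"
            f"Act as: {self.ai_persona}\n"
            f"Audience: {self.ai_audience}\n"
            f"Task: {self.ai_task}\n"
            f"Data: {self.ai_data}"
        )

prompt = PromptFramework(
    task_goal="Draft three caption variants for a product launch post",
    ai_persona="a social media copywriter",
    ai_audience="college-age followers of a casual dining brand",
    ai_task="Write each variant under 280 characters with one hashtag",
    ai_data="Launch date, product name, and brand voice notes pasted below",
).render()
print(prompt)
```

Note that the sixth step, Evaluate Results, is deliberately not a prompt field: it is the human-in-the-loop review that no template can automate.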

What’s to come for the new year?

In my next post, I’ll share my plans for the Spring. Recent AI developments have opened up more possibilities. I’ll explain how I’m using NotebookLM as an AI tutor for one class. I’ll share how I’m going beyond Copilot to leverage new AI capabilities in Adobe Express and Google’s ImageFX.

I’ll also go deeper into AI’s new multimodal capabilities with videos exploring live audio interactions in NotebookLM’s Audio Overview and a demonstration of live video conversations with Gemini 2.0 as it “sees” what’s on my screen.

Banning AI and being behind on AI are the furthest things from my mind. I want to contribute to how AI can and should (or should not) advance marketing practice and teaching, to better prepare us all for the AI revolution.

What have been your struggles and successes with AI?

For my next post on AI see “AI’s Multimodal Future Is Here. Integrating New AI Capabilities Such As NotebookLM In The Classroom.”

100% Human Created!