The AI Agents Are Coming! So Are The Reasoning Models. Will They Take Our Jobs And How Should We Prepare?

AI image generated using Google ImageFX from a prompt “Create a digital painting depicting Paul Revere on his midnight ride, but instead of a person riding the horse it is a futuristic robotic AI agent yelling 'The AI Agents are coming for your jobs!'"

Last Fall I traveled to MIT to watch my daughter play in the NCAA volleyball tournament. On the way, we passed signs for Lexington and Concord. AI agents were on my mind; there was a sudden buzz about them and how they’re coming for our jobs. The image of Paul Revere came to mind.

Instead of warning that the Redcoats were coming to seize munitions at Concord, the Paul Reveres of today warn of AI agents stealing our jobs. Then new AI reasoning models were released, causing another surge in discussion. As at Lexington Green, have the first shots been fired at our jobs by reasoning AI agents?

AI image generated using Google ImageFX from the prompt “Create a painting depicting Paul Revere on his midnight ride, but instead of a person it is a robotic AI agent yelling ‘The AI Agents are coming for your jobs!’.” https://labs.google/fx/tools/image-fx

What is an AI agent?

Search interest in AI agents spiked in January. If you search “AI agents,” Google returns 216 million results, and reading through many of them turns up probably half as many definitions. For simplicity, I will begin by quoting Marketing AI Institute’s Paul Roetzer: “An AI agent takes action to achieve goals.”

That doesn’t sound scary. What’s driving interest and fear is adding the word autonomous. Roetzer and co-founder Mike Kaput have created a helpful Human-to-Machine Scale that depicts five levels of autonomous AI action, from Level 0 to Level 4.

Marketing AI Institute’s Human-to-Machine Scale:

  • Level 0 is all human.
  • Level 1 is mostly human.
  • Level 2 is half and half.
  • Level 3 is mostly machine.
  • Level 4 is all machine or full autonomy.

Full autonomy over complete jobs is certainly fear inducing! Large language model companies like OpenAI and Google, along with SaaS companies integrating AI, are promising increased autonomous action. Salesforce has even named its AI product Agentforce, which literally sounds like an army coming to take over our jobs! Put some red coats on them and my Paul Revere analogy really comes to life.

Every player in AI is going deep.

In September, Google released a white paper, “Agents,” that received little attention. Now, after the release of reasoning models, everyone including VentureBeat is analyzing it. In the paper, Google predicts AI agents will reason, plan, and take action, including interacting with external systems, making decisions, and completing tasks – AI agents acting on their own with deeper understanding.

OpenAI claims its new tool Deep Research can complete a detailed research report with references in “tens of minutes,” something that might take a human many hours. Google’s Gemini also has Deep Research, Perplexity has launched Deep Research, Copilot now has Think Deeper, Grok 3 has a Deep Search tool, and there’s the new Chinese company DeepSeek. Anthropic has not released a “deep” tool and instead recommends chain-of-thought (CoT) prompting in Claude for more complex tasks like research, analysis, or problem-solving to help reduce errors. The Redcoats are coming, and they’re all in on deep thinking.

Graphs of Google Trends search data showing an increase in search for AI Agents and Reasoning Models.
Interest in and discussion about AI Agents and AI Reasoning Models has risen sharply. Graphs from https://trends.google.com/trends/

What is a reasoning model?

Google explains that Gemini 2.0 Flash Thinking is “our enhanced reasoning model, capable of showing its thoughts to improve performance and explainability.” A definition for reasoning models may be even more difficult and contested than one for AI agents. The term returns 163 million results in a Google search and perhaps just as many definitions.

For my definition of a reasoning model, I turn to Christopher Penn. In his “Introduction to Reasoning AI Models,” Penn explains, “AI – language models in particular – perform better the more they talk … The statistical nature of a language model is that the more talking there is, the more relevant words there are to correctly guess the next word.” Reasoning models slow LLMs down so that they consider more words through a structured process.

LLMs and reasoning models are not magic.

Penn further explains that good prompt engineering includes a chain of thought, reflection, and reward functions. Yet most people don’t use them, so reasoning models make the LLM do it automatically. I went back to MIT, not for volleyball but for further help on this definition. The MIT Technology Review explains that these new models use chain-of-thought reasoning and reinforcement learning across multiple steps.
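To make that concrete, here is a minimal sketch of what adding those steps manually might look like, using the OpenAI Python client purely as an illustration. The model name and the sample question are placeholders, and a reasoning model would perform these steps on its own.

```python
# A minimal sketch (not any vendor's official method): manually adding the
# chain-of-thought and reflection steps that reasoning models now do on
# their own. Assumes the OpenAI Python client; model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "Should a small nonprofit pay for a reasoning-model subscription?"

# Plain prompt: the model answers right away, with few "relevant words"
# generated before it commits to an answer.
plain = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": question}],
)

# Chain-of-thought prompt: we explicitly ask for steps and a reflection
# pass, making the model "talk more" before answering, as Penn describes.
cot = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            f"{question}\n\n"
            "First, reason through the question step by step.\n"
            "Then, review your reasoning for errors or missing considerations.\n"
            "Finally, give your recommendation and explain why."
        ),
    }],
)

print(plain.choices[0].message.content)
print(cot.choices[0].message.content)
```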

An AI prompt framework, such as the one I created, will improve your results even without a reasoning model. You also may not need a reasoning model for many tasks, and reasoning models cost more and use more energy. Experts like Trust Insights recommend slightly different prompting for reasoning models, such as Problem, Relevant Information, and Success Measures. Brooke Sellas of B Squared Media shared OpenAI President Greg Brockman’s reasoning prompt of Goal, Return Format, Warnings, and Context Dump. Both structures are sketched below.
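As an illustration only, not an official template from either source, here is one way to express those two structures as reusable prompt builders. The section names come from Trust Insights and Brockman as described above; the function names and the sample content are hypothetical.

```python
# Illustration only: the two reasoning-model prompt structures above as
# reusable templates. Section names are from Trust Insights and Greg
# Brockman as quoted; function names and sample content are hypothetical.

def trust_insights_prompt(problem: str, relevant_info: str, success_measures: str) -> str:
    """Trust Insights style: Problem, Relevant Information, Success Measures."""
    return (
        f"Problem:\n{problem}\n\n"
        f"Relevant Information:\n{relevant_info}\n\n"
        f"Success Measures:\n{success_measures}"
    )

def brockman_prompt(goal: str, return_format: str, warnings: str, context_dump: str) -> str:
    """Greg Brockman's structure: Goal, Return Format, Warnings, Context Dump."""
    return (
        f"Goal:\n{goal}\n\n"
        f"Return Format:\n{return_format}\n\n"
        f"Warnings:\n{warnings}\n\n"
        f"Context Dump:\n{context_dump}"
    )

print(trust_insights_prompt(
    problem="Our course blog posts get little search traffic.",
    relevant_info="WordPress site, 40 student posts per semester, no SEO plugin.",
    success_measures="A prioritized list of fixes we can make in one class period.",
))
```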

Many want a magical AI tool that does everything. In reality, different AI is better for different things. Penn explains that generative AI is good with language, but for other tasks, traditional forms of AI like regression, classification, or even non-AI statistical models can be a better solution.
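To illustrate Penn’s point with a toy example, a small traditional model handles a classification task in a few lines, cheaply and reproducibly. The dataset and model choice here are mine for illustration, not Penn’s.

```python
# A toy example of the "right tool for the job" point: for a classification
# task, a small traditional model is often the better fit than a language
# model. Uses scikit-learn's bundled iris dataset for illustration.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Cheap, fast, and auditable compared to prompting a generative model.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.2f}")
```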

How we talk about AI matters.

Humans are attracted to the magical capabilities of AI. Folk tales like The Sorcerer’s Apprentice, which you may know from Disney’s Fantasia, are about objects coming to life to do tasks for us. Reasoning models are said to have agentic behavior – the ability to make independent decisions in pursuit of a goal. Intentional or not, “agentic” sounds like “angelic,” bringing up mystical thoughts of angels and the supernatural.

Since the first post in my AI series, I’ve argued for maintaining human agency and keeping humans in the loop. Therefore, I want to be careful in how I talk about these new “reasoning” models that show us their “thinking.” I agree with Marc Watkins’s recent Substack post “AI’s Illusion of Reason” that the way we talk about these AI models matters.

An AI model that pauses before answering and shows the process it followed doesn’t mean it is thinking. It’s still a mathematical prediction machine that doesn’t comprehend or understand what it is saying. Referring to ChatGPT or Gemini as “it” versus “he” or “she” (no matter the voice) matters.

Google Gemini 2.0 Flash Thinking
I asked Google’s reasoning model Gemini 2.0 Flash Thinking the difference between human thinking and AI “thinking.” From https://aistudio.google.com/

What’s the difference between human and AI thinking?

I asked Google’s reasoning model Gemini 2.0 Flash Thinking the difference between human thinking and AI thinking. It said, “AI can perform tasks without truly understanding the underlying concepts or the implications of its actions. It operates based on learned patterns, not genuine comprehension.” Does this raise any concerns for you as we move toward fully autonomous AI agents?

Humans need to stay in the loop. Even then, you need a human who truly understands the subject, context, field, and/or discipline. AI presents its answers in a convincing, well-written manner – even when it’s wrong. Human expertise and discernment are needed. Power without understanding can lead to Sorcerer’s Apprentice syndrome: a small mistake by an unchecked autonomous agent could escalate quickly.

In a Guardian article, Andrew Rogoyski, a director at the Institute for People-Centred AI, warns against using AI deep-research responses verbatim without checking what was produced. Rogoyski says, “There’s a fundamental problem with knowledge-intensive AIs and that is it’ll take a human many hours and a lot of work to check whether the machine’s analysis is good.”

Let’s make sure 2025 is not like 1984.

I recently got the 75th anniversary edition of George Orwell’s 1984. I hadn’t read it since high school. It was the inspiration behind Apple’s 1984 Super Bowl ad – an example of the right message at the right time. It may be a message we need again.

AI isn’t right all the time, and it isn’t right for everything. It’s confident and convincing even when it’s wrong. No matter how magical AI’s “thinking” seems, we must think on our own. As AI agents and reasoning models advance, discernment is needed, not unthinking acceptance.

The 250th anniversary of Paul Revere’s ride and the “shot heard ’round the world” is this April. Will AI agents and reasoning models spark a revolution in jobs in 2025? In my next post, I’ll take a deep dive into how AI may impact marketing and communications jobs and education. What’s your excitement or fear about AI agents and reasoning models?

This Was Human Created Content!

AI’s Multimodal Future Is Here. Integrating New AI Capabilities Such As NotebookLM In The Classroom.

AI image generated using Google ImageFX from a prompt “Create an image of a professor training an AI computer chip as if it was a dog in a university classroom.” https://labs.google/fx/tools/image-fx

In my last post, I needed a pep talk. In teaching digital and social media marketing, I’m used to scrambling to keep up with innovations. But AI is a whole other pace: it’s as if I’m trying to keep up with Usain Bolt when I’m used to running marathons.

Like the marathon I signed up for in July, November comes quickly. No matter how training goes, the start time comes, the horn goes off, and you run. Here comes the Spring semester. No matter how many AI updates dropped in December, I need to show up ready to go in early January.

If I want to make a difference and have an influence on how AI impacts my discipline and teaching, I don’t have a choice. I can relate to what AI expert Ethan Mollick said in his latest Substack,

“This isn’t steady progress – we’re watching AI take uneven leaps past our ability to easily gauge its implications. And this suggests that the opportunity to shape how these technologies transform your field exists now when the situation is fluid, and not after the transformation is complete.”

The other morning, when I should’ve been finishing Fall grades, I spent a couple of hours exploring AI updates and planning how I’ll advance AI integration for Spring. Instead of AI bans (illustrated by the Fahrenheit 451-inspired image of my last post), I’m going deeper into how we can train AI to be our teaching friend, not foe.

AI image generated using Google ImageFX from a prompt “Create an image of a professor training an AI computer chip as if it was a dog in a university classroom.” https://labs.google/fx/tools/image-fx

NotebookLM opens up teaching possibilities.

A lot of new AI updates came this Fall. One that caught my eye was Google’s NotebookLM. In a NotebookLM post, I explained how I was blown away by its Audio Overview, which turned my academic research into an engaging podcast of two hosts explaining the implications for social media managers.

I see potential to integrate it into my Spring Digital Marketing course. NotebookLM is described as a virtual research assistant – an AI tool to help you explore and take notes on a source or sources that you upload. Each project you work on is saved in a Notebook that you title.

The various notebooks I’ve used so far for research and for my Digital Marketing class.

Whatever references you upload or link to, NotebookLM becomes an expert on that information. It uses your sources to answer questions and complete requests, and its responses include clickable citations that take you to where the information appears in the sources.

For Google Workspace for Education users like me, uploads, queries, and responses are not reviewed by human reviewers and are not used to train AI models. If you use your personal Google account and choose to provide feedback, human reviewers may see what you submit.

Source files can be Google Docs, Google Slides, PDFs, text files, web URLs, copy-pasted text, public YouTube video URLs, and audio files. Each source can contain up to 500,000 words or 200 MB, and each notebook can hold up to 50 sources. Added up, that’s as much as 25 million words per notebook, which is large compared to other models’ context windows; ChatGPT 4o’s is roughly 96,000 words.
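A quick back-of-the-envelope calculation shows how those limits add up, assuming the caps quoted above (check Google’s current documentation before relying on them):

```python
# Back-of-the-envelope math on the limits quoted above (assumptions: the
# published NotebookLM caps and the rough GPT-4o word estimate from the text).
words_per_source = 500_000
sources_per_notebook = 50
notebook_words = words_per_source * sources_per_notebook  # 25,000,000 words

gpt4o_context_words = 96_000  # rough estimate from the text above
print(notebook_words)                         # 25000000
print(notebook_words // gpt4o_context_words)  # ~260x a single context window
```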

When you upload sources, NotebookLM creates an overview that summarizes them and suggests key topics and questions. It can also generate a set of standard documents: an FAQ, Study Guide, Table of Contents, Timeline, or Briefing Doc. An impressive feature is the Audio Overview, which generates an audio file of two podcast hosts explaining your source or sources.

NotebookLM as an AI tutor.

I plan on using NotebookLM as an AI tutor for students in my Spring Digital Marketing course. I like the open-source text I’ve been using for years, but the author has stopped updating it. The strategic process and concepts are sound, so I supplement the content with outside readings and in-class instruction.

I tested NotebookLM by creating a notebook for Digital Marketing course resources. First, I uploaded the PDF of the text. Then, I added links to six digital marketing websites that I use for assigned readings and in-class teaching. Finally, I added my blog. I plan to show students how to create their own at the beginning of the semester.

This is my notebook for Digital Marketing. I was impressed with the answers it gave to questions I often get from students.

AI may not be accurate 100% of the time, but controlling the sources seems to help, and it puts less pressure on crafting a perfect prompt. My discipline knowledge lets me know when it gets something wrong. I tested my Digital Marketing notebook by asking questions about how to complete main course assignments such as personal branding blogs, email, SEO, and content audits. I haven’t noticed any wrong answers thus far.

Important note about copyright.

I’m testing NotebookLM in this class because my main text is open source and all the websites I link to are publicly published sites (not behind paywalls). Google is clear about its copyright policy,

“Do not share copyrighted content without authorization or provide links to sites where people can obtain unauthorized downloads of copyrighted content.”

We should set a good example and educate students by not uploading copyrighted books or information only accessible through subscriptions or library databases. Below is my general AI policy for the course.

This policy carves out acceptable and helpful uses of AI while explaining the ways AI should not be used.

In completing final reports, students will access information behind paywalls, such as Mintel reports. They’ll add the information and cite it as they’ve done in the past. The goal isn’t to use NotebookLM to complete their assignments for them; it’s to give them a resource to better understand how to complete their assignments.

NotebookLM as a study tool.

I see NotebookLM as a positive tool for student learning if used as a study guide, reinforcement, or tutor. It would have a negative impact if used simply to replace reading and listening. What’s missed when you use AI the wrong way is depicted in an infographic I created for a previous blog post on the importance of subject matter expertise when using AI.

For a website assignment, my course NotebookLM gave a nice summary of the process and best practices to follow. That’s something students often struggle to find in the text and other sources. The assignment requires pulling from multiple chapters and resources. The notebook summary included direct links to the information from various text chapters and digital marketing blogs. I also tested its accuracy with questions about an email assignment and had it create a useful study guide.

Answering questions will be helpful for assignments where students often miss steps and best practices drawn from multiple parts of the text and readings.

Students can create Audio Overviews of podcast hosts talking about a topic, drawing from the sources. Impressively, when I asked for an Audio Overview explaining the value of the personal professional blog assignment, it understood the student perspective that blogs are outdated. It began, “As a student, I know you’re thinking blogs are outdated, but personal professional blogs are a great …” The Audio Overview also took the text’s process for businesses and applied it to a personal branding perspective.

Going beyond Copilot in other areas.

I also plan on having students leverage new AI capabilities in Adobe Express and Google’s ImageFX in multiple classes. Our students have free access to Adobe Creative Cloud, where new AI capabilities go beyond Firefly-generated images. In Express, you can give text prompts to create mockups of Instagram and Facebook posts, Instagram Stories, YouTube thumbnails, and more.

Students’ ideas can be expressed even better with the text-to-create AI interface in Adobe Express along with the image creation capabilities of Firefly.

AI’s multimodal future is here.

That other morning, I also dove deeper into new multimodal AI capabilities. It was so remarkable that I recorded videos of my experience. I explored the new live audio interactions in NotebookLM and created a demonstration of what’s possible with Google’s Gemini 2.0 multimodal live video.

I was blown away when testing the new ability to “Join” the conversation of the podcast hosts in NotebookLM’s Audio Overview. While the hosts explained the value of a personal professional blog, I interrupted, asking questions with my voice.

Near the beginning, the hosts tell students to write about their unique skills. I clicked the “Join” button, and they said something like, “Looks like someone wants to talk.” I asked, “How do you know your unique skills?” They said, “Good question,” gave good tips, and continued with the main subject. Later I interrupted and asked, “Can you summarize what you have covered so far?” They said sure, gave a nice summary, and then picked back up where they left off.

Finally, I interrupted to ask a common student question: “What if I’m nervous about publishing a public blog?” The hosts reassured me, saying people value honesty and personality, not perfection. What really impressed me was the hosts answering questions about things not specifically in the sources. They could apply concepts from the sources to the unique perspective of a given audience.

Multimodal AI as a live co-worker.

This last demonstration of the new multimodal capabilities of AI is for my own use. With Gemini 2.0 in my Google AI Studio account, I could interact in real time using text, voice, video, or screen sharing.

The video below demonstrates what’s possible in live video conversations with Gemini 2.0 as it “sees” what’s on my screen. I had a conversation with it to get feedback on the outline of the new five-part AI integration workshop I’m planning this Spring for faculty on campus.

Writing the last two blog posts was time well spent.

Planning what I’ll do in the Spring and writing these last two blog posts has taken me two to three days. Because it was 100% human created, there was struggle and a time commitment. But that is how I learn. This knowledge is in my memory, so I can explain it, apply it, and answer questions.

Talking to Gemini was helpful, but it doesn’t compare to the conversations I’ve had with colleagues. AI doesn’t know what it feels like to be a professor, a professional, or a human in this unprecedented moment. Let me know how you’re moving beyond AI bans and where you’re exercising caution.

I have a lot of work to do to implement these ideas. That starting horn for the new semester is approaching fast.

100% Human Created!