How Will AI Agents Impact Marketing Communications Jobs & Education? See Google’s AI Reasoning Model’s “Thoughts” And My Own.

AI image generated using Google ImageFX from the prompt “Create a painting depicting the British army in red coats as AI robots coming into town to take people's jobs." https://labs.google/fx/tools/image-fx

In my last post, I warned of the AI agents coming to take our jobs like Paul Revere warning of the British coming. Large language model companies like OpenAI and Google, and SaaS companies integrating AI, are promising increased autonomous action. Salesforce has even named its AI products Agentforce, which literally sounds like an army coming to take over our jobs!

Whether you are in marketing, advertising, PR, or corporate communications, or are a professor in these areas, it is important to remember that AI agents and the new reasoning models are not magical or human. They are simply really good prediction machines. But they are so good that AI will increasingly take parts of our jobs now and could replace entire jobs in the not-too-distant future.

But they are not good at everything and not always right. That’s why you need to be involved in determining how AI will be used in your job. Don’t let AI happen to you. Make AI work for you.


Productivity gains are already happening with AI.

Ethan Mollick, author of Co-Intelligence: Living and Working with AI, recently shared a study that found 30% of U.S. workers are using AI every day and that it is tripling their productivity (reducing a 90-minute task to 30 minutes). If you are not in that 30%, there is still time to catch up. In all honesty, as much as I write about AI and implement it in my classes, I don’t use it as much as I should for my everyday tasks.

That’s why I turned to Gemini for help with this post. I wanted to test a new reasoning model and see how it thinks but also use it as a research assistant. Writing an article like this takes a lot of time. In addition to testing the new Gemini “reasoning” model, I was looking for time savings in researching how AI agents may impact marcom jobs.

In this post, I look under the hood to see how AI crafts its responses while seeing what Google’s new reasoning model “thinks” about the future of marketing-related careers. Will AI agents take our jobs? If so, how soon? For my test, I gave Gemini 2.0 Flash Thinking a prompt that I know worries many in my field. Below is my prompt. I wanted a brutally honest assessment.

I asked Google’s reasoning model Gemini 2.0 Flash Thinking to give me a brutally honest look at the future of marketing jobs and how they will be impacted. https://aistudio.google.com/

What does AI think about AI agents taking our jobs?

First, let’s get to know the reasoning model I used. Google explains that “the Gemini 2.0 Flash Thinking model is an experimental model that’s trained to generate the ‘thinking process’ the model goes through as part of its response. As a result, the Flash Thinking model is capable of stronger reasoning capabilities in its responses than the Gemini 2.0 model.”

How do you see its thinking? In the screen capture of my prompt above, there is an option to click “Expand to view model thoughts” before reading the response. I did this to see its chain of thought and include the thought process in the screen capture below.

Gemini followed a 10-step process to reach its final answer:

  1. Acknowledge the User’s Need
  2. Frame the Initial Message
  3. Structure the Timeline
  4. Brainstorm Areas of Impact (Current & Future)
  5. Assign Percentage of Impact – Now
  6. Incrementally Increase Percentages Over Time
  7. Directly Address Jobs Replacement – Hard Truths
  8. Focus on Skill Sets Needed for Survival and Success
  9. Maintain a “Brutal but Constructive” Tone
  10. Refine and Sharpen Language
Google’s Gemini 2.0 reasoning model showed me the thinking process for responding to my prompt. https://aistudio.google.com/

Seeing AI’s thought process and its self-correction.

Before my brutally honest prompt, I submitted a prompt asking for an honest yet reassuring answer to the same question. In the screen capture below, you can see how steps 1 and 2 in the thinking process varied from above. I imagine that is how I think when writing for different audiences. That is why tools such as personas are great for marketing professionals crafting content.

In that first prompt, I also saw an example of how it “self-corrected” in the process. An initial prediction of AI automating 50% of marketing content within a year was second-guessed as Gemini talked to itself, saying, “That’s likely too high and broad. AI can automate some content creation tasks like basic … but not complex storytelling, brand voice development, or strategic content planning.” This self-correction resulted in it dropping that number down to 20-30%.

Gemini 2.0 Flash Thinking showed how it self corrected a prediction about AI taking on 50% of content marketing tasks next year. https://aistudio.google.com/

Now let’s get to its final response. How worried should we be as professional marketers or as communications professionals who support marketing? What should we be doing to prepare ourselves and our students for this revolution? The response is broken into three “Brutal Truths.” From my research and study over the years, most of this feels accurate. Honestly, much of the first category is already happening and has been done for years by other forms of AI, so it is not surprising to me.

Brutal Truth 1: Some parts of your job will be replaced and some jobs will be eliminated.

Below is the screen capture of Gemini’s response. It predicts 5-20% of all tasks will be outsourced to AI in an “efficiency overhaul.” This includes mundane and repetitive tasks, basic content creation, and customer segmentation plus lower-tier performance reporting and analytics. This fits what I know.

In the last two years, we’ve seen more basic content creation being done by AI, whether through LLMs like ChatGPT or AI integrations in SaaS platforms, such as OwlyWriter AI in Hootsuite. For customer segmentation, I can see AI helping with data collection, but overall, segmenting audiences requires more human insight.

The final one is not a surprise. Auto-generated reports from previously set-up dashboards have been around for years. The important part is knowing which KPIs matter in the first place – the realm of a seasoned human strategist. The new aspect may be auto-generating the initial narrative around the reports through a prompt overlay. But I still would not rely on AI to understand the full context.

Google’s reasoning model Gemini 2.0 Flash Thinking’s brutally honest truth number one about the future of marketing jobs and how they will be impacted. https://aistudio.google.com/

Brutal Truth 2: The demand shift is dramatic. Adapt or fade.

Below is the screen capture of Gemini’s second brutal truth which is that the demand shift will be dramatic. Gemini tells us to “adapt or fade.” After the brutal message, it does try to quickly reassure us saying that marketing isn’t going away. But don’t feel too good about that reassurance because it is followed up with an all-caps pronouncement that it is changing RADICALLY.

Obviously, you want to position yourself in one of the high-demand areas such as strategic marketing visionaries (AI-augmented), creative directors and brand storytellers (AI-guided), data-driven insight interpreters and storytellers, AI marketing technologists and integrators, ethical AI marketing guardians, and human-connection and empathy experts. At first glance, I feel competent in many of these areas and confident in teaching my students these higher-level skills.

Once again, this doesn’t surprise me. My revelation in AI came when I stopped thinking of it as this all-or-nothing entity. The big scary redcoats coming became more manageable when I broke down my job into tasks and reclaimed my human agency to intentionally decide what to use AI for and what not to use it for. What I learned I put into my AI Use Framework. It helped me and can help you break down anything into single tasks and their goals.

Google’s reasoning model Gemini 2.0 Flash Thinking’s brutally honest truth number two about the future of marketing jobs and how they will be impacted. https://aistudio.google.com/

Whether you follow my framework or not, I encourage everyone to do this exercise of breaking down your job into tasks and intentionally finding the things that can easily be automated by AI. You will be surprised at what you won’t mind handing to AI so you can spend more time on what you enjoy anyway. You’ll also discover things that could be automated but should be kept for humans, because the goal is to build relationships, and relationships can’t be automated.
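As a rough illustration of that exercise (a sketch of my own; the tasks, goals, and decisions below are hypothetical examples, not part of the AI Use Framework itself), a task audit can be as simple as a list you annotate and then filter:

```python
# A toy task audit in the spirit of breaking a job into tasks:
# name each task, name its goal, and decide intentionally whether
# to automate it. All entries are hypothetical examples.

tasks = [
    {"task": "Draft social posts", "goal": "Consistent daily presence",
     "decision": "automate with human review"},
    {"task": "Monthly KPI report", "goal": "Show campaign performance",
     "decision": "automate generation, human picks the KPIs"},
    {"task": "Client check-in calls", "goal": "Build relationships",
     "decision": "keep human"},  # relationships can't be automated
]

for t in tasks:
    print(f"{t['task']:<22} -> {t['decision']}")

# Tasks deliberately reserved for humans
keep_human = [t["task"] for t in tasks if t["decision"] == "keep human"]
```

The point is not the code but the habit: name the task, name its goal, and make the automate-or-keep decision deliberately, reserving relationship-building work for humans.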

The high-demand future list looks accurate. Those are all uniquely human-based skills, even if parts become AI-augmented or AI-guided. The key is to make this shift yourself now. If you don’t, AI will become the thing that happens to you, not the thing that you help shape and influence. Quickly find the tasks that can and should be outsourced to AI and start using it. Just don’t trust it for everything. No matter how confident it sounds, it doesn’t always get everything right. Use your discipline expertise to discern and verify results.

Brutal Truth 3: Upskilling is not optional. It is survival.

The third brutal truth reinforces what I said above. Upskilling is not optional; it’s about survival. AI innovation is coming quicker than in any previous technology revolution. You can’t opt out (unless you’re retiring this year). Thus, you need to become AI literate, focus on strategy and creative thinking, embrace data, learn to work with AI, and specialize strategically.

I am not a history professor or war strategy expert, but I’ll make one final connection to the theme of my last two posts. Some factors that contributed to the colonists winning the American Revolution include being familiar with their home territory (your discipline), strong motivation (defend your livelihood), and fighting for something they believed in (human ability and agency).

The Continental Army was also willing to move away from traditional methods of battle. Your discipline, whether it’s marketing, communications, advertising, PR, teaching, or something else, may have a long tradition of doing things a certain way. Now is the time to find new methods to remain relevant and keep humans in the loop in light of the AI revolution.

Google’s reasoning model Gemini 2.0 Flash Thinking’s brutally honest truth number three about the future of marketing jobs and how they will be impacted. https://aistudio.google.com/

I’m trusting AI for these predictions, but I’ve been studying AI since 2022, and they seem accurate. They also match a similar prompt I tried in Anthropic’s Claude 3.7 and what SmarterX’s custom GPT JobsGPT 2.0 predicts. I previously shared JobsGPT along with my AI Use Framework to help break down jobs into tasks and decide what to outsource to AI. The new version forecasts AI’s impact on jobs by industry, profession, or college major – by job title, description, and skills required – which is helpful for professors’ curricula and professionals’ upskilling.

I asked JobsGPT 2.0 by SmarterX to forecast new jobs that could emerge for marketing majors as AI reshapes the industry from https://chatgpt.com/g/g-wg93fVwAj-jobsgpt-by-smarterx-ai

I feel good about what I’m doing in my classes. I’ve always focused on higher-level strategic thinking and creativity focused on human insight and emotions through storytelling. Now I’m teaching students how to integrate AI into marketing, communications, and learning tasks. What can you do to help prepare for this future?

I asked Anthropic’s Claude 3.7 to forecast how marketing-related jobs will change with AI agents and make recommendations for professors. https://claude.ai/

This Was 50% Human Created Content!

The AI Agents Are Coming! So Are The Reasoning Models. Will They Take Our Jobs And How Should We Prepare?

AI image generated using Google ImageFX from a prompt “Create a digital painting depicting Paul Revere on his midnight ride, but instead of a person riding the horse it is a futuristic robotic AI agent yelling 'The AI Agents are coming for your jobs!'"

Last fall, I traveled to MIT to watch my daughter play in the NCAA volleyball tournament. On the way, we passed signs for Lexington and Concord. AI agents were on my mind. There was a sudden buzz about AI agents and how they’re coming for our jobs, and the image of Paul Revere came to mind.

Instead of warning that the Redcoats were coming to seize munitions at Concord, the Reveres of today warn of AI agents stealing our jobs. Then new AI reasoning models were released, causing another rise in discussion. As at Lexington Green, have the first shots been fired at our jobs by reasoning AI agents?

AI image generated using Google ImageFX from the  prompt “Create a painting depicting Paul Revere on his midnight ride, but instead of a person it is a robotic AI agent yelling ‘The AI Agents are coming for your jobs!’.” https://labs.google/fx/tools/image-fx

What is an AI agent?

Search interest in AI agents spiked in January. A Google search for AI agents returns 216 results, and reading through many of them, there are probably half as many definitions. For simplicity, I will begin by quoting the Marketing AI Institute’s Paul Roetzer: “An AI agent takes action to achieve goals.”

That doesn’t sound scary. What’s driving interest and fear is adding the word autonomous. Roetzer and co-founder Mike Kaput have created a helpful Human-to-Machine Scale that depicts five levels of AI autonomous action.

Marketing AI Institute’s Human-to-Machine Scale:

  • Level 0 is all human.
  • Level 1 is mostly human.
  • Level 2 is half and half.
  • Level 3 is mostly machine.
  • Level 4 is all machine or full autonomy.

Full autonomy over complete jobs is certainly fear-inducing! Large language model companies like OpenAI and Google, and SaaS companies integrating AI, are promising increased autonomous action. Salesforce has even named its AI products Agentforce, which literally sounds like an army coming to take over our jobs! Put some red coats on them and my Paul Revere analogy really comes to life.

Every player in AI is going deep.

In September, Google released a white paper, “Agents,” to little attention. Now, after the release of reasoning models, everyone, including VentureBeat, is analyzing it. In the paper, Google predicts AI agents will reason, plan, and take action. This includes interacting with external systems, making decisions, and completing tasks – AI agents acting on their own with deeper understanding.

OpenAI claims its new tool Deep Research can complete a detailed research report with references in “tens of minutes” – something that might take a human many hours. Google’s Gemini also has Deep Research, Perplexity has launched Deep Research, Copilot now has Think Deeper, Grok 3 has a Deep Search tool, and there’s the new Chinese company DeepSeek. Anthropic has released what it calls the first hybrid reasoning model: Claude 3.7 Sonnet can produce near-instant responses or extended, step-by-step thinking that is made visible. The Redcoats are coming, and they’re all in on deep thinking.

Interest in and discussion about AI Agents and AI Reasoning Models has risen sharply. Graphs from https://trends.google.com/trends/

What is a reasoning model?

Google explains that Gemini 2.0 Flash Thinking is “our enhanced reasoning model, capable of showing its thoughts to improve performance and explainability.” A definition for reasoning models may be even more difficult and contested than one for AI agents. The term returns 163 results in a Google search and perhaps just as many definitions.

For my definition of a reasoning model, I turn to Christopher Penn. In his “Introduction to Reasoning AI Models,” Penn explains, “AI – language models in particular – perform better the more they talk … The statistical nature of a language model is that the more talking there is, the more relevant words there are to correctly guess the next word.” Reasoning models slow down LLMs to consider more words through a process.

LLMs and reasoning models are not magic.

Penn further explains that good prompt engineering includes a chain of thought, reflection, and reward functions. Yet most people don’t use them, so reasoning models make the LLM do it automatically. I went back to MIT, not for volleyball, but for further help on this definition. The MIT Technology Review explains that these new models use chain-of-thought prompting and reinforcement learning across multiple steps.

An AI prompt framework, such as the one I created, will improve your results even without reasoning models. You also may not need a reasoning model for many tasks; reasoning models cost more and use more energy. Experts like Trust Insights recommend slightly different prompting for reasoning models, such as Problem, Relevant Information, and Success Measures. Brooke Sellas of B Squared Media shared OpenAI President Greg Brockman’s reasoning prompt of Goal, Return Format, Warnings, and Context Dump.
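To make those prompt structures concrete, here is a minimal sketch in Python (my own illustration; only the four section names come from Brockman’s structure as described above, while the helper function and example text are assumptions):

```python
# Assemble a reasoning-model prompt from the four sections Greg Brockman
# recommends: Goal, Return Format, Warnings, and Context Dump.
# The helper and the example text are illustrative, not an official API.

def build_reasoning_prompt(goal: str, return_format: str,
                           warnings: str, context_dump: str) -> str:
    """Join the four labeled sections into one prompt string."""
    sections = [
        ("Goal", goal),
        ("Return Format", return_format),
        ("Warnings", warnings),
        ("Context Dump", context_dump),
    ]
    return "\n\n".join(f"{label}:\n{text}" for label, text in sections)

prompt = build_reasoning_prompt(
    goal="Forecast how AI agents may change marketing jobs over 5 years.",
    return_format="A numbered list of predictions, each with a timeframe.",
    warnings="Be brutally honest; flag any prediction you are unsure of.",
    context_dump="Audience: marketing professionals and professors.",
)
print(prompt)
```

Trust Insights’ Problem / Relevant Information / Success Measures structure could be assembled the same way with different labels; the value is in giving the model clearly separated sections rather than one undifferentiated request.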

Many want a magical AI tool that does everything. In reality, different AI is better for different things. Penn explains generative AI is good with language, but for other tasks, traditional forms of AI like regression, classification, or even non-AI statistical models can be a better solution.

How we talk about AI matters.

Humans are attracted to the magical capabilities of AI. Folk tales like The Sorcerer’s Apprentice, which you may know from Disney’s Fantasia, are about objects coming to life to do tasks for us. Reasoning models are said to have agentic behavior – the ability to make independent decisions in pursuit of a goal. Intentional or not, “agentic” sounds like “angelic,” bringing up mystical thoughts of angels and the supernatural.

Since the first post in my AI series, I’ve argued for maintaining human agency and keeping humans in the loop. Therefore, I want to be careful in how I talk about these new “reasoning” models that show us their “thinking.” I agree with Marc Watkins’s recent Substack post “AI’s Illusion of Reason” that the way we talk about these AI models matters.

An AI model that pauses before answering and shows the process it followed doesn’t mean it is thinking. It’s still a mathematical prediction machine. It doesn’t comprehend or understand what it is saying. Referring to ChatGPT or Gemini as “it” versus “he” or “she” (no matter the voice) matters.

I asked Google’s reasoning model Gemini 2.0 Flash Thinking the difference between human thinking and AI “thinking.” From https://aistudio.google.com/

What’s the difference between human and AI thinking?

I asked Google’s reasoning model Gemini 2.0 Flash Thinking the difference between human thinking and AI thinking. It said, “AI can perform tasks without truly understanding the underlying concepts or the implications of its actions. It operates based on learned patterns, not genuine comprehension.” Does this raise any concerns for you as we move toward fully autonomous AI agents?

Humans need to stay in the loop. Even then, you need a human who truly understands the subject, context, field, and/or discipline. AI presents its answers in a convincing well-written manner – even when it’s wrong. Human expertise and discernment are needed. Power without understanding can lead to Sorcerer’s Apprentice syndrome. A small mistake with an unchecked autonomous agent could escalate quickly.

In a Guardian article, Andrew Rogoyski, a director at the Institute for People-Centred AI, warns of people using responses from AI deep research verbatim without performing checks on what was produced. Rogoyski says, “There’s a fundamental problem with knowledge-intensive AIs and that is it’ll take a human many hours and a lot of work to check whether the machine’s analysis is good.”

Let’s make sure 2025 is not like 1984.

I recently got the 75th anniversary edition of George Orwell’s 1984. I hadn’t read it since high school. It was the inspiration behind Apple’s 1984 Super Bowl ad – an example of the right message at the right time. It may be a message we need again.

AI isn’t right all the time or right for everything. It’s confident and convincing even when it’s wrong. No matter how magical AI’s “thinking” seems, we must think on our own. As AI agents and reasoning models advance, discernment is needed, not unthinking acceptance.

The 250th anniversary of Paul Revere’s ride and the “shot heard ’round the world” is in April this year. Will AI agents and reasoning models spark a revolution in jobs in 2025? In my next post, I take a deep dive into how AI may impact marketing and communications jobs and education. What’s your excitement or fear about AI agents and reasoning models?

This Was Human Created Content!