Is AI “Vibe Marketing” Hype or Help for Professionals and Professors?

My product idea went from sketch to photo-realistic product image, product shot with feature callouts, and brand logo and tagline using Google AI Studio with Gemini 2.5 Pro, Gemini 2.0 Flash Image, and OpenAI ChatGPT-4o Image. https://aistudio.google.com https://openai.com/

It’s been a month since my last post. I was looking for a topic, and it found me while I was listening to the Marketing AI podcast on my morning run Thursday. There were big model drops with Google’s Gemini 2.5 Pro and OpenAI’s GPT-4o image generation. That’s big news, but my interest was sparked when Paul and Mike mentioned “vibe marketing.” Huh? My first thought was how younger Survivor players talk about “vibing” with their tribe.

Can You Feel The Vibes?

Vibe marketing sounds like winging it. Trying new products and strategies based on feeling is not something I’d embrace. I’ve taught for years the value of data-based decision making in marketing. There’s an art to marketing, but there’s also a science to it.

Mike and Paul explained how AI leader Andrej Karpathy posted on X about vibe coding – giving in to the vibes, talking to AI over and over while it codes complete projects. Others applied it to marketing. Marketers go from individual executors to orchestrators of AI systems. Mike Kaput explained, “So basically, marketers will start operating on vibes … while AI handles all the messy execution.”

To get a better handle on this concept, I turned to the new Gemini 2.5 Pro, which AI expert Christopher Penn says is the best AI model right now. It has surpassed other models on key benchmarks by significant margins (click for benchmarks table). Gemini 2.5 reasons through “thoughts” before responding, improving performance and accuracy.

Nailing Down A Definition?

Gemini first defined vibe marketing as an established approach to creating content with a feeling to connect with consumers emotionally. Emotions are key, but rational appeals play a role. I’ve found that story is a great way to deliver both, as evidenced by my research and explained in my Brand Storytelling book. That’s not this new trend.

I prompted Gemini to focus on the emerging trend of AI in vibe marketing. Its new description was “using AI tools to generate marketing ideas, content (text, images), and campaign elements that align with a specific vibe or aesthetic … for speed and automation in creating assets that embody a chosen vibe.” That is closer but still mixes the established term with the new trend.

With my background in cross-discipline creativity, innovation, and problem-solving, I thought it might be more about ideas that can feed back into product design, business plans, and marketing concepts. Vibe is more about getting an idea and using AI to run with it – researching, illustrating, and iterating as it quickly gains steam – combining design thinking with marketing and innovation.

Vibe Marketing In Action.

I was still fuzzy on the concept until my Integrated Marketing Communications (IMC) class later that day. Student teams complete an IMC campaign for a business. They gather market and consumer research, set objectives, budget, media mix, creative strategy, and execute digital and traditional creative with a storytelling approach.

During an in-class exercise, I asked students to apply what we learned about using PR for marketing objectives. They brainstormed a PR event based on a creative brief for Hush Puppies water-resistant leather dress shoes. While students worked, I came up with my own ideas sketching them on the board.

As evidence of the creative process I teach, I took random general information (a book I read to my kids when they were younger, I Wish I Had Duck Feet) and combined it with specific information about the project (a PR event for Hush Puppies water-resistant shoes). I sketched a person dressed for work walking in a city on a rainy day in rubber duck feet near a Hush Puppies pop-up store.

IMC focuses on marcom for problems and opportunities. But I teach other classes that identify opportunities and come up with product ideas. Towards the end of class, we talked about the duck feet being an actual product – a fun way to protect dress shoes.

Vibe Becomes A Product And Business.

The fun rubber shoe protector was still on my mind back in my office. I let the marketing vibes roll, using new AI tools to rapidly advance this idea from concept to product design and prototypes, with an outline and some basic research for a business plan and a marketing plan for my entrepreneurial startup.

My product idea went from sketch to photo-realistic product image, product shot with feature callouts, and brand logo and tagline using Google AI Studio with Gemini 2.5 Pro, Gemini 2.0 Flash Image, and OpenAI ChatGPT-4o Image. https://aistudio.google.com https://openai.com/

In no time, I had a photo-realistic product sketch, product name, logo, target market, positioning, price, and place (distribution) strategy. I also had a basic promotions strategy with marketing channels, marketing ideas, content with text and images, and campaign elements. I even had ideas on how to create a working prototype for investors by making it by hand, using a 3D printer, or creating a rubber molding prototype.

With tariffs in the news, I also wanted to consider manufacturing. Working with Gemini 2.5 Pro, I had a beginning outline of materials, fasteners, and packaging I would need and options for rubber injection or compression molding. I had Gemini look into supply chain and manufacturing partners from overseas, in North America, and in the U.S. It gave me ideas to find those partners through online platforms, industry directories, trade shows, and networking.

Gemini helped with my marketing plan, but I turned to OpenAI for my product illustrations, logo, and examples of social media ads. I was inspired by Ethan Mollick’s Substack and wanted to try the new image capabilities of GPT-4o.

Gemini came up with the idea for an influencer marketing post, wrote the caption, and suggested the hashtags. I had the idea for the brand promotional post headline, subhead, and image, but it wrote the promotion copy. Gemini gave me tagline suggestions, but I didn’t like them, so I wrote “Being Safe Has Never Been So Fun.”

GPT-4o created all the images except the bottom left below, which was from Gemini 2.0 Flash Image. I couldn’t get it to do what I wanted, especially with type. The top right is ChatGPT’s first attempt from my prompt on the top left. Gemini 2.5 Pro may be the best all-around model, but GPT-4o’s image generation is superior, though Google may be planning a Gemini 2.5 image model release.

My product idea went from sketch to photo-realistic Instagram influencer ad and brand product ad using Google AI Studio with Gemini 2.5 Pro, Gemini 2.0 Flash Image, and OpenAI ChatGPT-4o Image. https://aistudio.google.com https://openai.com/

Can Anyone Be A Vibe Marketer?

Yesterday was fun, but I agree with AI expert Christopher Penn that vibe marketing isn’t a magic bullet where you enter a couple of prompts, walk away, and it does everything. As with any trend, you need to see beneath the hype. He says the more you hand off, the more that can go wrong. Fun and vibes alone don’t make successful marketing.

Penn explains that using AI well is like managing employees. I had to know how to get good work out of Gemini. I had to figure out that ChatGPT was better at images. You also need discipline expertise, good data, discernment, and skills in prompting.

I got good results quickly because I worked in marketing for over 15 years in product design, launches, communications campaigns, and pitches. I’ve researched and taught marketing, and judged and mentored student business competitions for the past decade. I’ve researched and experimented with AI for two years, including AI Use Frameworks and AI Prompt Frameworks. I also asked Gemini if anyone can do vibe marketing.

Gemini indicates “You Definitely STILL Need Core Marketing Fundamentals:”

  • Strategic Thinking
  • Audience Understanding
  • Brand Knowledge
  • Critical Evaluation
  • Marketing Channel Knowledge

Gemini suggests “NEW Skills or Competencies for AI-Driven Marketing:”

  • Prompt Engineering
  • AI Tool Literacy
  • Editing and Refinement
  • Ethical Awareness
  • Data Interpretation

I asked how this impacts teaching. Gemini suggested ways to teach the foundational and new skills but emphasized a mindset shift: “Teach them to view AI not as a threat or a magic bullet, but as a powerful collaborator. The marketers who succeed will learn to leverage AI effectively to enhance strategic thinking, creativity, and efficiency, while always maintaining critical oversight and ethical responsibility. You’re preparing them to be the pilots, not the passengers.”

I don’t see new AI tools as a replacement for marketing experts or an easy way for students to get As. There’s a lot to learn in the fundamentals and new skills to use AI tools and practice vibe marketing properly. As I’ve posted, you can’t use AI to shortchange the learning process. But I can see my students feeling the vibes of using AI to help them learn and practice concepts and projects.

Wish You Had Duck Feet?

Duck feet shoe savers may not be the best idea, but they helped me learn the concept of vibe marketing and experience it, all in one day. To advance the idea, I would use my expertise and involve other discipline experts to fact-check and fill in gaps with more specific data.

I’d also change the name. I’m not happy with Gemini’s “Quackers.” I’d write my own, like “Duckies,” and do a trademark search. I’d also have human designers and photographers complete final designs and images for copyright and ethical considerations.

I really enjoy teaching, but if any of the Shark Tank investors are out there and see promise in my idea, I’ll entertain investment offers.

This Post Was 100% Human Written. I did use AI in research and execution, which enabled me to learn, apply, test, and refine thoughts quickly. I used Gemini to optimize my headline for engagement and SEO. Thanks to AI tools, this post went from idea to research to published in record time.

AI’s Multimodal Future Is Here. Integrating New AI Capabilities Such As NotebookLM In The Classroom.

AI image generated using Google ImageFX from a prompt “Create an image of a professor training an AI computer chip as if it was a dog in a university classroom.” https://labs.google/fx/tools/image-fx

In my last post, I needed a pep talk. Teaching digital and social media marketing, I’m used to scrambling to keep up with innovations. But AI is a whole other pace. It’s as if I’m trying to keep up with Usain Bolt when I’m used to running marathons.

Like the marathon I signed up for in July, November comes quickly. No matter how training goes, the start time comes, the horn goes off, and you run. Here comes the Spring semester. No matter how many AI updates drop in December, I need to show up ready to go in early January.

If I want to make a difference and have an influence on how AI impacts my discipline and teaching, I don’t have a choice. I can relate to what AI expert Ethan Mollick said in his latest Substack,

“This isn’t steady progress – we’re watching AI take uneven leaps past our ability to easily gauge its implications. And this suggests that the opportunity to shape how these technologies transform your field exists now when the situation is fluid, and not after the transformation is complete.”

The other morning, when I should’ve been finishing Fall grades, I spent a couple of hours exploring AI updates and planning how I’ll advance AI integration for Spring. Instead of AI bans (illustrated by the Fahrenheit 451 inspired image of my last post), I’m going deeper with how we can train AI to be our teaching friend, not foe.

AI image generated using Google ImageFX from a prompt “Create an image of a professor training an AI computer chip as if it was a dog in a university classroom.” https://labs.google/fx/tools/image-fx

NotebookLM opens up teaching possibilities.

A lot of new AI updates came this Fall. One that caught my eye was Google’s NotebookLM. In a NotebookLM post, I explained how I was blown away by its Audio Overview of my academic research that it turned into an engaging podcast of two hosts explaining the implications for social media managers.

I see potential to integrate it into my Spring Digital Marketing course. NotebookLM is described as a virtual research assistant –  an AI tool to help you explore and take notes about a source or sources that you upload. Each project you work on is saved in a Notebook that you title.

The various notebooks I’ve used so far for research and for my Digital Marketing class.

Whatever reference you upload or link, NotebookLM becomes an expert on that information. It uses your sources to answer questions and complete requests. Responses include clickable citations that take you to where the information came from in sources.

For Google Workspace for Education users, uploads, queries, and responses are not reviewed by human reviewers or used to train AI models. If you use your personal Google account and choose to provide feedback, human reviewers may see what you submit. To learn more, click here.

Source files can be Google Docs, Google Slides, PDFs, text files, web URLs, copy-pasted text, public YouTube video URLs, and audio files. Each source can contain up to 500,000 words or 200 MB. Each notebook can contain up to 50 sources. Added up, NotebookLM’s effective context is large compared to other models; ChatGPT-4o’s context window is roughly 96,000 words.
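To see how those limits add up, here is a quick back-of-the-envelope sketch using the published caps cited above (treat it as a rough comparison only; real capacity depends on file types and how words map to tokens):

```python
# Back-of-the-envelope comparison of NotebookLM's total source capacity
# versus a single-model context window, using the limits cited above.
SOURCES_PER_NOTEBOOK = 50        # max sources per notebook
WORDS_PER_SOURCE = 500_000       # max words per source
CHATGPT_4O_CONTEXT_WORDS = 96_000  # rough word equivalent of GPT-4o's context window

notebooklm_total_words = SOURCES_PER_NOTEBOOK * WORDS_PER_SOURCE
ratio = notebooklm_total_words // CHATGPT_4O_CONTEXT_WORDS

print(f"NotebookLM capacity: {notebooklm_total_words:,} words")
print(f"Roughly {ratio}x a single GPT-4o context window")
```

In other words, a full notebook can hold on the order of 25 million words of source material, hundreds of times more than fits in one prompt to a general chat model, which is why grounding answers in uploaded sources works so differently.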

When you upload to NotebookLM, it creates an overview summarizing sources, key topics, and suggested questions. It also has a set of standard documents with an FAQ, Study Guide, Table of Contents, Timeline, or Briefing Doc. An impressive feature is the Audio Overview which generates an audio file of two podcast hosts explaining your source or sources.

NotebookLM as an AI tutor.

I plan on using NotebookLM as an AI tutor for students in my Spring Digital Marketing course. I like the open-source text I’ve been using for years, but the author has stopped updates. The strategic process and concepts are sound, so I update content with outside reading and in-class instruction.

I tested NotebookLM by creating a notebook for Digital Marketing course resources. First, I uploaded the PDF of the text. Then, I added links to six digital marketing websites that I use for assigned readings and in-class teaching. Finally, I added my blog. I plan to show students how to create theirs at the beginning of the semester.

This is my notebook for Digital Marketing. I was impressed with the answers it gave to questions I often get from students.

AI may not be accurate 100% of the time, but controlling the sources seems to help and puts less pressure on crafting a perfect prompt. My discipline knowledge lets me know when it gets something wrong. I tested my Digital Marketing NotebookLM by asking questions about how to complete main course assignments such as personal branding blogs, email, SEO, and content audits. I haven’t noticed any wrong answers thus far.

Important note about copyright.

I’m testing NotebookLM in this class because my main text is open source and all the websites I link to are publicly published sites (not behind paywalls). Google is clear about its copyright policy,

“Do not share copyrighted content without authorization or provide links to sites where people can obtain unauthorized downloads of copyrighted content.”

We should set a good example and educate students by not uploading copyrighted books or information only accessible through subscriptions or library databases. Below is my general AI policy for the course.

The policy carves out acceptable and helpful uses of AI while explaining the ways AI should not be used.

In completing final reports students will access information behind paywalls such as Mintel reports. They’ll add the information and cite it as they’ve done in the past. The goal isn’t to use NotebookLM to complete their assignments for them. The goal is to give them a resource to better understand how to complete their assignments.

NotebookLM as a study tool.

I see NotebookLM as a positive tool for student learning if used as a study guide, reinforcement, or tutor. It would have a negative impact if used to simply replace reading and listening. What’s missed when you use AI in the wrong way is depicted in an infographic I created for a previous blog post on the importance of subject matter expertise when using AI.

For a website assignment, my course NotebookLM gave a nice summary of the process and best practices to follow. That’s something students often struggle to find in the text and other sources. The assignment requires pulling from multiple chapters and resources. The notebook summary included direct links to the information from various text chapters and digital marketing blogs. I also tested its accuracy with questions about an email assignment and had it create a useful study guide.

Answering questions will be helpful in assignments where students often miss steps and best practices that draw from multiple parts of the text and readings.

Students can create Audio Overviews of podcast hosts talking about a topic, drawing from the sources. Impressively, when I asked for an Audio Overview explaining the value of a personal professional blog assignment to students, it understood the student perspective that blogs are outdated. It began, “As a student, I know you’re thinking blogs are outdated, but personal professional blogs are a great …” The Audio Overview also took the text’s process for businesses and applied it to a personal branding perspective.

Going beyond Copilot in other areas.

I also plan on students leveraging new AI capabilities in Adobe Express and Google’s ImageFX in multiple classes. Our students have free access to Adobe Creative Suite, where new AI capabilities go beyond Firefly-generated images. In Express, you can give it text prompts to create mockups of Instagram and Facebook posts, Instagram Stories, YouTube thumbnails, etc.

Students’ ideas can be expressed even better with Adobe’s new text-to-create AI interface in Adobe Express along with the image creation capabilities of Firefly.

AI’s multimodal future is here.

That other morning, I also dove deeper into new AI multimodal capabilities. They were so remarkable I recorded videos of my experience. I explored new live audio interactions in NotebookLM and created a demonstration of what’s possible with Google’s Gemini 2.0 multimodal live video.

I was blown away when testing the new ability to “Join” the conversation of the podcast hosts in NotebookLM’s Audio Overview. While the hosts explained the value of a personal professional blog, I interrupted asking questions with my voice.


Near the beginning, the hosts tell students to write about their unique skills. I clicked a “Join” button and they said something like, “Looks like someone wants to talk.” I asked, “How do you know your unique skills?” They said, “Good question,” gave good tips, and continued with the main subject. Later I interrupted and asked, “Can you summarize what you have covered so far?” They said sure, gave a nice summary, and then picked back up where they left off.

Finally, I interrupted to ask a common student question, “What if I’m nervous about publishing a public blog?” The hosts reassured me saying people value honesty and personality, not perfection. What really impressed me was the hosts answering questions about things not specifically in the sources. They could apply concepts from the sources to understand the unique perspective of a given audience.

Multimodal AI as a live co-worker.

This last demonstration of the new multimodal capabilities of AI is for my own use. With Gemini 2.0 in my Google AI Studio account, I could interact in real time using text, voice, video, or screen sharing.

The video below is a demonstration of what’s possible in live video and conversations with Gemini 2.0 as it “sees” what’s on my screen. I had a conversation with it to get feedback on the outline of the new five-part AI integration workshop I’m planning this Spring for faculty on campus.

Writing the last two blog posts was time well spent.

Planning what I’ll do in the Spring and writing these last two blog posts has taken me two to three days. Because it was 100% human created, there was a struggle and a time commitment. But that is how I learn. This knowledge is in my memory, so I can explain it, apply it, or answer questions.

Talking to Gemini was helpful, but it doesn’t compare to the conversations I’ve had with colleagues. AI doesn’t know what it feels like to be a professor, professional, or human in this unprecedented moment. Let me know how you’re moving beyond AI bans and where you’re exercising caution.

I have a lot of work to do to implement these ideas. That starting horn for the new semester is approaching fast. For my next post on AI see “The AI Agents Are Coming! So Are Reasoning Models. Will They Take Our Jobs And How Should We Prepare?”

100% Human Created!