AI’s Multimodal Future Is Here. Integrating New AI Capabilities In The Classroom.

AI image generated using Google ImageFX from a prompt “Create an image of a professor training an AI computer chip as if it was a dog in a university classroom.” https://labs.google/fx/tools/image-fx

In my last post, I needed a pep talk. In teaching digital and social media marketing I’m used to scrambling to keep up with innovations. But AI is a whole other pace. It’s as if I’m trying to keep up with Usain Bolt when I’m used to running marathons.

Like the marathon I signed up for in July, November comes quickly. No matter how training goes, the start time comes, the horn goes off, and you run. Here comes the Spring semester. No matter how many AI updates drop in December, I need to show up ready to go in early January.

If I want to make a difference and have an influence on how AI impacts my discipline and teaching, I don’t have a choice. I can relate to what AI expert Ethan Mollick said in his latest Substack post,

“This isn’t steady progress – we’re watching AI take uneven leaps past our ability to easily gauge its implications. And this suggests that the opportunity to shape how these technologies transform your field exists now when the situation is fluid, and not after the transformation is complete.”

The other morning, when I should’ve been finishing Fall grades, I spent a couple of hours exploring AI updates and planning how I’ll advance AI integration for Spring. Instead of AI bans (illustrated by the Fahrenheit 451 inspired image of my last post), I’m going deeper with how we can train AI to be our teaching friend, not foe.


NotebookLM opens up teaching possibilities.

A lot of new AI updates came this Fall. One that caught my eye was Google’s NotebookLM. In a NotebookLM post, I explained how I was blown away when its Audio Overview turned my academic research into an engaging podcast of two hosts explaining the implications for social media managers.

I see potential to integrate it into my Spring Digital Marketing course. NotebookLM is described as a virtual research assistant – an AI tool to help you explore and take notes about a source or sources that you upload. Each project you work on is saved in a Notebook that you title.

The various notebooks I’ve used so far for research and for my Digital Marketing class.

Whatever reference you upload or link, NotebookLM becomes an expert on that information. It uses your sources to answer questions and complete requests. Responses include clickable citations that take you to where the information came from in your sources.

For Google Workspace for Education users, uploads, queries, and responses are not reviewed by humans or used to train AI models. If you use your personal Google account and choose to provide feedback, human reviewers may see what you submit. To learn more, click here.

Source files can be Google Docs, Google Slides, PDFs, text files, web URLs, copy-pasted text, public YouTube video URLs, and audio files. Each source can contain up to 500,000 words, or up to 200MB for uploaded files. Each notebook can contain up to 50 sources. Added together, NotebookLM’s effective context window is enormous compared to other models; ChatGPT-4o’s context window is roughly 96,000 words.
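To put those numbers in perspective, here’s a quick back-of-the-envelope calculation. This is a rough sketch using the limits above; the 96,000-word figure for ChatGPT-4o is an estimate, not an official specification.

```python
# Rough capacity comparison using the limits described above.
# All figures are approximate, not official specifications.
WORDS_PER_SOURCE = 500_000        # NotebookLM's per-source word limit
SOURCES_PER_NOTEBOOK = 50         # NotebookLM's per-notebook source limit
CHATGPT_4O_WINDOW_WORDS = 96_000  # rough word equivalent of ChatGPT-4o's window

notebook_capacity = WORDS_PER_SOURCE * SOURCES_PER_NOTEBOOK  # 25,000,000 words
ratio = notebook_capacity / CHATGPT_4O_WINDOW_WORDS

print(f"NotebookLM notebook: up to {notebook_capacity:,} words of sources")
print(f"Roughly {ratio:.0f}x ChatGPT-4o's ~96,000-word context window")
```

In other words, a fully loaded notebook can draw on hundreds of times more text than fits in a single chat prompt.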

When you upload to NotebookLM, it creates an overview summarizing sources, key topics, and suggested questions. It also offers a set of standard documents such as an FAQ, Study Guide, Table of Contents, Timeline, or Briefing Doc. An impressive feature is the Audio Overview, which generates an audio file of two podcast hosts explaining your source or sources.

NotebookLM as an AI tutor.

I plan on using NotebookLM as an AI tutor for students in my Spring Digital Marketing course. I like the open-source text I’ve been using for years, but the author has stopped updates. The strategic process and concepts are sound, so I update content with outside reading and in-class instruction.

I tested NotebookLM by creating a notebook for Digital Marketing course resources. First, I uploaded the PDF of the text. Then, I added links to six digital marketing websites that I use for assigned readings and in-class teaching. Finally, I added my blog. I plan to show students how to create their own at the beginning of the semester.

This is my notebook for Digital Marketing. I was impressed with the answers it gave to questions I often get from students.

AI may not be accurate 100% of the time, but controlling the sources seems to help and puts less pressure on crafting a perfect prompt. My discipline knowledge lets me spot when it gets something wrong. I tested my Digital Marketing notebook by asking questions on how to complete main course assignments such as personal branding blogs, email, SEO, and content audits. I haven’t noticed any wrong answers thus far.

Important note about copyright.

I’m testing NotebookLM in this class because my main text is open source and all the websites I link to are publicly published sites (not behind paywalls). Google is clear about its copyright policy,

“Do not share copyrighted content without authorization or provide links to sites where people can obtain unauthorized downloads of copyrighted content.”

We should set a good example and educate students by not uploading copyrighted books or information only accessible through subscriptions or library databases. Below is my general AI policy for the course.

The policy carves out acceptable and helpful uses of AI while explaining the ways AI should not be used.

In completing final reports students will access information behind paywalls such as Mintel reports. They’ll add the information and cite it as they’ve done in the past. The goal isn’t to use NotebookLM to complete their assignments for them. The goal is to give them a resource to better understand how to complete their assignments.

NotebookLM as a study tool.

I see NotebookLM as a positive tool for student learning if used as a study guide, reinforcement, or tutor. It would have a negative impact if used to simply replace reading and listening. What’s missed when you use AI in the wrong way is depicted in an infographic I created for a previous blog post on the importance of subject matter expertise when using AI.

For a website assignment, my course NotebookLM gave a nice summary of the process and best practices to follow. That’s something students often struggle to find in the text and other sources. The assignment requires pulling from multiple chapters and resources. The notebook summary included direct links to the information from various text chapters and digital marketing blogs. I also tested its accuracy with questions about an email assignment and had it create a useful study guide.

Answering questions will be helpful for assignments where students often miss steps and best practices that draw from multiple parts of the text and readings.

Students can create Audio Overviews of podcast hosts talking about a topic, drawing from the sources. Impressively, when I asked for an Audio Overview explaining the value of the personal professional blog assignment, it understood students’ perception that blogs are outdated. It began, “As a student, I know you’re thinking blogs are outdated, but personal professional blogs are a great …” The Audio Overview also adapted the text’s process for businesses and applied it to a personal branding perspective.

Going beyond Copilot in other areas.

I also plan on students leveraging new AI capabilities in Adobe Express and Google’s ImageFX in multiple classes. Our students have free access to Adobe Creative Suite, where new AI capabilities go beyond Firefly-generated images. In Express you can give it text prompts to create mockups of Instagram and Facebook posts, Instagram Stories, YouTube thumbnails, etc.

Students’ ideas can be expressed even better with Adobe’s new text-to-create AI interface in Adobe Express, along with the image creation capabilities of Firefly.

AI’s multimodal future is here.

That other morning I also dove deeper into new AI multimodal capabilities. It was so remarkable I recorded videos of my experience. I explored new live audio interactions in NotebookLM and created a demonstration of what’s possible with Google’s Gemini 2.0 multimodal live video.

I was blown away when testing the new ability to “Join” the conversation of the podcast hosts in NotebookLM’s Audio Overview. While the hosts explained the value of a personal professional blog, I interrupted asking questions with my voice.


Near the beginning, the hosts tell students to write about their unique skills. I clicked a “Join” button and they said something like, “Looks like someone wants to talk.” I asked, “How do you know your unique skills?” They said, “Good question,” gave good tips, and continued with the main subject. Later I interrupted and asked, “Can you summarize what you have covered so far?” They said sure, gave a nice summary, and then picked back up where they left off.

Finally, I interrupted to ask a common student question, “What if I’m nervous about publishing a public blog?” The hosts reassured me saying people value honesty and personality, not perfection. What really impressed me was the hosts answering questions about things not specifically in the sources. They could apply concepts from the sources to understand the unique perspective of a given audience.

Multimodal AI as a live co-worker.

This last demonstration of the new multimodal capabilities of AI is for my own use. With Gemini 2.0 in my Google AI Studio account, I could interact in real time using text, voice, video, or screen sharing.

The video below is a demonstration of what’s possible in live video and conversations with Gemini 2.0 as it “sees” what‘s on my screen. I had a conversation with it to get feedback on the outline for my new five-part AI integration workshop I’m planning this Spring for faculty on campus.

Writing the last two blog posts was time well spent.

Planning what I’ll do in the Spring and writing these last two blog posts took me two to three days. Because it was 100% human created, there was a struggle and a time commitment. But that is how I learn. This knowledge is in my memory so I can explain it, apply it, and answer questions.

Talking to Gemini was helpful, but it doesn’t compare to the conversations I’ve had with colleagues. AI doesn’t know what it feels like to be a professor, professional, or human in this unprecedented moment. Let me know how you’re moving beyond AI bans and where you’re exercising caution.

I have a lot of work to do to implement these ideas. That starting horn for the new semester is approaching fast.

100% Human Created!

AI Turned My Academic Journal Article Into An Engaging Podcast For Social Media Pros In Minutes with Google’s NotebookLM.

I recently published academic research in the Quarterly Review of Business Disciplines with Michael Coolsen titled, “Engagement on Twitter: Connecting Consumer Social Media Gratifications and Forms of Interactivity to Brand Goals as Model for Social Media Engagement.” Exciting, right?

If you’re a research geek or academic, maybe. A social media manager? No way. Yet I know the findings, specifically our Brand Consumer Goal Model for Social Media Engagement, are very exciting for social media pros! So I wanted to write this blog post.

But, as you can tell by the title, an academic audience and a professional audience are very different. Taking a complicated 25-page academic research article and translating it into a practical and concise professional blog post could take me hours.

I’ve been meaning to experiment with Google’s new AI generator tool NotebookLM so I thought I would try it. Thus, this blog post is about our research on a social media engagement framework and how I used AI to streamline my process to create it. As a bonus, I got a podcast out of it!

My co-author and I did the hard work of the research. I was okay with an AI assistant helping translate it into different media for different audiences. Click for an AI Task Framework.

Using NotebookLM.

Our study was on types of content that generate engagement on Twitter, but the real value was a proposed model for engagement. So before uploading any of the research into the AI tool, I condensed it to just the theoretical and managerial implications sections. Then I added a title, the journal citation, and saved it as a PDF.

NotebookLM uses Gemini 1.5 Pro. Google describes it as a virtual research assistant. Think of it as an AI tool to help you explore and take notes about a source or sources that you upload. Each project you work on is saved in a Notebook that you title. I titled mine “Brand Consumer Goal Model for Social Media Engagement.”

Whatever you upload, NotebookLM becomes an expert on that information. It uses your sources to answer your questions or complete your requests. It responds with citations, showing you original quotes from your sources. Google says that your data is not used to train NotebookLM, so sensitive information stays private (I would still double-check before uploading).

Source files accepted include Google Docs, Google Slides, PDFs, text files, web URLs, copy-pasted text, public YouTube video URLs, and audio files. Each source can contain up to 500,000 words, or up to 200MB for uploaded files. Each notebook can contain up to 50 sources. If you add that up, NotebookLM’s context window is huge compared to other models; ChatGPT-4o’s context window is roughly 96,000 words.

When you upload a source to NotebookLM, it instantly creates an overview that summarizes all sources, pulls out key topics, and suggests questions to ask. It also has a set of standard documents you can create such as an FAQ, Study Guide, Table of Contents, Timeline, or Briefing Doc.

You can also ask it to create something else. I asked it to write a blog post about the findings of our research. You will see that below. Yet, the most impressive feature is the Audio Overview. This generates an audio file of two podcast hosts explaining your source or sources in the Notebook.

The NotebookLM dashboard gives you a variety of options to interact with your sources.

Using Audio Overviews.

There are no options for the Audio Overview so you get what it creates. But what it creates is amazing! My jaw literally dropped when I heard it. And it will give you slightly different results each time you run it.

I noticed things missing in the first audio overview such as the journal and article title and the authors’ names. I did figure out how to make adjustments by modifying my source document. Through five rounds of modifying my source document, I was able to get that information in and more.

Sometimes overviews aren’t 100% accurate. Google warns, “NotebookLM may still sometimes give inaccurate responses, so you may want to confirm any facts independently.” In our research article we give a hypothetical example of a running shoe brand following our model. It was not real. But in one version of the Audio Overview, the podcast hosts talked as if the company did what we said and got real results that we measured.

I was impressed that in other versions it didn’t use our example and applied the model to new ones. One time it used an organic tea company and another time a sustainable clothing brand. On the fifth attempt it even built in a commercial break for the “podcast.” This last version gave my running shoe example and added its own about a sustainable activewear brand.

What’s really interesting about the last version is that it pulled in other general knowledge about social media strategy and applied it to the new information of our study. At the end, the hosts bring up how our engagement model will help know what to say but that social media managers still need to customize the content to be appropriate for each social platform. That’s a social media best practice but not something we mention in the article.

The Audio Overview Podcast NotebookLM Created.


It’s amazing these podcast hosts discussed our research and explained it so well for social pros. What’s more amazing is that they are not real people! Yet NotebookLM did more. Below is the blog post it wrote. It included our diagram of the model, but had trouble getting it right. So, I replaced the image with one I created from our article.

Brand Consumer Goal Model for Social Media Engagement.

This post examines a model for social media engagement based on an October 2024 study in the Quarterly Review of Business Disciplines, “Engagement on Twitter: Connecting Consumer Social Media Gratifications and Forms of Interactivity to Brand Goals as Model for Social Media Engagement,” by Keith Quesenberry and Mike Coolsen.

The Brand Consumer Goal Model for Social Media Engagement is a framework to help social pros create more effective plans by aligning brand goals with consumer goals. It emphasizes understanding the motivations behind consumer engagement and tailoring content accordingly.

How the Model Works

The model outlines three key brand goals:

  • Building brand community (Reach): This goal focuses on expanding the brand’s audience and increasing awareness.
  • Building brand-consumer relationships (Response): This goal aims to foster brand interaction and engagement.
  • Building brand-consumer bonds (Recognition): This goal seeks to create brand emotional connections and loyalty.

Each brand goal is associated with a corresponding consumer goal and form of social media engagement, as seen in the graphic below. Consumers manage impressions and persuade others by sharing certain brand posts, gain information about a brand from content that prompts deliberation, and bond with other fans while regulating their emotions by evaluating brand posts.
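For anyone who likes the alignment spelled out explicitly, here is a minimal sketch of the three pairings as a simple lookup table. This is my own illustrative encoding of the model described above, not code from the study.

```python
# Illustrative sketch of the Brand Consumer Goal Model's three pairings.
# The dict structure and lookup helper are hypothetical; the pairings
# follow the brand goal / consumer goal / engagement description above.
GOAL_MODEL = {
    "reach": {
        "brand_goal": "build brand community",
        "consumer_goal": "impression management and persuasion",
        "engagement": "sharing (retweets and shares)",
    },
    "response": {
        "brand_goal": "build brand-consumer relationships",
        "consumer_goal": "information acquisition",
        "engagement": "deliberation (replies and comments)",
    },
    "recognition": {
        "brand_goal": "build brand-consumer bonds",
        "consumer_goal": "social bonding and emotion regulation",
        "engagement": "evaluation (likes and reactions)",
    },
}

def engagement_for(brand_objective: str) -> str:
    """Look up the expected engagement form for a brand objective."""
    return GOAL_MODEL[brand_objective]["engagement"]

print(engagement_for("reach"))  # sharing (retweets and shares)
```

The point is simply that each brand goal maps to one consumer motivation and one observable form of engagement.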

Brand Consumer Goal Model for Social Media Engagement Template
Click on the graphic to download this model for social media engagement.

Here is an example:

  • A sneaker brand launches a running shoe and aims to build brand community (reach) by creating content that appeals to runners’ desire for impression management and persuasion. This could be sharing inspirational stories about runners breaking records, encouraging retweets and brand visibility.
  • Once awareness is established, the brand could shift its focus to building relationships (response) by providing information about the shoe’s features and benefits, appealing to consumers’ information acquisition goals, and prompting replies and comments seeking further details.
  • Finally, the brand can foster brand bonds (recognition) by sharing content that resonates with runners’ social bonding and emotion regulation needs, such as posts about the challenges and rewards of training, which encourage likes and emotional connections.

Key Insights from the Study

The study found that simply using popular content types like videos or photos is not enough to guarantee success on social media. The message delivered with the content is crucial.

Marketers need to consider:

  • The target audience’s motivations for using social media
  • Buying cycle stage (awareness, consideration, purchase, loyalty)
  • Desired word-of-mouth function (sharing, deliberation, evaluation)

The Brand Consumer Goal Model for Social Media Engagement offers a strategic framework for developing effective social media campaigns. By understanding the motivations behind consumer behavior and aligning content with both brand and consumer goals, marketers can achieve better results and build stronger relationships with their target audience.

I hope you found this look at NotebookLM and the insights from our social media research helpful. In what ways do you think NotebookLM can help in your job? In what ways can the insights from the Brand Consumer Engagement Model improve your social media content strategy?

NotebookLM Could be a Great Study Tool for Students.

NotebookLM could be a great tool for student learning if used as a study guide, reinforcement, or tutor. It would have a negative impact if used to simply replace reading and listening in the first place. What’s missed when you use AI in the wrong way is depicted in the graphic below. It is from a previous post on the importance of subject matter expertise when using AI.

Personally, I was fine using this tool in this way. My co-author and I did the hard work of the research. This AI assistant simply helped us translate it into different media for different audiences.

This graphic shows that in stages of learning you go through attention, encoding, storage, and retrieval. You need your brain to learn this process not just use AI for the process.
Click the image for a downloadable PDF of this graphic.

Half of This Content Was Human Created!

UPDATE: Customize Audio Overviews Before Processing.

Google released a new version of NotebookLM where you can customize the Audio Overview before processing. I was very impressed with this feature. For example, I had another academic article published about a new no-tech policy in the classroom that I implemented after COVID restrictions were lifted.

I uploaded this academic article and, before processing, customized the Audio Overview, telling NotebookLM that my target audience was college students distracted by technology in the classroom and to keep the overview shorter for their short attention spans. Here is the result:


UPDATE: Interrupt Audio Overview To Ask Questions With Voice.

With the latest release Google has added the ability to engage directly with the AI hosts during an Audio Overview. I’ve tried it and it works creepily well.

I created an Audio Overview of my student professional blogging assignment for personal branding. In the beginning, the hosts tell students to write about their unique skills. I clicked a “Join” button and a host said, “Looks like someone wants to talk.” I asked, “How do you know your unique skills?” They said, “Good question,” gave good tips, and continued with the main subject.

Later I interrupted and asked, “Can you summarize what you have covered so far?” They said sure, gave a nice summary and then picked back up where they left off. Finally, I asked about being nervous putting a blog out in public. The hosts reassured me that I don’t have to be perfect. People value honesty and personality. It’s not about perfection.