AI’s Multimodal Future Is Here. Integrating New AI Capabilities In The Classroom.

AI image generated using Google ImageFX from a prompt “Create an image of a professor training an AI computer chip as if it was a dog in a university classroom.” https://labs.google/fx/tools/image-fx

In my last post, I needed a pep talk. In teaching digital and social media marketing I’m used to scrambling to keep up with innovations. But AI is a whole other pace. It’s as if I’m trying to keep up with Usain Bolt when I’m used to running marathons.

Like the marathon I signed up for in July, November comes quickly. No matter how training goes the start time comes, the horn goes off, and you run. Here comes the Spring semester. No matter the number of AI updates dropped in December I need to show up ready to go in early January.

If I want to make a difference and have an influence on how AI impacts my discipline and teaching, I don’t have a choice. I can relate to what AI expert Ethan Mollick said in his latest Substack,

“This isn’t steady progress – we’re watching AI take uneven leaps past our ability to easily gauge its implications. And this suggests that the opportunity to shape how these technologies transform your field exists now when the situation is fluid, and not after the transformation is complete.”

The other morning, when I should’ve been finishing Fall grades, I spent a couple of hours exploring AI updates and planning how I’ll advance AI integration for Spring. Instead of AI bans (illustrated by the Fahrenheit 451 inspired image of my last post), I’m going deeper with how we can train AI to be our teaching friend, not foe.


NotebookLM opens up teaching possibilities.

A lot of new AI updates came this Fall. One that caught my eye was Google’s NotebookLM. In a NotebookLM post, I explained how I was blown away by its Audio Overview, which turned my academic research into an engaging podcast of two hosts explaining the implications for social media managers.

I see potential to integrate it into my Spring Digital Marketing course. NotebookLM is described as a virtual research assistant – an AI tool to help you explore and take notes about a source or sources that you upload. Each project you work on is saved in a Notebook that you title.

The various notebooks I’ve used so far for research and for my Digital Marketing class.

Whatever reference you upload or link, NotebookLM becomes an expert on that information. It uses your sources to answer questions and complete requests. Responses include clickable citations that take you to where the information came from in sources.

For Google Workspace for Education users, uploads, queries, and responses are not reviewed by humans or used to train AI models. If you use a personal Google account and choose to provide feedback, human reviewers may see what you submit. Google’s NotebookLM privacy documentation has the details.

Source files can be Google Docs, Google Slides, PDFs, text files, web URLs, copy-pasted text, public YouTube video URLs, and audio files. Each source can contain up to 500,000 words or 200MB. Each notebook can contain up to 50 sources. Added up, NotebookLM’s effective context is large compared to other models – ChatGPT-4o’s context window is roughly 96,000 words.
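To put those limits in perspective, here is the back-of-the-envelope arithmetic behind that comparison, using the per-source and per-notebook limits above and the approximate 96,000-word figure for ChatGPT-4o:

```python
# Back-of-the-envelope comparison of NotebookLM's per-notebook capacity
# (50 sources x 500,000 words each, per the limits above) with the
# roughly 96,000-word context window cited for ChatGPT-4o.
words_per_source = 500_000
sources_per_notebook = 50
notebooklm_capacity = words_per_source * sources_per_notebook

chatgpt_4o_window = 96_000  # approximate figure, in words
ratio = notebooklm_capacity / chatgpt_4o_window

print(f"NotebookLM notebook capacity: {notebooklm_capacity:,} words")
print(f"That is roughly {ratio:.0f}x a ~96,000-word context window")
```

So a full notebook can hold on the order of 25 million words of source material – a couple of hundred times what fits in a single large-model prompt.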

When you upload to NotebookLM, it creates an overview summarizing sources, key topics, and suggested questions. It also has a set of standard documents with an FAQ, Study Guide, Table of Contents, Timeline, or Briefing Doc. An impressive feature is the Audio Overview which generates an audio file of two podcast hosts explaining your source or sources.

NotebookLM as an AI tutor.

I plan on using NotebookLM as an AI tutor for students in my Spring Digital Marketing course. I like the open-source text I’ve been using for years, but the author has stopped updates. The strategic process and concepts are sound, so I update content with outside reading and in-class instruction.

I tested NotebookLM by creating a notebook for Digital Marketing course resources. First, I uploaded the PDF of the text. Then, I added links to six digital marketing websites that I use for assigned readings and in-class teaching. Finally, I added my blog. I plan to show students how to create their own at the beginning of the semester.

This is my notebook for Digital Marketing. I was impressed with the answers it gave to questions I often get from students about assignments.

AI may not be accurate 100% of the time, but controlling the sources seems to help and puts less pressure on crafting a perfect prompt. My discipline knowledge tells me when it gets something wrong. I tested my Digital Marketing notebook with questions about how to complete main course assignments such as personal branding blogs, email, SEO, and content audits. I haven’t noticed any wrong answers thus far.

Important note about copyright.

I’m testing NotebookLM in this class because my main text is open source and all the websites I link to are publicly published sites (not behind paywalls). Google is clear about its copyright policy,

“Do not share copyrighted content without authorization or provide links to sites where people can obtain unauthorized downloads of copyrighted content.”

We should set a good example and educate students by not uploading copyrighted books or information only accessible through subscriptions or library databases. Below is my general AI policy for the course.

The policy carves out acceptable and helpful uses of AI while explaining the ways AI should not be used.

In completing final reports students will access information behind paywalls such as Mintel reports. They’ll add the information and cite it as they’ve done in the past. The goal isn’t to use NotebookLM to complete their assignments for them. The goal is to give them a resource to better understand how to complete their assignments.

NotebookLM as a study tool.

I see NotebookLM as a positive tool for student learning if used as a study guide, reinforcement, or tutor. It would have a negative impact if used to simply replace reading and listening. What’s missed when you use AI in the wrong way is depicted in an infographic I created for a previous blog post on the importance of subject matter expertise when using AI.

For a website assignment, my course NotebookLM gave a nice summary of the process and best practices to follow. That’s something students often struggle to find in the text and other sources. The assignment requires pulling from multiple chapters and resources. The notebook summary included direct links to the information from various text chapters and digital marketing blogs. I also tested its accuracy with questions about an email assignment and had it create a useful study guide.

Answers like this will be helpful for assignments where students often miss steps and best practices drawn from multiple parts of the text and readings.

Students can create audio overviews of podcast hosts talking about a topic drawing from the sources. Impressively, when I asked for an Audio Overview explaining the value of a personal professional blog assignment to students it understood the student’s perspective of thinking blogs are outdated. It began, “As a student, I know you’re thinking blogs are outdated, but personal professional blogs are a great …” The Audio Overview also adjusted the text process for businesses and applied it to a personal branding perspective.

Going beyond Copilot in other areas.

I also plan on students leveraging new AI capabilities in Adobe Express and Google’s ImageFX in multiple classes. Our students have free access to Adobe Creative Suite where new AI capabilities go beyond Firefly generated images. In Express you can give it text prompts to create mockups of Instagram and Facebook posts, Instagram stories, YouTube thumbnails, etc.

Students’ ideas can be expressed even better with Adobe’s new text-to-create AI interface in Adobe Express, along with Firefly’s image creation capabilities.

AI’s multimodal future is here.

That other morning I also dove deeper into new AI multimodal capabilities. It was so remarkable I recorded videos of my experience. I explored new live audio interactions in NotebookLM and created a demonstration of what’s possible with Google’s Gemini 2.0 multimodal live video.

I was blown away when testing the new ability to “Join” the conversation of the podcast hosts in NotebookLM’s Audio Overview. While the hosts explained the value of a personal professional blog, I interrupted asking questions with my voice.


Near the beginning, the hosts tell students to write about their unique skills. I clicked a “Join” button and they said something like, “Looks like someone wants to talk.” I asked, “How do you know your unique skills?” They said, “Good question,” gave good tips, and continued with the main subject. Later I interrupted and asked, “Can you summarize what you have covered so far?” They said sure, gave a nice summary, and then picked back up where they left off.

Finally, I interrupted to ask a common student question, “What if I’m nervous about publishing a public blog?” The hosts reassured me saying people value honesty and personality, not perfection. What really impressed me was the hosts answering questions about things not specifically in the sources. They could apply concepts from the sources to understand the unique perspective of a given audience.

Multimodal AI as a live co-worker.

This last demonstration of the new multimodal capabilities of AI is for my own use. With Gemini 2.0 in my Google AI Studio account, I could interact in real time using text, voice, video, or screen sharing.

The video below is a demonstration of what’s possible in live video and conversations with Gemini 2.0 as it “sees” what‘s on my screen. I had a conversation with it to get feedback on the outline for my new five-part AI integration workshop I’m planning this Spring for faculty on campus.

Writing the last two blog posts was time well spent.

Planning what I’ll do in the Spring and writing these last two blog posts has taken me two to three days. Because it was 100% human created, there was struggle and a time commitment. But that is how I learn. This knowledge is in my memory so I can explain it, apply it, and answer questions.

Talking to Gemini was helpful, but it doesn’t compare to the conversations I’ve had with colleagues. AI doesn’t know what it feels like to be a professor, professional, or human in this unprecedented moment. Let me know how you’re moving beyond AI bans and where you’re exercising caution.

I have a lot of work to do to implement these ideas. That starting horn for the new semester is approaching fast.

100% Human Created!

If the Medium is the Message, What Message Is Social Media Sending?

Book about technology's impact on society.

I typically focus on the positive use of social media to help organizations achieve objectives. I’ve also discussed how social media professionals must act ethically to build trust in brands and their professions. I haven’t talked about the negative aspects of social media itself.

Yet, evidence of the negative effects of social media on mental health and society is increasing. Is there something unique about social media as a technology and a form of communication that may be causing negative, unintended consequences?

I’ve been reading and revisiting some books recently on technology and society.

The Medium Is The Message.

In 1964 Marshall McLuhan first expressed the idea “The medium is the message” in Understanding Media. He said, “The ‘message’ of any medium or technology is the change of scale or pace or pattern that it introduces into human affairs.” The idea is that a message comes with any new technology or way to communicate beyond the content. The characteristics of the medium influence how the message is perceived.

In 1985 Neil Postman furthered the idea in Amusing Ourselves To Death. Postman said, “The medium is the metaphor.” He observed a connection between forms of human communication and the quality of a culture where the medium influences “the culture’s intellectual and social preoccupations.” He was concerned TV and visual entertainment, consumed in smaller bits of time, would turn journalism, education, and religion into forms of show business.

Is Social Media The Message?

A key to a successful social media strategy is understanding that each social media platform has unique characteristics in the form of content (video, image, text standards, and limits) and in the algorithm that determines which posts are seen by whom.

These characteristics and metrics create incentives that motivate behavior. In social media that can be engagement (likes, comments, shares, views), sales (products, services), and advertising revenue (audience size, time). The distinct characteristics and incentives encourage the creation of certain types of content and messages over others.

The message of the medium becomes what the platform and its users say is important – what increases response metrics. It could be “a curated, filtered, perfect life”; “an authentic, 100% transparent sharing of personal struggles”; or “criticisms of out-groups to signal tribe membership.”

As an exercise, fill in the table below.

Consider each social platform and the content that gets results. Are there noticeable patterns or themes? From your observations describe what you believe is the overall message the platform is sending.

Spend an hour on each social media platform and see where the algorithm takes you.

Could social media also send its own message by guiding the type of content that gets posted and disseminated? Consider the types of content that get posted and disseminated on social media versus other forms of traditional media and personal communication. What message does it send and what are the fruits of that message?

There are plenty of positives of social media. It enables us to connect with family/friends, find new communities of similar interests, promote important causes, get emotional support, and learn new information, plus it provides an outlet for self-expression and creativity.

A study found that social media can play a positive role in influencing healthy eating (like fruit and vegetable intake) when shared by peers. Yet, the same study also found that fast food advertising targeting adolescents on social media can have a negative influence on unhealthy weight and disease risks.

Negative Effects of Social Media Research.

Below is a highlight of recent studies. All research has its critics and many point out that social media isn’t the exclusive cause of all negative consequences. Social media also has a lot of positive effects on individuals, businesses, organizations, and society. But we should consider its negative effects – something more people are noticing, studying, and feeling.

People Feel Social Media Isn’t Good.

A 2022 Pew Research survey in the U.S. found:

  • 64% feel social media is a bad thing for democracy.
  • 65% believe social media has made us more divided in our political opinions.
  • 70% believe the spread of false information online is a major threat.

Political Out-Group Posts Spread More.

Research on Facebook/Twitter in Psychological And Cognitive Sciences found:

  • Political out-group posts get shared 50% more than posts about in-groups.
  • Out-group language is shared 6.7 times more than moral-emotional language.
  • Out-group language is a very strong predictor of “angry” reactions.

False Posts Spread Faster Than The Truth.

Research in Science of verified true/false Twitter news stories found:

  • Falsehoods are 70% more likely to be retweeted than the truth.
  • It took truth posts 6 times longer to reach 1,500 people.
  • Top 1% false posts reach 1,000-10,000 people (Truth posts rarely reach 1,000).

Algorithms Incentivize Moral Outrage.

A Twitter study in Science Advances found:

  • Newsfeed algorithms influence moral behavior by determining how much social feedback posts receive.
  • Users express more outrage in ideologically extreme networks where outrage is more widespread.
  • Algorithms encourage moderate users to become less moderate when peers express outrage.

Social Media Affects Youth Mental Health.

A 2023 U.S. Surgeon General advisory warned social media can pose a risk to the mental health of children and adolescents. Now 95% of 13–17-year-olds use social media an average of 3.5 hours a day. While acknowledging social media benefits, the advisory warned it may also perpetuate body dissatisfaction, disordered eating, social comparison, and low self-esteem.

Adults are especially concerned about social media’s effect on teens and children.

The advisory warns of relationships between youth social media with sleep difficulties and depression. Other highlights include:

  • Adolescents who spend more than 3 hours a day on social media double their risk of depression and anxiety.
  • 64% of adolescents are “often” or “sometimes” exposed to hate-based content through social media.
  • 46% of adolescents say social media makes them feel worse about their bodies – just 14% said it makes them feel better.

A 2023 survey of U.S. teen girls reveals 49% feel “addicted” to YouTube, 45% to TikTok, 34% to Snapchat, and 34% to Instagram. Yet another survey of teens found they believe social media provides more positives (32% mostly positive) versus negatives (9% mostly negative). They feel it’s a place for socializing and connecting with friends, expressing creativity, and feeling supported.

Bubbles, Chambers, and Bias.

Why are we seeing both positive and negative results? Social media’s unique environment can be very supportive, keeping you connected and helping you express yourself. It can also encourage you to improve your life like peers getting you to eat healthier and improve society by making people aware of important causes.

The same social media environment has also created filter bubbles and echo chambers. Technology can knowingly or unknowingly exploit human vulnerabilities that may accentuate confirmation bias and negativity bias.

  • A filter bubble is an algorithmic bias that skews or limits information someone sees on the internet or on social media.
  • An echo chamber is ideas, beliefs, or data reinforced through repetition in a closed system such as social media that doesn’t allow the free flow of alternative ideas.
  • Confirmation bias is the tendency of people to favor information that confirms their existing beliefs.
  • Negativity bias is the tendency for humans to focus more on the negative versus the positive.

Social media algorithms make it easier to produce filter bubbles that create echo chambers. Over time social media chambers lead to confirmation bias loops of negativity incentivized by engagement metrics.

A detailed article from the MIT Technology Review suggests the problem is that it’s difficult for machine learning algorithms to minimize negative human consequences when growth is the top priority. Much of what is bad for us and society seems to be what keeps us scrolling the most.

Reducing harm may go against growth objectives and current incentive structures for tech companies to produce mega revenue increases. Social media companies like Facebook, now Meta, continue to say they are doing everything they can to reduce harm despite layoffs.

Social Media Fills Our Spare Time.

While the most popular reason for using social media is to keep in touch with family and friends (57%), the second is to fill spare time (40%). What do we fill our spare time with? With a high percentage of social media revenue depending on advertising (96% of Facebook’s and 89% of Twitter’s) newsfeeds fill with what grows engagement to serve more ads to increase revenue.

That seems to be sensationalized content that stokes fears. Shocking content hacks attention playing into our negativity bias. Perhaps Postman’s prediction of everything becoming show business is true. We’re all chasing TV ratings in the form of likes, comments, and shares.

Recently Elon Musk and Mark Zuckerberg challenged each other to an MMA fight. What greater spectacle than two billionaire owners of competing social media platforms fighting each other in a PPV UFC cage match? Italy’s culture minister even said that it could happen in the Roman Colosseum. I wonder what Neil Postman would say if he were alive.

Journalism Isn’t Immune To Engagement.

As news moves online, organizations chase clicks and subscribers through social media. With so many options, news subscribers increasingly seek sources based on confirmation bias. Andrey Mir in Discourse describes a shift to divisive content, “because the best way to boost subscriber rolls and produce results is to target the extremes on either end of the spectrum.”

With 50% of adults getting news from social media sites often or sometimes, news stories no longer compete with just other news sites. Stories compete for clicks with the latest viral TikTok and YouTube influencers’ hot takes. A study in Nature found that negative words in news headlines increased consumption of the news: each additional negative word increased the click-through rate by 2.3%.
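To see what that effect size means in practice, here is a small illustrative calculation. The 5% baseline click-through rate is a hypothetical number, and treating the 2.3% lift as compounding per word is an assumption made for illustration:

```python
# Illustration of the reported Nature finding: each additional negative
# word in a headline raised click-through rate (CTR) by about 2.3%.
# The 5% baseline CTR is hypothetical, and compounding the lift per
# word is an assumption for illustration only.
base_ctr = 0.05       # hypothetical baseline click-through rate
per_word_lift = 0.023  # ~2.3% relative lift per negative word

for n_negative_words in range(4):
    ctr = base_ctr * (1 + per_word_lift) ** n_negative_words
    print(f"{n_negative_words} negative words -> CTR {ctr:.4f}")
```

Small per-word lifts like this add up quickly at the scale of millions of impressions, which is why the incentive to write negative headlines is so strong.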

Are There Legal Limits Coming?

The U.S. Supreme Court sent a case back to lower courts that would have addressed whether social media companies can be held accountable for others’ social media posts. A 1996 law known as Section 230 shields internet companies from what users post online. Lawsuits have been filed alleging that social media algorithms can lead to the radicalization of people leading to atrocities such as terrorist attacks and mass shootings.

The Supreme Court ruled there was little evidence tying Google, the parent company of YouTube, to the terrorist attack in Paris. The lower court ruled that claims were barred by the internet immunity law. Many internet companies warned that undoing or limiting Section 230 would break a lot of the internet tools we have come to depend upon.

While no legislation has passed there seems to be bipartisan support for new social media legislation this year like the Kids Online Safety Act (KOSA). KOSA would require social media companies to shield minors from dangerous content, safeguard personal information, and restrict addictive product features like endless scrolling and autoplay. Critics say KOSA would increase online surveillance and censorship.

Can Algorithms Change People’s Feelings?

A Psychological And Cognitive Sciences study found when the Facebook News Feed team tweaked the algorithm to show fewer positive posts, people’s posts became less positive. When negative posts were reduced people posted more positive posts.

Postman said we default to thinking technology is a friend. We trust it to make life better and it does. But he also warned there is a potential dark side to this friend. To avoid Postman’s fears, perhaps we need to return to McLuhan who said an artist is anyone in a professional field who grasps the implications of their actions and of new knowledge in their own time.

What do you think?

What research is there for or against the negative effects of social media on mental health and society? Should anything be done to combat the negative consequences? What can be done and who should do it?

This Was Human Created Content!