If the Medium is the Message, What Message Is Social Media Sending?


I typically focus on the positive use of social media to help organizations achieve objectives. I’ve also discussed how social media professionals must act ethically to build trust in brands and their professions. I haven’t talked about the negative aspects of social media itself.

Yet, evidence of the negative effects of social media on mental health and society is increasing. Is there something unique about social media as a technology and a form of communication that may be causing negative, unintended consequences?

Book about technology's impact on society.
I’ve been reading and revisiting some books recently on technology and society.

The Medium Is The Message.

In 1964, Marshall McLuhan first expressed the idea “the medium is the message” in Understanding Media. He said, “The ‘message’ of any medium or technology is the change of scale or pace or pattern that it introduces into human affairs.” The idea is that every new technology or form of communication carries a message beyond its content: the characteristics of the medium shape how that content is perceived.

In 1985, Neil Postman furthered the idea in Amusing Ourselves To Death. Postman said, “The medium is the metaphor.” He observed a connection between forms of human communication and the quality of a culture, where the medium influences “the culture’s intellectual and social preoccupations.” He was concerned that TV and visual entertainment, consumed in smaller bits of time, would turn journalism, education, and religion into forms of show business.

Is Social Media The Message?

A key to a successful social media strategy is understanding that each social media platform has unique characteristics, both in its content formats (video, image, and text standards and limits) and in the algorithm that determines which posts are seen by whom.

These characteristics and metrics create incentives that motivate behavior. In social media, those incentives can be engagement (likes, comments, shares, views), sales (products, services), and advertising revenue (audience size, time). The distinct characteristics and incentives encourage the creation of certain types of content and messages over others.

The message of the medium becomes what the platform and its users say is important – what increases response metrics. It could be “a curated, filtered, perfect life”; “an authentic, 100% transparent sharing of personal struggles”; or “criticisms of out-groups to signal tribe membership.”
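
To make that incentive structure concrete, below is a toy sketch of an engagement-weighted ranking score. The weights and field names are illustrative assumptions, not any platform's actual algorithm; the point is simply that whatever signals the scoring rewards becomes the content the platform promotes.

```python
# Hypothetical illustration only - not any platform's actual ranking algorithm.
# It shows how weighting engagement signals determines which posts get seen,
# and therefore what kind of content the platform rewards.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    comments: int
    shares: int
    watch_seconds: float

def engagement_score(post: Post) -> float:
    # Assumed weights: active reactions (comments, shares) count more than
    # passive ones, so content that provokes a response rises to the top.
    return (1.0 * post.likes
            + 3.0 * post.comments
            + 5.0 * post.shares
            + 0.1 * post.watch_seconds)

feed = [
    Post("calm, informative update", likes=120, comments=4, shares=2, watch_seconds=300),
    Post("provocative hot take", likes=40, comments=60, shares=35, watch_seconds=900),
]
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):7.1f}  {post.text}")
```

Under these assumed weights the provocative post outranks the calm one, even with far fewer likes, because comments and shares count for more.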

As an exercise, fill in the table below.

Consider each social platform and the content that gets results. Are there noticeable patterns or themes? From your observations, describe what you believe is the overall message each platform is sending.

Spend an hour on each social media platform and see where its algorithm takes you.

Could social media also send its own message by shaping the type of content that gets posted and disseminated? Compare the content that spreads on social media with what circulates through traditional media and personal communication. What message does it send, and what are the fruits of that message?

There are plenty of positives of social media. It enables us to connect with family/friends, find new communities of similar interests, promote important causes, get emotional support, and learn new information, plus it provides an outlet for self-expression and creativity.

A study found that social media can play a positive role in influencing healthy eating (like fruit and vegetable intake) when shared by peers. Yet the same study also found that fast food advertising targeting adolescents on social media can contribute to unhealthy weight and disease risks.

Research on the Negative Effects of Social Media.

Below are highlights of recent studies. All research has its critics, and many point out that social media isn’t the exclusive cause of all negative consequences. Social media also has a lot of positive effects on individuals, businesses, organizations, and society. But we should consider its negative effects – something more people are noticing, studying, and feeling.

People Feel Social Media Isn’t Good.

A 2022 Pew Research survey in the U.S. found:

  • 64% feel social media is a bad thing for democracy.
  • 65% believe social media has made us more divided in our political opinions.
  • 70% believe the spread of false information online is a major threat.

Political Out-Group Posts Spread More.

Research on Facebook and Twitter in Psychological and Cognitive Sciences found:

  • Political out-group posts get shared 50% more than posts about in-groups.
  • Out-group language is shared 6.7 times more than moral-emotional language.
  • Out-group language is a very strong predictor of “angry” reactions.

False Posts Spread Faster Than The Truth.

Research in Science on verified true and false Twitter news stories found:

  • Falsehoods are 70% more likely to be retweeted than the truth.
  • It took truth posts 6 times longer to reach 1,500 people.
  • The top 1% of false posts reach between 1,000 and 100,000 people, while true posts rarely reach more than 1,000.

Algorithms Incentivize Moral Outrage.

A Twitter study in Science Advances found:

  • Newsfeed algorithms influence moral behavior by determining how much social feedback posts receive.
  • Users express more outrage in ideologically extreme networks, where outrage is more widespread.
  • Algorithms can encourage moderate users to become less moderate when their peers express outrage.

Social Media Affects Youth Mental Health.

A 2023 U.S. Surgeon General advisory warned that social media can pose a risk to the mental health of children and adolescents. Now 95% of 13–17-year-olds use social media, for an average of 3.5 hours a day. While acknowledging social media’s benefits, the advisory warned it may also perpetuate body dissatisfaction, disordered eating, social comparison, and low self-esteem.

Adults are especially concerned about social media’s effect on teens and children.

The advisory also warns of links between youth social media use and sleep difficulties and depression. Other highlights include:

  • Adolescents who spend more than 3 hours a day on social media double their risk of depression and anxiety.
  • 64% of adolescents are “often” or “sometimes” exposed to hate-based content through social media.
  • 46% of adolescents say social media makes them feel worse about their bodies – just 14% said it makes them feel better.

A 2023 survey of U.S. teen girls revealed that 49% feel “addicted” to YouTube, 45% to TikTok, 34% to Snapchat, and 34% to Instagram. Yet another survey of teens found they see social media as more positive (32% mostly positive) than negative (9% mostly negative). They feel it’s a place for socializing and connecting with friends, expressing creativity, and feeling supported.

Bubbles, Chambers, and Bias.

Why are we seeing both positive and negative results? Social media’s unique environment can be very supportive, keeping you connected and helping you express yourself. It can also encourage you to improve your life, such as peers nudging you toward healthier eating, and to improve society by making people aware of important causes.

The same social media environment has also created filter bubbles and echo chambers. Technology can knowingly or unknowingly exploit human vulnerabilities that may accentuate confirmation bias and negativity bias.

  • A filter bubble is an algorithmic bias that skews or limits information someone sees on the internet or on social media.
  • An echo chamber is an environment in which ideas, beliefs, or data are reinforced through repetition in a closed system, such as social media, that doesn’t allow the free flow of alternative ideas.
  • Confirmation bias is the tendency of people to favor information that confirms their existing beliefs.
  • Negativity bias is the tendency for humans to focus more on the negative versus the positive.

Social media algorithms make it easier to produce filter bubbles that create echo chambers. Over time, those echo chambers lead to confirmation-bias loops of negativity, incentivized by engagement metrics.
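
As a thought experiment, here is a toy simulation, with assumed parameters rather than empirical data, of how engagement-based filtering can narrow a feed toward a user's existing leaning and turn a filter bubble into an echo chamber.

```python
# Toy model with assumed parameters (not empirical data): a feed repeatedly
# keeps the items a user is most likely to engage with, and engagement is
# higher for items close to the user's existing leaning (confirmation bias).
import random

random.seed(1)
user_leaning = 0.8                             # user's position on a 0-1 viewpoint scale
feed = [random.random() for _ in range(1000)]  # each item's position on the same scale

def expected_engagement(item: float) -> float:
    # Assumption: engagement falls off with distance from the user's leaning.
    return 1.0 - abs(item - user_leaning)

for round_number in range(5):
    # The "algorithm" keeps only the half of the feed with the highest expected engagement.
    feed.sort(key=expected_engagement, reverse=True)
    feed = feed[: len(feed) // 2]
    average_viewpoint = sum(feed) / len(feed)
    print(f"round {round_number}: {len(feed)} items, average viewpoint {average_viewpoint:.2f}")

# The average viewpoint drifts toward 0.8: the filter bubble becomes an echo chamber.
```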

A detailed article from the MIT Technology Review suggests the problem is that it’s difficult for machine-learning algorithms to minimize negative human consequences when growth is the top priority. Much of what is bad for us and society seems to be what keeps us scrolling the most.

Reducing harm may run counter to growth objectives and the current incentive structures that push tech companies toward ever-larger revenue increases. Social media companies like Facebook, now Meta, continue to say they are doing everything they can to reduce harm, despite layoffs.

Social Media Fills Our Spare Time.

While the most popular reason for using social media is to keep in touch with family and friends (57%), the second is to fill spare time (40%). What do we fill our spare time with? With a high percentage of social media revenue depending on advertising (96% of Facebook’s and 89% of Twitter’s), newsfeeds fill with whatever grows engagement, so more ads can be served and revenue increased.

That seems to be sensationalized content that stokes fears. Shocking content hacks attention by playing into our negativity bias. Perhaps Postman’s prediction that everything becomes show business is coming true. We’re all chasing TV ratings in the form of likes, comments, and shares.

Recently Elon Musk and Mark Zuckerberg challenged each other to an MMA fight. What greater spectacle than two billionaire owners of competing social media platforms fighting each other in a pay-per-view UFC cage match? Italy’s culture minister even said it could happen in the Roman Colosseum. I wonder what Neil Postman would say if he were alive.

Journalism Isn’t Immune To Engagement.

As news moves online, organizations chase clicks and subscribers through social media. With so many options, news subscribers increasingly seek sources based on confirmation bias. Andrey Mir in Discourse describes a shift to divisive content, “because the best way to boost subscriber rolls and produce results is to target the extremes on either end of the spectrum.”

With 50% of adults getting news from social media sites often or sometimes, news stories no longer compete only with other news sites. They compete for clicks with the latest viral TikTok and YouTube influencers’ hot takes. A study in Nature found that negative words in news headlines increased readership: each additional negative word increased the click-through rate by 2.3%.
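
A quick back-of-the-envelope calculation shows how that lift compounds across a headline. The baseline click-through rate below is an assumed, illustrative figure, and the 2.3% lift is treated as a relative increase per word.

```python
# Back-of-the-envelope illustration of the reported effect. The 2% baseline
# click-through rate is an assumed number, and the 2.3% lift per negative
# word is treated as a relative (multiplicative) increase.
baseline_ctr = 0.02
lift_per_negative_word = 0.023

for negative_words in range(5):
    ctr = baseline_ctr * (1 + lift_per_negative_word) ** negative_words
    print(f"{negative_words} negative words -> estimated CTR {ctr:.3%}")
```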

Are There Legal Limits Coming?

The U.S. Supreme Court sent back to lower courts a case that would have addressed whether social media companies can be held accountable for others’ social media posts. A 1996 law known as Section 230 shields internet companies from liability for what users post online. Lawsuits have been filed alleging that social media algorithms can lead to the radicalization of people, contributing to atrocities such as terrorist attacks and mass shootings.

The Supreme Court ruled there was little evidence tying Google, the parent company of YouTube, to the terrorist attack in Paris. The lower court ruled that claims were barred by the internet immunity law. Many internet companies warned that undoing or limiting Section 230 would break a lot of the internet tools we have come to depend upon.

While no legislation has passed, there seems to be bipartisan support for new social media legislation this year, such as the Kids Online Safety Act (KOSA). KOSA would require social media companies to shield minors from dangerous content, safeguard personal information, and restrict addictive product features like endless scrolling and autoplay. Critics say KOSA would increase online surveillance and censorship.

Can Algorithms Change People’s Feelings?

A Psychological and Cognitive Sciences study found that when the Facebook News Feed team tweaked the algorithm to show fewer positive posts, people’s own posts became less positive. When negative posts were reduced, people posted more positive content.

Postman said we default to thinking technology is a friend. We trust it to make life better, and it does. But he also warned there is a potential dark side to this friend. To avoid Postman’s fears, perhaps we need to return to McLuhan, who said an artist is anyone in a professional field who grasps the implications of their actions and of new knowledge in their own time.
What do you think?

What research is there for or against the negative effects of social media on mental health and society? Should anything be done to combat the negative consequences? What can be done and who should do it?

This Was Human Created Content!

Generative AI Has Come Quick: What’s Out, What’s Coming, and What to Consider.

A table of Generative AI tool options.

ChatGPT was released to the public six months ago and quickly became the fastest application to reach 100 million users. ChatGPT reached this milestone in just two months, compared to TikTok’s nine months and Instagram’s two and a half years.

The result of this enormous attention is that the world has quickly become aware of the advanced capabilities of generative AI. As of March 2023, 87% of consumers had heard of AI and 61% somewhat understood what generative AI is and how it works.

ChatGPT generates text from text prompts through a chatbot, but that’s not all generative AI can do. The popularity of ChatGPT also brought attention to OpenAI’s image generation tool, DALL-E 2, which generates images from text prompts.

A table listing and describing generative AI integration in major software platforms.
Which generative AI tools will you use for digital and social media marketing?

Despite the mass attention, AI tools have been around for years.

I first wrote about AI in a 2019 post “Artificial Intelligence And Social Media. How AI Can Improve Your Job Not Steal It.” In it, I talked about how AI was being used in algorithms, automation, machine learning, natural language processing, and image recognition.

That post also talked about how AI was used in chatbots to simulate human conversation, in predictive and prescriptive analytics, and in content generation. Examples included Pattern89, which has been using AI to analyze content combinations and placement for optimization since 2016. Another example was Clinch, which has used AI for content automation and personalized dynamic ad content across channels for years.

Since ChatGPT’s release, there’s been a race to integrate generative AI.

The race began with ChatGPT being added to Microsoft’s Bing search engine. Then Google announced plans to integrate its generative AI Bard into Google search. Other platforms quickly announced integrations with OpenAI’s ChatGPT and Google’s Bard, such as Salesforce, Hootsuite, HubSpot, and Adobe. Microsoft and Google are even integrating ChatGPT and Bard into Microsoft 365 and Google Workspace office software for writing, spreadsheets, and slides.

Yet they’re not the only options. Other generative AI tools include Jasper.ai and Copy.ai for writing, and Midjourney and Stable Diffusion for image generation. Tools like Synthesia generate videos with human avatars and professional voiceovers from text prompts. Other examples of generative AI are summarized below.

  AI content generation tool uses:

  • Content research/Data collection
  • Brainstorming/Idea generation
  • Copywriting/Copyediting
  • Summarizing/Note taking
  • Image (photo/illustration) generation
  • Video clip/Podcast clip generation
  • Transcript generation/Automated post prep
  • Ad/Post variation generation
  • Video generation
  • Podcast/Voice over generation
  • Presentation generation

Generative AI tools come with new skills and considerations.

A new skill with these next-gen tools is prompt writing. Prompts are the natural language used to ask a generative AI tool to produce something. More descriptive, specific prompts produce better results, such as prompts that describe the tone of the writing or the style of an image. Yet be mindful of potential copyright issues with prompts that create text or an image in the style of a famous person without their permission.
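
As a rough illustration of the difference specificity makes, here is a minimal sketch using the OpenAI Python SDK (v1.x interface); the model name and both prompts are illustrative examples, not recommendations.

```python
# Minimal sketch using the OpenAI Python SDK (v1.x interface); the model name
# and both prompts are illustrative. Requires an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

vague_prompt = "Write a social media post about our new running shoe."

descriptive_prompt = (
    "Write a three-sentence Instagram caption announcing our new trail-running shoe. "
    "Audience: beginner runners. Tone: encouraging and conversational, no hype words. "
    "End with a question that invites comments and suggest two hashtags."
)

for prompt in (vague_prompt, descriptive_prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    print("---")
```

The descriptive prompt specifies audience, tone, length, and format, which is what steers the output toward something usable.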

A new consideration is the dataset on which the AI is trained. Generative AI tools like ChatGPT are trained on data from the open internet. This is what makes them so powerful, but it is also what can lead to copyright issues and sometimes produce biased or incorrect results.

Other AI tools like Jasper.ai allow you to train on a specific dataset. For example, a brand could upload all its previous materials to establish a brand voice to write new copy. Adobe’s Firefly draws from Adobe’s stock library and tracks creator images used to ensure copyright compliance.
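
Vendors don't publish exactly how they do this, but the brand-voice idea can be approximated with a general-purpose model by placing a few examples of existing brand copy in the prompt. Below is a minimal few-shot sketch with invented example copy and an illustrative model name; it is not Jasper's or Adobe's actual method.

```python
# Few-shot sketch (illustrative - not Jasper's or Adobe's actual method):
# prior brand copy is placed in the prompt so new copy imitates that voice.
# Requires an OPENAI_API_KEY in the environment; the example copy is invented.
from openai import OpenAI

client = OpenAI()

brand_examples = [
    "Small steps. Big trails. Lace up and see where Saturday takes you.",
    "No leaderboards here. Just you, the path, and a little more distance than last week.",
]

prompt = (
    "Here are examples of our brand voice:\n"
    + "\n".join(f"- {example}" for example in brand_examples)
    + "\n\nIn the same voice, write a short caption announcing our winter trail collection."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```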

With the explosion of AI come limitations and cautions.

Despite the mass adoption, this technology is in its early stages. There hasn’t been a lot of testing. Regulations, laws, and professional standards have yet to be developed. HubSpot suggests the following limitations, cautions, and warnings in using generative AI tools.

  Cautions when using generative AI:

  • AI can’t conduct original research or analysis.
  • AI can get things wrong so you must fact check.
  • AI doesn’t have lived experience and human insight.
  • AI doesn’t ensure quality, strategy, and nuance.
  • AI can contain biases that are not caught by filters.
  • AI can have plagiarism and copyright issues.

Despite these cautions, alarm over societal harm, and escalating calls for regulation, the AI race is on. Even while companies, government, and scientists raise concerns, companies continue to integrate AI into mainstream products and services. Below is a sample of what’s been released or announced thus far.

Examples of Early AI Content Generation and Automation Tools in Major Platforms.

Platform | Tool | Function
Hootsuite | OwlyWriter AI | Generates social media captions from URLs in different tones or voices, content ideas from prompts, automatic recreation of top posts, and copy for calendar events.
HubSpot | Content Assistant | Generates copy for blog posts, landing pages, emails, and other content, from idea to outline to copy.
HubSpot | ChatSpot | Conversational bot that automates CRM tasks, including status updates, managing leads, finding prospects, generating reports, forecasts, and follow-up drafts.
Salesforce | Einstein GPT | Auto-generates sales, service, and marketing tasks, content, targeting, messaging, reporting, and personalization across channels.
Adobe | Firefly | Generates images, fills, text effects, and recoloring from text prompts, plus creates content and templates and edits video with simple text prompts – some inside Creative Suite.
Adobe | Sensei GenAI | Automates tasks and optimizes and generates content and content variations across channels in Adobe’s Experience Cloud marketing platform.
Canva | Magic Write | Generates copy, outlines, lists, captions, ideas, and drafts from text prompts.
Canva | AI Image Generator | Generates images from text prompts in various styles and aspect ratios.
Meta | AI Sandbox | Tools that generate multiple versions of text and backgrounds, plus auto-cropping of creative assets for various ad formats on Facebook and Instagram.
Grammarly | GrammarlyGo | Generates writing and revisions relevant to tone, clarity, length, and task via text prompts in documents, emails, messages, and social media.
Microsoft | Microsoft 365 Copilot | Generates tasks, content, documents, presentations, spreadsheets, emails, reports, summaries, and updates across Word, Excel, PowerPoint, Outlook, and Teams via text prompts and Business Chat.
Google | Google Workspace (Bard) | Generates drafts, replies, and summaries in Gmail; drafts, summaries, and proofreading in Docs; images, audio, and video in Slides; automatic analysis in Sheets; and notes in Meet.

Do Consumers (Your Customers/Target Audience) Want AI?

Another consideration with artificial intelligence is the value consumers may put on human generated content and transparency in the use of AI. I began this article by saying that 87% of consumers are now aware of AI. In fact, 4 in 5 of them are convinced that it is the future.

Yet knowing something is the future and wanting that future are different things. The same consumer survey reveals that 3 in 5 (60%) are concerned or undecided about that future. What people are most concerned about is that AI will change what it means to be human.

As marketing communications professionals we need to stay up to date with all these technology advancements. We should use the latest tools to improve our profession and results for our business or clients. But we should also ensure that new technology is used responsibly and transparently.

Over 77% of consumers say brands should ensure biases and systems of inequality are not propagated by AI-based applications. Over 70% believe brands should disclose when they use AI to develop products, services, experiences, and content.

You Decide How To Best Use AI.

At its best, AI can help with the mundane, repetitive tasks of social media and digital marketing management. At its best, AI will enable you to focus on higher level strategic thinking. At its best, AI will not replace humans, but enable us to be more human.

It’s been 6 months since generative AI was brought to mainstream awareness. Companies are rushing to integrate this technology into everything they do. While we wait for regulations, laws, and professional standards to catch up, let’s use our own judgment in deciding when, where, and how best to use it.

For my latest insights into AI, I began a blog series in Summer 2024 with

Artificial Intelligence Use: A Framework For Determining What Tasks to Outsource To AI [Template]

This Was Human Created Content!