AI and Social Media #Blog 8
Last week, I came across a short video on Instagram that looked like a real news report about a political issue in Canada. It had subtitles, a serious tone, and even what looked like a credible news logo. For a few seconds, I believed it. But something felt slightly off — the voice sounded too smooth, and the visuals didn’t quite match. That’s when I realized: it was probably AI-generated. That moment honestly made me uncomfortable. If I almost believed it without thinking, how often am I scrolling past content like this every day?
When misinformation feels real
AI-generated content is becoming harder to recognize, especially on platforms like TikTok and Instagram, where information moves quickly. According to research from The Dais (2025), exposure to deepfake and AI-generated content is especially high among users of these platforms. This means misinformation isn’t something rare — it’s part of our everyday online experience.
What makes this more concerning is how real it feels. AI content often mimics familiar formats like news clips, influencer videos, or educational content. Because of this, it doesn’t immediately trigger skepticism.
Photo by Gabriele Malaspina on Unsplash
Why this matters for digital citizenship
What surprised me most from the readings is that even when platforms label AI-generated content, it doesn’t significantly change how people trust or share it (The Dais, 2025). I used to think that adding a simple label like “AI-generated” would solve the problem. But now I realize that the issue is deeper — it’s about how we think, react, and engage online.
If people continue to believe or share misleading content, it can shape public opinion in subtle ways. Over time, this can erode trust not only in social media, but also in real journalism and public institutions. MediaSmarts (2025) highlights that concern about AI-driven misinformation is growing in Canada, which shows this is not just a future problem — it is already happening.
A simple idea: “Pause, Check, Verify”
To deal with this, I think media literacy needs to be more practical and interactive, not just theoretical. One idea I would include in a classroom or workplace setting is a short three-step activity:
- Pause: Don’t immediately trust or share the content
- Check: Look for signs of AI (unnatural voice, strange visuals, missing sources)
- Verify: Search for the original source or confirm with reliable news
Students could work in small groups and analyze a viral post, then decide whether they would trust it or not. I think this kind of activity is important because it trains people to slow down — something we rarely do on social media.
How misinformation spreads so easily
Another thing I’ve started to notice is how quickly emotions drive sharing. If something is shocking, funny, or controversial, we’re more likely to react instantly instead of thinking critically. This connects to what we learned about digital citizenship — being responsible online isn’t just about what we post, but also about what we choose to believe and share.
Why education matters more than technology
At first, I thought the solution to AI misinformation would be better technology, like detection tools or warning labels. But after reading the Canadian research, I think education is even more important. The Dais (2025) argues that Canadians need stronger AI literacy skills, starting from a young age. Also, their research on youth privacy shows that young people are especially vulnerable in digital environments shaped by AI. This means schools and educators play a key role in preparing students, not just protecting them.
Final reflection
AI is not just changing how content is created — it’s changing how we trust information. My experience with that Instagram video made me realize how easy it is to be misled, even when you think you’re being careful.
For me, digital citizenship now means something more active: slowing down, questioning what I see, and taking responsibility before sharing anything. In a world where AI can create almost anything, being critical is no longer optional — it’s essential.
References
MediaSmarts. (2025). “Wait… What?” Media Literacy Week highlights growing concern over AI-driven misinformation. https://mediasmarts.ca/about-us/press-centre/wait-what-media-literacy-week-highlights-growing-concern-over-ai-driven-misinformation
The Dais. (2025). Human or AI? Evaluating labels on AI-generated social media content. https://dais.ca/reports/human-or-ai/
The Dais. (2025). (Gen)eration AI: Safeguarding youth privacy in the age of generative artificial intelligence. https://dais.ca/reports/generation-ai-safeguarding-youth-privacy-in-the-age-of-generative-artificial-intelligence/
Collaborative Review.
Thank you for your thought-provoking blog post.
It would be interesting to see the Instagram video you mentioned in the first paragraph of your blog post, or a similar one. It would highlight and reiterate the great point you made about how unconsciously we scroll past many posts without realizing they are AI-generated, and it would further support your point that it is becoming harder to differentiate between real and AI-generated content. I wonder whether embedding a poll could strengthen your argument by asking readers whether a piece of shared content is AI-generated or real.
I appreciated your mentioning how emotions drive sharing. It could be argued that AI content is deliberately geared towards evoking emotions. I wonder if there are any researchers, articles, or findings about this. This could echo the concerns about AI-generated content you raised in your blog.
You had a few typos; however, your blog was well written, had a natural flow, was interesting, and referenced current research and informational resources. Your idea of “Pause, Check, Verify” is simple, practical, and easy for students of all ages — and, I would say, adults — to remember. Also, having students discuss it would give them the opportunity to reflect as a group and expand their own knowledge. I think it would make a great addition to digital literacy.
Hi, I enjoyed reading your blog. I liked how you started by telling your own story, because it made the problem feel more real and relatable. Your point about how AI content mimics familiar formats and is therefore more believable was particularly strong, because it explains why people wouldn’t question it right away. I also liked your “Pause, Check, Verify” suggestion, since it’s a simple action people can take while scrolling through their feeds. This is a great reminder of what digital citizenship means today: slowing down and being more mindful of what we trust and share.