Can Media Literacy Save Us from the Illusions of AI?
Luna Tian
Today, the question we face may not be “What can we still trust?” but rather, “How can we live responsibly in an age where truth itself is unstable?”
AI, like every other transformative technology, has no reverse gear. Once it starts, it doesn’t stop. The line between the real and the artificial will only continue to blur as AI-generated content multiplies.
Perhaps media literacy programs, which we keep reading about, embody a belief — that education can help people reclaim control over truth.
But maybe the point is not about control, but about coexistence.
I. The Pope in a Puffer Jacket
I still remember one of the most striking pieces of news from March 2023:
Pope Francis appeared online wearing a white Balenciaga puffer coat.
[Image: the viral fake photo of Pope Francis in a puffer jacket]
The photo made me laugh. There he was, calm and dignified, walking down the street in that voluminous, glossy jacket — a sharp contrast to his usual modest robes. The coat was beautiful, its fabric full and rich in texture.
Within hours, the image went viral, shared millions of times. Some people laughed like I did; some discussed the Catholic Church’s public image strategy; others admired the outfit, even asking me whether I’d consider buying one.
For a moment, I honestly thought it was part of a marketing campaign.
Later, I learned that the image was created by a construction worker in Chicago. He had simply typed a few words into Midjourney — “Pope,” “Balenciaga,” “street photo.” Seconds later, the algorithm produced it: the lighting perfect, the texture of the coat realistic, even the Pope’s slightly hunched posture eerily convincing.
Three days later, fact-checking organizations confirmed that it was AI-generated.
But by then, the image had already been reprinted by thousands of media outlets, discussed endlessly online. In some sense, it had already become an event.
I noticed one user on Twitter who asked:
“If even this is fake, what can we still believe in?”
At the time, I didn’t have an answer.
Eight months later, UNESCO published the Guidelines for the Governance of Digital Platforms — a document that tried to impose order on our chaotic digital landscape.
Its core recommendations could be summarized as three principles:
- Foster critical thinking
- Promote fact-checking
- Urge platforms to label AI-generated content
Learn to distinguish the true from the false — and you can remain clear-minded amid AI’s illusions. That was its logic.
The first time I read this document was in May last year, while finishing a term paper. It was nearly summer; everything felt heavy; I was in a bad mood — and, as luck would have it, it was raining.
Still, I found the document well written. I’ve always had a soft spot for texts filled with goodwill and idealism.
But I stopped at page nineteen.
There, it said: platforms should be urged to label AI-generated content. Not required, not mandated, but urged — a word so polite it’s nearly transparent.
That night, out of boredom, I opened ChatGPT and asked it to generate a short paragraph about media literacy, citing three academic papers.
It responded instantly: eloquent prose, coherent structure, three papers listed with author names, titles, and journal names.
When I checked the references, none of them existed.
It wasn’t the first time, and yet I couldn’t help laughing. The authors’ names were real; the journals were real; the paper titles sounded perfectly plausible — but the works themselves had never been written.
I asked the AI to describe these “papers.”
Its made-up arguments sometimes sounded more persuasive than real scholarship — as if they existed somewhere in a parallel universe, waiting to be cited. The thought sent a shiver through me.
So, is our biggest challenge still “how to tell true from false”?
I think not. The real question is: what counts as truth?
II. The Illusion of Stability
If we look back — and “back” may mean only two years ago now — UNESCO’s guidelines were born in a peculiar moment of history.
Let’s return to 2023: ChatGPT had just entered the public eye; Midjourney images were flooding social media; governments worldwide were scrambling to respond.
UNESCO’s chosen path was the familiar one: education.
“Empowering users — including through media and information literacy education — is fundamental to countering disinformation and promoting healthy online participation,”
the document reads on page eleven.
It’s a familiar logic.
It assumes the problem lies with the user — not as a form of blame, but as a premise: if people can be made smarter and more critical, then they can resist misinformation.
Learn critical thinking, practice fact-checking, and you’ll survive the data flood.
But the language of the document is telling. Words like “should,” “encourage,” and “promote” abound, while “must” and “require” are scarce.
When it comes to platform responsibility, the tone becomes almost evasive.
In discussing algorithmic transparency, the text suggests that platforms “should consider” disclosing information about their recommendation systems.
Should consider — meaning, of course, they could also not consider it.
I closed my laptop and went for a walk, still thinking about what the document didn’t say.
What happens when there are no penalties, no oversight bodies, no accountability?
What if platforms simply choose not to label AI content?
If algorithms remain opaque, who is responsible?
Unfortunately, the document is silent on that.
I recalled a Zoom conference I once attended. A policymaker said on stage:
“We must empower every citizen to distinguish truth from falsehood.”
Thunderous applause followed.
During the Q&A, a woman introduced herself as a high school teacher and asked:
“If my students barely have time to finish their homework, how much time should they spend verifying the news every day?”
The panelists praised her “excellent question,” promised to “add more class hours,” and moved on.
Moments later, I logged out.
I stood by the library window. It was already getting dark.
Let’s imagine, then, a thought experiment.
Suppose you are a responsible citizen.
You follow every UNESCO recommendation.
One day, you read a piece of news: a certain government is secretly developing a new weapon.
You check the author’s profile — it looks legitimate.
You search for corroboration — several small media outlets have reported the same.
You find documents that look official.
You cross-check multiple sources. Everything aligns.
So, you share it.
Three days later, a fact-checking organization announces that it was a coordinated disinformation campaign.
Those “small media outlets” were fabricated. The “official documents” were AI-generated.
The “ordinary user” was a bot.
The entire event was an AI-driven, multilayered illusion.
As I once wrote in another essay, “Things That Look Like News”: a text that looks like journalism feels trustworthy. Likewise, something that looks like truth can bypass our verification instincts.
You did everything right. You followed every guideline.
And still, you were fooled.
Because the rules of the game have changed.
III. After the Enlightenment
The Belgian legal philosopher Antoinette Rouvroy once introduced a concept called “algorithmic governmentality” during a talk at the Society of the Query conference.
She explained it simply:
In the digital age, power no longer operates by issuing orders; it operates by shaping environments.
Platforms don’t have to tell you what to see — they just adjust the algorithms, and you’ll naturally see what they want you to see.
So, does media literacy still matter?
Maybe.
But if it only teaches you how to find your way out of a maze without ever asking why the maze exists, it risks becoming another tool of governance.
Education should teach us to question and to critique.
What does an ideal human being look like?
Perhaps rational, autonomous, capable of independent judgment — the Enlightenment model of Descartes’ “I think, therefore I am” and Kant’s “Dare to know.”
In that tradition, humans are subjects; technology is a tool. The boundary between them is clear.
But does that boundary still exist?
These days, I have new habits that I never used to have.
When I hit a wall in writing, I used to email a trusted friend or text a kind professor. We’d discuss ideas, and I’d often come away inspired.
It slowed my writing, but those conversations were warm and human. I remember the office hours that turned into dinners at favorite restaurants. I’ve never been a solitary writer; I like the feeling of thinking with others.
ChatGPT has changed that.
Although I avoid using AI in my essays or academic work, I still appreciate its instant feedback — the kind that comes without judgment or social weight.
Sometimes I ask it questions, seek suggestions, explore threads of thought.
But mid-conversation, I often pause and wonder: Who is writing this?
Is it me, or the AI?
Or have we, somehow, co-authored something?
I recall reading the literary theorist N. Katherine Hayles, who developed the concept of the “posthuman.”
She observed that our thinking, memory, and judgment are now so deeply embedded in technical systems that it’s nearly impossible to say where the human ends and the machine begins.
As someone who grew up reading science fiction, I once found that idea exciting.
Now that I’m living it, I’m not sure whether to call it a privilege or a tragedy.
Hayles wrote:
“The question is not how to resist AI’s influence, but how to remain ethical, reflective, and responsible within a state of coexistence.”
Traditional media literacy teaches us to be better independent judges.
But what if judgment itself is no longer an independent act?
If AI not only transmits information but shapes how we perceive the world — does “independent judgment” still mean anything?
A few days ago, I called a classmate who studies the philosophy of AI.
I told him about my anxiety — about how overwhelming this all feels — and asked what troubled him most about large language models.
He thought for a while and said:
“They’re not reproducing human knowledge. They’re generating a new kind of knowledge.”
“What do you mean?” I asked.
“Traditionally,” he said, “knowledge comes from evidence, experience, reasoning. A statement is true because it corresponds to reality.
But GPT doesn’t work that way. It calculates statistical relationships between words. It predicts which sentences sound plausible.
The result is something neither true nor false — just apparently believable.”
I told him about the fake citations ChatGPT had produced for me.
He said, “That’s not deception in the traditional sense. It’s a new form of production. It’s not giving you lies — it’s giving you possibilities.”
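To make his point concrete, here is a deliberately tiny sketch: a hand-built table of word-to-word probabilities standing in for the billions of statistical relationships a real model learns. Every author, year, and journal in it is invented. The mechanism is what matters: each next word is chosen because it plausibly follows the previous one, and nothing in the process ever checks whether the resulting citation exists.

```python
import random

# A toy "language model": for each word, a hand-made probability table of
# plausible next words. All numbers, names, and titles are invented;
# a real model learns billions of such statistics from its training data.
NEXT_WORD = {
    "<start>": {"Smith": 0.6, "Chen": 0.4},
    "Smith": {"(2019).": 0.7, "(2021).": 0.3},
    "Chen": {"(2020).": 1.0},
    "(2019).": {"Media": 0.5, "Algorithmic": 0.5},
    "(2021).": {"Media": 1.0},
    "(2020).": {"Algorithmic": 1.0},
    "Media": {"Literacy": 1.0},
    "Algorithmic": {"Governance": 1.0},
    "Literacy": {"in": 1.0},
    "Governance": {"in": 1.0},
    "in": {"the": 1.0},
    "the": {"Digital": 1.0},
    "Digital": {"Age.": 1.0},
    "Age.": {"Journal of Communication.": 0.5, "New Media Quarterly.": 0.5},
}

def sample_citation(max_words: int = 20) -> str:
    """Build a fluent-looking citation by repeatedly picking a plausible
    next word. At no point does anything verify that the paper exists."""
    word, output = "<start>", []
    for _ in range(max_words):
        options = NEXT_WORD.get(word)
        if not options:
            break
        word = random.choices(list(options), weights=list(options.values()))[0]
        output.append(word)
    return " ".join(output)

print(sample_citation())
# e.g. "Chen (2020). Algorithmic Governance in the Digital Age. New Media Quarterly."
# Plausible at every step, convincing as a whole, and yet no such paper was ever written.
```

Scale that table up by many orders of magnitude and you get what my classmate meant: output that is neither a lie nor a fact, only a string of likelihoods.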
IV. What Do We Need?
There’s no simple answer.
But perhaps we can begin by reframing the question.
Not “Is this true or false?” — but
“How was this created?”
Not “How can I identify AI content?” — but
“How is AI shaping the way I think?”
Not “How do I protect myself?” — but
“How do we collectively build a more transparent environment?”
This requires a new kind of awareness.
First, a reflective mindset.
It may be hard to reject AI entirely in daily life, but staying conscious while using it is still possible.
When you ask ChatGPT to help write an email, when you accept algorithmic news recommendations, when you feel convinced by a well-written post — pause. Step back. Ask yourself:
Why does this feel trustworthy?
Because it makes sense? Or because it aligns with what I already want to believe?
It’s not about one-time fact-checking. It’s about cultivating a state of ongoing dialogue — a kind of awareness.
Even if you’re co-creating meaning with AI, you can still remain conscious of that process.
Then, there’s systems thinking.
Information doesn’t appear out of nowhere. It is produced, filtered, and circulated within complex technological, economic, and political systems.
When you read a headline that begins with “Studies show…”, don’t just ask whether the study exists — ask:
Who funded this study?
Where did the data come from?
Why is this narrative being amplified now?
Some technical literacy is also helpful.
You don’t need to learn how to code, but understanding the basics helps:
- How do recommendation algorithms determine what we see? (see the sketch after this list)
- Why do language models hallucinate?
- How does training data shape AI bias?
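For the first of those questions, a crude sketch with entirely hypothetical weights and numbers (no real platform publishes its formula) is enough to show the principle: feeds rank content by predicted engagement, and truth is simply not a term in the equation.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    p_click: float        # model's guess that you will click
    p_share: float        # model's guess that you will share
    watch_seconds: float  # how long you are expected to linger

# Hypothetical weights; real platforms tune values like these constantly and keep them private.
WEIGHTS = {"click": 1.0, "share": 3.0, "watch": 0.1}

def engagement_score(post: Post) -> float:
    """Rank by predicted engagement. Note what is missing:
    nothing here measures whether the post is true."""
    return (WEIGHTS["click"] * post.p_click
            + WEIGHTS["share"] * post.p_share
            + WEIGHTS["watch"] * post.watch_seconds)

feed = [
    Post("Careful, accurate policy explainer", p_click=0.02, p_share=0.01, watch_seconds=40.0),
    Post("Outrageous (and false) celebrity claim", p_click=0.12, p_share=0.08, watch_seconds=55.0),
]

for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):5.2f}  {post.title}")
```

In this toy feed the false post comes out on top, not because anyone chose to promote lies, but because the only quantities being optimized are clicks, shares, and watch time.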
We don’t need technical mastery.
We need power literacy — to understand who shapes our information environment, and how.
But most important of all: we must shift from individual responsibility to collective action.
No matter how smart you are as an individual, you can’t fight systemic manipulation alone.
You may learn to detect fake news. But if an algorithm is pushing carefully designed disinformation to a billion users every day — your personal resistance won’t be enough.
We need a concept of digital citizenship.
The right to know what algorithms are doing to you.
The right to control how your data is used.
The right to demand explanations when AI systems make decisions that affect your life.
The right to opt out of personalized recommendations.
These aren’t gifts from tech companies.
They are rights — to be won through legislation, regulation, and collective negotiation.
The EU is already experimenting.
The AI Act, adopted in 2024, requires high-risk AI systems to be transparent and subject to human oversight.
The Digital Services Act gives users the right to opt out of personalization, and forces major platforms to explain their algorithmic logic to regulators.
These laws are imperfect, their enforcement full of challenges.
But they acknowledge one essential truth: we cannot place all responsibility on the user.
Some say the EU is too conservative.
But I think there’s value in trying to pull the reins.
Of course, legislation always lags behind.
By the time we’re regulating GPT-4, GPT-5 is already here.
But that’s not a reason to give up — it’s a reason to act faster, and smarter.
The hardest question to ignore is: what responsibility do the platforms bear?
Why are we always talking about “educating users,”
but almost never about regulating platforms?
If a food company sells toxic food, we don’t say, “Consumers should learn to detect poison.”
But in the digital world, we’ve accepted a strange logic: platforms can design systems that spread lies — then blame the user for believing them.
That logic must be challenged.
Algorithms decide what gets seen, amplified, and recommended.
If that’s editorial power, then editorial responsibility must follow.
Concretely:
- Platforms must make recommendation algorithms transparent to regulators.
- If disinformation causes real-world harm, platforms should be held legally accountable.
- Users should have the right to opt out of personalization.
As for “AI-generated content must be labeled” — honestly, that may be the most difficult goal of all.
There are enormous technical challenges.
Who decides what counts as “AI-generated”?
What if a text is 50% AI-written, 50% human-edited — does it count?
And what about enforcement? Anonymous uploads? Cross-border platforms?
Maybe perfection isn’t the goal.
Maybe the goal is procedural justice.
In high-risk domains — journalism, medicine, legal documents — disclosure of AI use should be mandatory.
Platforms can offer creators labeling tools.
We can invest in detection technologies — though it may become an endless arms race.
The point is not to design a perfect system,
but to stop offloading all the burden onto individuals.
V. Living in the Flow
As I write this, it’s not raining — but it is windy.
The temperature dropped nearly five degrees today.
I might need my down jacket tomorrow.
Speaking of jackets…
That image of the Pope in the puffer coat comes back to me.
It looked so real. I even thought it was brand marketing. But it wasn’t. It was misinformation.
And when people found out, they didn’t get angry.
They got confused.
“If even this is fake, what can we still believe in?”
Maybe that question itself is flawed.
The point isn’t what we can still believe in —
but how we live ethically in an age where truth is fluid.
Like all technology, AI can’t be un-invented.
We’re heading into a future where content will be increasingly generated,
and the line between fact and fiction will continue to dissolve.
We cannot return to a time when truth was clear —
and perhaps that time never existed.
Even before AI, we were already surrounded by misinformation.
Still, I believe UNESCO’s media literacy framework represents a sincere conviction —
a belief that education can help us regain control over truth.
But control is not the answer.
Coexistence is.
That doesn’t mean abandoning judgment.
It means cultivating a more complex kind of literacy —
one that reflects while using AI, questions the logic of production,
and seeks collective change beyond personal efforts.
It means recognizing that while we can no longer stand outside technology,
we can still remain awake within it.
Last fall, while I was giving a presentation, a student sighed and said,
“It sounds like we can’t be sure of anything anymore.”
I thought for a moment and replied:
“It’s not that nothing can be certain. It’s that the way we become certain has changed.
We used to say something was true because it matched reality.
Now, we say something is credible because we understand how it was created, why, and by whom.
That kind of certainty is more fragile — but perhaps more honest.”
She nodded, not entirely convinced, and walked out.
I was left alone in the hallway. The boats at the dock were glowing orange in the dusk.
AI is redefining knowledge, truth, and reality.
We can participate in that process — not as passive users,
but as empowered citizens, as co-creators of meaning.
What we need is not a better truth-detector,
but the ability to swim in a sea of shifting truths.
Not to fight the current,
but to understand its direction, know our position within it,
and choose where we want to go.
It won’t be easy.
But this is the survival skill we must now learn.
Epilogue
The streetlights are on.
I pack up my things and step out of the library.
The air after the rain smells clean.
People walk past me, heads down, their screens glowing.
I take out my own phone, open a news app.
The algorithm has already curated today’s world for me.
I don’t long for 2019 as much anymore.
And maybe, I don’t hate the rain as much either.
A Note on This Essay
This piece originated from a final paper I wrote during the summer term.
I’m grateful to my professor for offering such thought-provoking prompts.
(Though yes, the academic pressure was real 😅.)
Six months later, I revisited the topic with fresh thoughts and wanted to share a revised version here.
Thank you for reading.
If you’ve made it this far, you might be wondering: What can I do now?
Here are a few suggestions:
- When using ChatGPT, pay attention to how its responses shape your thinking.
- When you see a piece of news, ask not just “Is this true?” but also “Who is saying this, and why now?”
- Check your app settings — see how your feed is being personalized.
- Pick up an introductory book on AI.
- Follow your country’s legislative discussions on AI regulation.
- Talk to your friends and family about how they use AI.
(Or drop me a message — I’d love to hear from you.)
If you’re an educator, consider how to integrate “understanding AI” into your curriculum.
If you’re a policymaker, push for stronger platform accountability.
If you’re just a regular person, like me — remember:
You have the right to demand a more transparent, more just digital world.
All change begins with a question.
References
- Hayles, N. Katherine (2025). Bacteria to AI: Human Futures with Our Nonhuman Symbionts. University of Chicago Press.
- Pasquinelli, Matteo (2023). The Eye of the Master: A Social History of Artificial Intelligence. Verso Books.
- Rouvroy, Antoinette (2022). “Algorithmic Governmentality,” in More Posthuman Glossary. Bloomsbury Academic.
- UNESCO (2023). Guidelines for the Governance of Digital Platforms.