The comment appeared under yet another post about sick Palestinian children coming to the UK for medical treatment: "Do not let unpatriotic Lefties get in your way. Our people. Our flag. Rushmoor People First. 🇬🇧 ✌🏻"
Below it, a photo of someone wrapped in the Union Jack, standing defiantly on a suburban street. The "Raise the Flag" movement has been spreading across British Facebook groups — coordinated campaigns to hang flags as some kind of territorial marking against... what exactly? Sick children receiving medical care?
My response was simple: "What's the goal here? You've put time, energy, and money into flags — cloth on poles — and convinced yourselves it means something. Meanwhile, people in your own communities are hungry, cold, and struggling to survive. Imagine if all that effort went into food banks, housing support, or looking after the elderly and sick. That's what actually builds pride, that's what strengthens a nation. Flags don't feed families. Flags don't keep children warm. Flags don't heal the sick. People do."
The responses were predictable. One person argued people should spend their money however they want. Another claimed criticism of flag-waving was itself unpatriotic. But scattered between the defensive replies were private messages: "You're right, but there's no point arguing online," "It's all bots anyway," "Don't waste your energy on Facebook."
This is where we need to pause. When did we collectively decide that the internet — where billions form their worldviews, where elections get decided, where extremist movements organise — somehow doesn't matter?
The "It's Just Online" Delusion
In March 2019, a gunman killed 51 people at two mosques in Christchurch, New Zealand. He livestreamed the attack on Facebook and posted a manifesto filled with memes and references drawn from extremist online communities. The pipeline from online rhetoric to offline violence couldn't have been clearer.
In 2018, a gunman killed 11 people at the Tree of Life synagogue in Pittsburgh. His final post on the social media platform Gab read: "I'm going in." Before that came months of increasingly extreme online rhetoric, largely unchallenged in his echo chambers.
These aren't anomalies. Researchers at the University of Warwick found that surges in online hate speech correlate with offline hate crimes. A study by the Anti-Defamation League documented how extremist rhetoric online preceded 387 real-world violent incidents between 2016 and 2020.
Yet when I counter hateful rhetoric online, I'm told I'm "wasting energy." The same people who acknowledge that online propaganda elected presidents and sparked insurrections somehow think responding to that propaganda is pointless.
The Public Square Has Moved
Imagine walking through a town square in 1935 Germany and hearing someone shouting that Jewish children shouldn't receive medical care. Would you walk past in silence? Would you tell others who spoke up that they were "wasting their energy"?
The internet is our modern public square. It's where political consciousness forms, where social norms get negotiated, where the Overton window shifts. A Pew Research study found that 68% of adults under 30 get their news primarily from social media. These aren't passive consumers — they're forming worldviews based on which voices dominate these spaces.
When we abandon these spaces, we don't create some kind of pure offline alternative. We simply cede the territory to whoever's willing to flood it with their message. And right now, that's extremists with bot farms, troll armies, and coordinated disinformation campaigns.
"It's All Bots Anyway"
Yes, bots exist. Russia's Internet Research Agency, Chinese state operations, and various extremist groups have deployed them at scale. But that's an argument FOR engagement, not against it.
Why would bad actors invest millions in bot networks if online discourse didn't shape reality? They understand what we've forgotten: repeated exposure to ideas — even from artificial sources — shifts real human opinion. It's the "mere exposure effect," documented in decades of psychology research.
How Bot Amplification Creates False Consensus:
Real Person: "Maybe we should help refugees?"
Bot 1: "NO! They're dangerous!"
Bot 2: "Absolutely not! Crime!"
Bot 3: "Never! Our people first!"
Bot 4: "Agreed! Send them back!"
Bot 5: "100% NO!"
Real Person scrolling: "Wow, everyone's against helping refugees..."
More importantly, real people are still reading. When someone scrolls past a thread about Palestinian children and sees only cruel comments (whether from bots or humans), that becomes the perceived consensus. But if they also see someone saying "sick children deserve medical care regardless of nationality," suddenly there's a choice. There's an alternative framework.
I can often spot bots — generic names, stock photos, repetitive phrasing. But even when engaging with obvious bots, I'm not writing for them. I'm writing for the seventeen-year-old reading silently, trying to figure out what kind of person they want to be.
The Method Matters
Not every online argument is worth having. Getting dragged into bad-faith debates with committed extremists is indeed energy-draining and pointless. But that's not what effective counter-messaging looks like.
When someone posts about being angry that sick children are receiving medical care, I don't need a lengthy debate. I drop a simple truth — "flags don't feed families" — and move on. It takes less than a minute. The comment stands there, offering an alternative perspective to everyone who reads the thread later.
These simple, memorable phrases work because they're hard to argue against without revealing cruel priorities. Try defending why symbolic nationalism matters more than feeding hungry children. Try explaining why flag-waving deserves more energy than housing the homeless. The rhetorical position crumbles immediately.
This isn't about converting the original poster. It's about the hundreds or thousands who read silently. It's about preventing the normalisation of cruelty through unopposed repetition.
Information Hygiene as Civic Duty
We understand physical hygiene — we wash our hands to prevent disease spread. Information hygiene follows the same principle. Letting false or hateful information spread unchallenged is like letting disease vectors multiply unchecked.
Every unchallenged piece of extremist rhetoric makes the next one easier to accept. Social psychologists call this the "normalisation pipeline" — repeated exposure to extreme views without counterargument gradually shifts what seems acceptable.
The Normalisation Pipeline:
Stage 1: "It's just edgy humour" (extremist memes shared)
Stage 2: "People are saying..." (ideas gain repetition)
Stage 3: "Many think..." (false consensus builds)
Stage 4: "There's debate about..." (enters mainstream)
Stage 5: "It's one perspective..." (legitimised as valid)
Stage 6: "Policy should reflect..." (political adoption)
Stage 7: Real-world violence/discrimination
When I respond to online hatred with compassion-centred arguments, I'm not trying to win a debate. I'm performing information hygiene — ensuring that cruel ideas don't spread without friction, that alternative frameworks remain visible, that the conversation's boundaries don't shrink to exclude basic humanity.
The Stakes Keep Rising
As I write this, AI-generated propaganda is becoming indistinguishable from human-created content. Deepfakes are growing more sophisticated. Algorithmic amplification of outrage is intensifying. The online information environment is becoming more polluted, not less.
The response cannot be to abandon these spaces. That's like responding to air pollution by staying indoors forever — it cedes the entire environment to those causing the damage.
Young people are growing up entirely online. Their understanding of what's normal, what's acceptable, what's possible — it all forms in digital spaces. When those spaces contain only extremist voices because everyone else decided engagement was "pointless," we've failed an entire generation.
The Real-World Pipeline
Let me return to where we started: people enraged about sick children receiving medical care. This didn't emerge from nowhere. It followed years of unchallenged online rhetoric about immigrants, about "us vs them," about who deserves compassion and who doesn't.
Every post encountered, every comment left unchallenged, every thread where cruelty went unopposed: all of it contributed to a climate in which denying medical care to injured children seems reasonable to some people.
That's the pipeline: online rhetoric → normalised cruelty → policy proposals → real suffering. We can interrupt it at the first stage, or we can deal with the consequences at the last.
Reclaiming Digital Spaces
The solution isn't everyone arguing constantly online. It's strategic, thoughtful counter-messaging by those who can do it without burning out. It's understanding that five minutes crafting a response that hundreds might see is actually highly efficient activism.
It's recognising that different people have different capacities. Maybe someone can't attend physical protests due to disability, work, or location. Their online engagement might be their most effective contribution to shifting cultural narratives.
It's about cumulative information patterns. Every comment promoting compassion, every response highlighting hypocrisy, every simple truth dropped into a thread adds weight to the cultural scale, tipping it away from cruelty.
The Choice
We face a simple choice. We can abandon online spaces to extremists, bots, and propagandists, telling ourselves it's "not real" while it shapes electoral outcomes and inspires violence. Or we can spend minimal effort ensuring that compassion, truth, and basic humanity remain visible in spaces where millions form their opinions.
When someone posts about raising flags while children go hungry, I'll keep pointing out the misprioritised resources. When rhetoric dehumanises refugees, I'll keep asserting their humanity. When lies spread, I'll keep dropping documented truth.
Not because I'll convert extremists. Not because I enjoy online conflict. But because silence in the face of spreading hatred is complicity. Because the teenager reading might see an alternative to cruelty. Because the internet is our public square, and walking past hatred without response is moral cowardice.
Flags don't feed families. But real discussion in the comments might shift minds. And shifted minds, eventually, change the world.
References
Studies and Research:
Müller, K. & Schwarz, C. (2021). "Fanning the Flames of Hate: Social Media and Hate Crime." Journal of the European Economic Association, 19(4), 2131–2167.
Williams, M.L. et al. (2019). "Hate in the Machine: Anti-Black and Anti-Muslim Social Media Posts as Predictors of Offline Racially and Religiously Aggravated Crime." British Journal of Criminology, 60(1), 93–117.
Pew Research Center (2024). "News Consumption Across Social Media in 2024." Available at: www.pewresearch.org
Anti-Defamation League (2023). "Online Hate and Harassment: The American Experience 2023." Available at: www.adl.org
Investigative Reporting:
Evans, R. (2019). "Shitposting, Inspirational Terrorism, and the Christchurch Mosque Massacre." Bellingcat.
Institute for Strategic Dialogue (2023). "The Relationship Between Online and Offline Extremism."
Data & Society (2017). "The Oxygen of Amplification: Better Practices for Reporting on Extremists, Antagonists, and Manipulators."
Books and Long-form Analysis:
Phillips, W. & Milner, R.M. (2021). You Are Here: A Field Guide for Navigating Polarized Speech, Conspiracy Theories, and Our Polluted Media Landscape. MIT Press.
Tufekci, Z. (2017). Twitter and Tear Gas: The Power and Fragility of Networked Protest. Yale University Press.
Additional Resources:
Center for Countering Digital Hate (2024). "The Disinformation Dozen 2.0."
First Draft News. "Understanding Information Disorder."
Oxford Internet Institute, Programme on Democracy & Technology: research papers.
Global Internet Forum to Counter Terrorism (GIFCT): transparency reports on online radicalisation.