Your Handy Guide on Fighting Misinformation During Disaster Response

  • Stop engaging with inaccurate content; make your own (accurate) content instead.
  • Make fact-checking neutral topics part of your onboarding process.

We’ve all been there: a disaster hits and suddenly our social media feeds are awash with conflicting information. Some folks claim the water supply is contaminated; others say it’s fine. Someone shares a post about police blocking aid deliveries, but we can’t verify whether it’s true. In these critical moments, bad information doesn’t just cloud the truth – it actively hampers our ability to help our communities.

Social media chatrooms spring up quickly because people are looking for answers and have long understood the power of communally sourced relief online. You’ve probably seen something like this on Reddit:

Here’s what a quick fact-check looks like for this:

Claim: FEMA provides a maximum of $42,500 to eligible homeowners for home repair or replacement under the Individuals and Households Program (IHP) for Fiscal Year 2024.

This claim is inaccurate. According to FEMA’s notice, effective for emergencies and major disasters declared on or after October 1, 2024, the maximum amount of IHP financial assistance is $43,600 for housing assistance and an additional $43,600 for other needs assistance.

Eligible homeowners can get up to $43,600 for home repair or replacement, with the potential for additional funds under other needs assistance, depending on individual circumstances.

I looked at the Reddit thread, and people weren’t trying to verify the claim. It looked ‘legit’ enough to them, and they had questions and worries of their own about Hurricane Milton. Research shows that people spreading misinformation usually aren’t doing it because they’re trolls or because they’re pushing an agenda. Most of the time, they’re just caught up in the chaos and urgency of the moment, forgetting to take that crucial second to ask, ‘Wait, is this actually true?’

Take two headlines that are obviously false. No matter how outlandish they sound, these stories were shared thousands of times and swept up in trends. What’s happening here? Were these people thinking deeply about what they shared, or were they just forwarding as received?

Here’s the kicker: more thinking doesn’t always help. Analytical thinking can help people identify false claims and avoid sharing them. But it can also backfire, leading people to justify and share content that aligns with their biases even if it’s false. Understanding this dynamic is key to finding effective solutions.

Why Traditional Fact-Checking Doesn’t Work

First, let’s talk about what doesn’t work. Do you know that person who responds to every questionable post with a ‘well, actually’ and a fact-check link? It turns out they might be making things worse. Studies show that publicly calling people out – even with solid facts – can make them share even less reliable info in the future. Recently, I have observed that people capitalise on disasters to grow their social media following by sharing misinformation. Debunking a post crafted for that purpose by quoting it simply adds fuel to the fire and drives up its views, signalling to the algorithm that it should show the post to even more people. We need a better way.

The good news is that when people take even a moment to think analytically about what they’re sharing, they naturally share more reliable information. This holds true regardless of their education, income, or political beliefs. The challenge is that social media is designed to make us react, not reflect.

So how do we build systems that encourage thoughtful sharing without slowing down our rapid response capabilities? 

For Communities on Platforms like Facebook and WhatsApp

There’s a method known as the Subtle Accuracy Prompt. Here’s how it works: when someone joins your group or network, ask them to help evaluate the accuracy of a simple, non-controversial piece of information. It seems small, but similar community examples show that this tiny intervention helps people share more reliable information later.

Example: ‘Hey, we’re trying to improve our info verification processes. Could you help us rate how accurate this statement is: “The local food bank is open Monday through Friday”?’

Usually, neighbourhood watches or neighbourhood apps have communities that interact within their postcodes. Another approach is to create dedicated Signal channels or Telegram groups for verified information targeting your neighbourhood. But here’s the key – make verification feel natural and communal, not bureaucratic. Instead of ‘VERIFIED INFO ONLY!!!’ try ‘Community-Checked Updates.’
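If your community already coordinates on Telegram, the onboarding prompt can even be automated. Below is a minimal sketch using the python-telegram-bot library (version 20 or later) that greets each new member with a rotating accuracy-check question; the bot token, the group wiring, and the pool of neutral statements are placeholder assumptions you would replace with your own.

```python
import random

from telegram import Update
from telegram.ext import Application, ContextTypes, MessageHandler, filters

# Hypothetical pool of simple, non-controversial statements for the
# Subtle Accuracy Prompt. Swap these for facts about your own area.
NEUTRAL_CLAIMS = [
    "The local food bank is open Monday through Friday.",
    "The community centre offers free Wi-Fi during opening hours.",
    "The number 12 bus stops outside the public library.",
]


async def greet_new_members(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    """Welcome each new member and ask them to rate one neutral claim."""
    claim = random.choice(NEUTRAL_CLAIMS)
    for member in update.message.new_chat_members:
        await update.message.reply_text(
            f"Welcome, {member.first_name}! We're trying to improve our info "
            f"verification processes. Could you help us rate how accurate this "
            f"statement is: '{claim}'?"
        )


def main() -> None:
    # Placeholder token: replace with the one BotFather issues for your group's bot.
    app = Application.builder().token("YOUR_BOT_TOKEN").build()
    # Fire only on the service message Telegram posts when someone joins the group.
    app.add_handler(
        MessageHandler(filters.StatusUpdate.NEW_CHAT_MEMBERS, greet_new_members)
    )
    app.run_polling()


if __name__ == "__main__":
    main()
```

The plumbing matters far less than the tone: the same rotating-question idea works as a pinned post or a manual welcome message in a WhatsApp or Signal group, as long as the prompt stays short, friendly, and genuinely neutral.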



Authors

  • Kendra Allenby is a cartoonist for the New Yorker and other magazines, and teaches drawing and creative practice to adults. She often draws cartoons for the Red Cross and other humanitarian organizations where she uses humor to make difficult topics approachable. If she’s not drawing, she’s probably outside.

  • Nana is a researcher and policy consultant with a focus on clear, accessible language that demystifies complex topics. Dedicated to advancing responsible emerging tech practices and thoughtful policy development.



