Overview
In an age where false information spreads farther and faster than truth, Instagram users lack the tools to evaluate content critically. This project demonstrates how thoughtful design interventions can help users make informed decisions without removing their freedom or disrupting their experience.
I designed five research-backed features to curb the spread of misinformation, validated through usability testing with 13 participants across five countries for broader perspective; in testing, the features reached a discovery rate of up to 77%.
03. The Problem
When Design Fuels the Spread of Misinformation
Social media platforms are expertly engineered to keep us hooked, not informed. Every day, billions scroll through news on Instagram, yet the platform provides no tools to help users question what they see. This isn’t just a missing feature—it’s a deliberate design decision that allows misinformation to spread rapidly and unchecked.
01
The Confidence Trap: When We Trust Ourselves Too Much
Challenge
Most users are confident in their ability to spot fake news—but the data paints a very different picture.
Gap
60% of users say they're confident in detecting false information.
Only 50% actually read headlines.
36.7% focus primarily on images and videos.
The result? Overconfidence becomes a launchpad for misinformation, making users more vulnerable than they realise. This paradox is dangerous. People feel informed, but they’re actually reacting to content in three-second bursts—the prime window for emotional manipulation and viral falsehoods.
Why It Matters?
When people overestimate their ability to spot deception, they let their guard down. They share impulsively, believe too quickly, and rarely double-check. The platform’s design doesn’t just allow this—it exploits it.
02
The Overload Trap: Drowning in Content
Challenge
Social media feeds are infinite rabbit holes of content. The endless flood of information creates decision paralysis and mental fatigue.
Reality
76.7% of users feel overwhelmed by information volume.
80% of young adults (18-29) access news via social media daily.
40% of U.S. Instagram users rely on the platform as a primary news source.
Average post viewing time: 3 seconds (critical intervention window, largely wasted).
Solutions Users Crave
93.4% want additional context for posts.
96.6% want to see multiple perspectives on the same story.
76% have encountered false information at least once.
Yet Instagram provides none of these tools.
Why It Matters?
Platforms are engineered for engagement, not understanding. Emotional posts travel 1.91 times as fast as the truth. For the algorithm, this is a victory. For users, it's chaos.
03
Echo Chambers: How Algorithms Build Invisible Barriers
Challenge
Instagram’s algorithm doesn’t surface what matters—it serves what keeps thumbs moving. This results in invisible filter bubbles where users only see content that confirms what they already believe.
Consequences
Users never encounter opposing viewpoints naturally.
Confirmation bias is algorithmically amplified.
Complex issues become polarized narratives.
Users' mental models of reality diverge from actual reality.
The Real Cost
During recent geopolitical crises (India-Pakistan 2025, Iran-Israel 2025), echo chambers did more than spread misinformation—they fanned the flames of conflict. State actors flooded platforms with coordinated false narratives, ensuring users in targeted regions saw only one side of the story. Informed civic participation became nearly impossible.
Why It Matters?
96.6% want to see how different sources frame the same story. They want perspective diversity. But algorithmic feeds are architecturally opposed to this.
04
The Black Box Dilemma: Why AI Fact-Checking Fails to Build Trust
Challenge
70% of users say they’d welcome AI help to spot misinformation—but only one in five actually trust AI to do it fairly.
Why the Gap Exists
Users fear algorithmic bias and hidden agendas.
"Fact-checking by AI" feels opaque and unaccountable.
No transparency into how decisions are made.
Existing fact-checking systems are inconsistently applied.
Users worry: "Who controls the truth? Who decides what's true?"
Why It Matters?
When platforms moderate content in the shadows, users assume the worst: bias, hidden agendas, and corporate overreach. Trust evaporates—not just in the tool, but in the entire institution.
05
How Misinformation Goes Viral: Platforms Designed for Speed, Not Truth
Challenge
Sharing takes a single tap. Fact-checking takes effort. No contest—which is why misinformation always has the head start.
Real-world implications
During the Iran-Israel conflict:
AI-generated images of "military attacks" circulated within minutes.
Journalists couldn't verify authenticity fast enough.
Thousands of retweets before corrections.
Damage to public trust, diplomatic understanding, and conflict de-escalation.
Why It Matters?
Instagram makes it effortless to share—but nearly impossible to verify. That’s no accident; it’s economic optimization in action. Emotional, viral content fuels engagement, even when it isn’t true.
Research & Strategy
Methodology
I followed a structured approach:
Understanding Context – Conducted surveys (30 participants), observational research, and platform analysis using the Vision-in-Product (ViP) methodology.
Identifying Requirements – Synthesized data into 5 core pain points and 3 user personas (The Overwhelmed Skimmer, The Cautious Analyst, The Visual Learner).
Designing Solutions – Created five targeted interventions addressing each pain point.
Evaluating Impact – Conducted remote usability testing on the Maze platform with 13 participants across 5 countries.



