Every time we scroll, three powerful cognitive traps shape what we see—and what we share. Negativity bias means our brains cling to emotional, sensational stories, making them more memorable and shareable than balanced reporting. Confirmation bias pushes us to accept anything that fits our worldview, while dismissing evidence to the contrary. And the framing effect? It warps our perception depending on how a story is told—context outweighs facts. Together, these biases make us amplify what feels true rather than what is true. This isn’t a personal failing—it’s human nature, hardwired and universal.
Instagram’s design doesn’t just reflect our biases—it amplifies them. The visual-first feed bombards us with emotional content, letting sensational images and videos drown out nuance and context. Every like, comment, and share teaches the algorithm to show us more of what we already believe, deepening our echo chambers. With just one tap, sharing is instant—no speed bumps, no time to reflect. Headlines arrive stripped of source and context, so we don’t pause to question. And seeing a range of perspectives? The algorithm buries them. The outcome: a platform where emotional misinformation spreads like wildfire, outpacing the truth every time.
Research Pipeline
01
Platform Analysis
Instagram looks simple, but beneath the surface, powerful algorithms decide what we see and how we engage. To fight misinformation, I had to decode these invisible forces—and reveal why Instagram amplifies the wrong things.
02
Research
Surveys and real-world observation went beyond numbers—I dug into how users actually think, feel, and share. The result? A clear picture of the emotions and habits fueling every swipe and share.
03
Pain Point Mapping
All the research pointed to five big friction points—real gaps between user needs and what Instagram delivers. Each one became a target for bold, focused design.
04
Designing Solutions
Armed with sharp insights, I built five interactive prototypes—each tackling a pain point head-on. Every decision was grounded in research, not guesswork.
05
Usability Testing
Assumptions don’t count. I put my designs to the test with 13 users across five countries, gathering real feedback on what actually worked.
Driving at the
Heart of the Problem
I didn’t just rely on theory—I went straight to real users. I surveyed 30 active social media users who shared their honest struggles, revealing where design and human bias collide. Their stories exposed five friction points that became my blueprint for bold design solutions.
From Research
to Empathy
The research truly came alive when I met the real people behind the numbers. Three vivid user archetypes emerged—each with unique habits, frustrations, and motivations. Knowing who feels the pain is just as crucial as knowing where it hurts.
Name
Rohan
Age
22
User Behaviour
Primarily watches short videos (Reels) and skims posts.
Pain Points
Feels overwhelmed by the volume of information on his feed.
Overconfident in his ability to spot misinformation, though his skimming habits don’t support it.
Experiences decision fatigue trying to figure out what’s credible and what’s not.
User Needs
A way to quickly gauge the credibility of a post without leaving the feed.
A method to filter or manage the information overload.
Tools to help him slow down and reflect before sharing.
Design Implication
Develop clear, simple visual indicators of credibility accompanying posts and links.
Implement “friction” prompts before sharing.
Name
Paula
Age
28
User Behaviour
She follows diverse news sources on YouTube and dedicated news apps, and uses WhatsApp and Facebook for discussions. Sceptical of one-sided content and automated fact-checks.
Pain Points
Lack of time for a thorough investigation.
Struggles to distinguish opinion from fact.
Distrusts “black-box” solutions, including AI, that don’t explain their reasoning.
User Needs
Access to a range of perspectives.
Tools that provide context and transparency.
Design Implication
Design context overlays that summarise different viewpoints on a story.
Create an AI feature that explains why a post was flagged.
Add a feature that lets her compare multiple viewpoints at the same time.
Name
Giulia
Age
19
User Behaviour
She consumes news mainly through visual platforms like Instagram Reels and YouTube Shorts, and finds text-heavy articles unappealing.
Pain Points
Easily swayed by emotionally manipulative visual content.
Doesn’t know how to verify the source or context of an image or video.
Finds it difficult to engage with information that isn’t presented visually.
User Needs
An intuitive, built-in way to check the origin of visual media.
Contextual information attached directly to images and videos.
Design Implication
Develop pop-ups or expandable tags on viral images that provide source information and context.
Present fact-checks and additional perspectives.
01
Enhanced Credibility Labels
A bold, multi-layered labeling system instantly reveals whether a post or Reel is news, opinion, satire, AI-generated, or sponsored content, plus whether it’s verified, disputed, or under review, with secondary labels supporting the primary ones. These labels sit right below usernames, blending seamlessly into Instagram’s look, so users get critical context at a glance before they ever engage.
Cognitive Psychology Applied
Counteract Confirmation Bias: Bold labels—like ‘satire’ or ‘news’—act as instant visual speed bumps, stopping users from making snap judgments. These cues break the confirmation bias cycle before it can even begin.
UX Psychology Applied
F-Pattern Thinking: Users scan feeds in an F-shaped pattern, anchoring on the top-left of each post. Placing labels directly below usernames puts them in that first horizontal scan path, so the context registers before the content itself does.
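The two-tier label system described above can be sketched as a small data model. This is a hypothetical illustration; all names (`CredibilityLabel`, `renderLabel`, etc.) are my own, not Instagram's API.

```typescript
// Illustrative data model for the two-tier credibility labels.
// Primary labels classify the content type; optional secondary
// labels carry the verification status.

type PrimaryLabel = "news" | "opinion" | "satire" | "ai_generated" | "sponsored";
type SecondaryLabel = "verified" | "disputed" | "under_review";

interface CredibilityLabel {
  primary: PrimaryLabel;       // what kind of content this is
  secondary?: SecondaryLabel;  // current verification status, if any
}

// Builds the label string shown directly below the username.
function renderLabel(label: CredibilityLabel): string {
  const names: Record<PrimaryLabel, string> = {
    news: "News", opinion: "Opinion", satire: "Satire",
    ai_generated: "AI-generated", sponsored: "Sponsored",
  };
  const status: Record<SecondaryLabel, string> = {
    verified: "Verified", disputed: "Disputed", under_review: "Under review",
  };
  return label.secondary
    ? `${names[label.primary]} · ${status[label.secondary]}`
    : names[label.primary];
}
```

Keeping the secondary label optional mirrors the design: every post gets a content-type label, but verification status appears only once a review has produced one.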
02
Contextual Overlay
A slide-up panel triggered by a subtle icon, revealing source details, publication timeline, credibility metrics, and live updates. Users explore context on-demand without leaving their browsing flow. Progressive disclosure prevents information overload while enabling deep investigation for those who want it.
Cognitive Psychology Applied
Information Gap Theory: By showing a preview ("Learn more about this source"), we create curiosity and cognitive closure-seeking. Users choose to engage rather than being forced, increasing motivation and retention.
User Agency & Autonomy: Giving users control over when and how they see information builds psychological ownership. Users feel empowered to investigate rather than directed to comply.
UX Psychology Applied
Progressive Disclosure: Rather than overwhelming users with all context upfront, information appears only when requested. This respects cognitive capacity and prevents decision fatigue.
Contextual Continuity: The overlay slides up from the bottom—native to Instagram's interaction language. Users stay in the environment they know, maintaining flow state and comfort.
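The progressive-disclosure behaviour can be sketched as a tiny state holder: only the preview string exists up front, and source details are fetched when the user opts in. All names here (`ContextOverlay`, `SourceContext`) are assumptions for illustration.

```typescript
// Sketch of progressive disclosure: the feed renders only a one-line
// preview; full context loads on demand when the user taps the icon.

interface SourceContext {
  source: string;
  published: string;
  credibilityScore: number; // 0–100
}

class ContextOverlay {
  private expanded = false;

  constructor(private fetchContext: () => SourceContext) {}

  // The only thing shown by default — an invitation, not an interruption.
  preview(): string {
    return "Learn more about this source";
  }

  // Context is fetched only when requested, respecting cognitive capacity.
  open(): SourceContext {
    this.expanded = true;
    return this.fetchContext();
  }

  isOpen(): boolean {
    return this.expanded;
  }
}
```

Deferring the fetch to `open()` is the design choice that matters: the user, not the system, decides when the extra cognitive load arrives.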
03
Perspective Comparison
Instantly spot editorial bias as a pattern, not an accusation. A swipeable carousel reveals how news gets spun from every angle—Left, Right, and everything in between. It’s a tool for real comparison, opening minds instead of reinforcing echo chambers.
Cognitive Psychology Applied
Challenge Cognitive Shortcuts: Seeing multiple framings disrupts the brain's tendency to accept the first frame as "truth." Users can't automatically dismiss competing perspectives once they're visually present.
Perspective-Taking & Empathy: Understanding why others see things differently builds cognitive empathy. Exposing users to diverse framings—without judgment—develops nuanced thinking patterns.
UX Psychology Applied
Visualisation Over Instruction: Instead of warning users about bias (which triggers defensiveness), we show them how different sources frame the same story. Seeing patterns teaches more than telling.
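A minimal sketch of the carousel's data model, assuming each story carries several framings tagged by editorial leaning (the `Framing` and `orderForCarousel` names are hypothetical):

```typescript
// Each card in the swipeable carousel is one outlet's framing of the
// same story, tagged with an editorial leaning.

type Leaning = "left" | "center" | "right";

interface Framing {
  outlet: string;
  leaning: Leaning;
  headline: string;
}

// Orders framings across the spectrum so swiping moves
// left → center → right, showing bias as a pattern, not an accusation.
function orderForCarousel(framings: Framing[]): Framing[] {
  const rank: Record<Leaning, number> = { left: 0, center: 1, right: 2 };
  return [...framings].sort((a, b) => rank[a.leaning] - rank[b.leaning]);
}
```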
04
Evidence Panel
Transparent disclosure explains the reasoning behind a post’s credibility assessment: which sources informed the decision, what fact-checking processes were applied, and how the score was calculated. This turns algorithmic transparency into user confidence.
Cognitive Psychology Applied
Transparency Builds Trust: By showing the "why" behind decisions, we give users the foundation to evaluate the evaluation itself. Transparency is the antidote to algorithmic anxiety.
Demystify Complex Systems: Explaining reasoning in plain language makes AI comprehensible to all users, regardless of technical literacy. Accessibility and trust go hand-in-hand.
UX Psychology Applied
Perceived Control: Studies show that perceived control—knowing you can verify something—increases trust even when users don't fully analyze the details. Transparency enables control.
Mental Models & Understanding: Users form mental models of how systems work. Clear explanations allow users to build accurate models rather than filling gaps with speculation or distrust.
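The disclosure described above implies a simple transparency payload: the score plus everything that produced it. A hedged sketch, with all names (`EvidencePanel`, `summarize`) invented for illustration:

```typescript
// Hypothetical shape of the evidence panel's transparency payload:
// every field that informed the credibility assessment travels with it.

interface EvidencePanel {
  score: number;        // the credibility score shown to the user
  sources: string[];    // which sources informed the decision
  factChecks: string[]; // fact-checking processes that were applied
  reasoning: string;    // plain-language explanation of the score
}

// Produces the brief, plain-language summary users see first.
function summarize(panel: EvidencePanel): string {
  return (
    `${panel.score}/100 based on ${panel.sources.length} sources and ` +
    `${panel.factChecks.length} fact-checks: ${panel.reasoning}`
  );
}
```

Shipping the `reasoning` string alongside the raw inputs is what lets users "evaluate the evaluation" rather than trust a bare number.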
05
Friction Prompts For Sharing
A 10-second delay and a warning message appear when users attempt to share flagged content. The share button is temporarily disabled, with options to cancel, review additional context, or proceed with sharing. This approach encourages a moment of reflection without being punitive.
Cognitive Psychology Applied
Loss Aversion & Motivation: Highlighting potential harm ("unverified claims") activates loss aversion—users feel stronger motivation to avoid spreading falsehood than to share quickly.
Behavioural Nudges: Research shows even brief delays significantly reduce misinformation spread by giving users time for critical evaluation. Small friction enables big behavioural shifts.
UX Psychology Applied
Deliberate Decision-Making: One-click sharing enables automatic impulses. Adding a brief pause forces users to shift from impulsive to deliberative thinking.
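The friction mechanism above is essentially a tiny state machine: the share action stays disabled until the delay elapses, while cancel and "See Context" remain available throughout. A sketch under assumed names (`FrictionPrompt`, `FLAG_DELAY_MS`); the injectable clock is just for testability:

```typescript
// Sketch of the 10-second friction prompt for flagged content.

type PromptAction = "cancel" | "see_context" | "share";

const FLAG_DELAY_MS = 10_000; // the mandatory 10-second delay

class FrictionPrompt {
  private readonly shownAt: number;

  // Injectable clock makes the delay testable without real waiting.
  constructor(private readonly now: () => number = Date.now) {
    this.shownAt = this.now();
  }

  // The share button is disabled until the delay has elapsed.
  shareEnabled(): boolean {
    return this.now() - this.shownAt >= FLAG_DELAY_MS;
  }

  // Cancel and "See Context" are always available; sharing only after
  // the reflection window — friction, not prohibition.
  resolve(action: PromptAction): PromptAction {
    if (action === "share" && !this.shareEnabled()) {
      throw new Error("Share is disabled during the reflection delay");
    }
    return action;
  }
}
```

Note that `resolve("share")` still succeeds once the delay passes: the prompt slows the impulse without removing the user's choice, which is why it reads as a nudge rather than a punishment.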
Credibility Labels
User Testing Result
100% accuracy in content classification.
On a 5-point scale, trust ratings improved from 2.8 to 3.0.
Key Insight
All participants correctly identified satirical content after labels appeared.
Label placement optimization: 54% preferred labels below usernames (Option A) rather than below posts (Option B).
Context Overlay
User Testing Result
Increased users’ trust rating for the post to 3.6 out of 5.
100% of users deemed it useful.
Average engagement was 36 seconds, indicating genuine exploration.
Key Insight
100% of participants found the overlay useful. Time spent indicates genuine exploration, not superficial clicking.
Perspective Overlay
User Testing Result
Users rated it 4.5 out of 5 for helping them understand different perspectives.
77% of users said they were likely to use this feature because it helped them form better opinions.
Average user engagement was 48.9 seconds.
Key Insight
Longest interaction time of all features.
Participants actively analysed different political framings and ideologies.
Evidence Panel
User Testing Result
77% of users discovered the panel on their own while exploring the prototypes.
Participants appreciated brief, clear explanations of how the AI assigns ratings, generates descriptions, and so on.
Key Insight
Participants appreciated brief, clear explanations.
This confirmed that transparency transforms algorithmic opacity into user confidence.
Friction Prompts
User Testing Result
69% of users chose to “See Context” instead of sharing the flagged post immediately.
Average time with the prompt: 40.4 seconds, 4x the mandatory delay, indicating genuine reflection.
Key Insight
Users spent 4x the mandatory 10-second delay with the prompt to get more information about the flagged content.