Amazon x BrainStation Industry Project
Designing a human-in-the-loop verification system for Amazon to combat fake reviews
Role
Product Designer
Timeline
September 2024 (24 hours)
Team
3 Engineers
1 Cybersecurity Specialist
3 Data Analysts
2 Designers
Skills
UX Research
Competitive Analysis
Ideation
Prototyping
Overview
Millions of fake reviews were slipping through, and Amazon needed a scalable solution.
In a 24-hour hackathon for Amazon, my team tackled one of e-commerce's biggest challenges: the epidemic of fake reviews eroding consumer trust. Our goal was to design a scalable, privacy-conscious solution that leverages Amazon's existing assets to engineer a new standard of trust.
As the Design Lead, I was responsible for user research synthesis, journey mapping, interaction design, prototyping, and presenting our solution to the two judges.
Our team's human-AI synergistic approach won 1st place, praised for its innovation and immediate feasibility.
Both reviews come from verified purchases, but their quality differs vastly. One offers a low-effort, unhelpful comment, while the other provides verifiable detail, including an image and a specific account of the experience.
The trust problem
Our research showed that current systems caught bots but could neither identify nor incentivize genuinely helpful human reviews.
Shoppers spend their time cross-referencing reviews, looking for verified purchase badges, and questioning authenticity instead of evaluating product quality.
This erosion of trust directly impacts Amazon's bottom line: hesitant buyers abandon carts, and legitimate sellers are undermined by fraudulent competitors.
How might we leverage human feedback to train AI systems to better identify authentic reviews, creating a more trustworthy ecosystem for everyone?
uncovering the gap
We asked: Could Amazon's own elite reviewers be the key to training a better AI?
We conducted a rapid competitive analysis of trust systems on platforms like Yelp (Elite Squad). While many rely on human moderators, none had a closed-loop system where expert feedback directly trains the AI model that then surfaces better content.
We began by reverse-engineering Amazon's current review submission and consumption experience to identify strategic entry points for our solution.
Key Insight: The existing Vine Voice community itself can elevate genuinely helpful content, creating a new trust signal that goes beyond the "Verified Purchase" badge.
We created Amira, a seasoned Vine Voice member, to personify this insight and define the ideal user journey. Her ability to spot authentic, high-quality feedback became the guiding principle for our system design.
a symbiotic system
Our breakthrough was to design a "Human-in-the-Loop" system, turning Vine Voices into certifiers.
The core concept creates a virtuous cycle: Vine members efficiently validate AI-flagged reviews, and their judgments train the AI to become more accurate, continuously improving review integrity.
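To make the loop concrete, here is a minimal sketch of how it could be wired. Every name in it (the AuthenticityModel stub, the confidence threshold, the run_cycle function) is an illustrative assumption, not a description of Amazon's actual infrastructure.

```python
# Minimal sketch of the human-in-the-loop cycle. All names and thresholds
# are illustrative assumptions, not Amazon's actual systems.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Review:
    review_id: str
    text: str

@dataclass
class Verdict:
    review_id: str
    is_authentic: bool        # Vine Voice's binary judgment
    reasoning: Optional[str]  # optional free-text rationale

class AuthenticityModel:
    """Stub standing in for the review-authenticity classifier."""
    def score(self, review: Review) -> float:
        return 0.5  # placeholder confidence that the review is authentic

    def retrain(self, verdicts: list[Verdict]) -> None:
        print(f"Retraining on {len(verdicts)} human-labeled examples")

def run_cycle(reviews: list[Review], model: AuthenticityModel,
              verdicts: list[Verdict]) -> list[Review]:
    # 1. The AI flags low-confidence reviews for human verification.
    queue = [r for r in reviews if model.score(r) < 0.7]  # threshold is an assumption
    # 2. Vine Voices work through the queue in the dashboard, producing verdicts.
    # 3. Their verdicts become labeled data that retrains the model, closing the loop.
    if verdicts:
        model.retrain(verdicts)
    return queue
```

The point of the sketch is that each Vine verdict does double duty: it resolves a flagged review today and becomes a labeled training example for tomorrow's model.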
sketching the experience
I sketched a workflow that was fast and rewarding, mirroring how experts naturally evaluate content.
Early exploration of how to integrate the verification task into a Vine user's existing dashboard.
Design Decision 1
The Verification Mechanism
We debated several models, from a 5-star rating to a binary choice, and landed on the quick binary choice (authentic / not authentic) followed by a text field for reasoning. This mirrored how Amira naturally evaluates reviews and minimized cognitive load.
Design Decision 2
Reward System
We explored different reward models, monetary versus social recognition, and combined them for maximum effect: a small instant credit for completing tasks and a profile badge that showcases a Vine Voice's status as a 'Verifier.'
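A rough sketch of how the two streams could combine is below; the credit amount, badge name, and account fields are all invented for illustration.

```python
# Illustrative only: the per-task credit and the account fields are
# assumptions, not confirmed reward values from the project.
from dataclasses import dataclass, field

@dataclass
class VerifierAccount:
    vine_id: str
    credit_balance: float = 0.0
    badges: set[str] = field(default_factory=set)

def reward(account: VerifierAccount, tasks_completed: int) -> None:
    account.credit_balance += 0.50 * tasks_completed  # small instant credit per task
    if tasks_completed > 0:
        account.badges.add("Verifier")                # social recognition on the profile
```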
We used Amazon's existing design system to ensure our feature felt native, minimizing the learning curve for existing Vine users.
the final solution
The solution was a dashboard for Vine Voices and a new "Vine-Verified" badge for shoppers.
Feature 1: The Verification Dashboard
This tool was designed to leverage Vine Voice expertise at scale. I created an efficient workflow that strips away unnecessary and confidential information (e.g., reviewer identities) to ensure judgments are based purely on content quality. The simple binary choice allows for rapid verification, while the optional feedback field captures the nuanced reasoning that makes their input valuable for training our AI.
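As a sketch of that anonymization step, the task handed to a verifier could be a content-only projection of the raw review; the field names below are hypothetical.

```python
# Hypothetical field names; the point is that reviewer identity never
# reaches the verifier, only the content to be judged.
from dataclasses import dataclass

@dataclass
class RawReview:
    review_id: str
    reviewer_id: str          # confidential; never shown to verifiers
    reviewer_name: str        # confidential; never shown to verifiers
    text: str
    image_urls: list[str]

@dataclass
class VerificationTask:
    review_id: str
    text: str
    image_urls: list[str]     # content only, so judgments rest on quality

def to_task(review: RawReview) -> VerificationTask:
    # Strip identity fields so verification is based purely on content.
    return VerificationTask(review.review_id, review.text, review.image_urls)
```

Keeping identity out of the task also keeps the eventual training labels tied to content signals rather than to who wrote the review.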
Pivot: To meet the dev timeline, we adjusted the scope to include the dashboard, verification options, and a text box. The multi-review feature I had proposed in my initial designs was cut.
Feature 2: The Incentive System
To help consumers quickly identify trusted contributors and recognize Vine Voices for their work, a new 'Verified Guardian' badge appears on Vine Voice profiles, signifying their role in upholding community trust.
Feature 3: The Shopper-Facing Result
On the product page, reviews that have been verified by multiple Vine Voices receive a special 'Vine-Verified Authentic' badge, giving shoppers an immediate, trusted signal of quality.
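The shopper-facing rule itself is simple to express; the threshold of three agreeing verdicts below is an assumption made for illustration, not a number from the project.

```python
# Sketch of the badge rule: a review earns the "Vine-Verified Authentic"
# badge once enough independent Vine Voices judge it authentic.
def is_vine_verified(verdicts: list[bool], required_agreements: int = 3) -> bool:
    return sum(verdicts) >= required_agreements

# Example: three agreeing verdicts earn the badge.
assert is_vine_verified([True, True, True]) is True
assert is_vine_verified([True, False, True]) is False
```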
The result was a self-improving system that boosted trust for shoppers and gave Vine Voices a new purpose.
Key learnings
Getting aligned first let us build faster and smarter later.
Ideation is an investment, not a delay.
We dedicated nearly 6 hours of our 24-hour sprint to research and collaborative ideation. That upfront investment was necessary to discover a solution that truly worked for both the user and the business, ensuring it was not just usable but aligned with Amazon's model from the start.
Engineering collaboration isn't just for handoff; it's how feasibility gets built in.
We had to make trade-offs and prioritize features to fit the technical scope. Preparing the Figma file for clarity and staying open to engineering feedback was crucial to delivering a solution that was both buildable and presentable.