UX Research · Mercari · 2023

Sometimes you need to
build your own compass.

Mercari was planning major product changes with no consistent way to measure whether the experience was getting better or worse. I built the program that changed that — from a blank page to a quarterly VOC baseline used across product, data science, and marketing.

Role: Senior UX Researcher
Methods: Survey · Benchmarking
Outcome: Quarterly Program · Looker Dashboard
+22% · Search success rate after UX improvements

+13% · Orders in previously underperforming categories

50%+ · Of the company attended the inaugural Lunch & Learn

The Problem

Big bets.
No baseline.

Mercari's leadership was excited about experimentation — new features, repositioned audiences, significant changes to the information architecture. What wasn't there: any consistent way to measure whether the experience was improving over time. Without a baseline, every research study was a snapshot with no context. I proposed building the program from scratch.

Building the Program

Auditing what existed.
Designing what didn't.

I started by auditing existing measurement across the org — what was collected, where it lived, who owned it. The gaps were significant. From there I co-created a research plan with stakeholders across product, design, data science, and marketing, centering on five core measures:

1. Product–market fit
2. Ease of use
3. Ease of navigation
4. User motivations
5. Competitor preferences

Once data was flowing, I worked with engineering to pipe results into Looker — giving the whole organization a live view of UX health. Getting buy-in required as much work as the research itself: gathering requirements from PMs, designers, analysts, and leadership, and building shared agreement on what we were measuring and why.

What the Data Found

Search had been a hunch.
Now it was a fact.

Prior qualitative work had flagged search as a pain point — but qualitative signals are easy to deprioritize without numbers behind them. The benchmarking data changed that. For the first time, we had quantitative evidence that let us move from "users seem frustrated with search" to a precise, defensible problem statement the product team could act on.

1. Saved search was the biggest friction point

Users couldn't easily differentiate new listings or prioritize their most important searches. Redesigning the experience based on benchmarking recommendations increased search success rate by 22%.

2. Search results pages were underselling good inventory

I recommended adapting result density by category and clustering by predicted relevance. The experiments that followed increased orders by 13% in previously underperforming categories.

3. The lite listing feature hadn't made listing easier

I had been moved to a different team before launch, but I ran a post-launch benchmarking study on my own initiative. The data showed the feature hadn't landed — open-ended responses suggested it may have made the experience harder. That finding became the brief that drove a redesign in Mercari JP, resulting in a significant increase in listings from new sellers. Mercari US later adopted the same approach.

Impact

From scrappy to
the connective tissue.

The first report launched at a company-wide Lunch & Learn attended by over half of Mercari. Each quarter brought new collaborators — data scientists, customer success, and eventually a brand tracking partnership with Harris Poll. What started as a measurement gap became a shared source of truth that cross-functional teams actually used.

"We went from scrappy and haphazard experience measurement to a robust and rich system, which Megan is fully credited with."
— Thea Lee, Senior UX Research Manager, Mercari
Benchmarking · Survey Design · Quantitative Research · VOC Program Design · Looker · Stakeholder Alignment · Cross-functional Collaboration