Turning a Knowledge Base into a Product —
A Research-Led Transformation at Ozon
When I took over responsibility for Ozon's Seller Knowledge Base, I quickly realized we were missing an opportunity: we had a content platform, but not a product. This case study walks through how we shifted from a publishing mindset to a product-driven approach. Using both qualitative and quantitative research, we redesigned the experience to improve content discovery, reduce support load, and actually help sellers solve real problems.
Why we had to rethink everything
The Ozon Seller Knowledge Base was already well maintained. It had hundreds of articles, regular updates, and even its own CMS. But there was a problem: sellers still preferred contacting support. Even when articles existed, they didn’t always lead to resolution.
So we asked ourselves:
  • Why do sellers turn to support when answers are in the KB?
  • How do they search, and where do they drop off?
  • Can we make the KB not just usable, but useful?
This mindset shift was critical: we stopped treating the knowledge base as just a content repository. Instead, we started thinking about it as a product.
Step 1: Finding the data we already had
Our first move wasn't to rewrite articles. It was to look at the research that already existed. Several product teams had conducted their own studies around onboarding, promotions, and seller experience. We synthesized findings from six previous studies.
What we found:
  • Promotions and advertising are the most common pain points. Sellers want to act fast and need simple, actionable guidance.
  • Onboarding and first-sale steps confuse both new and experienced sellers.
  • Tariffs, commissions, and logistics are relevant across all lifecycle stages.
These findings helped us prioritize content topics. But they also pointed to a more fundamental issue: sellers couldn’t find what they needed, even when it existed.
Step 2: Researching the real user experience
To validate and expand on previous insights, we launched a two-track research initiative:
  • A quantitative survey with 300+ respondents (138 completed it fully)
  • A series of moderated usability tests and in-depth interviews across desktop and mobile
We didn’t just ask whether users liked the KB. We traced how they searched for information, where they gave up, and how they interacted with structural elements like cards, search results, and toggles.
Search and suggestions
We identified two distinct behaviors in how sellers use search:
  • Confident users ignored search suggestions and typed full queries based on their own vocabulary.
  • Uncertain users heavily relied on suggestions, iteratively tweaking keywords until relevant results appeared.
But both groups hit friction:
  • Exact-match search was brittle against Russian's rich inflection: a query in one word form often missed articles written in another.
  • Low relevance and mismatched mental models surfaced irrelevant articles.
On mobile, suggestions were often hidden or overlooked due to UI issues such as keyboard overlap or low visual prominence.
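The inflection problem above can be illustrated with a toy normalizer. This is a sketch only: the suffix list and function names are hypothetical, and a production system would use a proper Russian lemmatizer rather than naive suffix stripping.

```python
# Toy query normalizer: strips common Russian inflectional endings so that
# different word forms of the same term map to one stem. Illustrative only;
# a real search pipeline would use a full morphological analyzer.

SUFFIXES = [
    "иями", "ями", "ами", "иях", "ях", "ах",
    "ией", "ий", "ия", "ии", "ию",
    "ов", "ев", "ей", "ой", "ом", "ам",
    "ы", "и", "а", "я", "у", "ю", "е", "о",
]

def normalize_token(token: str) -> str:
    """Strip the longest matching suffix, keeping a stem of at least 4 chars."""
    token = token.lower()
    for suffix in sorted(SUFFIXES, key=len, reverse=True):
        if token.endswith(suffix) and len(token) - len(suffix) >= 4:
            return token[: -len(suffix)]
    return token

def normalize_query(query: str) -> list[str]:
    """Normalize every token in a whitespace-separated query."""
    return [normalize_token(t) for t in query.split()]
```

With this in place, "комиссия" (nominative) and "комиссии" (genitive) normalize to the same stem, so an article indexed under one form is reachable from a query in the other.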
Card-based navigation
When sellers didn’t know what to ask, they used the homepage cards. But the card structure revealed several issues:
  • Vague or unfamiliar section titles deterred exploration.
  • Overly broad clusters of articles forced sellers to read through too much text to find the answer.
Article structure
Even when sellers landed on the right article, they often struggled to extract value:
  • Many missed key information hidden inside toggles or long paragraphs.
  • Most scanned using Ctrl+F, headings, or visual cues — and were frustrated when those didn’t match their query.
  • Sellers consistently requested step-by-step formats, screenshots, and video examples.
This exposed the gap between how the knowledge base was structured (by internal logic) and how users actually looked for help (by tasks, errors, or business goals).

Step 3: Treating the knowledge base as a UX product
We approached redesign like a product team: combining UX principles with product metrics. Our framework had three layers:
1. Content as UX Surface
We treated articles as interfaces. Instead of abstract explanations, we redesigned content to match common user tasks:
  • Each article became a mini-flow with context, clear steps, edge cases, and embedded visuals
  • Introduced a "task-first" writing standard: start with the seller's goal, then break it down
  • Added screenshots, callouts, and section anchors to support fast scanning
2. Navigation as Journey Mapping
We mapped out how different seller types (newcomers, scaling sellers, operational teams) moved through the knowledge base:
  • Restructured the home page to reflect job-to-be-done categories
  • Redesigned cards to align with entry points by intent: e.g. "Upload a product" or "Set up FBO"
  • Improved in-article navigation with sticky tables of contents and collapsible blocks
We also removed ambiguous labels and rewrote card descriptions to preview article contents, reducing false clicks.
3. Measurement as Feedback Loop
We introduced a product metric layer to track effectiveness:
  • Search-to-click rate: are people clicking on results?
  • Bounce rate: are they getting what they came for?
  • Completion signals: did users engage with toggles, anchors, or links?
  • Deflection analysis: did users who viewed the article still contact support?
These metrics helped us prioritize updates, run A/B tests on formats, and make the KB evolve based on real behavior — not just editorial intent.
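The metric layer above can be sketched as a small aggregation over an event log. The event names and log shape here are illustrative assumptions, not Ozon's real telemetry schema.

```python
# Hypothetical event log: (seller_id, event) tuples. Event names are
# invented for illustration.
events = [
    ("s1", "search"), ("s1", "result_click"), ("s1", "article_view"),
    ("s2", "search"),                                   # searched, never clicked
    ("s3", "search"), ("s3", "result_click"),
    ("s3", "article_view"), ("s3", "support_contact"),  # article didn't deflect
]

def search_to_click_rate(events):
    """Share of sellers who searched and then clicked a result."""
    searched, clicked = set(), set()
    for seller, event in events:
        if event == "search":
            searched.add(seller)
        elif event == "result_click":
            clicked.add(seller)
    return len(clicked & searched) / len(searched)

def deflection_failure_rate(events):
    """Share of article viewers who still contacted support."""
    viewed, contacted = set(), set()
    for seller, event in events:
        if event == "article_view":
            viewed.add(seller)
        elif event == "support_contact":
            contacted.add(seller)
    return len(contacted & viewed) / len(viewed) if viewed else 0.0
```

Run over the sample log, two of three searchers click a result, and one of the two article viewers still contacts support, which is exactly the kind of per-topic signal that flags an article as not deflecting.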
What changed
After rolling out the first round of improvements:
  • Support contacts on well-covered topics dropped
  • More sellers completed tasks without leaving the KB
  • We saw improved engagement with in-article navigation and visuals
We also improved seller sentiment. Open responses in feedback forms started referencing clarity, speed, and structure — the same issues that originally sent them to support.
Final Thoughts
The biggest takeaway? Content isn’t just support — it’s UX.
By treating the knowledge base like a product, not a documentation archive, we helped sellers become more self-sufficient. We reduced frustration and made support more scalable. And we created a feedback loop that continues to evolve based on real user behavior.
If you’re working on a knowledge product, don’t just ask: "Is the information correct?" Ask: "Can people find it, understand it, and act on it without giving up?"