
Article Helpfulness Survey

The Article Helpfulness Survey was an important source of customer feedback on Mozilla’s Support website, capable of revealing when help articles needed changes. So why wasn’t the Content team taking advantage of the insights it provided? I set out to discover why the Helpfulness Survey was so…unhelpful, and what could be done to maximize the quality of the feedback gathered from it.

  • Duration: Oct 2024 - Mar 2025
  • Client: Mozilla, Customer Experience
  • Role: Primary UX Designer
Team:
  • Cindi Jordan, Staff Program Manager
  • Donna Kelly, Content Strategist
  • Ryan Johnson, Staff Software Engineer

Problem

The existing version of the Helpfulness Survey did gather customer feedback about articles, but those signals were muddled by a mixture of misplaced product feedback, support requests, and personally identifiable information (PII) included in submissions. Additionally, the dashboards that displayed the survey metrics were cumbersome to use and offered few options for moving from broad overviews to more focused segments of data. These issues ultimately detracted from the value the survey provided and did little to inform the Content team’s workflows.

Solution

We made intentional changes to the selectable feedback reasons, focusing them on specific aspects of an article’s quality like clarity, accuracy, and imagery. With these text updates, my project team and I sought to provide the Content team with more actionable customer insights.

Additionally, the survey dashboards were updated to accommodate newly captured data. Improved filter and segmentation tools allowed the Content team to “change altitudes” more easily, from spotlighting the impact of changes to individual articles to widening the focus and identifying trends that coincided with product update releases.

Results

The new survey launched in January 2025, and although the majority of survey-takers did not submit a feedback reason with their helpful or unhelpful votes (40.21% and 33.38% of all votes, respectively), the results showed users were finding articles more helpful than not. Article provided the information I needed was the most common reason, at 5.34% (51,904 submissions), with Article is easy to understand next at 5.2% (50,525 submissions). Overall, around 25% of voters included a reason with their vote, offering valuable content feedback, though this was down from 41% when the survey offered only negative reasons. It was clear there was an opportunity to iterate further in future versions of the survey. Some other positive outcomes we saw post-launch (with the underlying arithmetic sketched after the list) included:

~25.98% reduction in personal emails included with submissions
79.76% increase in total votes (from 540,409 to 971,441)
1.06% survey engagement rate across over 72 million KB article page views
Time span comparison: Jul 21, 2024 - Jan 21, 2025 vs. Jan 22 - Jun 22, 2025
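
That arithmetic is simple enough to verify directly from the numbers quoted above. The sketch below does just that in Python; it queries no live data and assumes the feedback-reason percentages are shares of the 971,441 post-launch votes.

```python
# Minimal arithmetic check using only the figures quoted in this write-up.
votes_before = 540_409   # Jul 21, 2024 - Jan 21, 2025
votes_after = 971_441    # Jan 22 - Jun 22, 2025

increase = (votes_after - votes_before) / votes_before * 100
print(f"Increase in total votes: {increase:.2f}%")   # ~79.76%

# Shares of the two most common feedback reasons, relative to all post-launch votes
reasons = {
    "Article provided the information I needed": 51_904,
    "Article is easy to understand": 50_525,
}
for reason, count in reasons.items():
    print(f"{reason}: {count / votes_after * 100:.2f}%")   # ~5.34% and ~5.20%
```
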
Measure

Different metrics tell different stories

To establish the survey’s baseline performance, I did some digging in Google Analytics and found that between July 1 and November 1, 2024, Knowledge Base articles (across all locales) accounted for 76.94% of all page views on the support website.

60,087,784 views of pages with /kb/ in their URL vs 78,094,555 total views of any page on support.mozilla.org

In the same time span, there were 289,000 article_vote events recorded. Using these metrics, I calculated an overall engagement rate for the survey:

(289,000 votes / 60,087,784 KB article page views) * 100 = 0.48% survey engagement rate
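
For reference, here is the same calculation in Python, using only the Google Analytics figures quoted above (nothing here calls the Analytics API):

```python
# Baseline per-article engagement rate, Jul 1 - Nov 1, 2024
kb_page_views = 60_087_784   # views of pages with /kb/ in the URL
article_votes = 289_000      # article_vote events in the same span

engagement_rate = article_votes / kb_page_views * 100
print(f"Survey engagement rate: {engagement_rate:.2f}%")   # ~0.48%
```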

So the survey was getting some engagement, just not a significant amount. I put out an inquiry to other teams within Mozilla to get an idea of their experience with survey engagement rates. Discussing the topic with a manager on the Firefox Quantitative User Research team, I learned that form factor and context mattered a lot. In her experience, one-question, in-product surveys had a higher opt-in rate than desktop surveys hosted on web pages. Meanwhile, click-through on email surveys was generally very low, around a fraction of a percent.

I think your calculation is the obvious and correct one if you want to understand opt-in on a per article basis […] But a per-user vote will also help you understand what percent of visitors votes.
Rosanne - Manager, Firefox Quantitative User Research

She suggested calculating the rate of votes per user to get a sense of users’ voting habits. If it was much higher than the per-article vote rate, that meant users might visit a lot of articles before voting on one or a few (theoretically the one that finally answered their problem). If it was much lower, that meant fewer users voted, but those who did voted on many articles. Calculating the rate of votes per user over the same time span:

(271,000 unique users who voted / 7.9M average Monthly Active Users) * 100 = 3.43% of all active users vote in a month
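
Here is the per-user check in the same form, again using only the rounded figures quoted above, alongside the comparison to the per-article rate that Rosanne suggested:

```python
# Per-user vote rate over the same span, using the rounded figures above
unique_voters = 271_000
monthly_active_users = 7_900_000   # ~7.9M average MAU

per_user_rate = unique_voters / monthly_active_users * 100
print(f"Active users who vote in a month: {per_user_rate:.2f}%")   # ~3.43%

# This is much higher than the ~0.48% per-article rate, which fits the reading
# that users view several articles before voting on one or a few.
```
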
Strategy

(Signal) Quality over (signal) quantity

My project team and I were interested in monitoring engagement rate, but focusing solely on the number of overall votes wasn’t going to solve the core issues the Content team had with feedback gathered from survey submissions:

1. Misplaced product feedback and support requests being submitted through the “I have feedback about Firefox” option and the “Tell us more” open text box.
2. Personally identifiable information being included with survey submissions.

These issues meant only some of the data resulting from the current survey provided actionable insights. Our primary goal became improving the feedback options in step 2, after the initial Yes/No vote, so that the Content team could understand what specific parts of an article users found unhelpful and make informed decisions about changes. Consulting with stakeholders, I worked through several iterations of the feedback reasons so they focused on the quality and accuracy of an article’s content.

Adding a step 2 for the “helpful” vote path could provide insights into what was working, potentially validating content strategies, like the recent Cognitive Load Reduction project, and encouraging the team to apply the same approach to other articles with lower helpfulness scores.

[Screenshots: Not Helpful vote step 2 (Old) / Not Helpful vote step 2 (New) / Helpful vote step 2 (New)]
1. Submissions with the inaccuracy option selected might indicate an issue with the article’s recency due to things like product updates or changes in company policy, prompting changes to its content.
2. If a user felt an article was confusing, we could revise it for clarity using simpler language - important when articles covered advanced technical topics.
3. Feedback about the visuals would validate the efficacy of the Content team’s Cognitive Load Reduction project, in which updated standards for images, and eventually video, sought to make UI screenshots more straightforward for readers.
4. The fourth option sought to identify instances where users felt an article was missing critical information needed to resolve their issue.
5. Finally, Other was kept as a way to capture feedback that wasn’t represented by the available options.

An optional comments text box allowed users to include additional context when selecting an established reason; however, if a user selected Other, the field became required in order to prevent undefined feedback. Additional changes to the UI addressed the issue of PII and helped make the survey widget’s functionality a little more transparent.
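
To make the “Other requires a comment” rule concrete, here is a minimal sketch of that validation logic; the reason keys and messages are hypothetical, not SUMO’s actual implementation.

```python
# Hypothetical sketch of the step-2 validation rule described above:
# the comment is optional for predefined reasons, required for "Other".
PREDEFINED_REASONS = {"inaccurate", "confusing", "visuals", "missing-info"}

def validate_feedback(reason: str, comment: str = "") -> list[str]:
    errors = []
    if reason == "other":
        if not comment.strip():
            errors.append("Please tell us more so we can act on your feedback.")
    elif reason not in PREDEFINED_REASONS:
        errors.append("Please select a feedback reason.")
    return errors

assert validate_feedback("other") != []       # Other with no comment is rejected
assert validate_feedback("confusing") == []   # predefined reason, comment optional
```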

[Screenshots: Before / After]

I updated the Comments input field placeholder text to discourage users from including their personal information in submissions.

[Screenshots: Not Helpful vote (Old) / Not Helpful vote (New) / Helpful vote (New)]

I changed the header text that appeared in step 2 of the survey to explicitly communicate to users that their initial Yes/No vote had already been counted at that point.

[Screenshots: User completes step 2 / User clicks Cancel / Error message]

Lastly, I added a confirmation message to differentiate the outcomes when users opt out of selecting a feedback reason in the second step.

Analyze

Dashboards that are more useful and usable

The existing dashboards provided a passable cross-section of the Knowledge Base, but the Content team wasn’t taking full advantage of them. The available filtering tools made it cumbersome to focus on individual products, articles, or time frames, and with the changes to the survey’s feedback options, the newly captured positive feedback still needed to be incorporated.

Many of the filters required manually entered values to segment the data, which was tedious when you had to type out “zh-CN” to filter for the Simplified Chinese locale or “/firefox/” to only display Firefox for Desktop data.
Some columns, like Document Slug and URL, were redundant, while others, like Views (Last x Days), weren’t affected by the time span filter. These unnecessary data points made the dashboard cluttered and overwhelming.
Unfortunately, only the Locale filter’s format could be changed. Technical constraints surrounding articles that belonged to multiple topics or products prevented us from changing those filters into selectable lists.
Removing the Document Slug column eliminated the redundancy, and users could still reach the article by clicking its URL. The four Views (Last x Days) columns were consolidated into one that reflected the time span filter (sketched below).
In this format, it wasn’t easy to determine how many times each feedback reason was submitted and what percentage they made up of all submissions.
A completely new dashboard with a count and percentage breakdown of each feedback reason provided a better sense of Knowledge Base helpfulness overall.
What you see is what you get - the old Votes Over Time trend graph didn’t have any filters to focus the data.
Adding the same filters available on the other dashboards made the new trend graph much more usable.
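
As a rough illustration of that consolidation (the real dashboards live in BI tooling, so the column and article names below are hypothetical), deriving a single view count from the selected time span instead of fixed windows might look like this:

```python
import pandas as pd

# Hypothetical daily view counts per article (illustrative data only)
daily_views = pd.DataFrame({
    "document": ["clear-cookies", "clear-cookies", "update-firefox"],
    "date": pd.to_datetime(["2025-02-01", "2025-02-15", "2025-02-10"]),
    "views": [1_200, 900, 3_400],
})

# One views column driven by the dashboard's time span filter,
# replacing the old fixed "Views (Last x Days)" columns
start, end = pd.Timestamp("2025-02-01"), pd.Timestamp("2025-02-28")
in_span = daily_views[daily_views["date"].between(start, end)]
views = in_span.groupby("document")["views"].sum().rename("Views (selected span)")
print(views)
```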

I worked with one of our platform engineers to revamp the survey’s dashboards, leveraging user stories I had co-written with the manager of the Content team to target the improvements that would have the most impact. Over the course of a week, we managed to complete 8 of the 13 requested changes.

Understanding the pain points the Content team was experiencing with the existing dashboards taught me that the quality of the signals captured was only half the equation; optimizing how they were aggregated and analyzed was just as important. Improvements to both the data and the dashboards meant Helpfulness metrics were more actionable, and the Content team could explore how an article’s helpfulness score correlated with objectives like content freshness and support deflection.
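
As a purely hypothetical example of that kind of exploration (the values and column names below are invented for illustration, not SUMO data), correlating helpfulness with content freshness could start as simply as:

```python
import pandas as pd

# Invented sample data: helpfulness score vs. days since last content review
articles = pd.DataFrame({
    "helpfulness_score": [0.82, 0.64, 0.91, 0.55],
    "days_since_last_review": [30, 210, 14, 400],
})

print(articles["helpfulness_score"].corr(articles["days_since_last_review"]))
```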

Conclusions & Takeaways

In the months following the launch of the updated survey, I noted positive changes in the metrics we were tracking, like the rate of votes per user, the reduction in PII, and survey engagement (though I’ve been unable to attribute that increase to any one change). Beyond the numbers, there were several outcomes that pointed to the overall success of the project, including:

Enriched core workflows for technical writers - routine assessment of survey insights shifted operational work from reactive fixes to systematic, data-informed maintenance.
Expanded reporting capabilities - Helpfulness dashboard metrics were shared with executive leadership as part of the new Customer Experience Business Review (CXBR) initiative.
Larger helpfulness sample sizes - more votes meant Helpfulness Scores, a “trust metric” displayed with an article’s metadata, were more representative of user confidence in the quality of Knowledge Base articles.

Our first iteration consisted only of changes to wording and the addition of positive feedback reasons, but there were still plenty of considerations for future versions of the survey, like:

Identifying trends in the comments - there was potential for them to yield important additional context; we just needed to figure out an effective way to sift through all of them and glean those insights.
Testing strategies for deflecting irrelevant survey submissions - revisiting the use of messaging and links to direct users to Mozilla Connect for product feedback or to the Ask a Question flow for support requests.
Exploring the addition of a neutral vote - if readers feel an article’s content is 'just ok', there isn’t an option to express that, so they won’t participate in the survey.

Looking forward, key takeaways from this project will inform strategies for implementing additional surveys on the support website to measure localization accuracy and customer effort, with the broader goal of validating the connection between content quality and customer satisfaction in our self-serve support strategy.
