Experiment Progress Bar Redesign

Improve clarity and rebuild trust with an intuitive UX

My Role

Sole designer responsible for:
User research, solution discovery, component design, end-to-end flow design, pre-launch QA.

Problem

The old progress bar failed to provide users with clarity on experiment status and when to make decisions, creating friction and reducing trust.

Goal

Create an intuitive design that helps users make confident decisions on their experiments while maintaining the statistical rigor behind it.

Impact

The redesign achieved both its user goal and business goal, reducing related support tickets by 47% and reinstating user trust in the product.
Eppo's redesigned progress bar component with hover-triggered peeks, launched in Q1 2023
Project Background
Eppo is an advanced experimentation platform. It enables both technical and non-technical teams to run experiments at a high velocity without sacrificing rigor.

What's Wrong?

For many customers, there can be dozens of experiments running at the same time. To help users track experiment progress, we created a progress bar.
However, the original progress bar failed at its goal and made it confusing for users to know when to call an experiment.

Why Is This Problem Important?

On the surface, the old progress bar caused user confusion, which led to a large number of support tickets. But the problem ran deeper: it hurt both Eppo's business goals and its users' goals.
Uncovering User Pains Through Research
We knew that our users, especially the PM persona, were frustrated with the current design. However, we needed to understand the reasons behind this frustration before jumping to solutions. We had several assumptions that needed validation, so I worked with our statistical engineer, who led the technical design of the project, to dig deeper into the issue.

Research Methodology

Qualitative research

  • User interviews via Zoom
  • Led by: designer (me) and stats engineer

Interview participants

  • 3 PMs from customer companies
  • 2 Customer Data Scientists from Eppo
We learned that the progress bar was the #1 complaint during trials. It significantly reduced trust in the product, adding challenges in winning deals.

Identifying the Problem

During the interviews, it was very obvious that the existing progress bar raised more questions than answers.

Color confusion

Users did not understand how to interpret the bar's colors. What does the gradient mean? Is it better than grey?

Timestamp confusion

Because experiments can have an end date, users often associated progress with runtime, mistaking progress for duration.

Statistical concept misunderstanding

Experiment progress was calculated from a combination of statistical components, from power to minimum detectable effect (MDE), which were not intuitive for non-technical users.
Ultimately, our users didn't understand how progress is measured and therefore couldn't make use of the information provided by the progress bar.
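For context on why these concepts were hard to grasp, a power-based progress metric is typically derived from the sample size needed to detect the MDE. Below is a minimal sketch (illustrative only, not Eppo's actual calculation, and `required_sample_size` / `old_progress` are hypothetical names), assuming a two-sample test on a mean:

```python
import math
from statistics import NormalDist

def required_sample_size(std_dev: float, mde: float,
                         alpha: float = 0.05, power: float = 0.8) -> int:
    """Per-group sample size needed to detect `mde` (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired power
    return math.ceil(2 * ((z_alpha + z_beta) * std_dev / mde) ** 2)

def old_progress(current_n: int, std_dev: float, mde: float) -> float:
    """Progress as the share of the required sample collected so far."""
    return min(1.0, current_n / required_sample_size(std_dev, mde))
```

A user looking at the bar sees only a percentage; the power, MDE, and sample-size math hidden behind it is exactly what non-technical users found opaque.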

The Impact of the Failed Progress Bar

Not understanding what the bar measures has real consequences for both product managers and data scientists. After talking with users, we had an in-depth conversation with our customer data scientists to learn more about the impact of this failed design.
Even though the progress bar is primarily used by product managers, a good number of data experts who own experiments or support product managers use this feature too.

Impact on product managers

  • Relying on non-primary indicators such as sample size for decision-making
  • Risking decisions made on inconclusive data
  • Confusing experiment runtime with progress due to the misleading linear visual

Impact on data scientists

  • Frustration with inaccurate estimations due to Eppo's power calculation limitation
  • Undermined credibility caused by unfavorable results from underpowered experiments
  • Handling unrealistic expectations for progress to reach 100%
How might we help users make experiment decisions with confidence while maintaining statistical rigor?
Solution Discovery
Coming out of the interviews, we had a lot of "lightbulb" moments, and we decided to discard the old design and start fresh. The stats engineer and I held a brainstorming session and explored a wide range of solutions. While it was exciting to have abundant ideas, I was concerned that too much creative freedom might lead to unrealistic scope.
Following the brainstorming session, I led an ideation workshop with both the stats engineer and our customer data scientists. The goal of this workshop was to explore more specific ideas with impact and feasibility so we could narrow things down.
HMW brainstorming board with votes during the ideation workshop. Click to view full image.

Design Framework & Goals

After the workshop, we had a much clearer picture of what should go into the new progress bar. This framework for design exploration then helped the team align on two project goals.

Design Framework

  • Avoid using linear visual representation for progress
  • Include minimal requirements on duration and sample size
  • Surface more supporting details like sample size and primary metric lift
  • Have actionable and clear labels

Goal 1: An intuitive UI to improve clarity

Make it easy for users to leverage the progress bar's information to make the right decision at the right time.

Goal 2: Transparent data and logic

Provide users with supporting details and tools like recommendations to make a confident decision.
Thinking Outside of the “Bar”
There was a strong sentiment against the bar visual because it misled users into believing progress meant experiment runtime. So I started exploring design concepts, finding alternatives and reimagining how we could communicate experiment progress in a non-linear way.
I created three concepts and presented them to the team. While each concept had its pros and cons, we believed we were heading in the right direction.
Early design concepts, three non-bar visual concepts. Click to view full image.
Concept 2, the dial meter design, had the most buy-in. There were some issues with this concept, such as the side-by-side display potentially creating an unclear hierarchy, but the team agreed that it was the most promising for its simplicity.
As a next step, I did more explorations around this concept to address the following issues brought up in the meeting:
Further explorations of the dial meter design. Click to view full image.

What if the “Bar” Was Right?

After developing more detailed designs for concept 2, I believed the dial meter represented progress well in a non-linear way. However, I felt something was missing.
I had added so many more labels, text, and visual elements to avoid confusion that we now ran the risk of overwhelming users with bells and whistles. One of our goals was to reduce users' cognitive load with a simple, intuitive design; I think we missed it here.
So I revisited the old progress bar. The bar design had the opposite issue: it didn't show enough. It was the lack of context for how progress is measured that created confusion in the first place. However, simplicity was its strength, and I wanted to preserve it in the new design.
I created a new bar with the idea that if it's clear that progress is measured by MDE, users should understand how far their experiment is from the desired effect.

How is the new design different?

  • Incorporated MDE and CDE to clarify what the bar is measuring
  • Set MDE as a target for the bar so linear progression makes more sense
  • Leveraged colors instead of text and labels to differentiate statuses
I presented all three explorations, including the redesigned bar. The team liked the clarity in the dial meter design, but ultimately preferred the bar for its readability and minimal visual noise.
However, for the bar to succeed this time, there's still a lot to be done.
The Path Forward
The progress bar should be a clear indicator of when a decision can be made, but users cannot rely on progress alone to make the most appropriate decision. Therefore, supporting information like primary metric lift and sample size should be included too. The challenge is to add more details while maintaining the simplicity of the bar.

What Does It Take for the Bar To Succeed?

Straightforward progress measurement

  • Replace jargon like power and MDE with precision. This concept is more intuitive because it measures certainty.

Clear indication of action

  • Replace the old "Wrap Up" label with the more actionable "Ready" one to improve clarity.

Supporting details to help decision making

  • Surface the primary metric lift as it directly impacts the readiness of an experiment
  • Hide supporting details like current sample size in a popover to avoid overwhelming the users
The team agreed that combining the concept of precision with the primary metric lift achieves the goal of intuitiveness and clarity. Now that the new design was approved, I moved on to fine-tuning the details and working out the end-to-end flow.
Precision is not a guess. It's how certain you want to be about the experiment result, giving you more confidence in both expectations and decisions.
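To make this concrete, one illustrative way to read precision (a simplified sketch, not Eppo's actual formula; `ci_half_width` and `precision_progress` are hypothetical names) is as the half-width of a confidence interval, with progress measuring how close that half-width is to the target:

```python
import math

def ci_half_width(std_dev: float, n: int, z: float = 1.96) -> float:
    """Half-width of a 95% confidence interval for a mean (normal approx.)."""
    return z * std_dev / math.sqrt(n)

def precision_progress(std_dev: float, n: int, target_half_width: float) -> float:
    """Progress toward the precision target, capped at 100%."""
    current = ci_half_width(std_dev, n)
    return min(1.0, target_half_width / current)
```

As more data arrives, the interval narrows and progress rises toward 100%, which is why a linear bar toward a precision target reads intuitively.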
New Design Coming Together

Design Principles of the New Progress Bar

While working out the details of the progress bar and the experiment card, I created a set of design principles to guide every design decision for this project.

Respect the real estate constraints

The new progress bar must fit within the existing experiment card's limited space.

Visuals are used to reduce cognitive load

The bar's visual elements should complement the text, not rely solely on it to make sense.

Mindful and limited use of colors

Use color sparingly to maintain its highlighting power.

Accessible to all users

Combine text, icons, and color to maximize readability and accessibility
The anatomy of the new progress bar in an experiment card. Click to view full image.
Implementation of Experiment Decision-Making Logic
If the bar component was the biggest visual challenge, the underlying decision-making logic was the biggest technical design challenge.
The progress bar uses precision as an indicator of when to review an experiment. However, precision is not the single decision-making factor. The experiment outcome depends on all three of the following:
  • Precision
  • Confidence interval method (CI method)
  • Statistical significance
Precision, CI method, and statistical significance are monitored together to move an experiment from the "Running" state to the "Ready" state, but different combinations of these factors lead to different outcomes.
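The combinations described above can be sketched roughly as follows (a hypothetical simplification: `experiment_state` and its outcome strings are illustrative names, and the production logic also weighs the chosen CI method):

```python
def experiment_state(precision_met: bool, significant: bool, lift_positive: bool):
    """Illustrative sketch: an experiment becomes reviewable once the
    precision target is met; significance and the direction of the lift
    shape the suggested outcome."""
    if not precision_met:
        return "Running", None
    if significant and lift_positive:
        return "Ready for Review", "ship"        # the celebratory confetti case
    if significant:
        return "Ready for Review", "roll back"   # significant negative lift
    return "Ready for Review", "no detectable effect"
```

Encoding the flow this way made it easier to check that every branch of the flow chart mapped to a clear label and peek in the UI.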
To better illustrate the decision-making logic, I collaborated with our statistical experts and created an experiment decision flow chart. This chart helped guide the rest of the design.
How an experiment moves from one state to another and what decisions can be made in each state. Click to view full image.

Actionable Labels and Dynamic Peeks

To help users navigate the intricate decision-making flow, we paired the progress bar visuals with actionable labels and peeks that provide a more in-depth analysis explaining how the experiment arrived at this decision point.

Ready for Review

This new experiment status replaced the old "Wrap Up" state. It indicates a decision point where users should review the results and take action.

Confetti 🎉

This icon helps visually separate the two “ready” states and indicates a positive ship decision. Adding this icon as a symbol of celebration brings delight to the product.

Peeks for more information

The hover-triggered peeks provide easy access to supporting data to help users decide. They appear only when users want more information, keeping the overall interface clean and simple.

The End-to-End Experience

The progress bar component might seem small, but the changes we made to the progress tracking mechanism required a redesign of the experiment creation flow and updates to the experiment details page.

New fields for experiment configuration

  • Select statistical analysis model (CI method)
  • Select default or customized precision target
These new fields will provide necessary details to inform the progress bar and the peek content.

Updated experiment details page header

The new progress bar component was also added to the experiment details page, where users can view metrics and results.
The new progress bar marks a linear journey towards a precision target. We hope it gives every team renewed confidence and clarity.
Launched & Loved

Building the New Progress Bar

The new progress bar required a lot of engineering effort because we had to refactor the experiment progress tracking logic. We rolled it out in two phases:

Phase 1: Ship new bar component

  • Build new precision tracking logic
  • Build new dynamic peeks
  • Update experiment card and experiment detail page header to match Figma
  • Implement global change from MDE to precision

Phase 2: Add minimal requirements

  • Show experiment minimal requirements on the progress bar component
  • Allow companies to set minimal requirements defaults in Admin Settings
After 6 weeks of engineering work and QA, we finally launched the progress bar at the end of Q1 2023!

Post-Launch Impact

This was a highly anticipated launch, and I'm proud to say the new design achieved its goal of reducing cognitive load for users while maintaining statistical rigor.
After the launch, we received an overwhelming amount of customer love. On top of that, the new design showed positive impact on both the users and the business.

Business impact

  • Reduced related support tickets by ~47%
  • Eliminated a major point of friction in the trial experience
  • Strengthened customers' trust in the product

User impact

  • Improved user satisfaction in experiment monitoring
  • Supported better experiment decision-making
  • Encouraged better experiment practices, such as not calling an experiment prematurely
Learnings & Takeaways

Resilience is an asset to success in unfamiliar territory

As designers, we can develop strong understanding of complex domains outside our expertise. In this project, I was able to grasp and simplify statistical concepts I had never encountered before.

Every decision comes with tradeoffs; the best decision delivers the most value to the target user

While iterative design is natural to designers, we should actively guide cross-functional teams toward shippable solutions. Working with our statistical engineer taught me to take ownership in aligning the team on MVP solutions that move projects forward.

Good process leads to efficiency

Design reviews, stakeholder check-ins, and decision logs may seem like overhead during tight timelines. However, these practices align team goals and clarify next steps, ultimately creating efficiency and saving time.