Abby Thompson

Improve scorecards

Making Scorecards actionable and understandable

Compass, Atlassian

2023 - 2024

Summary

Scorecards were confusing and over-engineered, and customers struggled to understand them. I led improvements grounded in user research: simplifying the logic, adding visualisation, and making scorecards more actionable. Early results showed fewer support issues and clearer user comprehension.

Role

Lead Product Designer

Team

12 developers, 1 product manager

Context & Problem

Our scorecards, both the built-in and custom configurations, weren't resonating with customers. The content was difficult to understand, and there was no visualisation to aid interpretation. As a result, scorecards became a point of confusion rather than clarity.

Through many hours of user calls, we uncovered consistent pain points:

  • Visual design issues: Customers were put off by excessive use of red—creating a sense of failure rather than constructive feedback.

  • Lack of clarity: They didn’t understand why a scorecard was failing or what to do about it.

  • Complexity in logic: The scoring model, particularly the percentages and weighting, was over-engineered and opaque (see the illustrative sketch after this list).

  • No guidance: There were no supporting visual cues or explanations to help users interpret results or take action.
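To make the opacity point concrete, here is a minimal sketch of a weighted scoring model of the kind customers struggled with. The criterion names, weights, and aggregation formula are illustrative assumptions, not Compass's actual implementation:

    // Illustrative only: weights and formula are assumptions, not
    // Compass's actual scoring implementation.
    type Criterion = { name: string; weight: number; passed: boolean };

    // A scorecard's percentage is the weight of passing criteria
    // divided by the total weight of all criteria.
    function scorecardScore(criteria: Criterion[]): number {
      const total = criteria.reduce((sum, c) => sum + c.weight, 0);
      const earned = criteria.reduce((sum, c) => sum + (c.passed ? c.weight : 0), 0);
      return Math.round((earned / total) * 100);
    }

    // Failing a criterion can still leave a reassuring 80%, because the
    // failing criterion carries little weight -- and nothing in the UI
    // explained that trade-off.
    console.log(scorecardScore([
      { name: "Has owner", weight: 3, passed: true },
      { name: "Has on-call rotation", weight: 5, passed: true },
      { name: "README present", weight: 2, passed: false },
    ])); // 80

A component failing a criterion could still read as healthy, and nothing on screen explained why, which is exactly the confusion we heard on user calls.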

Team & Constraints

We had a team of 12 developers split across 3 smaller working groups. A key challenge was staying on top of their progress, unblocking edge cases during build, and ensuring a steady flow of meaningful, well-scoped work. Because the product was early-stage, we often faced sudden pivots that left us with an empty roadmap, so we had to move quickly, make smart calls, and deliver thoughtful direction to keep momentum.

Process

We mapped out a series of improvements grounded in what we heard from users. Our approach was twofold:

  1. Quick wins vs scalable improvements: We collaborated with engineers to identify low-effort fixes while also exploring what it would take to evolve the scorecard system for long-term scalability.

  2. Customer-centric roadmap: We prioritised changes based on impact to customer understanding and usability.

Solution

Features We Shipped or Explored

  • Custom thresholds – giving teams control over what “good” looks like.

  • Scorecard visualisation graphs – to aid interpretation at a glance.

  • Application model – enabling a more intuitive way of applying scorecards and clarifying copy.

  • Regex criteria – for more flexible, advanced logic (sketched after this list).

  • Auto-connected metrics – reducing setup friction.

  • Notifications – improving visibility when scorecards fail.

  • Add Jira issue – giving users a direct action to take on failing or needs-attention criteria and scorecards.

  • AI-assisted criteria building (exploratory) – to reduce manual effort in setup.
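For a flavour of the flexibility regex criteria add, the sketch below shows the general idea; the criterion shape, field names, and pattern are hypothetical, not Compass's real configuration format:

    // Hypothetical sketch: the criterion shape and pattern are illustrative,
    // not Compass's actual configuration format.
    const criterion = {
      name: "Component name follows kebab-case convention",
      pattern: /^[a-z]+(-[a-z0-9]+)*$/,
    };

    // Evaluate the criterion against a component's name.
    function evaluate(component: { name: string }): boolean {
      return criterion.pattern.test(component.name);
    }

    console.log(evaluate({ name: "payments-service" })); // true
    console.log(evaluate({ name: "PaymentsService" }));  // false

The appeal is that teams can encode their own conventions as criteria without waiting for new built-in checks.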

Impact

Although the work was still rolling out when I left, we had begun to see a drop in support requests and clarifying questions about scorecards: an early sign that the experience was becoming more intuitive and self-serve.

Reflection

The backend for scorecards had not been designed for scale, and that limited how easily we could evolve the product. This reinforced the importance of balancing speed-to-market with architectural foresight, especially in core systems.