
Compass, Atlassian
2023 - 2024
Summary
Scorecards were confusing and over-engineered, and users struggled to understand them. I led improvements grounded in user research - simplifying the scoring logic, adding visualisation, and making scorecards more actionable. Early results showed fewer support issues and better user comprehension.
Role
Product Designer
Team
12 Developers, Product Manager
Problem
Our scorecards weren’t resonating with customers, whether built-in or custom configurations. The content was difficult to understand, and there was no visualisation to aid interpretation. As a result, scorecards became a point of confusion rather than clarity.
Through many hours of user calls, we uncovered consistent pain points:
Visual design issues: Customers were put off by excessive use of red - creating a sense of failure rather than constructive feedback.
Lack of clarity: Users didn’t understand why a scorecard was failing or what to do about it.
Complexity in logic: The scoring model - particularly the percentages and weighting - was over-engineered and opaque (see the sketch after this list).
No guidance: There were no supporting visual cues or explanations to help users interpret results or take action.
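To make the problem concrete, below is a minimal sketch of the kind of weighted scoring model described above. The criterion names, weights, and rounding here are hypothetical, not Compass's actual implementation; the point is how a single heavy weight can produce a score users can't easily trace back to the failing check.

```typescript
// Hypothetical weighted scorecard: each criterion passes or fails, and
// the overall score is the weighted share of passing criteria.
interface Criterion {
  name: string;
  weight: number; // relative importance, not a percentage out of 100
  passed: boolean;
}

function scorecardScore(criteria: Criterion[]): number {
  const totalWeight = criteria.reduce((sum, c) => sum + c.weight, 0);
  const earnedWeight = criteria
    .filter((c) => c.passed)
    .reduce((sum, c) => sum + c.weight, 0);
  return Math.round((earnedWeight / totalWeight) * 100);
}

// A service that passes three of four checks still scores only 50%,
// because the one failing criterion carries half the total weight -
// exactly the kind of result users found opaque without an explanation.
const score = scorecardScore([
  { name: "Has owner", weight: 1, passed: true },
  { name: "Has README", weight: 1, passed: true },
  { name: "Has on-call rotation", weight: 1, passed: true },
  { name: "Meets SLO target", weight: 3, passed: false },
]);
console.log(`${score}%`); // "50%"
```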
Solution
Features We Shipped
Custom thresholds – giving teams control over what “good” looks like.
Scorecard visualisation graphs – to aid interpretation at a glance.
Application model – a more intuitive way of applying scorecards, with clearer copy.
Regex criteria – for more flexible, advanced matching logic (sketched together with custom thresholds after this list).
Auto-connected metrics – reducing setup friction.
Notifications – improving visibility when scorecards fail.
Add Jira issue – providing a direct follow-up action on failing or needs-attention criteria and scorecards.
AI-assisted criteria building (exploratory) – to reduce manual effort in setup.
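As a rough illustration of how custom thresholds and regex criteria might come together, here is a sketch of a scorecard configuration. The shape of this object is hypothetical, not Compass's actual API; it is only meant to show the flexibility these two features add.

```typescript
// Hypothetical scorecard configuration combining two shipped features:
// a custom passing threshold, and regex criteria for flexible matching.
const productionReadiness = {
  name: "Production readiness",
  // Custom threshold: this team decides that 70% counts as "good".
  passingThreshold: 70,
  criteria: [
    {
      name: "Service tier is 1 or 2",
      // Regex criterion matched against a component field.
      field: "tier",
      pattern: /^(1|2)$/,
    },
    {
      name: "Repository name follows convention",
      field: "repositoryName",
      pattern: /^[a-z]+(-[a-z]+)*$/, // kebab-case only
    },
  ],
};
```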