
How We Ensure Product Excellence: The Product Quality Evaluation Process

Trusting someone else to collaborate on your product build is not easy. It’s like handing your baby over to a sitter – you have to trust they’ll put just as much care into the job as you would.


As a team made up of former founders and internal product leaders, Remedy knows this anxiety. Both long-term tech partners and internal product teams need to find a way to objectively measure product quality to thoughtfully scale, enhance, and adjust strategy throughout a build. 


So, we decided to build our own process: The Product Quality Evaluation.


The Product Quality Evaluation is a continuously updating model that measures a product team’s performance on a given project. The three main goals of this process are to ensure: 


  1. Quality by systematizing high standards and ensuring stakeholders can continuously understand if the product team is meeting outlined criteria

  2. Velocity by mapping out development at a constant pace so outcomes are predictable and continuously deliver business value

  3. Cohesion through check-ins that allow our PMs, team members, and partners to call out strengths and identify growth areas


So, what does this evaluation process actually look like?


The evaluation consists of three recurring steps at designated intervals during a build phase. For teams using Scrum Agile Methodology, the evaluation occurs before and after each sprint. For teams using Kanban or ScrumBan Agile Methodology, it’s before and after every delivery cycle.


The three steps include: 


Step 1 - Gathering and Scoring Data


We gather and score data about the success of a sprint or delivery cycle from three different sources. This data is compiled in a Product Excellence Log.


Our three data sources include:


  1. Product Managers: Each Remedy PM completes an assessment with their assigned Remedy product lead, evaluating the PM’s preparation, process, and output for the sprint or delivery cycle.


The assessment scores the completion of two different types of project processes: “must-have processes” and “supplemental processes.”


Must-have processes are the required steps in a sprint or delivery cycle (such as sprint duration, demo cadence, prioritized objectives, etc.). These steps are scored on a “complete” vs. “not complete” binary. The overall Must-Have Processes score is the percent of processes marked as “complete.”


Supplemental processes are the extra processes PMs implement to go above and beyond (such as retro feedback implementation, consistent sprint velocity, objective definition for upcoming sprints, etc.). Supplemental processes are assigned a weight on a scale of 1-5 based on importance and then marked as “complete” or “not complete.”


The final supplemental process score is the weighted percent of processes marked as “complete.” Over time, many supplemental processes become team-wide must-have processes, standardizing quality improvements across the company.
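The two PM scores described above can be sketched in a few lines of code. This is a minimal illustration, not Remedy’s actual tooling: the process names, weights, and data shapes are assumptions for the example.

```python
def must_have_score(processes):
    # processes: dict mapping process name -> completed? (bool)
    # Score is the plain percent of processes marked "complete".
    completed = sum(1 for done in processes.values() if done)
    return 100 * completed / len(processes)

def supplemental_score(processes):
    # processes: dict mapping process name -> (weight 1-5, completed? bool)
    # Score is the weighted percent of processes marked "complete".
    total_weight = sum(weight for weight, _ in processes.values())
    earned = sum(weight for weight, done in processes.values() if done)
    return 100 * earned / total_weight

# Example (hypothetical process names and weights):
must_have_score({
    "sprint duration": True,
    "demo cadence": True,
    "prioritized objectives": False,
    "backlog groomed": True,
})  # -> 75.0

supplemental_score({
    "retro feedback implementation": (5, True),
    "consistent sprint velocity": (3, False),
    "objective definition": (2, True),
})  # -> 70.0 (7 of 10 weighted points earned)
```

Note that weighting means skipping one high-importance supplemental process costs more than skipping several low-importance ones.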


Ultimately, the data collected from PMs generates two data points: success on must-have processes and success on supplemental processes.


  2. Partner stakeholders: The technical and business stakeholders from our partner’s internal team complete a survey that generates a Net Promoter Score (NPS) for Remedy’s team, product manager, and overall progress. The survey asks them to rate their overall satisfaction with the sprint on a scale of 0-10 and then asks for qualitative answers about specific sprint aspects.


This data generates a final score out of 10 points.


  3. Remedy team members: Internal project team members, including engineers, architects, designers, and QA, fill out an anonymous survey where they rate the Product Manager’s performance, their overall satisfaction with the process, and their perceived success of the sprint on a scale of 0-10.


This data generates a final score out of 10 points that measures the team’s overall satisfaction.
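Both survey sources above reduce 0-10 ratings to a single score out of 10. A minimal sketch of one plausible aggregation (the post doesn’t specify the exact math, so the averaging scheme and field names here are assumptions):

```python
def survey_score(ratings):
    # ratings: list of 0-10 responses to a single survey question
    return round(sum(ratings) / len(ratings), 1)

def team_satisfaction(responses):
    # responses: one dict of 0-10 ratings per anonymous respondent.
    # Average each respondent's aspects, then average across the team.
    per_person = [sum(r.values()) / len(r) for r in responses]
    return round(sum(per_person) / len(per_person), 1)

# Example (hypothetical aspect names and ratings):
team_satisfaction([
    {"pm_performance": 9, "process_satisfaction": 8, "sprint_success": 7},
    {"pm_performance": 10, "process_satisfaction": 9, "sprint_success": 8},
])  # -> 8.5
```

Averaging per respondent first keeps one person who answers only some questions from skewing the team-wide number.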


Step 2 - Reviewing Scores


The overall scores from step 1 are compared against two reference points:


  • Internal Historical Data: Previous data about the team from other project periods. 


This comparison allows us to calculate team consistency, identify deterioration early, and understand team strengths that can be utilized.


  • Cross-Team Historical Data: Company-wide product scores across different teams.


This comparison allows us to determine if a team is appropriately meeting the company standard, setting a new standard, or failing to meet a standard. It informs our company-wide strength areas and growth opportunities.
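The two comparisons above boil down to measuring a cycle’s score against its own team’s history and against the company-wide baseline. A simple sketch of what that delta calculation might look like (the thresholds and data shapes are illustrative assumptions, not Remedy’s actual model):

```python
def compare_scores(current, internal_history, cross_team_scores):
    # current: this sprint/cycle's score
    # internal_history: the same team's scores from past cycles
    # cross_team_scores: recent scores from other teams company-wide
    internal_avg = sum(internal_history) / len(internal_history)
    company_avg = sum(cross_team_scores) / len(cross_team_scores)
    return {
        "delta_vs_own_history": round(current - internal_avg, 1),
        "delta_vs_company": round(current - company_avg, 1),
    }

# Example: a team scoring 8.5 against its own 7.5 average
# and a company-wide 8.0 average:
compare_scores(8.5, [7.0, 7.5, 8.0], [7.8, 8.2, 8.0])
# -> {"delta_vs_own_history": 1.0, "delta_vs_company": 0.5}
```

A positive first delta flags improvement or consistency; a negative one is the early-deterioration signal; the second delta shows whether the team is meeting, setting, or missing the company standard.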


Step 3 - Acting on the Data 


Based on these data comparisons, we can observe and take action on the exact processes, skills, or practices contributing to performance trends.


  • For example, is a team showing improvement? Are they remaining consistently high? We’ll give the team a shoutout and encourage our partner to do the same.


  • Is a team falling short? We’ll introduce training and mentorship, check in multiple times per week for multiple sprints, and adjust the team if needed.


  • Are numbers not matching the verbal sentiment we’re receiving from team members or the partner? We’ll revisit the product evaluation model itself and iterate.


The data collected on team members’ individual performances also informs individualized growth plans for Product Managers and engineers to improve their skills and grow within Remedy. High scores and consistent improvement are tied to internal promotions and opportunities.


Graph of Remedy teams’ average Product Evaluation Scores since the implementation of the Product Evaluation Process in November 2022. The graph includes data points gathered from PMs (dark blue) and Remedy project team members (teal).



Since the implementation of the Product Evaluation Process in Q4 2022, Remedy has seen a 53% increase in overall scores – and we’re only looking up. 


We’re excited to keep raising the bar to create the best products possible with our partners.

