
What Should You Work on Next? ICE Scoring for Feature Prioritization

So, the workshop went great! The team pulled together and came up with plenty of promising ideas and potential features. Now what? Product managers face this question all the time: what should I work on first?

When it comes to feature prioritization, one method ProductHQ subscribes to is the ICE model of feature scoring. The ICE framework comprises three variables: impact, confidence, and ease. Once the terms are defined within an organization, the model’s results offer a clearer perspective on what to work on by cutting through feature noise. Let’s look at the components of the ICE model in more detail, go through an example, and then outline some pitfalls and objections.

Some Background

Sean Ellis first popularized the ICE methodology through the Growth Hacking approach. The scoring framework leads to both direct and indirect outcomes. The direct outcome is feature identification and prioritization without the additional processes and discussions that can stall the product team’s momentum. The indirect outcome is increased stakeholder engagement and ownership of the overall prioritization process, which is exactly the organizational buy-in a product manager wants heading into final product roadmap discussions.

Definitions

Let’s define the ICE terms. An organization must agree on these definitions (and on the value of the ICE framework in general) before applying the methodology. Every organization has its own perspective and should consider its overall strategic objectives when finalizing these definitions. Each term is scored on a scale of 1-10, with ten being the highest.

  • Impact: The potential gain from the feature, presented in terms of business/strategic objectives.
  • Confidence: The level to which the team feels the feature is understood/scoped. 
  • Ease: How easily the feature can be completed (in simple terms, how quickly it can be finished); higher scores mean less effort.

We’re presenting the exercise from a feature perspective due to the nature of product roadmaps. Of course, you can expand ICE scoring to any work that needs to be completed simply by adjusting the definitions. In this case, the term “work” would be defined as the end-to-end tasks needed to get the job done.
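If it helps to keep scoring consistent, the three definitions can be codified in whatever tool your team uses to track candidate features. Below is a minimal sketch in Python; the Feature class, its field names, and the comments are illustrative assumptions, not part of the ICE methodology itself.

    from dataclasses import dataclass

    @dataclass
    class Feature:
        """One candidate feature, with each ICE parameter scored 1-10."""
        name: str
        impact: int      # potential gain toward the target business metric
        confidence: int  # how well the team understands/has scoped the work
        ease: int        # how quickly the work can be finished (10 = fastest)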

Application

The ICE scoring methodology is fairly easy to incorporate into your product management process. The basic steps are as follows:

  1. Determine the objectives you expect the feature to impact. 
  2. Score each feature based on the three ICE parameters. 
  3. Rank the list of features to determine which to prioritize. 

The first step is the most important. You, your team, and potentially a wider group of stakeholders must identify the “impact” metric(s). Likely, the metric you select will be revenue; however, any quantitative performance indicator will do. Other options might include user adoption rate, customer retention rate, and net promoter score (NPS).

Next up is the actual scoring of the potential features with regard to impact, confidence, and ease. Let’s look at each of these in more detail. As an example, let’s assume we’re using ICE scoring to evaluate a feature set that is aimed at driving improved revenue outcomes for the next quarter.
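Before we drill into each parameter, here is a minimal sketch of steps 2 and 3 in Python. The feature names and scores below are placeholders, and collapsing the three values into a single ICE score by multiplying them is one common convention (some teams average the values instead); the next three sections show how a team might arrive at the individual scores.

    # Hypothetical features, each scored 1-10 on impact, confidence, and ease.
    backlog = [
        ("One-click checkout",    9, 6, 4),
        ("Annual billing option", 7, 8, 7),
        ("Dark mode",             2, 9, 6),
    ]

    def ice_score(impact: int, confidence: int, ease: int) -> int:
        # One common convention: multiply the three parameters.
        # Some teams average them instead; pick one rule and stay consistent.
        return impact * confidence * ease

    # Rank the backlog, highest ICE score first.
    for name, i, c, e in sorted(backlog, key=lambda f: ice_score(*f[1:]), reverse=True):
        print(f"{ice_score(i, c, e):4d}  {name}")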

Impact

Once you’ve chosen the metric your feature will impact, score the feature based on its effect on that metric (in our example, revenue; substitute your own target business metric as needed).

  • 1: No impact
  • 2 – 5: Minimal impact
  • 6 – 8: Definite impact
  • 9 – 10: Significant impact

Some additional questions to consider as you evaluate the feature with regard to impact include:

  • Would this feature help us reach a strategic objective?
  • Will this feature improve our Objectives and Key Results (OKRs) for this reporting period?

Confidence

How well has your team scoped this feature? The more knowledge gaps there are around the details of the feature and how to complete it, the lower the feature should score on the confidence parameter. Returning to our example, we used the following breakout to segment confidence.

  • 1 – 3: High risk (many unknowns and little supporting evidence about the potential feature)
  • 4 – 7: Medium risk (good information is available, but the blueprint for execution is still unclear)
  • 8 – 10: Low/mitigated risk (plenty of customer feedback and data points backing the feature)

Confidence really comes down to managing the unknowns of the feature and weighing the following factors:

  • Cost dependencies
  • Risk
  • Proven data
  • Customer feedback
  • Product design and architecture

Ease

This parameter is the simplest of the three. However, you’ll need to be realistic when assessing how long it will take to complete work on the feature in question. Our example uses bands aligned with Agile execution.

  • 1 – 2: One month or more
  • 3 – 5: One to two weeks
  • 6 – 7: Less than a week
  • 8 – 10: Less than one day
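If your team estimates effort in days, the bands above can be codified so everyone converts an estimate to a score the same way. A small sketch under stated assumptions: the score returned within each band, and lumping the rubric’s unassigned two-weeks-to-a-month range into the lowest band, are choices to tune for your own team.

    def days_to_ease(estimated_days: float) -> int:
        """Map an effort estimate (in days) to an ease score per the bands above."""
        if estimated_days < 1:
            return 9   # less than one day -> 8-10 band
        if estimated_days < 7:
            return 6   # less than a week -> 6-7 band
        if estimated_days <= 14:
            return 4   # one to two weeks -> 3-5 band
        return 2       # roughly a month or more -> 1-2 band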

Consider your team’s capabilities, additional responsibilities, day-to-day productivity capacity, and potential roadblocks.

Three Potential Pitfalls

Like all methodologies, ICE scoring isn’t perfect. Below are three limitations of this method, along with a brief explanation of how to mitigate their negative effects.

  • Potential Pitfall: The ICE model can be “hacked” by only surveying those affected by the feature request.
  • Mitigation Method: Once all scoring is completed, review the outcome with all stakeholders as a sanity check. If needed, expand the scoring to other groups within the organization, or even to the user base.

  • Potential Pitfall: Required work will get low scores.
  • Mitigation Method: Although higher scores should direct the product team’s work, some tasks are unavoidable; regulatory compliance is one example. Consider removing required work from the list of features to be scored.

  • Potential Pitfall: Important dependencies might fall by the wayside.
  • Mitigation Method: ICE scoring could yield a low score for a feature that other features depend on. In this case, the product leader needs to use their judgment to ensure dependent work is sequenced correctly.

A Few Common Objections

As mentioned above, the ICE scoring methodology requires organizational buy-in. Some stakeholders might be skeptical of ICE scoring as a method of feature prioritization. And of course, any change in process might lead to some backlash or objections.

One objection might be that the parameters are only considered (and scored) from the perspective of the development team. For example, building out a feature could be very straightforward on the development end, yet that same feature might create a mountain of work for other parts of the business. Features that affect compliance and legal reviews, employee training, or user documentation might fall into this category. To avoid this, include additional departments in the scoring process.

Also, some might claim that ICE only works for small companies. In fact, ICE scoring can work in organizations of any size. It is more commonly found in the start-up world due to how quickly it can be applied and its relationship to growth hacking. However, a larger organization can use this methodology if the product leader properly defines and facilitates its application. 

ICE scoring is a great tool for product managers to quickly rank their backlog and to level the playing field for requests from all areas of the organization. Remember: whatever the results of your ICE scoring, make sure you communicate the decisions you’ve made, any risks to achieving your goals, and the reasoning behind your prioritization framework.
