In the fast-moving world of startups, where resources are tight and quick adaptation is essential, backlog prioritization has to stay a priority. There is always a long list of tasks, new ideas pop up all the time, and deciding what to tackle first can be hard. Yet it matters much more than it might seem – choosing the right tasks can make a big difference in how fast the product improves and how happy users are.
Task prioritization is more than just a matter of choice. It is a complex process that needs a balanced approach. In this article, we will take a look at two popular backlog prioritization methods, RICE and ICE, and also discuss a new method, DPA. As with anything, each has its advantages and disadvantages, and we will analyze the cases in which each of them is most effective.
Hopefully, after reading this article, you will know how to make prioritization more objective and automated, minimizing the influence of subjective assessments.
What is a backlog and why is prioritization important?
A backlog is a list of tasks, requirements, and ideas that need to be implemented for an IT product to develop. It includes new features, improvements, bug fixes, and technical debt, and it helps the team stay focused on key tasks to maintain steady product development.
Why prioritization and updating the backlog are important:
- Limited resources: Team time and resources are always limited! It’s key to direct them to tasks that bring the most benefit to the product and users.
- Changing priorities: Business and user needs can change – regular backlog updates will let you adapt the product to new conditions and market expectations.
- Risk management: Prioritization helps you avoid situations where the team spends time on tasks that have little impact on the product's success.
Risks if you don't maintain the backlog:
- Loss of focus: Without clear priorities, the team may work on irrelevant tasks, slowing down product development.
- Increase in technical debt: Ignoring the importance of bug fixes and technical improvements can lead to decreased productivity and difficulty in scaling the product.
- User dissatisfaction: If key user requests and issues stay unresolved, it will negatively affect customer retention and satisfaction.
- Business risks: Incorrect priorities can slow down the launch of strategically important features, which will negatively impact competitiveness and revenue.
As you can see, regular prioritization and updating of the backlog is not just part of the development process – it is the foundation of successful product management.
Prioritization methods
The choice of a backlog prioritization method depends on several key factors, which vary with the product development stage, team composition, and business goals.
Let's take a look at two of the most popular backlog prioritization methods, RICE and ICE: their advantages and disadvantages, how to apply them in practice, and what data to rely on when setting scores. Then, once we cover the basics, we'll look at a new prioritization method that I developed for my team and how we use it – the DPA method.
RICE Method
The Reach, Impact, Confidence, Effort (RICE) method is a popular approach for prioritizing tasks in product backlogs. It allows for an objective evaluation of each task based on four criteria, with priority calculated as a numerical score. It's well suited to products where it's important to consider both the potential impact of a task on users and the confidence in its success, and both mature and young products can benefit from it.
What is RICE?
- Reach — the number of users or events that will be affected by the task over a specific period of time. Data sources:
  - Analytical systems: Google Analytics, Amplitude, Mixpanel, Yandex Metrica — tools that track user activity and help understand how many people use certain features or go through specific scenarios.
  - CRM data: Number of users, audience segments, and customer data that may be affected by the task.
  - Historical data: Assessment of how many users interacted with similar features or solutions in the past.
  - A/B testing: Results of experiments on different features will help predict potential reach.
- Impact — how strongly the task will affect each user or event. Data sources:
  - User feedback: Results of surveys, interviews, reviews from social networks, app marketplaces, or customer support.
  - A/B testing: Changes in metrics (conversion, retention, revenue) depending on feature implementation will help assess its impact.
  - Expert opinion: Sometimes impact assessment can be based on team experience or product expert knowledge.
  - Comparison with similar features: If similar improvements in the past had a strong effect, a similar impact can be expected.
- Confidence — how confident you are in the data you have for estimating Reach and Impact. Ideally, confidence should be based on data about the product, consumers, and the competitive environment; this helps you avoid situations where you believe a new feature will increase user satisfaction but have never measured NPS up to that point. Data sources:
  - Quality and reliability of available data: How accurate and detailed is the data on previous metrics and research?
  - Historical forecast accuracy: If the team has already accurately predicted task results several times, a high level of confidence can be assigned.
  - Test results: The more experimental data, A/B tests, or tested hypotheses, the higher the Confidence.
  - Team experience: Confidence can also be based on the intuition and previous experience of the team in solving similar tasks.
- Effort — how many resources, time, or effort will be required to complete the task. This can be estimated by calculating development time and accounting for the availability of necessary specialists and technologies, as well as risks. Data sources:
  - Development team estimates: Organize planning with developers, designers, and other team members to estimate the scope of work in person-hours or person-months.
  - Historical data: Use data on time spent on similar tasks in the past.
  - Expert assessment: If the task is new and unique, involve experts for an approximate estimate of labor costs.
  - Risk analysis: Consider risks and possible complications that may increase costs (for example, unexpected technical dependencies or integration issues).
RICE formula:
RICE = (Reach * Impact * Confidence) / Effort
When to use this method?
This approach is great:
- When you have many tasks and need to compare them by their potential value for users.
- For products in growth or scaling stages, when it's important to quickly find the most valuable improvements.
- For mature products, where prioritization requires a balance between new features and improving existing ones.
If you're working under high uncertainty, where Reach or Impact is difficult to estimate (which is characteristic of innovative products), or if real task results are hard to measure or predict (for example, for creative or research tasks), RICE may not be the best fit – the results can be unreliable, even unusable.
Advantages and disadvantages of the RICE method
| Pros | Cons |
| --- | --- |
| Structure and objectivity. Good for comparing different tasks using a unified system. | Dependence on data accuracy. If the Impact or Reach assessment is inaccurate, priorities will be incorrect too. |
| Transparency. Easy to explain to the team why a particular task was chosen. | Complexity. Requires time and effort to evaluate each criterion. |
| Takes into account both potential benefits and implementation costs. | Underestimation of long-term improvements. Tasks with a low immediate impact but a high long-term effect may end up in low positions. |
Example of RICE application
Let's say you're working on a food delivery app and want to prioritize several potential improvements:
- Task 1: Introduction of automatic order status notifications.
- Task 2: Improving the shopping cart interface.
- Task 3: Implementing a loyalty program for regular customers.
Let's create a decision-making table. Because the exact scores depend on your product's data, the minimal Python sketch below uses purely hypothetical values, chosen only to illustrate the calculation:
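```python
# A minimal sketch of a RICE decision-making table.
# All numbers are hypothetical estimates invented for illustration;
# substitute your own Reach/Impact/Confidence/Effort data.

tasks = {
    "Order status notifications": {"reach": 8000, "impact": 1.0, "confidence": 0.8, "effort": 2},
    "Shopping cart interface":    {"reach": 5000, "impact": 1.0, "confidence": 0.8, "effort": 4},
    "Loyalty program":            {"reach": 2000, "impact": 2.0, "confidence": 0.5, "effort": 5},
}

def rice(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE = (Reach * Impact * Confidence) / Effort."""
    return reach * impact * confidence / effort

# Rank tasks from highest to lowest RICE score.
for name, t in sorted(tasks.items(), key=lambda kv: rice(**kv[1]), reverse=True):
    print(f"{name}: {rice(**t):.0f}")
# Order status notifications: 3200
# Shopping cart interface: 1000
# Loyalty program: 400
```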
Task prioritization based on RICE scores shows that:
- The introduction of automatic order status notifications is the highest priority task, as it has high reach, medium impact, and requires relatively little effort.
- Improving the shopping cart interface is important, but has a lower priority, as it requires more effort and affects fewer users.
- The loyalty program has a low priority, as it affects a limited audience and requires significant effort with not very high confidence.
The result is clear – the team can now focus on the first task as the most effective for short-term product improvement.
ICE Method
The Impact, Confidence, Effort (ICE) method is a simplified version of RICE that is also used for prioritizing tasks in product backlogs. The main difference is the absence of the "Reach" criterion, which makes the ICE method easier to apply, especially when it's not possible or not necessary to assess the reach of a task in detail.
What is ICE?
- Impact — an assessment of how significant the result of the task will be for users or businesses.
- Confidence — the degree of confidence that the task will lead to the expected result. It is rated on a scale from 1 to 10, where 1 is low confidence and 10 is high.
- Effort — the amount of resources or time that will be required to complete the task.
ICE formula:
ICE = (Impact * Confidence) / Effort
When to use this method?
ICE would be perfect for:
- Products in the early stages of development, where it's important to quickly make decisions about task priorities.
- Situations where it's difficult to estimate user reach (for example, for new features).
- Product teams that need to quickly evaluate multiple tasks.
However, it is less effective for mature products where it's necessary to consider a broader context and impact on users. In this case, it's better to use RICE, which leaves room for a deeper analysis of the potential impact on the audience.
Advantages and disadvantages of the ICE method
| Pros | Cons |
| --- | --- |
| Simplicity and speed: the method is easy to apply even without accurate data. | The lack of a reach criterion can make estimates less accurate for large-scale tasks. |
| Focus on significant factors — impact and confidence — without the need for a detailed reach assessment. | Risk of underestimating tasks with long-term or indirect impact (e.g., infrastructure improvements). |
| Easy to use for small teams and startups, where time and resource constraints play a key role. | Possible subjectivity in assessing impact and confidence without accurate data. |
Example of ICE application
Now, we still have our food delivery app and the same tasks as before. Let's see whether the task priorities change when we use the ICE method; a calculation sketch follows the list. Again, the tasks are as follows:
- Task 1: Introduction of automatic order status notifications.
- Task 2: Improving the shopping cart interface.
- Task 3: Implementing a loyalty program for regular customers.
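As a minimal sketch, here is one hypothetical set of Impact, Confidence, and Effort estimates (on a 1–10 scale, assumed purely for illustration) that roughly reproduces the scores discussed below:

```python
# A minimal ICE sketch. The component values are hypothetical 1-10
# estimates chosen for illustration; they approximately reproduce the
# final scores used in this example.

tasks = {
    "Order status notifications": {"impact": 9, "confidence": 8, "effort": 5},
    "Shopping cart interface":    {"impact": 8, "confidence": 6, "effort": 7},
    "Loyalty program":            {"impact": 7, "confidence": 5, "effort": 8},
}

def ice(impact: float, confidence: float, effort: float) -> float:
    """ICE = (Impact * Confidence) / Effort."""
    return impact * confidence / effort

for name, t in sorted(tasks.items(), key=lambda kv: ice(**kv[1]), reverse=True):
    print(f"{name}: {ice(**t):.2f}")
# Order status notifications: 14.40
# Shopping cart interface: 6.86
# Loyalty program: 4.38
```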
Task prioritization through ICE shows that:
- The introduction of automatic order status notifications has the highest priority, with an ICE score of 14.4: it combines significant impact and confidence with relatively little effort.
- Improving the shopping cart interface has moderate priority (ICE = 6.85) but requires more effort than the first task.
- The loyalty program has the lowest priority (ICE = 4.37) due to moderate impact and confidence with relatively high effort.
ICE is easy to apply and can help you quickly determine the priority of tasks in the backlog, especially under uncertainty or with limited resources. For startups and small teams, where decision-making speed matters more than precision, ICE can become an indispensable tool.
What's the new prioritization method?
Although the RICE and ICE methods are effective, they share a drawback: both depend heavily on expert assessments, which are not always reliable. A prioritization method that minimizes this dependence would have to be based on objective data and automated processes instead of subjective opinions. I came up with such a method, which I'll call DPA – the Data-Driven Prioritization Approach.
The main idea is to use objective metrics collected in real time, such as user activity, revenue, and performance issues. Prioritization is then automated by a system that analyzes these metrics according to predefined rules.
Key metrics for DPA
- Feature Usage: The number of users using a particular feature. More in-demand features receive higher priority!
- Error Rate: The more problems users encounter with a certain feature, the higher the priority for fixing or improving it.
- Return on Investment (ROI): Features that directly affect company revenue receive greater weight. This parameter takes into account both direct profit (e.g., conversion) and indirect metrics (e.g., reduction in user churn).
- Time since last feature update: If a feature hasn't been updated for a long time, it may need to be updated to improve UX or technical characteristics.
- Customer Requests: Based on data from customer support, surveys, and feedback analysis, the team can prioritize the tasks that matter most to customers.
How does it work?
- All tasks in the backlog automatically receive scores for each criterion based on real data collected using analytical tools (Google Analytics, Amplitude, CRM systems, bug trackers).
- Each criterion has its weight depending on its importance to the business. So, errors and performance may have greater weight for technical debt, while feature usage may be more important for new features.
- The algorithm regularly recalculates task priorities based on real-time data changes (for example, an increase in errors or growth in feature usage).
The formula for each task:
DPA Score = (Usage Weight * Usage Score) + (Error Rate Weight * Error Rate Score) + (ROI Weight * ROI Score) + (Update Frequency Weight * Update Frequency Score) + (Customer Requests Weight * Requests Score)
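As a minimal sketch, the calculation is a straightforward weighted sum. The criterion names follow the formula above; the weights are just one possible configuration (they happen to match the first task in the worked example later in this article):

```python
# A minimal sketch of the DPA weighted-sum calculation.
# The weights are an assumption: tune them to your business priorities
# (keeping their sum at 1.0 keeps DPA scores on the same 1-10 scale).

WEIGHTS = {
    "usage": 0.25,
    "error_rate": 0.15,
    "roi": 0.25,
    "update_frequency": 0.15,
    "customer_requests": 0.20,
}

def dpa_score(scores: dict[str, float], weights: dict[str, float] = WEIGHTS) -> float:
    """DPA Score = sum of (criterion weight * criterion score)."""
    return sum(weights[criterion] * value for criterion, value in scores.items())

# Hypothetical 1-10 scores for a single backlog task.
task = {"usage": 9, "error_rate": 2, "roi": 8, "update_frequency": 5, "customer_requests": 10}
print(f"{dpa_score(task):.2f}")  # 7.30
```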
Data sources for evaluation
- User data analysis:
  - Use analytics tools (e.g., Google Analytics, Mixpanel, or others) to obtain information on how users interact with the application.
  - Track which features are used most frequently to determine the Usage Score.
- Error reports:
  - Collect data on the number and types of errors occurring in the application. This can be done with error tracking tools such as Sentry or Bugsnag.
  - Assessing the number of errors will help you establish the Error Rate Score.
- User surveys:
  - Conduct surveys or interviews with users to understand their needs and preferences.
  - Customer Requests can be collected through feedback forms or specialized platforms such as SurveyMonkey or Typeform.
- Financial reports:
  - To assess the ROI Score, analyze past revenue and profit data associated with various application features.
  - Identify which features lead to the greatest increase in revenue, and use this data to justify your estimates.
- Update frequency:
  - Analyze how often each feature needs to be updated. This can be done by examining the history of changes in the application and the frequency of update releases.
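One practical question these sources raise is how raw metrics become the 1–10 scores used in the formula. Here is a minimal sketch, assuming simple min-max scaling across the backlog (the raw numbers below are hypothetical):

```python
# A hypothetical way to turn raw metrics into 1-10 scores via min-max scaling.
# The raw values stand in for, e.g., monthly feature-usage counts pulled
# from an analytics tool.

def to_score(value: float, lo: float, hi: float) -> float:
    """Scale a raw metric linearly onto a 1-10 range."""
    if hi == lo:
        return 5.0  # no spread in the data: fall back to a neutral score
    return 1 + 9 * (value - lo) / (hi - lo)

raw_usage = {"notifications": 42_000, "cart": 18_000, "loyalty": 5_500}
lo, hi = min(raw_usage.values()), max(raw_usage.values())
usage_scores = {name: round(to_score(v, lo, hi), 1) for name, v in raw_usage.items()}
print(usage_scores)  # {'notifications': 10.0, 'cart': 4.1, 'loyalty': 1.0}
```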
Trying it in practice
With DPA, each task is evaluated on several criteria, each with its own weight. For every task, we will use the following parameters:
- Usage Weight: how important usage is for this task.
- Usage Score: how often the feature is used.
- Error Rate Weight: how important the error rate is.
- Error Rate Score: how often errors occur.
- ROI Weight: how important return on investment is.
- ROI Score: how high the potential profit is.
- Update Frequency Weight: how important update frequency is.
- Update Frequency Score: how often the feature is updated.
- Customer Requests Weight: how important customer requests are.
- Customer Requests Score: how often customers request this feature.
Evaluating each task according to DPA criteria
Task 1: Introduction of automatic order status notifications
- Usage Weight: 0.25 — based on user data analysis.
- Usage Score: 9 — based on the fact that most users actively request information about order status, which was confirmed by surveys.
- Error Rate Weight: 0.15 — set based on error statistics identified using a monitoring tool.
- Error Rate Score: 2 — high quality of the current notification system, but some errors still occur.
- ROI Weight: 0.25 — based on an analysis of the potential increase in customer retention.
- ROI Score: 8 — takes into account the possibility of increasing the number of repeat purchases due to improved user interaction.
- Update Frequency Weight: 0.15 — data on the frequency of updates to the notification system.
- Update Frequency Score: 5 — frequent system updates are required to maintain relevance.
- Customer Requests Weight: 0.20 — based on user surveys where they noted high interest in this feature.
- Customer Requests Score: 10 — maximum score due to high number of requests.
DPA Score = 7.30
Task 2: Improving the shopping cart interface
- Usage Weight: 0.30 — set based on interface usage analysis.
- Usage Score: 7 — average frequency of use, confirmed by analytics.
- Error Rate Weight: 0.20 — based on data about the number of interface issues.
- Error Rate Score: 4 — indicates the presence of some errors.
- ROI Weight: 0.20 — return on investment was assessed based on historical data.
- ROI Score: 6 — average return from interface improvements.
- Update Frequency Weight: 0.10 — requires updating once every six months.
- Update Frequency Score: 4 — considered necessary to update.
- Customer Requests Weight: 0.20 — collected through user surveys.
- Customer Requests Score: 6 — moderate number of requests from users.
DPA Score = 5.70
Task 3: Implementation of a loyalty program
- Usage Weight: 0.20 — based on user analysis.
- Usage Score: 5 — low frequency of use among regular customers.
- Error Rate Weight: 0.20 — based on analysis of error frequency in other loyalty programs.
- Error Rate Score: 3 — indicates a low error rate.
- ROI Weight: 0.30 — high weight due to potential benefit from customer retention.
- ROI Score: 7 — takes into account the possibility of a significant increase in repeat purchases.
- Update Frequency Weight: 0.10 — based on analysis of the need to update the program.
- Update Frequency Score: 2 — infrequent updating of the loyalty program.
- Customer Requests Weight: 0.20 — based on surveys showing interest in this program.
- Customer Requests Score: 6 — moderate number of requests from users.
DPA Score = 5.10
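For completeness, here is the same calculation in Python, using exactly the weights and scores listed above (note that in this walkthrough each task carries its own set of weights):

```python
# Recomputing the three DPA scores from the estimates above.

tasks = {
    "Order status notifications": {
        "weights": {"usage": 0.25, "errors": 0.15, "roi": 0.25, "updates": 0.15, "requests": 0.20},
        "scores":  {"usage": 9,    "errors": 2,    "roi": 8,    "updates": 5,    "requests": 10},
    },
    "Shopping cart interface": {
        "weights": {"usage": 0.30, "errors": 0.20, "roi": 0.20, "updates": 0.10, "requests": 0.20},
        "scores":  {"usage": 7,    "errors": 4,    "roi": 6,    "updates": 4,    "requests": 6},
    },
    "Loyalty program": {
        "weights": {"usage": 0.20, "errors": 0.20, "roi": 0.30, "updates": 0.10, "requests": 0.20},
        "scores":  {"usage": 5,    "errors": 3,    "roi": 7,    "updates": 2,    "requests": 6},
    },
}

for name, task in tasks.items():
    score = sum(task["weights"][c] * task["scores"][c] for c in task["weights"])
    print(f"{name}: {score:.2f}")
# Order status notifications: 7.30
# Shopping cart interface: 5.70
# Loyalty program: 5.10
```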
Comparing priorities
Now we can compare the DPA scores for each task:
- Task 1 (Order status notifications): DPA Score = 7.30
- Task 2 (Improving the shopping cart interface): DPA Score = 5.70
- Task 3 (Loyalty program): DPA Score = 5.10
Task prioritization based on DPA scores shows that:
- Introduction of automatic order status notifications has the highest priority with a DPA Score of 7.30, indicating high importance of usage and user requests.
- Improving the shopping cart interface has medium priority (DPA Score = 5.70), with good scores but less significant impact.
- Loyalty program has the lowest priority (DPA Score = 5.10) due to relatively low frequency of use and fewer customer requests.
Comparison of RICE, ICE, DPA
For clarity, let's display the data in a table:
| Method | Criteria | Pros | Cons | When to use |
| --- | --- | --- | --- | --- |
| RICE | Reach, Impact, Confidence, Effort | Considers reach and impact; more accurate estimates | Requires a lot of data; some subjectivity remains | For mature products with large reach |
| ICE | Impact, Confidence, Effort | Simple and quick to calculate | No consideration of reach; higher subjectivity | For startups and products in early stages |
| DPA | Usage, Error Rate, ROI, etc. | Automation; objective metrics | Complex setup; dependence on data quality | For large products with active usage |
As I’ve already mentioned a few times, the choice of an appropriate prioritization method will depend on the product development stage, data availability, and team goals:
- RICE is perfect for mature products with a large user base, where it's critical to consider the reach of tasks. However, its dependence on expert assessments and the need to collect extensive data complicate its application.
- ICE is a simplified version of RICE, working excellently for startups or products in early stages when quick decision-making is required. It's easy to use but may be less accurate due to the lack of reach consideration.
- DPA is the most objective method, fully relying on data. It's optimal for products with large volumes of data and requires minimal team involvement. However, the complexity of setup and dependence on data quality can become limiting factors.
Conclusion
IT product management is a unique realm, where making the right choices can be the difference between a successful launch and a stalled project. Thankfully, we are not the first to face these challenges, and there are systems that can support us in the process of growing a company. Each of the approaches we've discussed today is unique in its own way and can provide you with robust management tools, depending on your goal:
- RICE and ICE are focused on quantitative indicators, providing simplicity and intuitive assessment. They are suitable for teams with limited resources aiming for quick results.
- DPA offers a deeper, data-based approach, considering multiple factors, making it useful for complex projects with large volumes of user data.
The choice depends on the context of your product, how mature it is, and what your users need. There is really no universal solution, and it's important to adapt the approach to your specific requirements. Regularly updating priorities based on data and feedback allows teams to remain flexible and create higher-quality products, which ultimately leads to satisfied customers and long-term success.