Prioritization Frameworks

What is Prioritization?

One of the most challenging and important aspects of Product Management is prioritization. 

In product, prioritization is on a whole other level! You’ve got a list of unprioritized features and tasks splayed out in front of you. The engineers are telling you that Feature A will be really cool and will take you to the next level. But a key stakeholder is gently suggesting that Feature B be included in V1. Finally, your Data Analyst is convinced that Feature B is completely unnecessary, and that users are crying out for Feature C.

Who decides what gets worked on? You.

Prioritization is absolutely essential to product teams and to product development, but choosing the right priorities can feel daunting.

 

Let’s look at some of the most commonly used prioritization frameworks in the industry.


The MoSCoW Method



The name is an acronym of four prioritization categories: Must have (Mo), Should have (S), Could have (Co), and Won’t have (W).

Must have

‘Must have’ represents the features that you absolutely should not launch without. This could be for legal, safety, or business reasons.

To work out if something qualifies as ‘Must have’, think about the worst- and best-case scenarios of not including it. If you can’t picture success without it, it’s a Must have!

Should have

‘Should have’ is for things that are important to the overall success of the product, but you’re not destined for disaster without them.

Could have

‘Could have’ things would be nice to include if you have the resources, but aren’t necessary for success. The line between ‘Could have’ and ‘Should have’ can seem very thin. 

To work out what belongs where, think of how each requirement (or lack thereof) will affect the customer experience. The smaller the impact, the further down the priority list the requirement goes!

Won’t have

‘Won’t have’ doesn’t mean ‘this requirement is trash and it will NEVER be included’, it just means ‘not this time.’

It could be for a variety of reasons, like a lack of resources or time. In any case, it helps you and your stakeholders agree on what won’t make it into your next release, which greatly helps to manage their expectations.

PG Tip: When you start prioritizing features using the MoSCoW method, classify them all as “Won’t haves” first, and then justify why each one needs a higher rank.
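
To make the tip concrete, here is a minimal sketch in Python (the Feature class, feature names, and justifications are all invented for illustration) of a backlog where everything starts as a ‘Won’t have’ and only moves up with an explicit justification:

    # A sketch of the PG Tip above: every item starts in the "Won't have"
    # bucket and is only promoted with an explicit justification.
    # Feature names and reasons below are made up.
    from dataclasses import dataclass

    CATEGORIES = ["Must have", "Should have", "Could have", "Won't have"]

    @dataclass
    class Feature:
        name: str
        category: str = "Won't have"   # everything starts here
        justification: str = ""        # why it deserves a higher rank

        def promote(self, category: str, justification: str) -> None:
            if category not in CATEGORIES:
                raise ValueError(f"Unknown MoSCoW category: {category}")
            if not justification:
                raise ValueError("A promotion needs a justification")
            self.category = category
            self.justification = justification

    backlog = [Feature("Password reset"), Feature("Dark mode"), Feature("CSV export")]
    backlog[0].promote("Must have", "Users are locked out of the product without it")
    backlog[1].promote("Could have", "Nice polish, but no impact on the core workflow")

    for f in backlog:
        print(f"{f.category:<12} | {f.name:<15} | {f.justification or 'not yet justified'}")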



RICE Scoring



Another key prioritization methodology is the RICE scoring system, which uses four factors to help assess priority: Reach, Impact, Confidence, and Effort. The formula for calculating the RICE score for each feature is:

RICE Score = (Reach * Impact * Confidence) / Effort

 

Reach

Estimate how many people will be impacted by a feature or release in a given period of time (per month, per quarter, etc.). As with all things in Product, make sure your estimate is backed up by data and not just off the top of your head.

Impact

During the planning stage, Impact is difficult to measure precisely. Think about the goal you’re trying to reach: it could be to delight customers (measured in positive reviews and referrals) or to reduce abandonment.

There’s no real scientific method for measuring impact. Intercom recommends a multiple-choice scale:

  • 3 = massive impact
  • 2 = high impact
  • 1 = medium impact
  • 0.5 = low impact
  • 0.25 = minimal impact

Confidence

Confidence is expressed as a percentage and reflects how much evidence sits behind your Reach and Impact estimates. A high Confidence can boost the priority of something you believe in but can’t yet prove with data, while a low Confidence helps de-prioritize things you’d rather not take a risk on. This is where the Product Manager brings in their experience to settle on a confidence percentage based on the available evidence and their intuition.

Generally, anything above 80% is considered a high confidence score, and anything below 50% is pretty much unqualified.

Effort

Estimate the total amount of time the feature/project will need from all team members: product, engineering and design. 

Effort is estimated as a number of “person-months” – the work that one team member can do in a month. The more time allotted to a project, the higher the reach, impact, and confidence will need to be to make it worth the effort.

PG Tip: Start with an Impact score of 0.25 and a Confidence of 50%, and then justify every move up the scale.
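
Putting the four factors together, here is a small sketch in Python of the RICE formula above, applied to a few invented features with made-up numbers:

    # A small sketch of the RICE formula, applied to invented features.
    # Reach is users per quarter, Impact uses the 0.25-3 scale, Confidence
    # is a fraction (0.8 = 80%), and Effort is in person-months.

    def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
        """RICE Score = (Reach * Impact * Confidence) / Effort."""
        return (reach * impact * confidence) / effort

    features = {
        # name: (reach, impact, confidence, effort) -- hypothetical numbers
        "Feature A": (5000, 1.0, 0.8, 3),
        "Feature B": (2000, 2.0, 0.5, 2),
        "Feature C": (8000, 0.5, 1.0, 4),
    }

    ranked = sorted(features.items(), key=lambda kv: rice_score(*kv[1]), reverse=True)
    for name, inputs in ranked:
        print(f"{name}: RICE = {rice_score(*inputs):.0f}")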



Kano Model



The idea behind the Kano model is that Customer Satisfaction depends on the level of Functionality that a feature provides (how well a feature is implemented).

The Kano model is usually represented as a graph plotting customer satisfaction against the level of functionality, with features falling into three categories:

Delighters: The features that customers will perceive as going ‘above and beyond’ their expectations. These are the things that will differentiate you from your competition.

Performance features: The better these are implemented, the better customers respond; satisfaction rises roughly in proportion to the investment you make in them.

Basic features: The minimum expected by customers to solve their problems. Without these, the product is basically useless to them.

The main idea behind the Kano model is that the more you focus on the features that fall under these three brackets, the higher your level of customer satisfaction will be.

To find out how customers value certain features, use questionnaires asking how their experience of your product would change with and without them (look up sample Kano questionnaires online to design one for your use case). The Kano model is useful when you’re prioritizing product features based on the customer’s perception of value.

Perception is the key word here. If the customer lives in an arid climate, rain-sensing wipers may seem unimportant to them, and there will be no delight. Using the Kano model (or any other model incorporating customer value) requires you to know your customer well.
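
As a rough illustration of how such questionnaire answers can be turned into the three categories, here is a simplified sketch in Python. It condenses the standard Kano evaluation table (and skips its ‘reverse’ and ‘questionable’ outcomes); the features and answers are invented:

    # Simplified Kano questionnaire scoring. For each feature, a customer
    # answers how they would feel WITH it (functional) and WITHOUT it
    # (dysfunctional) on the scale: like, expect, neutral, live-with, dislike.
    # The mapping condenses the standard Kano evaluation table and omits its
    # "reverse" and "questionable" outcomes for brevity.

    def classify(functional: str, dysfunctional: str) -> str:
        if functional == "like" and dysfunctional == "dislike":
            return "Performance"   # wants it, and hates losing it
        if functional == "like":
            return "Delighter"     # wants it, but could live without it
        if dysfunctional == "dislike":
            return "Basic"         # takes it for granted, hates losing it
        return "Indifferent"       # neither answer is strong

    answers = {
        "Rain-sensing wipers": ("like", "neutral"),
        "Heated seats":        ("like", "dislike"),
        "Working brakes":      ("neutral", "dislike"),
        "Branded keychain":    ("neutral", "neutral"),
    }

    for feature, (functional, dysfunctional) in answers.items():
        print(f"{feature:<20} -> {classify(functional, dysfunctional)}")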

PG Tip: As time goes on, you may find that features which used to be delighters drift closer to ‘Basic features’ as technology catches up and customers come to expect them, so it’s important to reassess periodically.



Value vs Complexity Quadrant



A Value vs. Complexity Quadrant is a prioritization instrument in the form of a matrix. It is a simple 2 x 2 grid with “Value” plotted against “Complexity.”

Value is the benefit your customers and your business get out of the feature. Is the feature going to alleviate any customers’ pains, improve their day-to-day workflow, and help them achieve the desired outcome? Also, is the feature going to have a positive impact on the bottom line of your business? 

Complexity (or Effort) is what it takes for your organization to deliver this feature. It’s not enough that we create a feature that our customers love. The feature or product must also work for our business. Can you afford the cost of building and provisioning the feature? Operational costs, development time, skills, training, technology, and infrastructure costs are just some of the categories that you have to think about when estimating complexity.

If you can get more value with less effort, that’s a feature you should prioritize.

 

The quadrants created by this matrix are: 

  1. Quick Wins (upper-left). Due to their high value and low complexity, these features are the low-hanging-fruit opportunities in our business that we must execute with top priority.
  2. Major Projects, Big Bets, or Potential Features (upper-right). The initiatives that fall into this block are the big releases that we know are valuable but that demand significant resources and cost, so they carry more risk and need careful planning before we take them on.
  3. Fill-Ins or Maybes (lower-left). This quadrant usually holds the “nice to have” features: small improvements to the interface and “one day, maybe” ideas.
  4. Time Sink Features (lower-right). Time sinks are the initiatives that we never want our team to be working on.
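
As a minimal sketch, the four quadrants above can be mapped from a pair of scores per feature. The 1–10 scales, the midpoint, and the feature names and scores below are assumptions made for illustration:

    # Placing features on the 2 x 2 grid, assuming each has already been
    # given a 1-10 value score and a 1-10 complexity score. Scores and
    # feature names are invented.

    def quadrant(value: int, complexity: int, midpoint: int = 5) -> str:
        high_value = value > midpoint
        high_complexity = complexity > midpoint
        if high_value and not high_complexity:
            return "Quick Win"
        if high_value and high_complexity:
            return "Major Project / Big Bet"
        if not high_value and not high_complexity:
            return "Fill-In / Maybe"
        return "Time Sink"

    features = {
        "SSO login":              (8, 3),
        "Full platform redesign": (9, 9),
        "New icon set":           (3, 2),
        "Legacy data migration":  (2, 8),
    }

    for name, (value, complexity) in features.items():
        print(f"{name:<24} -> {quadrant(value, complexity)}")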

PG Tip: Don’t use this framework if you’re working on a super mature product with a long list of features.



Weighted Scoring Model


Here’s how to use the Weighted Scoring Prioritization framework:

  • Start with a clear strategic overview of your next product release.
  • Compile a list of product features that are related to that release. You don’t want to score every single feature in your backlog. Identify and group only the most relevant features for that release theme.
  • Define the scoring criteria and assign weights to each driver. Come up with a list of drivers (or parameters) and decide their importance by giving each driver a specific weight from 0% (smallest contribution to the overall score) to 100% (biggest contribution to the score). Make sure all of the stakeholders agree on each criterion.
  • Go through each feature and assign a score from 1 to 100 for each driver. The higher the score, the higher the impact that feature has on that driver.

In an example scorecard, each feature is a row, each weighted driver is a column, and the final column holds the feature’s weighted total.
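
The arithmetic behind such a scorecard is simple: multiply each feature’s score on a driver by that driver’s weight, then add the results. Here is a rough sketch in Python, with invented drivers, weights, features, and scores:

    # Weighted scoring sketch: driver weights are fractions summing to 1
    # (i.e. 100%), each feature gets a 1-100 score per driver, and the
    # weighted scores are summed. All values below are invented.

    drivers = {
        "Customer value":  0.4,
        "Revenue impact":  0.3,
        "Strategic fit":   0.2,
        "Differentiation": 0.1,
    }

    scores = {
        "Feature A": {"Customer value": 80, "Revenue impact": 60,
                      "Strategic fit": 70, "Differentiation": 40},
        "Feature B": {"Customer value": 50, "Revenue impact": 90,
                      "Strategic fit": 40, "Differentiation": 70},
    }

    def weighted_score(per_driver: dict) -> float:
        return sum(weight * per_driver[driver] for driver, weight in drivers.items())

    for feature in sorted(scores, key=lambda f: weighted_score(scores[f]), reverse=True):
        print(f"{feature}: {weighted_score(scores[feature]):.1f}")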

PG Tip: Be careful when defining drivers and weights, and make sure the stakeholders are aligned on them before you start scoring features. Any bias here could lead to a wrongly prioritized list.



Do’s and Don'ts of Prioritization


Do’s

  • Approach prioritization as a team activity; not only does it create buy-in on the team, you get different perspectives. It’s also a lot more fun.
  • Limit the number of items you are prioritizing – focus on the biggest items rather than the details.
  • Categorize and group initiatives together into strategic themes (for example, “improving satisfaction” for a particular persona would be a good way to group).
  • Before you begin prioritizing, it’s helpful if you understand the customer value for each initiative. The customer value should be rooted in evidence that you’ve gathered from customers rather than your opinions.
  • Before you begin, have a rough estimate of the cost. Even T-shirt sizes of “small,” “medium,” and “large” will be helpful during the process.

Don'ts

  • Don’t prioritize based on what your competitors are doing.  Your product’s development should be based on the research, your customer feedback and innovative ideas that you and your team compile — not on what another product is doing.
  • Don’t prioritize based on requests from your sales team. Your sales team will always have a feature-request opinion. But relying on their opinion is the fastest way to lose direction for the product’s strategic purpose.
  • Don’t prioritize what's easy, even if your developers tell you they can get a lot of items checked off the list quickly. It might sound like a viable option, but it isn’t a product strategy. In fact, doing so is a strong indication that you’re not working toward an objective for your product.
  • Don’t prioritize based on your gut instinct alone. Driving a product to a successful market launch demands hard evidence and a prioritization framework to support the product manager’s decisions. Think: industry research, user surveys, conversations with customers, feedback from the company’s sales or support teams.


Summary of Prioritization Frameworks


Knowing which prioritization framework to use is tough! The Kano model is useful for making customer-centric decisions and focuses on delight, but it can take time to carry out all the questionnaires needed for your insights to be accurate and fair.

Many people like the RICE scoring system because it factors confidence into the score, but the inputs are still estimates, so a lot of uncertainty remains.

MoSCoW focuses on what matters to both customers and stakeholders, which is particularly useful for Product Managers who struggle with managing stakeholder expectations. It’s also the simplest to understand for non-technical stakeholders. However, there’s nothing stopping you from putting too many things into ‘Must have’ and overextending your resources.

MoSCoW

  • Choose when: You need to communicate what needs to be included (or excluded) in a feature release
  • Pros: Identifies product launch criteria
  • Cons: Doesn’t set prioritization between features grouped in the same bucket

RICE

  • Choose when: You need a proven, objective scoring system instead of developing one from scratch
  • Pros: Quantifies total impact per time worked
  • Cons: Its predefined scoring factors don’t allow for customization, so it may not be a perfect fit for your organization

Kano Model

  • Choose when: You need to make better decisions for product improvements and add-ons
  • Pros: Prioritizes features based on the customers’ perception of value
  • Cons: It doesn’t take into account complexity or effort; customer surveys can be time-consuming

Value vs. Complexity Quadrant 

  • Choose when: Working on a new product, building an MVP or when development resources are scarce
  • Pros: Great for identifying quick wins and low-hanging-fruit opportunities
  • Cons: Hard to navigate when there’s an extensive list of features

Weighted Scoring

  • Choose when: Weighting a long list of feature drivers and product initiatives
  • Pros: Quantifies feature importance and ROI
  • Cons: Drivers’ weights can be manipulated to favor political decisions; requires full team alignment on the different drivers and features involved in the scoring process
