Bringing a new approach to usability studies — the Kano Model

Monday morning, 10 a.m. Weekly alignment is about to start.

Everybody is there: the project manager, developers, stakeholders, and everyone else who has a say in the product.

Trello is up: endless lists of tickets displaying various contributions, priorities, and ideas about what should be part of the first release. You’re trying to balance budget and deadline, and you have a general sense of what could work and what won’t. However, you have no organized information or verified prioritized data to convince everyone.

You have too many features set to go live with the first release. Each feature comes with its own user interface: menu items, buttons, and other elements that make things complicated. When you increase the complexity of a product, you degrade the user experience. As a result, users need hand-holding and, a few months later, still haven't touched half of the features. Instead of deciding what to cut after resources have already been spent on features that seemed like safe bets, you want to study your users before development begins, so that you can build a product with the right features in the right order.

Satisfied users, a nurtured client, and cost-effective resource planning are the recipe for success. But you have to set the table first. How does your team measure user satisfaction? How do you create a product roadmap with organized user data to advise the client? How do you help your development team define priorities that take into account the effort that goes into building a feature?

At Vermonster, we tackle this problem using the Kano model.

In 1984, Japanese researcher Noriaki Kano established a set of techniques, now known as the Kano model, to measure and analyze user satisfaction during product development. The Kano model defines product attributes by their effect on customer satisfaction.

Customer satisfaction chart

Product functionality chart

Product attributes are classified into one of four categories, plotted on a graph with user satisfaction on one axis and feature functionality on the other:

Must-Be Attributes

These are the attributes that the user expects to have in the product, like the way we expect our iPhones to have touch screens and our phone plans to include unlimited talk and text. Do you see how the curve on the graph never rises above the neutral line? These features are already expected, so improving them yields diminishing returns in customer satisfaction. However, doing them poorly leads to extreme frustration and dissatisfaction. Once a user is upset about such a feature, she often will not use the product or will have a biased opinion of the rest of the product's features.

Must-be feature

One-Dimensional Attributes

Some features have a linear satisfaction-versus-functionality relationship: the more functionality we implement, the more satisfied our users will be. The cleanliness of your hotel room is a one-dimensional feature, as are your data plan and your iPhone's battery life. Increased functionality requires increased investment: the more functionality we add, the more resources and time we need to invest. A cleaner hotel room requires more staff hours, and improved iPhone battery life requires more time from Apple's engineering staff.

One-dimensional feature

Attractive Attributes

In contrast to must-be attributes, these are unexpected features that create a positive response, ranging from a simple grin to "mind-blown." The attractiveness curve never goes below the horizontal axis because users can't be dissatisfied by a feature they did not expect from a product. For example, Steve Jobs blew our minds when he introduced the first iPhone with a touch screen, and the simple treats we sometimes receive in hotel lobbies always put a smile on our faces in large part because we don't expect them.

Attractive feature

Indifferent Attributes

These are the features that don’t have any effect on how satisfied our users will be. Their presence does not make a difference, and the effort to build these features would be a waste of resources.

Indifferent feature

The Survey

To determine which category a product feature falls under, we ask three questions measuring functionality (how the user feels if the feature is present), dysfunctionality (how the user feels if it is absent), and importance. For the first two, we give the user five answers to choose from: I like it, I expect it, I am neutral, I can tolerate it, I dislike it. Each pair of answers is mapped onto this chart, which we use to determine a feature's classification.

Kano model graph

The two new categories we see here arise from how users answer. The first is "Questionable": if a user gives the same answer, I like it or I dislike it, to both the functional and dysfunctional questions, we have a contradiction. Asking about both the presence and the absence of a feature is an unconventional way to ask someone whether they like something, so a few contradictions are expected; however, if the majority of our features fall under "Questionable," then we are likely at fault and need to revise our questionnaire. The second category is "Reverse," a situation where the user wants the opposite of what we're proposing. In this case, we simply swap the functional and dysfunctional answers and find the right category.
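
To make the chart concrete, here is a minimal sketch in Python of how one respondent's answer pair could be classified. The cell values follow a commonly published version of the Kano evaluation table rather than the exact chart shown above, so treat the mapping and the answer labels as illustrative assumptions.

```python
# A minimal sketch of classifying one survey response pair with a Kano
# evaluation table. The answer labels and cell values follow a commonly
# published version of the table; they are an assumption, not a copy of
# the chart above.

# Rows: answer to the functional question (how the user feels if the feature
# is present). Columns: answer to the dysfunctional question (how the user
# feels if it is absent).
# A = Attractive, O = One-dimensional, M = Must-be, I = Indifferent,
# R = Reverse, Q = Questionable
EVALUATION_TABLE = {
    "like":     {"like": "Q", "expect": "A", "neutral": "A", "tolerate": "A", "dislike": "O"},
    "expect":   {"like": "R", "expect": "I", "neutral": "I", "tolerate": "I", "dislike": "M"},
    "neutral":  {"like": "R", "expect": "I", "neutral": "I", "tolerate": "I", "dislike": "M"},
    "tolerate": {"like": "R", "expect": "I", "neutral": "I", "tolerate": "I", "dislike": "M"},
    "dislike":  {"like": "R", "expect": "R", "neutral": "R", "tolerate": "R", "dislike": "Q"},
}


def classify(functional: str, dysfunctional: str) -> str:
    """Return the Kano category for one respondent's pair of answers."""
    category = EVALUATION_TABLE[functional][dysfunctional]
    if category == "R":
        # Reverse: the user wants the opposite of what we proposed, so swap
        # the functional and dysfunctional answers and classify again.
        category = EVALUATION_TABLE[dysfunctional][functional]
    return category


print(classify("like", "dislike"))     # O: one-dimensional
print(classify("like", "neutral"))     # A: attractive
print(classify("neutral", "dislike"))  # M: must-be
print(classify("dislike", "like"))     # O after the reverse swap
```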

Once we have all the answers, prioritization is key. User needs and development time are the two factors that need to be taken most seriously. The third question, importance, helps define user needs. To measure it, we use a five-point scale: not at all important, somewhat important, important, very important, extremely important.

Feature importance chart

To prioritize development resources, we estimate the level of effort that goes into building each feature. We do this internally, while the other three attributes (functional, dysfunctional, importance) are defined by our users. We work with four levels of effort: high, medium, low, and existing feature. While importance captures priority on the user's end, level of effort adds another layer to be used internally. If two attractive features have similar importance for the user, then, within budget constraints, the feature with the lower effort level gets the higher build priority.
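
Here is a minimal sketch of what that ordering could look like, continuing the Python example. The numeric mappings, the category ordering (must-be before one-dimensional before attractive, with indifferent last), and the example feature names are illustrative assumptions, not fixed rules.

```python
# A minimal sketch of ordering features once each one has a Kano category,
# an average importance rating from users, and an internal level of effort.
# The numeric weights and the example backlog are illustrative assumptions.
from dataclasses import dataclass

# User-facing five-point importance scale mapped to numbers.
IMPORTANCE = {
    "not at all important": 1,
    "somewhat important": 2,
    "important": 3,
    "very important": 4,
    "extremely important": 5,
}

# Internal estimate of the effort to build the feature.
EFFORT = {"existing": 0, "low": 1, "medium": 2, "high": 3}

# One reasonable ordering of categories; questionable and reverse results
# are resolved or discarded before this step.
CATEGORY_RANK = {"M": 0, "O": 1, "A": 2, "I": 3}


def average_importance(answers: list[str]) -> float:
    """Average the users' importance answers on the five-point scale."""
    return sum(IMPORTANCE[a] for a in answers) / len(answers)


@dataclass
class Feature:
    name: str
    category: str      # result of the classification step
    importance: float  # average of the users' importance answers (1-5)
    effort: str        # "existing", "low", "medium", or "high"


def build_order(features: list[Feature]) -> list[Feature]:
    """Sort by category, then importance (user side), then effort (our side)."""
    return sorted(
        features,
        key=lambda f: (CATEGORY_RANK[f.category], -f.importance, EFFORT[f.effort]),
    )


backlog = [
    Feature("Push notifications", "A",
            average_importance(["important", "very important"]), "high"),
    Feature("Password reset", "M",
            average_importance(["extremely important", "very important"]), "low"),
    Feature("Dark mode", "A",
            average_importance(["important", "very important"]), "low"),
]
for feature in build_order(backlog):
    print(feature.name)  # Password reset, Dark mode, Push notifications
```

Note how the two attractive features with equal importance are separated only by effort: the lower-effort feature lands higher in the build order, which is exactly the tie-breaking rule described above.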

Prototyping and the Kano Model

At Vermonster, once we have determined which features could or should be part of the product, we prototype each feature we would like to test with real users. It is crucial to have a platform ready for users to perform their actions, instead of just talking them through the steps you plan to include in the product. A/B testing with two or more prototypes is useful when there are conflicting features that cannot be tested in a single prototype or user flow. If one feature or survey question requires subtasks, we break the question down. That said, usability tests are tiring for everybody involved in the process, and limiting the number of features tested in a single day leads to better research and results. In such cases, we focus on the features that will benefit our next steps toward user satisfaction and help resolve back-end conflicts with stakeholders.

The Kano model turned out to be a perfect fit for our small teams, where complementary skills are essential, and it shortened our delivery time. Yes, leaving our conventional methods of user research and fitting the Kano model into our development process was challenging, and at times funny (learning to write questions in a way that made sense to users took practice), but it was rewarding in the end.

One more thing: remember how you felt a few years ago when hotels started telling you that you'd get free Wi-Fi? Think about how you'd feel now. The categories these features fall under are not static: the way users feel about a feature right now is no guarantee of how they will feel a year from now. Over time, attractive features turn into one-dimensional features, and those turn into must-be features. This decay can be caused by competitor products, the pace of technological development, or the simple fact that it is human nature to get used to things. A usability test conducted today may become irrelevant in the future, so it is important to run these studies periodically.
