Evangelizing A Product Concept By Validating A Design

January 17, 2016 by David Anderson

The following article was first published June 7th, 2000. It describes a project to create a design for a set of wireless data applications for Nokia at their Americas office in Las Colinas, TX. This edition has been slightly edited for style, and in content to keep it relevant for an audience 16 years later. The original did not contain the name of the client, the team members or the narrative of the project, in order to protect client confidentiality. Given that almost 16 years have passed and the client is no longer in business, I feel it is reasonable to elaborate on the original article with the narrative detail.

At the time, I thought of what we were doing as Agile UX, because the design approach is iterative and rapid. Actually, we didn’t have the “Agile” word until 2001. The style of this process felt like eXtreme Programming for UX. It didn’t occur to me to think of it as Lean UX. The client was Nokia, at the time, the world’s largest mobile phone company. They were extremely secretive about their future plans. This wasn’t a startup even though the project was a “startup” within Nokia and the domain – wireless data applications – was bleeding edge at the time. The technology was WAP and our purpose was not only to create a business Nokia could exploit but to enable Nokia to use our work as an archetype to encourage network operators and 3rd parties to develop usable applications on WAP. [For those not familiar with WAP, it was a 2G wireless data technology which used a text-based screen format, via the hard key interface of what were known as “candy bar” phones.]

Evangelizing A Product Concept By Validating A Design

In larger technology companies it can often be difficult to develop an understanding of the advantages of doing good product design early. This is even more so with UX design processes that should happen close to the beginning of a project, while the product is being defined and the requirements written. At this stage there is often no funding to build prototypes or do testing, and with recognizable brands it often isn’t acceptable to test something in the market. It is not unusual to find a number of very skeptical people around who question the time, budget and effort which must go into a UX design for an innovative new product or line of products, when there are easy, low-risk alternatives for enhancing existing products.

So how do you overcome this skepticism? How do you sell early UX design to a skeptical audience? Working at Nokia’s Americas HQ [in 2000] I discovered that we could win people over by using usability testing to give influencers and decision makers experiential immersion with a working prototype.

In human factors and usability engineering, employees of the firm are considered “invalid” participants in tests because they might have foresight into how the product operates. The idea is that employees spoil the scientific nature of the test results. [In 2000, human factors and usability testing was very much a science conducted by professionally trained people; in 2016 that is much less so.] However, by carefully selecting a set of “invalid” test participants, you can sow the seeds for future success with the product and gain buy-in to fund the project to completion.

This strategy is not without its risks. If your design isn’t good, the project may get killed a lot earlier. Visibility can be a double-edged sword. This article seeks to advise you on how to select the candidate evangelists and how to manage the risks of negative reactions by ramping the introduction of participants in product testing as you refine the level of fidelity and usability. The goal is to gain an influential band of company evangelists for your UX design. These people should become the ones who will go forth and spread the word, enabling you to get the budget and schedule you need to create a production product.

Creating WAP Applications for Nokia

At Nokia’s Americas HQ in Las Colinas, TX, a young business school graduate who held a director-level position in the business development and product strategy unit was tasked with proving the value of, and providing a demonstration platform for, the new wireless data technology known as WAP. He hired the consulting firm I worked with to develop this with them. We put together a small cross-functional team of 4 people and we deployed to cubicles on the same floor of the building in Las Colinas, TX. We were surrounded by people who did sales, marketing and strategic planning for Nokia mobile phone sales all across Latin America. For many people on the floor, Spanish was their native language. Our team consisted of: me, officially as business analyst (but later UX designer); Carly, a technical writer; Scott, a web developer; and Terry, a usability engineer.

The goal was to develop a suite of location-based services and to include a transactional or billing component. The purpose was to demonstrate to wireless network operators that money could be made from wireless data applications and that consumers would find the services valuable.

Exploring the Market and Product Opportunity

In the beginning we sat down with 10-12 product managers, strategic planning and marketing people. We didn’t make much headway using our firm’s playbook, which was based on requirements capture techniques described by people such as Gause, Weinberg and Gilb. The problem was that the field was too nascent and the clients had little concept of what they wanted save for the high-level statement about location-based services and billing. The breakthrough came when I introduced them to the technique of writing personas. We developed a persona for each market segment. One we focused on a lot in the early days of the project was the “soccer mom” market segment.

It was on this project that I developed my Lifestyle Snapshots technique which was later to be featured in Tamara Adlin and John Pruitt’s book, The Essential Persona Lifecycle. The scenario I wrote up for the book was different from those that I described on my own blog in April 2000 [to be reposted at this site soon].

Once we had a set of lifestyle snapshots for each persona, we began to look at opportunities for the technology to intervene in the lives of the personas – to do things for them that we thought would be valuable. We prioritized this list of opportunities and developed usage scenarios for them.

This whole process took less than 2 weeks as we bootstrapped the project. It was easy enough to block out chunks of time to pick the brains of the strategic planning, marketing and product management people and gain some consensus and agreement on the specifics – an outline of the product – which personas, which opportunities for intervention with technology, which usage scenarios.

Then the project moved into a new phase.

Rapidly Developing and Validating a Product Prototype

In phase two, we switched to a mode where Terry and I would work together to design the screens for a usage scenario. While I was working on the designs, Terry would devise how to test them. We would collaborate for perhaps 30 to 60 minutes, work alone for maybe 90 minutes, then reconvene to compare our work and refine it with small iterative changes. By lunch time each day, we had a testable design and a set of tests. Scott had been asked to work a “late shift” and he didn’t start until after lunch. We handed off the design to him and asked him to code it up. When we returned the next morning the code was working. It took a few days to reach a critical mass and get a little bit of a pipeline going. At that point we started to schedule usability tests for several hours each morning. Over lunch Terry and I would analyze the results and iterate on the designs. This could take until 4pm in some cases. We’d hand off modified designs to Scott and collect the working code the next morning. We iterated a version of the product every day and tested it with a fresh set of users every day. All of the participants in the tests were Nokia employees or their relatives and they all signed NDAs. Throughout the process, Carly kept pace with us with user documentation, a formal specification (we were an outsource consulting firm after all), and content for the screens/applications.

To give you an idea of the scenarios we were capable of delivering: “my kids are hungry. I just picked them up from soccer. Give me directions to the closest McDonald’s with a jungle gym facility.” This scenario was well within our range and capability to deliver – although the data and directions were mocked up for our prototype, the business development people had sources for such data even in 2000.

Test as early as possible

We didn’t have the word “Agile” in 2000. However, the process we were developing was partly inspired by eXtreme Programming. Where I had used an FDD-inspired UX process on other line-of-business apps, for this exploratory (“startup”) environment I needed something more iterative than incremental. We got that from the cross-functional team and the daily work cycle of design-code-test.

Terry and I realized we wanted to test as soon as possible – just days into the 2nd phase of the project and every day from then on. We wanted to be seen to provide frequent, tangible deliverables of real business value [yes, I used that phrase in 2000.] The sponsor was spending budget on this exploratory work and we wanted to show him something quickly. We wanted to build trust with results.

Guideline: Test as early as possible and never later than 3 months into a very large project.

More than an MVP – Enough Functionality to Create Evangelists

The size and scope of the project will determine exactly what you are testing within a given time period. For a big project, you are testing prototypes, possibly in a mockup environment. For smaller projects you may be able to test the real product in the production environment. With large projects, the prototype may be high fidelity or low fidelity – in 1998 I was using cardboard cut-out mockups at a bank in Singapore, but not with any formal usability testing to provide quantitative feedback. The prototype is likely to feature only a limited set of functionality. Ideally this would represent sufficient functionality that you could launch it for a single market segment. [In 2016, the trendy term for this is MVP – minimum viable product – and often it is only sufficient to give insight into the market, not sufficiently complete to launch to a segment.]

If our goal is to turn the testers into evangelists then it is essential that the prototype you test has been developed from a thorough user-centered design perspective and has some functional integrity. For example, in a home banking web application, it should allow you to perform functions such as account balance check, inter-account funds transfer, and bill payment, completely end-to-end. It must look to the user like a system which has real potential and true business value. The prototype must deliver on 1 or more goals for each persona we’re testing. Achieving goals is a means of delivering true value. Goal achievement is something tangible for the user. It turns them into believers. We aren’t just validating small aspects of functionality, we are creating political capital.

Why Are We Wasting Money On Something Fake?

Some of you may meet resistance to the notion of a prototype, which by implication is a ‘throw away’. It is worthwhile remembering the wisdom of Fred Brooks: “Plan to throw one away because you will anyway!” This argument stems from the fact that you can throw away an early prototype and get the design correct, or you can wait and take a high risk that you will have to throw away the production system because you didn’t get the design correct. Writing in the early 1970s, Brooks was advising us to validate our product designs early or waste lots of time and money reworking a production system later.

When uncertainty is high, options on future products have significant value. Hence, you ought to be willing to spend more to purchase the option to deliver the right thing, to the right market at the right time. The cost of the prototype is the cost of purchasing the option. It is real option theory in action.

Test in at least 3 phases

So you have a system or prototype ready for usability testing. You have met the criteria to be frequent, tangible and of true business value. Now it is time to test. It is time to consider how to minimize the risk of showing an early product to skeptical influencers and decision makers at your firm.

You are trying to achieve a political win for design and design processes. You risk showing a poor design and a badly written piece of software to a skeptical audience. You must consider that bugs in the code will reflect badly on the design and the whole principle of design. To manage this risk, we took a 3 phase approach.

Guideline: You must deliver a complete design for at least one user goal before testing.

Test with “friendly” users first

We started with the clients – the marketing, product management and individual contributors from strategic planning at Nokia.

Initially select a few members from your own team or closely related people: for example, members of your QA team; technical writers; analysts; people who supplied requirements; marketing people; sales engineers; anyone, as long as they are closely related to your project or product and have some skin in the game. They should feel that they have had personal input into what has been produced so far.

You need at least 5 and ideally 8 of these friendly test participants. If you are to get 8 sets of test results in half a day, you need small test scenarios that you can complete in 30 minutes, or you need multiple test labs and staff to run them in parallel.

Run the usability test in the normal manner. Typically we found 3 or 4 really stupid design errors, or poor choices, which needed to be fixed. We delivered those to Scott when the testing session ended. With a formal usability lab with a 1-way mirror, it is possible to prepare the top priority results from the test while the tests are happening. You don’t need to wait for formal analysis of the results post hoc.

Phase 1 doesn’t have a time period. Instead it has exit criteria. The primary exit criterion is that you have a stable working prototype without functional defects and with all the obvious facepalm-moment design flaws removed. When you have this: end of phase 1.

Test with “real” users next

Now it is time to run the proper user testing. You will have selected a number of users, perhaps several sets, based on target market, demographics, known user groups, professions, job specifications etc. Bring these people in and run the same tests on your new design.

At Nokia, we did this with family members of staff from the business development unit and had them sign NDAs. While family members, they did meet the market segment criteria to represent the personas we were testing.

We wanted to refine our design every day. We felt that 5 to 8 data points was enough. [At the time, this flew in the face of established usability engineering as a science, where much larger sample sets of test results were expected to draw reasonable conclusions.] In other words, each time you have enough data, make an improvement to your product, iterate the design quickly. Naturally, you will have to prioritize the changes; some major design ideas may need to be dropped. You will also need to select your developers carefully. Not every developer likes to make rapid changes like this. Select an individual who has a “hacker”-like mentality and just loves fast lifecycle iterations. [Note: these 2 sentences were written in 2000, before we had Agile and when this style of working was less well accepted in the software development community.]

A worthwhile tip is that you might leave 1 day free in your test schedule for every 5 to 8 test participants. This will give your programming team an extra day to make and test any changes that you ask from them. In other words, develop a bit of a pipeline. Every other day, have a different area of the functionality to test, to extend the lead time for bigger changes. [Note: this idea of pipelining changes and extending the lead time beyond the test cycle cadence is a natural fit for Kanban. We didn’t have the notion of kanban systems for product development in 2000.]

Guideline: Iterate quickly, making several well-advised design improvements during testing with real users.

Guideline: Select a developer (or developers) suited to the nature of prototyping and rapid design changes.

The result of this phase should be some solid usability test results and a much improved design which has been shown through the testing to deliver on the established user goals for the design. End of phase 2.

Phase 3: Selecting the Evangelists

The third phase is where we re-run the tests but this time using company employees who are not directly involved in the project or with the product. They have no skin in the game. Our goal is to turn these people into evangelists. We will do this by demonstrating the true business value of the design and by emphasizing how you got to such an elegant design. As we gain in confidence with our results, we will seek to invite higher and higher level managers to test the product design prototype.

Evangelists must have influence

As an initial strategy for selecting evangelists, you may like to consider the question, “Who do we need to influence?” It might be the third-line manager, your boss’s boss, or maybe the VP of Finance or the Director of Sales, or maybe the technical support team who are tired of supporting terrible earlier products.

So the first approach might simply be to invite them to the tests. Have a senior official send them an email or a letter, saying that they have been selected to participate in the usability testing of the next generation product and that this is their opportunity to get an early “heads up” on what is coming through the development pipe. That usually works.

If these people don’t fit your target demographic then consider inviting their kids, or parents, or spouses, or golf partners, or whoever will match your demographic but is closely related to the people you need to influence.

Using the Law of the Few

There is a second, more scientific approach, to selecting the evangelists. We can use The Law of the Few described in “The Tipping Point” by Malcolm Gladwell. Gladwell describes three key personality types which we can use to communicate our message. These are Connectors, Mavens and Salesmen. Just a few of these people ought to be enough to tip the skeptics and see that the design message is evangelized throughout the organization. Connectors, Mavens and Salesmen are the people with influence in a community. These are exactly the type of people to whom we need to sell our new design and its benefits.

Connectors are the type of people who know everyone. They are the people you go to, to get the latest gossip in the office. The people at cocktail parties who introduce you to someone that “you really should meet”. Everyone knows one or two connectors because the connector makes it his/her business to know you.

Mavens are people who know lots of stuff rather than lots of people. A company maven may well be a product manager. Someone who is employed to know all about all the competitors and their products. Or it might be someone in a development role, who knows lots about technology or lots about the computer network. The kind of guy that you call when your computer is broken. These kinds of people get around the company and they get to know lots of people. The network guy fixes your computer but he also fixes the CEO’s computer!

Salesmen are the proverbial bullshit merchants who just don’t take “No” for an answer. They could “sell sand to Arabs” and “ice to Eskimos.” They will tend to latch on to one single thing which they see as the advantage and they will go out and sell that advantage. Finding these people is easy. They probably sold you something recently like a lottery ticket for a charity or a share of a syndicated race horse. Influencing them isn’t hard either. Just make sure that they can see that one key advantage and let them go after it.

Usability Testing with Evangelists

So now you have selected the final set of test participants. You have arranged for a senior figure to invite them to the tests. You have refined the tests you will be asking them to complete and the design that you will be showing them. You are ready for phase 3.

It is important that potential evangelists are tested with goal directed questions. Ask them to solve problems of tangible business value. Something which they can see offers value to the user and either profit potential or improved service, to the business.

Run the test much like any other test. Offer some play time at the beginning, go through suitable introductions and make the test participant feel at ease. Present the test questions as normal.

If you care about usability engineering as a science then you will want to keep the test results separate from the phase 2 test results. These phase 3 testers are potentially invalid users and could spoil your scientific data from phase 2. However, any negative results obtained from this third group should still be considered valid and you should still act to fix problems uncovered by the evangelist group.

After the questions are complete, ask the participant to provide feedback. Let them talk, let them say what they think. This is your chance to turn them into an evangelist.

It is key that you take the opportunity to sell the user centered design process which led to your product design. Hopefully they will give you an invitation by complimenting the design, or saying something like, “this is much better than KML Corp’s competitive offering”. It is key that you sell the science of the usability testing. The test participant must not leave with the notion that they just participated in a marketing focus group. Emphasize that the design team derive important data from formal testing and that successive rounds of earlier testing have provided numerous improvements already.

If you have done a good job, then your participants ought to leave the test with a warm feeling. They realize that they achieved a number of goals with the new product and that those goals were achieved as easily as might be expected given the constraints of the technology [which with WAP and 2G candy bar phones, were considerable.] Hopefully, they may consider that the design is superior to previous products and better than the competitors’ products. If this has happened then they will go forth among their colleagues and they will tell them, “the new product – I saw it – excellent. Can’t wait to see the sales figures”.

With a message like that circulating, you will have no problems in the next budgetary round when you need to ask for renewed funding for Interaction Design and Usability Engineering.

Epilogue

So what happened at Nokia? Given what has happened at Nokia in the intervening 15 years, this may come as no surprise.

So we had a working prototype – a set of WAP applications with stubbed-out back end functionality but nonetheless working. To convert this prototype into a production system was going to take some time and some money. Perhaps a department of 15 to 20 people for some period of months. With on-going support and subsequent versions, we were looking at a few million dollars per year over 2 to 3 years and hence a total budget of $10-20 million. [High tech workers in Dallas are not cheap and in the tech bubble of 2000 were hard to come by.]

Permission was needed. It was a big portfolio-level decision. Senior managers were flown in from Finland. They were shown the prototype and given the full pitch on market segmentation, go-to-market strategy and so forth. It became clear during the meeting that they had not seen anything like it at HQ in Finland. No one had been able to demonstrate real consumer value in WAP 2G data. Doors were closed. When they reopened, the decision had been made not to fund the project. We the consultants went home – job done. [In fact, I left to join Sprint PCS, to design wireless data applications and take an influential role in their 3G rollout in 2002.] The business development people dusted themselves off and started to work on another idea. In real option theory terms, Nokia had chosen not to convert this option but instead to discard it. Life goes on.

[The original of this article appeared on June 7, 2000 at http://www.uidesign.net/papers/2000/evangelize.html and can be found via Internet archival services]

Filed Under: Foundations Tagged With: Lean UX, Product Design, Product Management, Real Options, Usability Engineering, User Experience, UX

Defining KPIs in Enterprise Services Planning

January 15, 2016 by David Anderson

All KPIs should be fitness criteria metrics. All KPIs should be recognizable by your customers and should address aspects of how they evaluate the fitness of your product or service. If your customer doesn’t recognize or care about your KPIs then they aren’t “key” performance indicators; they may indicate something else, but they aren’t predictors of how well your business is performing or is likely to perform in future.

This blog follows my recent posts on Market Segmentation and Fitness for Purpose Score explaining how we define Fitness Criteria Metrics. These metrics enable us to evaluate whether our product, service or service delivery is “fit for purpose” in the eyes of a customer from a given market segment. They are effectively the Key Performance Indicators (KPIs) for each market segment. All other metrics should either be guiding an improvement initiative or indicating the general health of your business, business unit, product unit or service delivery capability. If you can’t place a metric in one of these categories then you don’t need it.


Fitness Criteria Metrics

We left our story of Neeta, the busy project manager and mother of 4 kids, having established that she represents a member of two market segments – the “working late ordering food for the team in the office” cluster, and the “feed my children, it’s an emergency!” cluster. We also determined that the main metrics of concern are: delivery time; quality – both functional quality (the menu and order accuracy) and non-functional quality (hot, tasty, artisan, gourmet or maybe not); predictability (of delivery time, and perhaps of quality too); and safety or regulatory concerns, perhaps including trust that organic ingredients were used or not. In our example, these 4 main metrics apply whether Neeta is ordering pizza for the team or whether she is ordering for her family. However, the satisfaction thresholds vary significantly based on the context.

When Neeta orders for the team they are happy to wait an hour to 90 minutes for delivery. If minor errors are made in order accuracy it is unlikely to matter too much. However, there is a threshold. If some of the team are vegetarian then there must be some vegetarian pizzas delivered or we’d consider the order a failure. The menu is important in the sense that the geeks want a more exotic set of choices. They are fussy about their non-functional requirements. They want the pizza hot, tasty, artisan and gourmet. They aren’t too particular about predictability of delivery time or order accuracy. A 30 to 45 minute window for delivery is probably acceptable and a few minor errors in the order are also acceptable. They do care about health and safety in the restaurant but they only care about the traffic safety of the delivery boy insofar as it doesn’t endanger the quality of the pizza on arrival.

When Neeta orders for her family, the threshold levels are significantly different. The kids are really hungry and impatient and, now they know that pizza is coming, they are super-excited about it. Fast delivery is essential. Predictable delivery is essential: the 6 year old is now running his countdown timer on his iPad. The menu was important but only in so far as it offered simple plain cheese pizza with tomato sauce. The kids are so excited that they won’t mind if the pizza is a little cold on arrival, nor will they mind if it got shaken up a bit during transportation. Order accuracy is important to the kids. If it isn’t a plain cheese pizza they will be extremely upset and unlikely to eat it at all. Meanwhile, mommy can worry about safety and regulatory concerns; she may have a preference for a restaurant that promises to use organic ingredients. The kids have no concept of whether to trust this assertion. The restaurant said it was organic – mommy can worry about whether that is true or not.

So in summary…

Kids

  • Fast delivery
  • 100% order accuracy
  • Not too concerned about non-functional quality
  • Predictable delivery
  • Not too concerned about safety or regulatory concerns

Geeks

  • Longer delivery acceptable
  • Some errors in order accuracy acceptable
  • Extremely fussy about non-functional quality
  • Wider tolerance for unpredictable delivery
  • Not too concerned about safety or regulatory issues

In 4 of these 5 categories we have significantly different fitness evaluation thresholds for these two segments.
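To make this concrete, here is a minimal sketch in Python of how fitness criteria metrics with per-segment threshold values might be recorded and checked against a single delivery. The threshold numbers, field names and segment labels are invented purely for illustration; they are not part of the original example.

```python
from dataclasses import dataclass

@dataclass
class FitnessThresholds:
    """Per-segment threshold values for the pizza service's fitness criteria metrics."""
    max_delivery_minutes: int             # delivery time threshold
    min_order_accuracy: float             # fraction of items delivered as ordered
    non_functional_quality_matters: bool  # hot / tasty / artisan expectations

# Invented threshold values, for illustration only.
SEGMENTS = {
    "feed my children, it's an emergency!": FitnessThresholds(
        max_delivery_minutes=20, min_order_accuracy=1.0,
        non_functional_quality_matters=False),
    "working late ordering food for the team": FitnessThresholds(
        max_delivery_minutes=90, min_order_accuracy=0.8,
        non_functional_quality_matters=True),
}

def fit_for_purpose(segment: str, delivery_minutes: int,
                    order_accuracy: float, quality_ok: bool) -> bool:
    """Check one delivery against the fitness criteria thresholds of one segment."""
    t = SEGMENTS[segment]
    if delivery_minutes > t.max_delivery_minutes:
        return False
    if order_accuracy < t.min_order_accuracy:
        return False
    if t.non_functional_quality_matters and not quality_ok:
        return False
    return True

# The same delivery can be fit for one segment and unfit for another.
print(fit_for_purpose("feed my children, it's an emergency!", 35, 1.0, False))   # False: too slow
print(fit_for_purpose("working late ordering food for the team", 35, 0.9, True)) # True
```

The point of the sketch is simply that the metrics are shared across segments while the threshold values, and therefore the fitness evaluation, differ by segment.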

If we are to successfully serve both segments, driving improvements so that we have high levels of customer satisfaction in both segments, we must use the higher thresholds for each metric as our benchmark. Alternatively, we need to segregate our service delivery by segment. We can do this by introducing two classes of service, one for each segment. This might work through pricing; for example, would Neeta pay a premium for the guaranteed fast delivery for the kids? Or it might work through capacity allocation and demand shaping based on it. We might, for example, refuse to take large commercial destination orders during peak times for domestic orders. In this example, we trade off educating our corporate clients to order earlier in the day, versus the risk that they will go elsewhere. We do this because we value the domestic market.

There is no formula for this. No right or wrong. We make choices for our business in terms of which segments we wish to serve and how well we wish to serve them. Choices come with consequences. We need to be prepared to live with these consequences and be willing to be accountable for them.

The metrics and threshold values we’ve developed for each of our segments should become the KPIs for our business and specifically in this case, the pizza delivery service. We should put in place the mechanisms, instrumentation and customer feedback, to measure these metrics. We can use the results at Service Delivery Reviews, Operations Reviews and Strategy Reviews to determine how well we are serving our markets, where we need to make improvements and which segments we wish to serve.

In my next post in this series I will take a look at the other types of metrics: those which guide improvements; and general health indicators. My experience working with clients in 2015 is that most existing KPIs are in fact merely general health indicators. As a consequence these businesses are optimizing for the wrong things and customer satisfaction and their ability to survive and thrive in the market is impaired. All KPIs should be based on threshold values for fitness criteria metrics derived from analysis of the market segments you choose to serve.

Read Part 2: Your KPIs Probably Aren’t! But What Are They?

Filed Under: ESP Tagged With: Enterprise Services Planning, ESP, Fitness Criteria Metrics, Fitness for Purpose, Kanban, Key Performance Indicators, KPIs, Marketing, Strategic Planning

Market Segmentation for Enterprise Services Planning

January 14, 2016 by David Anderson

I realized after posting my article on Fitness For Purpose Score that it isn’t reasonable to expect readers to know the background and context that stimulated it. It isn’t reasonable that I assume readers are up-to-date with speeches I’ve given over the last two years covering Evolutionary Change, Fitness for Purpose and Enterprise Services Planning. So I felt some explanation of how we do market segmentation for ESP was in order to provide better context for Fitness For Purpose Score.

How do we know whether a change in our service delivery capability represents an improvement? This is the fair and reasonable question that should drive our decision making about how we manage, how we make decisions, and which changes we choose to invest in, consolidate and amplify. In evolutionary theory, a mutation survives and thrives if it is “fitter” for its environment [this is actually a gross simplification but it will do for an introductory paragraph on the related but different topic of market segmentation.] So how do we know whether or not a change to our service delivery capability makes it fitter for its environment? What do we mean by “environment” in this context? “Environment” is the market that we deliver into. So “fitness” is determined by whether the market feels our product or service, and the way we deliver it, is “fit for purpose.” So to understand “fitness,” and to enable and drive evolutionary improvements, we first need to understand our market and what defines “fitness for purpose.” To do this we segment the market by customer purpose and the criteria with which they evaluate our “fitness for [that] purpose.” …

At Lean Kanban Inc we create our market segmentation by clustering narratives about our customers. We do this by telling stories about them. The technique is a direct application of Dave Snowden’s technique from his Cynefin Framework. To explain this in our training and in the speeches I linked above, I tell the tale of Neeta, a fictional project manager and mother of 4. Neeta is based on a real woman who works in the Canadian public sector and has considerable Kanban expertise. Neeta needs to order pizza for delivery to her office to feed her team who are working late against a deadline. On another evening the same week, she needs to order pizza for delivery to her home to feed her children who are hungry because she came home late. Neeta doesn’t represent one market segment, she represents two! The reason for this is that the purpose, context and fitness selection criteria are different in each of the two contexts.

When Neeta orders pizza for her children she needs: fast delivery – ideally within 20 minutes; she needs order accuracy – the kids only like plain cheese pizza; the non-functional quality doesn’t matter too much, the kids will eat cold pizza so long as it is cheese pizza; she needs a simple menu and predictable service; she wants delivery when promised because the kids need their expectations set and they are unforgiving; she also cares that the restaurant is clean and can be trusted to follow health and safety regulations; she may care whether or not they use organic ingredients because she is feeding her family.

When Neeta orders pizza for her office her needs are similar but some of the criteria vary and the threshold values are different: she needs delivery in up to 90 minutes; order accuracy is important but if one or two mistakes are made it won’t make a big difference; however, the non-functional quality matters – hot, tasty pizza with gourmet flavors and exotic ingredients is required for these discerning geeks; it doesn’t matter if delivery isn’t as predictable as it might be, so long as they show up eventually – the team are busy; and yes, she still cares whether the restaurant meets health and safety legislation standards, but organic ingredients probably aren’t so much of a concern.

In other words, Neeta decides whether she likes the pizza service and whether she will use it again, based on two different sets of criteria, depending on her context. This may lead her to use different service providers for each purpose, if one provider can’t meet both sets of her needs. As a result Neeta represents two segments, not one.

How would you know that Neeta represents two segments and not just one? Traditional demographic profiling wouldn’t give you this insight! Well, perhaps she uses different credit cards or payment mechanisms depending on context? And the delivery address is different. So there are some obvious clues. However, the people in the business who know Neeta’s story are the person who took her telephone order and the delivery boy who delivered the pizzas. It is these frontline staff who understand the customers best.

If you are to cluster customer narratives to determine segmentation, you need to bring frontline staff into the story telling sessions. You need to listen for context, purpose and selection criteria and create segments based on affinity of these aspects of the market. Give each cluster a nickname. Recognize that an individual customer can appear in multiple segments depending on their context on a specific day and time.

The challenge of this for many companies is that the people who best understand the customer’s context, purpose and selection criteria are often the lowest paid, shortest tenured, highest turnover staff in the business. Foolishly, many companies undervalue their customer-facing staff. Traditional 20th Century service delivery businesses take a transactional view of customer interaction rather than a relationship view. If you value repeat business and you value the insights that will enable your business to evolve and survive in a rapidly changing market then you need to value customer-facing people and involve them in your strategic planning.

Once you have the clustered narratives defining your segments, select the segments you want to serve. This is a key piece of strategic planning. Which businesses do you want to be in? Which don’t you care about? Which do you want to actively discourage? Based on this you will develop the Fitness Criteria Metrics to drive your management decision making and evolutionary improvement.

Designing Fitness Criteria Metrics, choosing their threshold values, and making them your KPIs (Key Performance Indicators) will be the subject of my next post.

Filed Under: ESP Tagged With: Enterprise Services Planning, ESP, Kanban, Marketing, Strategic Planning

Fitness For Purpose Score and Net Fitness Score

January 11, 2016 by David Anderson

Regular followers of my work will know that I have expressed dissatisfaction with Net Promoter Score (NPS). Steve Denning in his book Radical Management suggested NPS was “the only metric you’ll ever need.” Steve is a writer for Forbes, an investment magazine. High NPS scores correlate with high stock prices and hence from an investor’s point of view NPS is an important metric. If you are a CEO of a public company, who receives a large portion of your salary as bonuses based on changes in the stock price then NPS is an important metric. However, many of my clients who collect NPS data report to me that it isn’t an actionable metric. NPS merely tells you whether you are winning or losing. It doesn’t tell you what to do!

There are some antidotes to NPS’ failings. The second question asking reviewers to “tell us why you gave the rating in the previous question?” provides the opportunity for short narratives. These micro-narratives can be clustered using a tool such as Sensemaker and useful information can be extracted. There may be actionable information hidden in the clustering of narratives. This advanced use of NPS information is very much still in its infancy and not readily available to many or most businesses.

I’ve decided to introduce a new metric into our own surveys. I call this Fitness For Purpose Score. I am hopeful this will become a key strategic planning tool in Enterprise Services Planning.

Fitness For Purpose Score

It is often true that businesses do not know the purpose for which a customer consumes their product or service. A product or service designed for a specific purpose may get used for something else. Some of the more famous examples are washing machines used to make lassi yoghurt drinks for Indian restaurants. In evolutionary science this is known as an exaptation: where something designed for one purpose is adapted for use with another purpose. To have actionable metrics for product or service delivery improvement, you need to understand the customer’s purpose in consuming your offering. When you understand this purpose, you can create the appropriate fitness criteria metrics. With Enterprise Services Planning (and Kanban) we use fitness criteria metrics to drive improvements. Fitness criteria metrics are used at all levels to compare capability with expectations. Fitness for Purpose Score is intended to help us understand purpose and whether or not our current capability meets expectations. If it doesn’t, we can probe for thresholds to establish new fitness criteria metrics.

This is how our sales and marketing team will be using Fitness For Purpose Score in our own surveys in 2016.

Question 1: What was your purpose [in attending our training class? What did you hope to learn, take away, or do differently after the class?]

Question 2: Please indicate how “fit for purpose” you found [this class]?

  5. Extremely – I got everything I needed and more
  4. Highly – I got everything I needed
  3. Mostly – I got most of what I needed but some of my needs were not met
  2. Partially – some of my purpose was met but significant & important elements were missing
  1. Slightly – I took some value from it but most of what I was looking for was missing
  0. Not at all – I got nothing useful

Question 3: Please state specifically why you gave your rating for question 2

Questions 1 and 3 specifically ask for short narrative answers. These micro-narratives can be clustered. Question 1 will provide clusters of purpose which can be validated against our existing market segmentation and may reveal new segments, while question 3 will provide clusters of actionable information for improvements and possibly new fitness criteria metrics or threshold value for existing metrics. We can decide whether or not to pursue specific clusters and whether we are likely to be able to achieve adequate fitness levels to satisfy our customers during our Strategy Review meeting.

For example, our own product is management training, though we also have an event planning and publishing business. We position and sell our intellectual property as management training and we deliver it as training classes and mentoring. We know that a significant segment exists for software process improvement and for process engineers and coaches who consume our products and services in order to help them in their coaching practice. We know this segment exists but we specifically and intentionally don’t cater to it. We feel it would be a strategic distraction and undermine our overall message that managers need to be accountable, to take responsibility, to make better decisions and to take action where and when necessary to improve service delivery. The return-on-investment in our products and services is realized when existing managers change their behavior as a result of our training. And hence, while we appreciate the patronage of process engineers and coaches, we do not specifically cater for their needs.

Net Fitness Score

I purposefully moved away from the NPS use of an eleven-point numerical scale [0 thru 10]. My background in human factors, psychology and user experience design taught me that humans have problems with categorization beyond 6 categories without a specific taxonomy to guide them. This isn’t a result of Miller’s “Magic Number 7” but rather of the work of Bousfield, W.A. & A.K., and Cohen, B.H., between 1952 and 1966 on clustering. For example, if you ask humans to rate something 1 to 10 they will struggle to create 10 distinct categories in their mind. When asked to devise their own taxonomy, or clusters, as lay people to the domain, they will tend to create no more than 6 categories. Hence, a scale of 0 through 5 is most appropriate for general consumption.

I believe the NPS people tried this but discovered that in some cultures, such as Finland, people never give the top score on principle. They always choose one below the best. Hence, the NPS reaction was to double the scale, using 0 through 10 so that people could give a 9 when they are really giving a 4.5. My feeling is that this highlights the issues with numerical scales and undeclared taxonomies. The solution of doubling the scale, however, creates randomness in the system and generates noise in the data, reducing the signal strength, because of the general human difficulty of modeling categories against the scale. Fixing one problem, the cultural propensity never to give top ranking, creates another: a cognitive issue in the general population, who struggle with more than 6 undeclared categories. Hence, to avoid both problems, I am declaring the categories with narrative.

Scores of 4 and 5 are intended to indicate that someone is satisfied and the product or service was fit for their purpose.

A score of 3 is intended to indicate a neutral person. They didn’t get everything they needed to be delighted with the service but they got something acceptable for their investment in time and money.

Scores of 2 or below are intended to show dissatisfied customers who felt their purpose was unfulfilled by the product or service. This may be because the product is poor or it may be because the purpose was previously unknown or represents a segment that the business has strategically decided to ignore. Not all dissatisfied customers need to be serviced fully and satisfied: some customers you simply don’t want – they represent segments you aren’t interested in pursuing.

Net Fitness Score [NFS] = % satisfied customers – % dissatisfied customers
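As a minimal illustration of that formula, here is a short Python sketch that maps 0–5 fitness-for-purpose ratings into the satisfied, neutral and dissatisfied bands described above and computes a Net Fitness Score for a batch of survey responses. The function name and the sample ratings are invented for illustration only.

```python
def net_fitness_score(ratings):
    """Compute NFS = % satisfied - % dissatisfied from 0-5 fitness-for-purpose ratings.

    Bands follow the text above: 4-5 satisfied, 3 neutral, 2 or below dissatisfied.
    """
    if not ratings:
        raise ValueError("need at least one rating")
    satisfied = sum(1 for r in ratings if r >= 4)
    dissatisfied = sum(1 for r in ratings if r <= 2)
    return 100.0 * (satisfied - dissatisfied) / len(ratings)

# Invented sample: ten survey responses on the 0-5 scale.
sample = [5, 4, 4, 3, 3, 5, 2, 1, 4, 0]
print(f"NFS = {net_fitness_score(sample):+.0f}")  # 5 satisfied, 3 dissatisfied -> +20
```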

NFS can be improved through better marketing communications that direct the right audience to your business and dissuade the wrong audience. So NFS can be used to drive excellence in marketing as well as used to explore new segments and the fitness criteria metrics that light them up as viable and profitable businesses.

Filed Under: Foundations Tagged With: Enterprise Services Planning, Kanban, Kanban Cadences, Marketing, Strategic Planning, Strategy Review

Proto-Replenishment Semi-Push System

January 7, 2016 by David Anderson

There is a class of replenishment meeting which I believe we need to call out and name separately. In these replenishment meetings the backlog is already committed and often already prioritized. I am proposing we label these proto-replenishment meetings. This post explains why and asks whether the name proto-replenishment is appropriate or not…


The job of replenishing the kanban system is somewhat trivial and generally inwardly facing; selection is often based on technical, resource or skillset concerns, or on coordinating delivery dependencies to facilitate flow. In other words, replenishment is more about “definition of ready” and selecting based on readiness than it is about commitment, scheduling and sequencing of work from a business risk perspective. The pool to select from is referred to as a backlog and it is already committed. There is already an agreement with the customer that the backlog items are in scope and must be delivered.

Full replenishment meetings, as described in Kanban (the blue book), feature the customers (service requestors); the work items submitted are not committed, instead the collection of them represents a pool of options. The replenishment meeting features the selection of items from the pool and discussions around appropriate scheduling and sequencing of items. It also features the act of commitment. Commitment is genuinely deferred until kanban system replenishment and a proper pull system is in operation. Customer work is pulled based on capacity signaled by kanban in the system. At a full replenishment meeting, customers are present and make the decisions. The focus is external and outward looking, and decisions are made on the immediate business impact of delivery: cost of delay is a key influence on decision making.

[Diagram: deferred commitment in a pull system]
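To illustrate the mechanical difference, here is a minimal toy sketch in Python of deferred commitment in a pull system: requests stay uncommitted in a pool of options, and commitment happens only when free capacity, signalled by the WIP limit, allows an item to be pulled in at replenishment. The class, method and item names are invented for illustration; this is not a definitive model of the Kanban Method.

```python
class KanbanSystem:
    """Toy pull system: commitment is deferred until replenishment pulls an option in."""

    def __init__(self, wip_limit):
        self.wip_limit = wip_limit
        self.options = []       # uncommitted pool of requests (not a committed backlog)
        self.in_progress = []   # committed work, bounded by the WIP limit

    def request(self, item):
        """A customer request becomes an option, not a commitment."""
        self.options.append(item)

    def replenish(self, choose):
        """Pull options into the system only while free slots signal capacity.

        `choose` stands for the scheduling/sequencing decision made with the
        customers present, e.g. driven by cost of delay.
        """
        pulled = []
        while self.options and len(self.in_progress) < self.wip_limit:
            item = choose(self.options)
            self.options.remove(item)
            self.in_progress.append(item)  # the act of commitment happens here
            pulled.append(item)
        return pulled

# Usage: three requests, but a WIP limit of 2 means only two are committed now;
# the third remains an option and may be pulled later or discarded.
system = KanbanSystem(wip_limit=2)
for name in ["feature A", "feature B", "feature C"]:
    system.request(name)
print(system.replenish(choose=lambda pool: pool[0]))  # ['feature A', 'feature B']
print(system.options)                                 # ['feature C'] remains uncommitted
```

In a push system, by contrast, the whole batch would be committed up front and the meeting would only sequence already-committed work, which is the pattern described next.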

In these other forms of replenishment (see diagram below), the work has been pushed, perhaps in large batches, perhaps based on a belief or mathematical understanding of capability, but nevertheless pushed into the system. Commitment is not deferred. Commitment is made early at the point the batch is pushed. This is common for large projects. When we are using Kanban with a large project, the project typically has a committed backlog. The kanban system replenishment meetings are simply about selecting work to flow through the project workflow. The main task is sequencing and typically, technical, delivery and resource risks are the main inputs to the selection process. At these alternative replenishment meetings, customers are seldom if ever present; the attendees are almost entirely from the delivery side. The focus is internal and the decision making is influenced by internal concerns.

[Diagram: proto-replenishment semi-push system]

So these meetings are different. They clearly represent a less mature and shallower implementation of Kanban. The question is whether they should be labeled “proto-replenishment” or whether we need another alternative name. Proto-Kanban is an established term, usually used to refer to kanban boards without WIP limits, though other variants exist. The term was coined by Richard Turner, of the Stevens Institute, because the case study literature showed that these proto-Kanban implementations often matured into full Kanban implementations later. Hence, these boards without WIP limits were evolutionary predecessors of Kanban and the prefix “proto” indicates an expectation that they are a seed from which something more mature may develop.

So is the name proto-replenishment appropriate? I believe it is. Generally, proto-kanban implementations are inwardly facing and they are often at what Klaus Leopold labeled Flight Level I & II. When a kanban system is designed in an outwardly facing fashion, asking the questions “Who are our customers?” and “What do they request from us?”, we typically see much deeper implementations including WIP limits, pull and classes of service. However, getting from proto-replenishment to full replenishment will not happen without leadership. This transition highlights the true crux of Kanban implementation. If you can get beyond proto-replenishment then you have implemented a pull system. This is a non-trivial step. It is the non-trivial step that Kanban Coaching Professionals (KCPs) are trained to help you take.

Recognizing proto-replenishment as a concept introduces a whole new way to understand and teach improving the depth of Kanban and tuning implementation to organizational maturity. It will enable coaches and change agents to point out a lack of deferred commitment and the associated business risks that early commitment carries with it. It also gives another very clear and simple test for “are we doing Kanban or not?” To have truly implemented Kanban, your replenishment meetings will involve customers, you won’t have a backlog, you will have a pool of options, and commitment will be made at the replenishment meeting as you pull work into your kanban system. If that isn’t happening, you are still in the proto-kanban stage and have a lot more opportunity ahead of you.

Filed Under: Foundations Tagged With: Depth of Kanban, Kanban, Proto-Kanban, Replenishment

Emerging Roles in Kanban – SDM and SRM

January 6, 2016 by David Anderson

Kanban has always been the “start with what you do now” method, and no one gets a “new role, responsibilities, or job titles”, at least not initially. However, it is now clear that some roles are emerging in the field with some implementations. So it is valuable to report this while they remain suggestions, options, or ideas, rather than prescribed roles for a Kanban implementation. This post follows my previous one noting that Kanban doesn’t share Agile’s cross-functional team reorganization agenda, and has always been a cross-team, cross-function solution for service delivery workflows. What follows is in the context of a service delivery workflow which spans functions or teams and is (most likely) orthogonal to the organizational structure of the enterprise, business or product unit.

Service Delivery Manager

From the early days of Kanban, we talked about the need for a manager to take on responsibility for flow of work. Perhaps echoing the concept of ScrumMaster, in some implementations the role of the person responsible for flow has been nicknamed flow manager or sometimes “flowmaster”. It’s a sticky, if arcane and tribal, title. For our official literature, I wanted something more corporate-friendly, and something that is more outwardly facing. “Flow manager” is inwardly facing and focused on process problems. I prefer names that are outwardly focused and address customer needs. This is in line with the Kanban value of “Customer Focus.”

There is precedent for renaming concepts in Kanban to give them more customer focus. Inspired by the Improvement Kata in Toyota Kata, we defined and named the System Capability Review meeting in 2012. This was later renamed to Service Delivery Review (SDR). The name change was to give the meeting an outward focus on customer needs, rather than an inward focus on process performance. By keeping the naming, the language, and the values externally focused, we ensure that the right metrics are used to drive relevant, valuable improvements. An outward focus is vital to ensure “fitness for purpose” and to deliver on the Kanban agenda of survivability.

So, the “flow manager” concept is called the Service Delivery Manager. It is primarily intended to be a role played by an existing member of staff and not a new job title or position. So, by creating Service Delivery Manager, we do weaken the message that no one gets new responsibilities – actually someone does: the person who takes on the SDM role.

[Diagram: SDM role in service delivery]

The SDM role existed in 2007 in our first full scale Kanban Method implementation. It was usually played by a project manager from the PMO. The SDM carried responsibility for the Replenishment Meeting, the Delivery Planning Meeting, escalating blocker tickets, and what we would now call Risk Review. Replenishment, Delivery Planning and Risk Review are 3 of the Kanban Cadences.

In more recent implementations the SDM also facilitates the daily Kanban Meeting. In 2007, that duty was taken by one of the function managers in the workflow, while the SDM role itself was usually played by someone from the PMO.

Service Request Manager

For a number of years, the question has existed: what do you do with middle-men in the workflow? As a general rule, we wish to remove non-value-adding middle-man positions from the workflow. However, we also wish to avoid resistance to change. These are two core tenets of Kanban coaching and general goals we might have for change management when deploying Kanban in an organization. The following guidance has existed since 2009: we seek to elevate the role of the middle-man above the workflow, out of the value stream. The most common example of this is shown in the diagram, “What do you do with the Product Owners?”

[Diagram: What do you do with the Product Owners?]

The goal is to reposition the role of product owner as a risk manager and facilitator: someone who owns the policies that frame decisions for the system and who facilitates the decision-making mechanism. This role is of higher value, is transparent and open to scrutiny, and relieves us of the risk of the “hero product owner” who magically understands where the best business value is to be found. This elevated risk management and policy-owning position improves corporate governance, improves consistency of process, and reduces the personnel risk associated with a single individual.

Nevertheless, an individual with a “hero product owner” self-image will resist such a change. Kanban Coaching Professionals are trained to manage this resistance as part of their training in the Kanban Coaching Masterclass.

When the product owner is successfully repositioned above the workflow as the owner of the policies for risk assessment, scheduling, sequencing and selection, they have successfully transitioned into the Service Request Manager (SRM) role.

Again, we are weakening the “no one gets new responsibilities” principle, but this transition is generally managed over a period of time and isn’t necessarily thrust upon individuals at the start of Kanban adoption.

When the SRM role exists, the SRM usually takes responsibility for the Replenishment Meeting and will play a role in the Strategy Review and Risk Review.

Filed Under: Foundations Tagged With: Kanban, Kanban Cadences, Service Delivery Manager, Service Request Manager
