Metrics: How to Improve Key Business Results
Should we buy more ski machines? Get rid of some of the stair steppers? Should we lower the time limit on the ski machine? Should we create an exercise class based on the stair stepper? It should be obvious that the proper answer is not obvious. Part of the confusion may be due to the lack of a question. Why did we collect the data? Why did we do the analysis that led us to the metric? It's impossible to tell a complete story without a root question.
Root Questions.
In my book Why Organizations Struggle So Hard to Improve So Little (Praeger, 2009), my coauthors, Michael Langthorne and Donald Padgett, and I compare metrics to a tree. In this view, the data are the leaves, the measures are the twigs, the information is the branches, and the metrics are the trunk of the tree. All of these exist only with a good set of roots. These roots represent the root question. This analogy is a great way of showing the relationship between the components of the metric.
Figure 1-6 shows another view of the components that make up a metric. It's fitting that it looks like an organic structure.
Figure 1-6. Metric components

Without a good root question, the answers that you derive may lead you in the wrong direction. Answers are only useful when you know the question. The root question is so integral to the metric that it has to be part of the definition of a metric.
A metric tells a complete story using data, measures, information, and other metrics to answer a root question.
The root question is essentially the most important component of a metric. It is the map we use to help determine our direction. It identifies the goal of our journey. There are instances where you may, with good reason and to good result, collect data without a root question (see Chapter 15), but for the practical use of metrics, this is unacceptable. It would be like taking off on a journey without a destination in mind. No purpose, no plan, and no direction-just get in your car and start driving.
Later you may realize that you forgot your driver's license, your money, and even your shoes. You may realize that you've already traveled too far to make it back on the fuel remaining in your tank. You may realize that the only logical course of action is to continue on, although you don't know where you will end up. And since the only right place would be the destination you never determined, you are far more likely to end up someplace other than that right place. When you fail to reach the destination (which you may or may not ever have identified), you will blame the car. It didn't get enough miles to the gallon.
You won't blame the lack of forethought. Even if you get more gas and you figure out where you want to go, you'll not go back home for your wallet, license, or shoes. You've invested too much. Instead, you'll continue on and try to reach the destination from where you are, not wanting to admit that everything you'd done to that point was wasted effort.
You need the purpose of the metric, the root question, so that you are as efficient as possible with your resources (the car, fuel, time, and your efforts). Of course, if you have unlimited resources or you make money regardless of how you use the resources (perhaps you have a wealthy passenger who is only concerned with being shuttled around, not caring about the destination, purpose, or how long it takes to get there), then efficiency doesn't matter. But if you're like most of us and need to make the most of what you have, embarking on this meandering journey is more than wasteful. It will end up costing much more than the expenses you incur along the way.
The lack of direction will seed despair and resentment in you, your coworkers, your superiors, and your subordinates. It can destroy the spirit of your organization.
The root question provides you with focus and direction. You know where you are headed. You know the destination. You know the purpose of the metrics and the question you are trying to answer.
A root question, a correctly worded and fully thought-out question, allows you to determine the right answer(s). Without a root question-the right question-the answer you derive will be the result of a meandering journey. This answer will likely do more harm than good.
Even a well-worded root question will fail to lead to good results if the question is not the right question.
To put it all together, let's look at a full example. A metric is a complete story told through a representation of information. Information, in turn, is a compilation of measures, used to convey meaning. Measures are the results built from data, the lowest level of collectable components (values or numbers). The following is a simple example:

Data: 15 and 35

Measures: 15 mpg and 35 mpg

Information: Miles-per-gallon achieved using unleaded gasoline in a compact car: 15 mpg in the city, 35 mpg on the highway

Metric: The metric that would logically follow would be a picture (charts or graphs in most cases) that tells a story. In this case, the story may be a comparison of the fuel efficiency (miles per gallon) of different compact car models, combined with other indicators used to select the right car for you.
Root Question: What is the best car for me?
The distinctions among data, measures, and information are more relative than hard and fast. I don't mean to dictate inflexible definitions that will keep you from getting to the metric. The goal is to develop metrics, the answers to our questions.
The data could include the miles-per-gallon tag. Measures could include "in the city" and "on the highway." Information could distinguish between the various cars' make and model. The major point to take away is that additional meaning and context are added as we progress from data to measures to information. A metric then weaves this and much more information into a full story.
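If it helps to see the whole progression in one place, here is a minimal Python sketch of the car example. The car names, prices, and scoring weights are invented for illustration; only the 15 mpg and 35 mpg figures come from the example above, and a real metric would use the buyer's own priorities.

```python
# Data: bare numbers with no context
city_value, highway_value = 15, 35

# Measures: the same numbers with the lowest level of context (units)
city_mpg = f"{city_value} mpg"
highway_mpg = f"{highway_value} mpg"

# Information: measures combined with more meaningful context
compact_car_info = {
    "fuel": "unleaded gasoline",
    "class": "compact",
    "city_mpg": city_value,
    "highway_mpg": highway_value,
}

# Metric: a comparison across candidate cars that helps answer the
# root question, "What is the best car for me?"
candidates = {
    "Car A": {"city_mpg": 15, "highway_mpg": 35, "price": 21000},
    "Car B": {"city_mpg": 22, "highway_mpg": 31, "price": 24000},
}

def simple_score(car):
    # Illustrative weighting only; your priorities would define the real score.
    return car["city_mpg"] + car["highway_mpg"] - car["price"] / 1000

best = max(candidates, key=lambda name: simple_score(candidates[name]))
print(f"Best car under these assumptions: {best}")
```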
Let's look at another example, illustrated in Figure 1-7. Using a customer service desk as our model, we can identify each of the components listed.
Root Question: Is the service desk responsive to our customers?
Data:
1,259 per month
59 per month
Responses on a 1-to-5 scale

Measures:
The number of trouble calls
The number of abandoned calls
The length of time before the caller hung up
The survey responses

Information:
Percentage of total calls that were abandoned, by month
Percentage of total calls that were abandoned, by year

Metric:
Figure 1-7. Percentage of service calls abandoned, by month and by year

Looking at Figure 1-7, the responsiveness of the service desk for the past year has been well within expectations. During March, July, and August, however, the percentage of calls abandoned was above expectations (more than 20%). These three spikes are worth investigating to determine both the cause and the likelihood that they could become a recurring problem. Also of note are the steady increases leading up to these spikes. April and October were excellent months for responsiveness and should be analyzed to see if their causes are repeatable.
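As a rough illustration of how the information behind Figure 1-7 might be assembled, here is a short Python sketch. The monthly call counts below are invented; only the 20% expectation threshold comes from the discussion above.

```python
EXPECTATION = 0.20  # maximum acceptable share of abandoned calls

# (total calls, abandoned calls) per month -- hypothetical values
months = {
    "Feb": (1259, 180),
    "Mar": (1240, 310),   # a spike like the ones called out in the text
    "Apr": (1275, 45),
}

for month, (total, abandoned) in months.items():
    rate = abandoned / total
    flag = "INVESTIGATE" if rate > EXPECTATION else "ok"
    print(f"{month}: {rate:.1%} abandoned ({flag})")
```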
Is this a metric? Yes, this qualifies in our taxonomy as a metric because it tells a story in response to a root question.
Is this a good metric? No, it definitely can be better. It can tell a more complete story. Looking back at the information, we can also incorporate the survey responses on "time to answer" to determine the customers' perceptions of the service desk's responsiveness. Another important component of the metric should be the percentage of abandoned calls under 30 seconds. This standard could vary; it could be under 15 or 45 seconds. It depends on different factors. What is the customer listening to during the time on hold? How long does a person typically stay on the line before he realizes he dialed the wrong number? How short of a wait is considered not to be a lost opportunity? But improving this metric without first addressing the root question is, as a friend recently put it, like putting icing on a rock. It might look good enough to eat, but it's not. We can keep improving this metric so it looks better, but it won't satisfy unless we go back to the root question.
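To illustrate the suggested refinement, here is a small sketch that only counts an abandoned call against responsiveness when the caller waited longer than a threshold. The 30-second value and the sample hold times are assumptions, not figures from the service desk.

```python
THRESHOLD_SECONDS = 30

# Seconds each caller waited on hold before hanging up (hypothetical sample)
abandoned_hold_times = [8, 12, 45, 95, 20, 130]

# Only abandonments past the threshold count as lost opportunities
counted = [t for t in abandoned_hold_times if t > THRESHOLD_SECONDS]
print(f"{len(counted)} of {len(abandoned_hold_times)} abandoned calls exceeded "
      f"the {THRESHOLD_SECONDS}-second threshold")
```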
The metric, like its components, is a tool that can be used to answer the root question. We will address the proper use of these tools later. For now, it's enough to have a common understanding of what the components are and how they relate to each other.
The Data-Metric Paradox.
There is an interesting paradox involving the components of metrics and their relationship to the root question used to derive them.
Data, the easiest to understand, identify, and collect, should be the last item to develop. The most complex and difficult component, the root question, has to come first. As our analogy of the hapless driver on the meandering journey showed, we must first identify our destination and purpose. Rather than start with the simple to build the complex, we must start with the most complex and use it to identify the simple.
The three little pigs also ran into this paradox. The first pig's doctor was happy with data and measures, but ignored the bigger, more important requirement. He lost his patient, but did it with "healthy" numbers. Business can do the same. You can have good data points (sales per customer, profit/sale, or repeat customers) and still go out of business.
We have to start with the complex to uncover the simple-start at the root question and drive unerringly toward data.
Identifying the correct root question is not as simple as it sounds, nor as difficult as we normally make it. We need to be inquisitive. We need to keep digging, until we reach the truth, the question at the root, the need, the requirement. What is the purpose of the metric? What is it that you really want to know? This root need enables us to form a picture of the answer. This picture is the design of the metric.
Once you know the root question, you can draw a picture.
The picture, the design of the metric, can be created without any idea of the actual answers. The metric provides the form for the information. The information tells us what measures we'll need, and the measures identify the data required. This is the best way to create a metric. From the question to the metric to the information to the measures and, finally, to the data.
Unfortunately, most times we attempt it in the opposite direction, starting with the simple (data) and trying to build on it to develop the complex (root question). This process seldom succeeds. But when we start at the complex, forming a picture of what the question is and how the answer will look, it becomes easy to work down to the data.
Data, measures, information, and metrics all serve the same master: the root question. They all have a common goal: to provide answers to the question. Because of this, the question defines the level of answer necessary.
Let's pose the following question: How far is it to Grandma's house? You don't need a metric to provide the answer to this question. You don't even need information. A measure (for example, the number of miles) will suffice. And you will be fully satisfied. For data to be sufficient, you have to ask the question with enough context to make a simple number or value an adequate answer. How many miles is it to Grandma's house? How much longer will it take to get to Grandma's house? In these cases, data is all you need. But data is rarely useful in and of itself.
Let's pose another question: Do we have time to do any sightseeing or shopping along the way and still make it in time for Grandma's turkey dinner? To answer this, we require information. The measures and data might include the following:

The time Grandma is serving dinner
The current time
The number of sightseeing or shopping stops along the way
The estimated time to sightsee/shop per stop

We still don't need a metric. And we definitely don't need a recurring metric in which we have to collect, analyze, and report the results of building information on a periodic basis.
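As a quick illustration, a few lines of arithmetic are all the "information" this question needs. Every time value below is assumed for the sake of the example.

```python
from datetime import datetime, timedelta

dinner_time   = datetime(2023, 11, 23, 17, 0)   # Grandma serves at 5:00 pm (assumed)
current_time  = datetime(2023, 11, 23, 10, 0)   # it's 10:00 am now (assumed)
driving_time  = timedelta(hours=4)              # remaining drive (assumed)
stops         = 3                               # planned sightseeing/shopping stops
time_per_stop = timedelta(minutes=45)           # estimated time per stop

time_available = dinner_time - current_time
time_needed = driving_time + stops * time_per_stop

print("We can stop along the way" if time_needed <= time_available
      else "Drive straight through")
```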
The problem is that management-anyone above staff level-has been conditioned to almost always start with data. The few who start with metrics already have the answer in mind (not a bad thing per se), but lack the question. This leads them to ask for recurring (weekly, monthly, quarterly, or annual) reports. They'd like to see the metrics in a certain format: trend lines with comparison to a baseline based on best practices, both monthly and with a running annual total. Sounds great, but without knowing the root question, asking for this answer runs the risk of wasting a lot of resources.
Sometimes clients (managers, department heads, organizational leaders, etc.) who know the answer (metric) they want before they know the question realize that the answers don't really fulfill their needs and try to make a mid-course correction. Like our meandering driver, they refuse to start fresh and go back to the beginning, to ignore the work already accomplished and start over again. When they realize the metric is faulty, they do one of the following:

Assume the data is wrong
Decide the analysis is wrong
Tweak the data (not the metric)

Some really wise managers decide that the metric is incorrect and try again. Unfortunately, they don't realize that the problem is the lack of a root question or that the question they are working from is wrong. Instead of starting over again from the question, they try to redesign the metric.
The bottom line and the solution to this common problem? Pick your cliché; any of them works: We have to start at the top. We have to start at the end. We have to start with the end in mind. You can't dig a hole from the bottom up. We have to identify the correct root question.
The root question will determine the level of the answer. If the question is complex enough and needs answers on a periodic basis, chances are you will need to develop a comprehensive metric. A question along the lines of "How is the health of (a service or little pigs)?" may require a metric to answer it, especially if you want to continue to monitor the health on a regular basis.
The vagueness of the question makes it more complex. Clarity simplifies.
As we design the metric to answer the root question, we realize that we need to have measures of the various components that make up an organization's health. In the case of the three pigs (or humans for that matter), we may want indicators on the respiratory, circulatory, digestive, and endocrine systems. To say nothing of the nervous or excretory systems, bones, or muscles. The point is, we need a lot more information since our question has such a wide scope.
If our question had a narrower scope, the answer would be simpler. Take the following question, for example: How is your weight-control going? The answer can be provided by taking periodic measures after stepping on a reliable scale. Unfortunately, rarely is the question this specific. If the first little pig is only asked about his weight, the other indicators of health are missed. Focusing too closely on a specific measure may lead to missing important information. You may be asking the wrong question, like the second little pig's doctor, who only used three indicators and neglected to share the bigger picture with his porcine patient.
Perhaps you know about your blood pressure. Perhaps you had a full checkup and everything is fine-except you need to lose a few pounds. "How is your weight loss coming along?" may be good enough. If it is, then a metric is overkill. A measure will suffice.
When designing a metric, the most important part is getting the right root question. This will let us know what level of information is required to answer it. It will govern the design of the metric down to what data to collect.
Metric Components.
Let's recap the components of a metric and their definitions:

Data: Data, for our purposes, is the simplest possible form of information and is usually represented by a number or value; for example, six, twenty-two, seventy, true, false, high, or low.
Measures: Made up of data, measures add the lowest level of context possible to the data. Measures can be made up of other measures.
Information: Information is made up of data and measures. Information can be made up of other information. Information provides additional, more meaningful context.
Metrics: Metrics are made up of data, measures, and information. Metrics can be made up of other metrics. Metrics give full context to the information. Metrics (attempt to) tell a complete story. Metrics (attempt to) answer a root question.
Root Question: The purpose for the metric. Root questions define the requirements of the metric and determine its usefulness.
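One way to picture how these components nest is to sketch them as simple types. The class names mirror the definitions above, but the fields and composition rules are my own assumptions about how the pieces might fit together.

```python
from dataclasses import dataclass, field

@dataclass
class Measure:
    value: object          # the underlying data (a number or value)
    context: str           # the lowest level of context, e.g. "mpg, city"

@dataclass
class Information:
    description: str
    parts: list = field(default_factory=list)       # Measures or other Information

@dataclass
class Metric:
    root_question: str                               # the purpose of the metric
    information: list = field(default_factory=list)  # Information, or other Metrics

# Example using the compact-car figures from earlier in the chapter
fuel = Information("Fuel efficiency of a compact car on unleaded gasoline",
                   [Measure(15, "mpg, city"), Measure(35, "mpg, highway")])
metric = Metric("What is the best car for me?", [fuel])
print(metric.root_question)
```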
Recap.
This chapter introduced a common language for metrics and their components. It also introduced the Data-Metric Paradox, in which we learned that we have to start with the most complex to drive to the simple. We have to start with the root question to get us safely to the proper level of information necessary to answer the question. It's possible the question may not require a metric, or even information. When tasked by management to create a metric (or a metric program), we have to slow down and ask what the root questions are. We have to be willing and able to accept that the answer may not lie in creating a metric at all.
Bonus Material.
I fear I may have misled you. Given the way I presented this chapter, you might wonder if I'm leading you down the wrong track. But there was a logical reason for it. I considered presenting the definitions in the top-down order in which I'd like you to address them: root question to metric to information to measures and, finally, to data. But I worried that readers would rebel. This is not the normal order in which we come to metrics. Unfortunately, our normal journey to metrics starts with requests for data.
Figure 1-8 depicts the hierarchy between the components, from the bottom up.
Figure 1-8. Hierarchy of metrics components

I've worked with many managers who have been tasked to present meaningful information (metrics) on how their departments or units were doing. The first step they all take is to ask, what data do we have? This tack is taken in an innocent attempt to keep from creating more work for the staff. The hope is to fill the "box" with existing data and placate the organization's leadership. But, because we don't take the demand for metrics as an opportunity to develop something useful for all levels of the organization, we do as little as possible to satisfy the request. The next question (if the existing data doesn't seem to be enough) is, what data can we get? So, rather than introduce the components in the order that I insist is right, I presented them to you in the order I thought you would find familiar. Now, I hope you'll take the leap of faith to trust me-and start with the big picture first.
So, please accept my apology and now allow me to ensure that you have a proper foundation for the rest of this book.
Misconception 1. Data is not useful.
I may have given you the impression that data is not useful. Or that measures and information, without being part of a larger metric, lack applicability to improving your organization. The Data-Metric Paradox addressed this, but it's worth pointing out again. Data can be very useful if your question is extremely specific and requires only a numeric or value answer, like, what time is it? If data is all that is required, it is likely that you won't need a metric or any of the detailed information that accompanies it.
Misconception 2. Start with data and then build toward metrics.
Most of the time, this misconception is born of the misguided belief that because you need answers, you must start with data. The other catalyst for this misconception is the abundance of data available, thanks to technology. You may believe that you should collect data, try to group them into measures, then compile the measures into meaningful pieces of information, and finally, take these components and build a metric to give it all meaning and clarity.
The truth is just the opposite. There are so many possible data points you can collect that beginning at the data level will almost assure that you fail to create a usable metric. As you will see, it is important to start with the end in mind; the metric is the end (if the question warrants it) and, therefore, also our beginning.
Misconception 3. You have to have a root question before you gather data, measures, or information.
While I wish this were true, you'll find many times that you are required to gather data, analyze measures, and create informative reports with no idea of the reason why. Like most good soldiers, you may very well have to do what is short of "right." I highly suggest that you do your best to identify the root question before you start, but if you can't, of course you can gather information without it. If this happens, I recommend you try to get to the root question as soon as possible within the process.
When you get the "I'll know it when I see it" argument from a higher-up, stay strong. The best way to help customers identify what they really want is to help them identify their root question. In Douglas Hubbard's excellent book, How to Measure Anything: Finding the Value of Intangibles in Business (Wiley, 2010), the author introduces his methodology for identifying what people really want to know.
Hubbard uses what he calls a "clarification chain," which allows you to keep digging deeper into what matters to the client. He asks simply, "What do you mean by x?" In his example of working with the Department of Veterans Affairs, he ended up holding multiple workshops just to get to the root question. Hubbard doesn't require a root question per se; he stops well short. But his book is about measures, not metrics (in our definition). The good news is that he is totally correct about being able to measure anything. I've used his book numerous times. Sometimes when I work with a client, we run into a roadblock after we've identified the question, designed a plausible metric, and determined the information we need. Often, the client doesn't want to get past the picture because she doesn't believe we can measure what we need to compile the information. I pull Douglas Hubbard's book from the shelf and assure her that we can really measure anything. This is even easier because with the root question, metric, and information requirements in hand, identifying the measures becomes very simple.
So, it's not impossible to start at the bottom; it's just not the wise choice.
Designing Metrics.
The How.
Now that we have a common language to communicate with, the next step is to discuss how to proceed. I've read numerous books, articles, and blog posts on Balanced Scorecards, Performance Measures, and Metrics for Improvement. Each pushes the reader to use the author's methods and tools. But, I haven't found one yet that puts "how to develop a metric from scratch" into plain English. It's about time someone did.
In this chapter we'll cover the following:

How to form a root question-the right root question
How to develop a metric by drawing a picture
How to flesh out the information, measures, and data needed to make the picture
How to collect data, measures, and information

This will seem like a lot of work (and it is), but I guarantee you that if you follow this method you will save an enormous amount of time and effort in the long run. Most of your savings will come from less rework, less frustration, and less dissatisfaction with the metrics you develop.
Think of it this way: You can build a house by first creating a blueprint to ensure you get the house that you want. Or you can just order a lot of lumber and supplies and make it up as you go along. This process doesn't work when building a house or developing software. It requires discipline to do the groundwork first. It will be well worth it. I've never seen anyone disappointed because they had a well thought-out plan, but I've helped many programmers try to unravel the spaghetti code they ended up with because they started programming before they knew what the requirements were.
While programmers have improved at upfront planning, and builders would never think to just start hammering away, sadly those seeking to use metrics still want to skip the requirements phase.
So let's start working on that blueprint.
Getting to the Root Question.
Before you can design a metric, you have to first identify the root question: What is the real driving need? In the service desk example in the last chapter, the director asked, "Is the service desk responsive to our customers?" The a.n.a.lyst took that question and developed a decent metric with it-percentage of service calls abandoned. He didn't do a great job, however, because he didn't make a picture (metric) first. Instead, he went straight to collecting data and measures. He also didn't determine if the question was a root question or the right question.
The discussion I've had many times with clients often goes like the following dialog (in which I'm the metrics designer):

Director: "I'd like to know if our service desk is responsive to our customers."