The manager looked uncomfortable.
The employee continued, "So, tell me, what do you want me to change? If you want, I won't take any cases from the other analysts and I'll let the customers' toughest problems go unresolved. Your call. You're the boss."
Needless (but fun) to say, the boss never bothered him about his time to resolve again. And luckily for all involved, the boss did not remain in the position much longer.
The only proper initial response to metrics is to investigate.
Earlier, I offered the view that metrics were not facts. You can give metrics too much value by deciding that they are facts. This is dangerous when leadership decides to "drive" decisions with metrics. This gives metrics more power than they deserve. When we elevate metrics to truth, we stop looking deeper. We also risk making decisions and taking actions based on information that may easily be less than 100 percent accurate.
Metrics are not facts. They are indicators.
When we give metrics some undeserved lofty status (as truth instead of indicators) we encourage our organization to "chase the data" rather than work toward the underlying root question the metrics were designed to answer. We send a totally clear and equally wrong message to our staff that the metrics are what matter. We end up trying to influence behavior with numbers, percentages, charts, and graphs.
The simplest example may be in customer satisfaction surveys. Even direct feedback provided during a focus group interview has to be taken with a grain of salt. And when we look at truly objective data, there is always room for misinterpretation. Objective measuring tools can have defects and produce faulty data.
Most of the time, good managers (as well as good workers and good customers) know the truth without the data. Investigate when you see data that doesn't match your gut instinct. Investigate when the data agrees too readily with your hunch.
One of the major benefits of building a metric the way I suggest is that it tells a complete story in answer to a root question. If you've built it well, chances are, it's accurate and comprehensive. It is the closest thing you'll get to the truth. But, I know from experience, no matter how hard I try there is always room for error and misinterpretation. A little pause for the cause of investigation won't hurt-and it may help immensely.
Metrics Can Be Wrong.
Since there is the possibility of variance and error in any collection method, there is always room for doubting the total validity of any measure. If you don't have a healthy skepticism of what the information says, you will be led down the wrong path as often as not. Let's say the check-engine light in your car comes on. Let's also say that the car is new. Even if we know that the light is a malfunction indicator, we should refrain from jumping to conclusions. My favorite visits to the mechanics are when they run their diagnostics on my check-engine light and they determine that the only problem is with the check-engine light.
Perhaps you are thinking that the fuel-level indicator would be a better example. If the fuel gauge reads near empty, especially if the warning light accompanies it, you can have a high level of confidence that you need gas. But the gas gauge is still only an indicator. Perhaps it's a more reliable one than the check-engine light, but it's still only an indicator. Besides the variance involved (I noticed that when on a hill the gauge goes from nearly empty to nearly an eighth of a tank!), there is still the possibility of a stuck or broken gauge.
I understand if you choose to believe the gas gauge, the thermometer, or the digital clock-which are single measures. But, when you're looking at metrics, which are made up of multiple data, measures, and information, I hope you do so with a healthy dose of humility toward your ability to interpret the meaning of the metric.
This healthy humility keeps us from rushing to conclusions or decisions based solely on indicators (metrics).
Metrics are a tool, an indicator-they are not the answer and may have multiple interpretations.
I've heard (too often for my taste) that metrics should "drive" decisions. I much prefer the attitude and belief that metrics should "inform" decisions.
Accurate Metrics Are Still Simply Indicators.
Putting aside the possibility of erroneous data, there are important reasons to refrain from putting too much trust in metrics.
Let's look at an example from the world of Major League Baseball. I like to use baseball because of all the major sports, baseball is easily the most statistically focused. Fans, writers, announcers, and players alike use statistics to discuss America's pastime. It is arguably an intrinsic part of the game.
To be in the National Baseball Hall of Fame is, in many ways, the pinnacle of a player's career. Let's look at the statistics of one of the greatest players. In 2011, I was able to witness Derek Jeter's 3,000th hit (a home run), one of the accomplishments a player can achieve to essentially assure his position in the Hall of Fame (Jeter was only the 28th player of all time to achieve this). The question was immediately raised-could Jeter become the all-time leader in hits? The present all-time leader had 4,256 hits! Personally, I don't think Jeter will make it.
The all-time hits leader was also voted an All-Star 17 times in a 23-year career-at an unheard-of five different positions. He won three World Series championships, two Gold Glove Awards, one National League Most Valuable Player (MVP) award, and a World Series MVP award. He also won Rookie of the Year and the Lou Gehrig Memorial Award and was selected to Major League Baseball's All-Century Team. According to one online source, his MLB records are as follows:
Most hits
Most outs
Most games played
Most at bats
Most singles
Most runs by a switch hitter
Most doubles by a switch hitter
Most walks by a switch hitter
Most total bases by a switch hitter
Most seasons with 200 or more hits
Most consecutive seasons with 100 or more hits
Most consecutive seasons with 600 at bats
Only player to play more than 500 games at each of five different positions
This baseball player holds a few other world records, as well as numerous National League records that include most runs and doubles.
In every list I could find, he was ranked in the top 50 of all-time baseball players. In 1998, The Sporting News ranked him 25th, and the Society for American Baseball Research placed him 48th.
So, based on all of this objective, critically checked data, it should be easy to understand why this professional baseball player was unanimously elected to the National Baseball Hall of Fame on the first ballot for which he was eligible.
But he wasn't elected.
His name is Pete Rose. He is not in the Baseball Hall of Fame and may never get there. If you look at all of the statistical data the Hall's voters use, his selection is a no-brainer. But the statistics, while telling a complete story, lack one input the voters took into account: he broke one of baseball's not-to-be-breached rules by gambling, both legally and illegally, on professional baseball games.
In the face of the overwhelming "facts" that Pete Rose should be in the Baseball Hall of Fame, the truth is in direct contrast to the data.
Even if we look at well-defined metrics that tell a full story, they are only indicators in the truest sense. If you fully and clearly explain the results of your investigation, you complete the metric by explaining the meaning of the indicator. You explain what the metrics indicate so that better decisions can be made, improvement opportunities identified, or progress determined. You are providing an interpretation-hopefully one backed by the results of your investigation.
No matter how you decorate it, metrics are only indicators and as such should elicit only one initial response: to investigate.
Of course, some metrics are simple enough that you will accept their story without as much investigation (like the gas gauge on your car), but even in these instances, you should keep a watchful eye in case they start to show you data that you believe is misleading or erroneous.
At the end of the day, even if you have total confidence in the accuracy of the data (pro-sports statistics, for example), you have to treat it all as indicators. Data can't predict the future. If it could, then there would be no reason to play the games!
The point is that metrics should not be seen as facts but rather as indicators of current and past conditions. Used properly, metrics should lead our conversations, help us to focus, and draw our attention in the right direction. Metrics don't provide the answers; they help us ask the right questions and take the right actions.
Indicators: Qualitative vs. Quantitative Data.
The simple difference between qualitative and quantitative data is that qualitative data is made up of opinions and quantitative data is made up of objective numbers. Qualitative data is more readily accepted to be an indicator, while quantitative data is more likely to be mistakenly viewed as fact, without any further investigation necessary. Let's look at these two main categories of indicators.
Qualitative Data.
Customer satisfaction ratings are opinions-a qualitative measure of how satisfied your customer is. Most qualitative collection tools consist of surveys and interviews. They can be in the form of open-ended questions, multiple-choice questions, or ratings. Even observations can be qualitative, if they don't involve capturing "numbers"-like counting the number of strikes in baseball, or the number of questions about a specific product line. When observations capture the opinions of the observer, we still have qualitative data.
Many times, qualitative data is what is called for to provide answers to our root question. Besides asking how satisfied your customers are, some other examples are:
How satisfied are your workers?
Which product do your customers prefer, regular or diet?
How fast do they want it?
How much money are your customers willing to pay for your product or service?
When or at what hours do your customers expect your product or service to be available?
Do your workers feel appreciated?
No matter how you collect this data, they are opinions. They are not objective data. They are not, for the most part, even numbers. You can take qualitative data and try to transpose them into more quantitative forms-turning opinions into values on a Likert scale, for example. But in the end, they are still opinions. They'll look like quantitative data, but they are not.
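A small sketch of that transposition (the mapping and the responses below are hypothetical, invented for illustration): survey answers can be mapped onto a 1-to-5 Likert scale and averaged, but the resulting number is still built entirely from opinions.

```python
# Hypothetical example: turning qualitative survey answers into Likert values.
LIKERT = {
    "very dissatisfied": 1,
    "dissatisfied": 2,
    "neutral": 3,
    "satisfied": 4,
    "very satisfied": 5,
}

# Invented responses from an invented survey.
responses = ["satisfied", "very satisfied", "neutral", "satisfied"]

# Map each opinion to its scale value and average them.
scores = [LIKERT[r] for r in responses]
average = sum(scores) / len(scores)

print(f"Average satisfaction: {average:.2f} out of 5")  # looks quantitative, is still opinion
```

The average looks like objective data, but every input to it was a subjective answer.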
Some analysts, especially those that believe the customer is always right, believe that qualitative data is the best data. Through open-ended questions these analysts believe you receive valuable feedback on your processes, products, and services. Since the customer is king, what better analytical tool is there than to capture the customers' opinion on your products and services?
These analysts love focus groups and interviews. Surveys will suffice in a pinch, but they don't let analysts observe the nonverbal cues and other signs that can help them determine the answer to the question, "How satisfied are our customers?"
One of the most popular organizational development books in recent years is First, Break All the Rules by Marcus Buckingham and Curt Coffman (Simon & Schuster, 1999). This book is built on the analysis the authors performed on qualitative data. The reviews sell potential readers by promising to help us "see into the minds" of successful managers and leaders in successful companies. The overwhelming success of this book is just one modern-day example of the power of qualitative measures.
Quantitative Data.
Quantitative data usually means numbers-objective measures without emotion. Examples include all of the gauges in your car, as well as information from automated systems like automated-call tools, which tell you how many calls were answered, how long it took for them to be answered, and how long each call lasted.
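For illustration (a hypothetical sketch; the call records and field layout are invented, not from any particular call system), those automated-call measures can be computed directly from raw call records:

```python
# Hypothetical call records: (seconds waited before answer, call duration in seconds).
# A wait of None means the caller hung up before being answered.
calls = [(12, 240), (45, 180), (None, 0), (30, 600), (8, 95)]

# Keep only calls that were actually answered.
answered = [c for c in calls if c[0] is not None]

calls_answered = len(answered)
avg_wait = sum(w for w, _ in answered) / len(answered)
avg_duration = sum(d for _, d in answered) / len(answered)

print(f"Answered: {calls_answered} of {len(calls)}")
print(f"Average wait: {avg_wait:.1f}s, average duration: {avg_duration:.1f}s")
```

The arithmetic is objective, but the results are still only indicators: the numbers say nothing about why callers waited or hung up.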
The debate used to be that one form of data was better than the other. It was argued that quantitative data was better because it avoided the natural inconsistencies of data based on emotional opinions. The quantitative camp argued that someone could rate your product high or low on a satisfaction scale for many reasons other than the product's quality. Factors that could go into a qualitative evaluation of your service or product include:
The time of day the question was asked
The mood the respondent was in before you asked the question
Past experiences of the respondent with similar products or services
The temperature of the room
The lighting
The attractiveness of the person asking the question
Whether the interviewer has a foreign accent
The list can go on forever. Quantitative data, on the other hand, avoids these variances and gets directly to the things that can be counted. Some examples in the same type of scenario could include:
The number of customers who bought your product
The number of times a customer buys the product
The amount of money the customer paid for your product
What other products the customer bought
The number of product returns
The proponents of quantitative information would argue that this is much more reliable and, therefore, more meaningful data.
I'm sure you've guessed that neither camp is entirely correct. I'm going to suggest using a mix of both types of data.
Quantitative and Qualitative Data.
For the most part, the flaws of qualitative data can best be alleviated by including some quantitative data-and vice versa. Qualitative data, taken in isolation, is hard to trust because of the many factors that can influence the information you collect. If a customer says that they love your product or service, but never buy it, the warm fuzzy you receive from the positive feedback will not help when the company goes out of business. Quantitative data on the number of sales and repeat customers can help provide faith in the qualitative feedback.
If we look at quantitative data by itself, we risk making some unwise decisions. If our entire inventory of a test product sells out in one day, we may decide that it is a hot item and we should expect to sell many more. Without qualitative data to support this assumption, we may go into mass production and invest large sums. Qualitative questions could have informed us of why the item sold out so fast. We may learn that the causes for the immediate success were unlikely to recur and therefore we may need to do more research and development before going full speed ahead. Perhaps the product sold out because a confused customer was sent to the store to buy a lot of product X and instead bought a lot of your product by mistake. Perhaps it sold quickly because it was a new product with a novel look, but when asked, the customers assured you they'd not buy it again-that they didn't like it.
Not only should you use both types of data (and the accompanying data collection methods), but you should also look to collect more than one of each. And of course, once you do, you have to investigate the results.
You may believe qualitative measures are more obvious indicators. Yet even when we ask a customer if she is satisfied with a product, and she answers emphatically, "yes," her response doesn't mean she was truly satisfied. The only "fact" we know is that the respondent said she was satisfied.
Even in the case of automated-call software, the results are only indicators.
Quantitative data, while objective, is still only an indicator. If you don't know why the numbers are what they are, you will end up guessing at the reasons behind the numbers. If you guess at the causes, you are guessing at the answer.
Metrics (indicators) require interpretation to be used properly.
I advocate using triangulation (see Chapter 7) for getting a better read on the full answer to any root question. This would direct us not to take qualitative or quant.i.tative data alone. The great debate between which is better is unnecessary. You should use some of each in your recipe.
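A minimal sketch of that mix (the numbers and thresholds below are hypothetical, not from the book): pair a qualitative satisfaction average with a quantitative repeat-purchase rate, and treat disagreement between the two as a prompt to investigate rather than as an answer.

```python
# Hypothetical triangulation: qualitative opinions alongside quantitative behavior.
satisfaction_scores = [5, 4, 5, 4, 5]  # Likert ratings from a survey (opinions)
customers = 200                         # customers who bought the product (counted)
repeat_buyers = 30                      # customers who bought it again (counted)

avg_satisfaction = sum(satisfaction_scores) / len(satisfaction_scores)
repeat_rate = repeat_buyers / customers

# Customers say they love the product but rarely buy it again.
# The disagreement between the two indicators is the signal to dig deeper.
if avg_satisfaction >= 4.0 and repeat_rate < 0.25:
    print("Indicators disagree: investigate before acting.")
```

Neither indicator alone would have raised the question; together they point you toward the right investigation.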
Recap.
The following are principles to remember:
Metrics are only indicators.
Metrics are not facts. Even when you have a high level of confidence in their accuracy, don't elevate them to the status of truth.
The only proper response to a metric is to investigate.
When you tell the story by adding prose, you are explaining what the metrics are indicating so that better decisions can be made, or improvement opportunities identified, or progress determined.
There are two main categories of indicators: Qualitative and Quantitative. Qualitative is subjective in nature and usually an expression of opinion. Quantitative is objective in nature and compiled using automated, impartial tools.
Metrics by themselves don't provide the answers; they help us ask the right questions and take the right actions.
Metrics require interpretation to be useful.
Even the interpretation is open to interpretation-metrics aren't about providing truth, they're about providing insight.
Conclusion.
Metrics are only indicators. This doesn't mean they aren't valid or accurate. Even the most objective, accurate, and valid metrics should only be treated as indicators. From my days in the Air Force, I learned that "perception is reality." This is true for metrics. One of the major reasons I insist on providing an explanation to accompany your charts, graphs, and tables is to limit the variance in perceptions of your metrics. The interpretation of your metrics should not be left up to the viewer. You should do the work and due diligence, and investigate what the metrics are telling you. You should take the results of your investigation to form thoughtful conclusions based on data. These should be provided in the explanation for the metric.
You will then do your best to sell your interpretation of the metric to your audience. Even with that, you have to accept that your interpretation is open to interpretation by those viewing your metrics. You also have to accept that your well-defined and fully told story is, in the end, only an indicator. It should be a well-explained indicator and one that your diagnostics have correctly interpreted; but it is an indicator nonetheless. This requires healthy humility on your part.
Remember, metrics are only a tool. They are not meant to be more.