Metrics: How to Improve Key Business Results Part 18

Other options included using Reichheld's "Promoter to Detractor" ratios. This was actually attempted for over a year before I finally gave in to the inevitable. (I couldn't get the third party to use a 10-point scale, and the concept of promoter to detractor was too complex for some of the managers.) The consistency provided by using percentages quickly became expected by the workforce. I still believe in the value of the promoter-to-detractor analysis, but for the purpose of the report card, I added percentage satisfied to the measurements for our particular audience.

FREDERICK REICHHELD'S PROMOTER-TO-DETRACTOR SCALE.

According to Reichheld's The Ultimate Question (Harvard Business Press, 2006), the most important question you should ask is: "Would you recommend this service (or product) to a friend or coworker?" Using a 10-point scale for the answers, with 1 being "definitely not" and 10 being "definitely yes," if a respondent gives you a 9 or 10, she is considered a "promoter"-someone who would encourage others to use your service (or buy your product). If the respondent gives you a score of 6 or less, she is a "detractor." Detractors will actively discourage others from using your service (or buying your product). Scores of 7 and 8 are considered "neutral" answers-meaning that you can't predict whether they will promote or detract from your reputation. You need a ratio of two promoters for every detractor to translate to growth (neutrals are not counted). The higher the ratio, the better your word-of-mouth advertising, and the more likely your business (the number of customers) will grow.
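
To make the mechanics concrete, here is a minimal sketch in Python; the response values and the two-to-one check are my own illustration, not data from the book. It simply buckets answers to Reichheld's 10-point question and evaluates the promoter-to-detractor ratio.

```python
# Illustrative only: made-up answers to the "would you recommend us?"
# question on a 10-point scale.
responses = [10, 9, 9, 8, 7, 10, 6, 5, 9, 10, 3, 8, 9]

def bucket(score):
    """Classify a 1-10 answer as promoter, neutral, or detractor."""
    if score >= 9:
        return "promoter"
    if score >= 7:
        return "neutral"
    return "detractor"  # 6 or less

promoters = sum(1 for s in responses if bucket(s) == "promoter")
detractors = sum(1 for s in responses if bucket(s) == "detractor")

if detractors:
    ratio = promoters / detractors
    print(f"{promoters} promoters to {detractors} detractors (ratio {ratio:.1f}:1)")
    # Reichheld's rule of thumb as described above: roughly 2:1 or better
    # suggests word-of-mouth growth.
    print("Suggests growth" if ratio >= 2 else "Below the two-to-one rule of thumb")
else:
    print(f"{promoters} promoters and no detractors")
```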

Figure 9-7 shows the ratio of promoters to detractors. This ratio falls below one if you have more detractors than promoters. In the Service Desk case, we had a high ratio. For example, in December 2009, for every detractor (a 1-3 rating) the Service Desk had over 50 promoters (those rating them as a 5).

Figure 9-7. Ratio of promoters to detractors for overall customer satisfaction.

The ratios were impressive. I took some liberties translating Reichheld's methodology. The 5s equated to 9s or 10s and 1 through 3 equated to 1 through 6. The 4s equated to 7s and 8s. I stopped using the terms "promoter" and "detractor" since we weren't using the proper question. It was more meaningful to simply say that the measure reflected the ratio of "Highly Satisfied" (5s) to those who could not say they were satisfied (3s or less). This in itself was more meaningful than an average, but still not as clear as I would have liked.
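
A quick sketch of the translation just described, with invented scores; the mapping (5s as promoter-like, 4s as neutral, 1s through 3s as detractor-like) follows the liberties I took above.

```python
# Invented 5-point survey scores, translated per the mapping described
# above: 5 = "highly satisfied," 4 = neutral, 1-3 = "not satisfied."
five_point_scores = [5, 5, 4, 5, 3, 5, 5, 4, 2, 5, 5, 5]

highly_satisfied = sum(1 for s in five_point_scores if s == 5)
not_satisfied = sum(1 for s in five_point_scores if s <= 3)

if not_satisfied:
    ratio = highly_satisfied / not_satisfied
    print(f"Highly satisfied to not satisfied: {ratio:.1f} to 1")
else:
    print(f"{highly_satisfied} highly satisfied and no one below neutral")
```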

"What about 4s?" was a common question that I received when I revealed the data this way. Explaining that 4s were "truly neutral" didn't sway most people. The service provider thought we were losing the "satisfieds" and that a 3 was neutral. My correlating the values to Reichheld's formula was hard for them to accept. I believe this was in large part due to fear of the measures and that they wouldn't look as good as they could.

Using the ratio of highly satisfied to not satisfied may seem logical to you (it does to me). But I found that this wasn't the norm for customer satisfaction surveys. The Service Desk had been reporting this data for over a year and they always reported it as an average score like 4.7 (out of 5). It seemed as if the best way to show the results would be to use a Likert Scale.

I looked at all Service Desk reports for the past three years. The first year showed an average score of 4.7 for the year, the following year 4.76, and the most recent year 4.8. Besides a slight upward trend, I couldn't figure out what the data meant. Was the average good or bad? Well, the third party provided benchmarks for our industry and for all users of their service, so we could see that our scores were above average. But I still felt a little lost. I didn't see how 4.8, 4.9, or 4.58 meant anything. Granted, if the average were 5.0, I would know that all scores were fives. This would mean that 100 percent of the customers were highly satisfied with our services. But as soon as the average score fell below that mythical result, I had trouble knowing what it meant. Even when I added in the total number of responses, I did not know what the average rating meant. Figure 9-8 shows why it was hard to comprehend the meaning.

Figure 9-8. Customer satisfaction rating score as an average.

So, the average score lacked meaning and comparing the highly satisfied to the not satisfied was a little confusing. A third choice was to use the percentage satisfied. I could understand quickly that a certain percentage of our customers (those who used our service) were either satisfied (4 or 5) or not satisfied (1, 2, or 3). Even with this I received arguments about the nuance of the meaning of "not satisfied." I had more than a few managers who wanted 3s not to be counted since they thought on a 5-point scale that 3s were neutral.

I had to explain that "neutral" meant the respondent, while not "dissatisfied," couldn't say "satisfied" either. The chart wasn't comparing satisfied to "dissatisfied"-but satisfied to "not satisfied." Notice that the same managers who wanted to include neutral scores in the first ratio (5s to 1s through 3s) wanted to drop neutral scores if they thought it would make their department look worse. Figure 9-9 shows why "percentage satisfied" was a simpler way to interpret the data.

Figure 9-9. Percentage of satisfied customers.
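
To illustrate why the percentage view carried more meaning than the average, here is a small sketch with two invented response distributions; the numbers are mine, not the Service Desk's.

```python
# Two invented months of 5-point scores. Month B has the higher average,
# yet a smaller percentage of satisfied (4 or 5) customers.
month_a = [5] * 70 + [4] * 20 + [3] * 5 + [1] * 5
month_b = [5] * 80 + [4] * 5 + [3] * 10 + [2] * 5

def summarize(scores):
    average = sum(scores) / len(scores)
    satisfied = sum(1 for s in scores if s >= 4)  # 4s and 5s count as satisfied
    return average, 100 * satisfied / len(scores)

for name, scores in (("Month A", month_a), ("Month B", month_b)):
    avg, pct = summarize(scores)
    print(f"{name}: average {avg:.2f}, {pct:.0f} percent satisfied")
```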

We were on a pretty steady streak of adding measures to the original plan, so we had no reservations when it came to customer satisfaction. We realized that we were only looking at feedback from those who had problems. This could give us a skewed view of our customers' overall satisfaction with our services. We would never hear from customers who wouldn't call our Service Desk because they didn't like our services. Or we could miss those who liked our services, but either didn't choose to fill out a survey or just hadn't used it in the current year. Basically, we wanted to hear from customers who hadn't called into the Service Desk. We wanted to hear from the rest of our customer base, which was not reflected in our Usage measures.

The answer was an annual customer satisfaction survey. We sent a survey that not only asked the basic questions about satisfaction with our services, but also which services were seen as most important to the customer. This helped with the other services we included in the report card. We also asked the "who is the preferred source for trouble resolution" question, which we used for Usage. This annual survey provided many useful measures besides customer satisfaction. Table 9-8 shows the first breakout of data for this category.

For the annual survey we were able to show the same percentage satisfied, but pulled from a different context. This may not seem too important, but it was very useful to allow for different viewpoints. One telling result was that the scores from the annual survey were considerably "lower" than those for trouble resolution. This flew in the face of what the departments expected (this rang true for all of the services). The staff incorrectly predicted that the scores for trouble resolution would be worse than those for annual surveys. They figured that customers filling out the trouble-resolution surveys were predisposed to be unhappy since they had a "problem," whereas the annual survey had a good chance of catching the customer in a neutral or good mood.

But the data proved them to be totally wrong. The resolution survey scores were significantly higher than the annual survey. This led to further investigation to determine why there was such a drastic difference (and one that went against the predictions). The investigation wasn't intended to improve the annual survey numbers-it was intended to provide understanding and, from that understanding, possible ideas.

One conclusion was that the resolution was done so well (fast, accurate, and with a high rate of success) that the customer was pleased enough to give great ratings. Conversely, the annual survey simply reflected ambivalence at the time. It wasn't that the annual survey numbers were bad-they were still very good. But they were low in comparison to the astronomically good scores received for trouble resolutions.

Another conclusion was that some of the responses to the annual survey (including a considerable number of negative scores) came from respondents who hadn't used the IT services-especially in the case of the Service Desk. Since the IT organization had a poor reputation from a few years prior, when service delivery was way below par, the respondents were rating the IT department based on this poor reputation. This is akin to the perception of Japanese manufacturing in the mid-20th century. If you said "Made in Japan," it meant that the item was junk. If it broke easily, didn't work, or failed to work more often than it did work-you would say, "It must have been made in Japan."

Today, that reputation has been essentially reversed. Now "Made in Japan" describes the height of quality. Japanese-made cars are more respected for quality than American-made automobiles. We won't get into the story of how an American helped make this happen (look up the story of W. Edwards Deming) because he couldn't get our own manufacturing industry to listen. The point is that the Japanese had to overcome their negative reputation. It wasn't as simple as just delivering higher-quality products. Their potential customer base had to be convinced to give them a try. Those who only knew Japanese products as the answer to a joke had to be won over. The same was true for a good portion of our customer base.

Unfortunately, those detractors Reichheld discusses can seriously damage your organization's reputation. If a good portion of your customer base is bad-mouthing your services and products, you will need to counteract that. Hoping and waiting for them to come around through attrition is a dangerous path to travel. You may find yourself out of business well before the customers realize that their perception is outdated and that you are, in fact, providing a healthy service.

The measures and follow-on investigation pointed to the need for a marketing program, not a change in the service, processes, or products.

The Higher Education TechQual+ Project: An example.

Timothy Chester's Higher Education TechQual+ project is a great example of how an annual survey can help provide not only satisfaction data, but also insights into what the customer sees as important. Tim is the CIO of the University of Georgia, and for the last six years, his pet project has been the development of the TechQual+ Project. The purpose of the project is to assess what faculty, students, and staff want from information technology organizations in higher education. It is primarily a tool for a higher education organization to find out its customers' perceptions of its services.

The TechQual+ project's goal is to find a common "language" for IT practitioners and IT users. This is part of what makes Chester's efforts special. But the first brick in the project's foundation is that "the end user perspective" is the key to the "definition of performance indicators for IT organizations." In other words, the customers' viewpoints are critical to the success of the IT organization and the meaningfulness of any metric program.

Chester writes, "With end-user-focused data in hand, one can easily understand failures in service delivery as one-time mistakes, as opposed to urban myths of recurring problems in IT."1 In the Protocol Guide for TechQual+, Tim Chester explains that the tool's key purpose is to allow "IT leaders to respond to the requests of both administrators and accreditation bodies, who increasingly request evidence of successful outcomes." The project intends to give IT organizations a tool for compiling evidence to answer these requests.

Chester goes on to explain, "[For] IT organizations, demonstrating the effective delivery of technology services is vital to the establishment of appreciation, respect, and trustworthiness..."

The project lists valid and reliable measures of IT service effectiveness as the most crucial inputs for its purpose. Chester also believes that while standardized performance measures are sorely needed, the higher education IT industry is still far from meeting this need.

TechQual+ attempts to provide measures that can be understood and used by the organization's customers, a database for comparing results between institutions, and an easy-to-use survey tool for producing the data. One of the defining points of the project is that TechQual+ defines outcomes "from an end-user point of view." Chester understands the need for more than a customer satisfaction survey and uses his tool to capture the customers' viewpoints on any and all facets of what the Answer Key identifies as Product/Service Health.

__________________.

1 www.techqual.org.

This project fits in well with what I've presented in this book. It is a great way to "ask the customers" for their input. It can provide a means for gathering not only the customers' evaluation of how well a service is provided but also what the customers' expectations are. Where I have relied on the service provider to interpret the customers' expectations, the methods offered in TechQual+ can be used to build a range from customer responses. This is definitely a methodology worth looking into.

TechQual+'s approach is based on evaluating the following three measures:

The minimum acceptable level of service (Minimum Expectations)

The desired level of service (Desired Expectations)

How well the customer feels the service meets these expectations (Perceived Performance)

The results of these measures are used to develop a "Zone of Tolerance," an "Adequacy Gap Score," and a "Superiority Gap Score," described as follows:

The Zone of Tolerance: The range between minimum and desired expectations (what the Report Card calls simply "Meets Expectations").

The Adequacy Gap Score: The difference between the "perceived" performance and the minimum expectation.

The Superiority Gap Score: The difference between the desired and perceived performance.
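
To show how these three scores relate, here is a minimal sketch with made-up survey values; the sign conventions (perceived minus minimum, perceived minus desired) are my reading of the definitions above, not TechQual+'s published tooling.

```python
# Made-up survey values for one service, all on the same rating scale.
minimum_expectation = 5.8    # minimum acceptable level of service
desired_expectation = 7.6    # desired level of service
perceived_performance = 6.9  # how well the customer feels the service performs

# The range the Report Card would call "Meets Expectations."
zone_of_tolerance = (minimum_expectation, desired_expectation)

# Sign conventions are my assumption: a positive adequacy gap is good;
# the superiority gap is usually negative unless the service beats the desired level.
adequacy_gap = perceived_performance - minimum_expectation
superiority_gap = perceived_performance - desired_expectation

print(f"Zone of tolerance: {zone_of_tolerance[0]} to {zone_of_tolerance[1]}")
print(f"Adequacy gap score: {adequacy_gap:+.1f}")
print(f"Superiority gap score: {superiority_gap:+.1f}")
```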

You should see how these "scores" correlate to the Report Card's scores. If you look at the charts offered for each measure in the Report Card, you could determine the Zone of Tolerance (the range of Meets Expectations) and those values that represent a positive or negative Adequacy or Superiority gap score.

The beauty of the TechQual+ Project is that the results reflect not only the customers' expectations (gathered through a survey instrument) but also the perceived service health (also through a survey). It is an excellent feedback tool. I highly recommend that you look into using the tool (it's free) or implementing the concepts it offers in your own survey instruments. When used in conjunction with your objective measures (Delivery and Usage), it gives a fuller picture of the health of your service. You can use TechQual+ or another survey tool for the Customer Satisfaction part of the Report Card. While it is labeled "Customer Satisfaction," you'll see that the questions you can ask in the survey are not restricted to this area. You can (and should) ask for feedback on the importance of the services you're measuring. It can be especially useful for getting input on the range of expectations.

Two major areas of difference between the Report Card and TechQual+ should be obvious. The Report Card attempts to use objective measures collected in other ways besides the survey method. Triangulation demands that you use different collection methods and different sources. The Report Card, while also using expectations, treats "Superior" (exceeding expectations) performance as an anomaly.

The conclusion? The TechQual+ Project (and other survey-based innovative tools) should be looked into-especially as a solution for the Customer Satisfaction part of the Report Card and for gathering information on the expectations for all of the measures.

Applying Expectations.

You may have noticed that the charts offered throughout this chapter are "meaningful." Part of this is the inclusion of the expectations for each measure. Imagine if the measure lacked this qualifying characteristic. Go back and look at the Customer Satisfaction chart (Figure 9-9) again. Notice it doesn't have expectations. I left them out for two reasons. The first is that we didn't have them when we started, but we could still produce the basic charts you've seen so far. The second is that, as mentioned earlier, the data can often help you determine what is "normal." When we look at "normal" coupled with the service provider's assessment of how "normal" the performance was during the reported time period, we can make a good estimate of the expectations.

Now let's add the percentage values, for at least the most recent year. This makes the chart (Figure 9-10) easier to read.

Figure 9-10. Customer Satisfaction: the percentage satisfied, with the values for the last year.

A little easier to read. At a glance we can see how we fared vs. the previous year. We can also see if we have upward or downward trends (three data points in succession that move up or down). While you can still see trending (up or down), you won't know if the data is "good," "bad," or "indifferent." So before we get to expectations, this chart already tells us to look at Aug-Oct 2010. What was happening? What was causing the steady incline? Was it something we needed to look at more closely?
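
For readers who want to automate the trend check, here is a minimal sketch of the three-points-in-succession rule, applied to invented monthly percentages (not the Service Desk's data).

```python
# Invented monthly "percentage satisfied" values.
monthly_pct = [91, 92, 90, 93, 94, 96, 95, 92, 93, 94, 95, 96]

def find_trends(values, points=3):
    """Yield (start_index, direction) for every run of `points`
    consecutive values moving strictly up or strictly down."""
    for i in range(len(values) - points + 1):
        window = values[i:i + points]
        diffs = [b - a for a, b in zip(window, window[1:])]
        if all(d > 0 for d in diffs):
            yield i, "upward"
        elif all(d < 0 for d in diffs):
            yield i, "downward"

for start, direction in find_trends(monthly_pct):
    print(f"{direction} trend starting at month index {start}")
```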

One of the most important steps we had to take was to develop expectations. As explained in the chapter on expectations, you can't always ask all of your customers for their feedback on this topic. Depending on the size of your customer base, the expectations can range widely. SLAs help, but they don't always reflect the customers' expectations. Sometimes they only represent the contractually agreed-upon requirements.

If the Service Desk didn't know what the expectations for this should be, we could use the data to tell us. You can start with the SLA if you have one, collect customer feedback, and then bounce that against the department's opinion. I have invariably found that the department's expectations are higher than the SLA or what I would propose. Most people are harder on themselves than their customers are on them.

We sat down with the Service Desk department. We met with the manager, the analysts, and the department's director. Our task was to develop a set of expectations for each measure from the customer's point of view. If the customers' expectations needed to be calibrated, a separate marketing effort might be required, but until they were successfully adjusted, we had to go with the current customer viewpoint. Figure 9-11 shows Customer Satisfaction with expectations added. As you can see, we set the expectation level between 90 and 95 percent satisfied.

Figure 9-11. Customer Satisfaction with expectations.

As you can see in Figure 9-11, it is easy to see which points require further investigation. Besides the upward trend from Aug-Oct 2010, we can also look into the anomalies above 95 percent satisfaction. Setting the expectations was, for the most part, uneventful. Discussion was healthy, and it was educational for everyone to see what each other thought of as the customers' expectations.
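
A short sketch of how the expectation range turns the chart into a list of points worth investigating. The monthly values are invented; the 90 to 95 percent band comes from the expectations described above.

```python
# "Meets expectations" band set with the Service Desk: 90-95 percent satisfied.
MEETS_LOW, MEETS_HIGH = 90.0, 95.0

# Invented monthly percentage-satisfied values.
monthly_pct = {"Jul": 93.1, "Aug": 96.2, "Sep": 94.0, "Oct": 88.5, "Nov": 91.7}

for month, pct in monthly_pct.items():
    if pct < MEETS_LOW:
        note = "below expectations; investigate"
    elif pct > MEETS_HIGH:
        note = "above expectations; an anomaly also worth a look"
    else:
        note = "meets expectations"
    print(f"{month}: {pct:.1f}% ({note})")
```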

You can imagine how the discussion ran around how fast the customer expected the phone to be answered (by an analyst). Some thought 60 seconds was adequate. Others thought that customers were willing to wait minutes. Others thought that if it took more than 10 seconds in today's "now" culture, it would be considered far too long.

In the end, we came up with ranges of expectation for each measure. The key was to find a common language so that the expectations would be consistently presented. For each measure, we not only identified what was "good" or "bad" but what percentages would be expected.

Let's look at abandoned call rates, for example. The first question was, what is good and bad? Lower abandoned rates were good. Higher rates were bad. After we had established the direction of "good" and "bad," we had to determine what the expectations were. What percentage of calls could be abandoned and not upset the customer? What percentage was "expected" from a healthy (good) service? What percentage would represent dissatisfaction? When would the customer say "That's too high an abandon rate. Fix it!"? What rate would be so low that the customer would be impressed? Even surprised?

When we had trouble identifying the expectations, we'd play the estimation game. I'd start with the obvious: What exceeds expectations?

Someone would respond, "I can't tell you what exceeds expectations..."

"OK. Let's work on it together," I'd reply. The reluctance to offer an estimate has many possible causes. Luckily you don't have to eliminate the causes, just deal with the effects.

I asked questions to help get to an estimate: "Would no abandoned calls be above the customers' expectations?"

"You mean that they would always get through?"

"Sure. No waiting on hold."

"They'd love that!"

"Sure," I'd say, "but would they expect it?"

An analyst would say, "No. They know we aren't manned with enough people to do that."

"So they wouldn't expect that level of service?"

"Nope."

"How about getting through within 30 seconds?"

"Yeah, they should expect that."

I'd ask: "But do they?"

"Well, yeah," someone would say, "for the most part. They know sometimes it's busy."

"So sometimes they expect that it will take longer than 30 seconds?"

"Sure, like on Mondays, or when a system is down, or we're installing new software."

"OK," I'd say, "So how often is that?"

"Maybe 10 percent of the time."

I'd sum up: "OK, so far, we can say then that 90 percent of the time they expect to speak to an a.n.a.lyst in less than 30 seconds?"

"I'd say they'd be happy with that."

Since we always want ranges (not targets or thresholds) I pressed for more. "So, would they be happy with 85 percent of the time? Or 80 percent? If they call ten times in a month, they should expect to talk to an analyst within 30 seconds eight of those ten times, and the other two times would take longer?"

"Sure...that'd be OK."

"How about three out of four times? Would they still be happy or would they not be satisfied?"

"Well, they may not like that. I mean, it's close."

Again I'd sum up: "OK. Let's say then that the customer would expect to get through to an analyst within thirty seconds, 75 to 90 percent of the time. Does that sound about right?"

"Sure."

Eventually I got to numbers that fit expectations. Once we plotted those expectations we performed the litmus test explained in the chapter on Expectations. We did this with each of the measures. We looked at the data over the years and checked the expectations against the department's perception of how well the service was provided during that time period. They knew when they had had a bad month. The relationship between the measures and the expectations should reflect their independent assessment of the quality of the service for that time period.

If we go back to the Metric Development Plan, you see the need to identify the schedule for collection and reporting. These items are not addressed separately. While we were defining the measures to use, we worked on the expectations. While we worked on the expectations, we also identified the frequency of reporting and the time span for evaluation. We opted for monthly across-the-board reports, with roll-ups to the calendar year. It could just as easily have been weekly or quarterly, or rolled up by calendar, academic, or fiscal year.

Recap.

In our real-life example, you can see that even when a mandate is given to implement a metrics program, you can get to a root question to allow the effort to be driven by a foundational need. In the case given, I was lucky that the leadership's information need was easily interpreted as a Product/Service Health question. This led cleanly to the development of effectiveness metrics for each key service.

Being able to stay in the first quadrant of the Answer Key mitigated much of the risk of implementing a large-scale metrics program in an organization that had not successfully done this in the past. It also made it possible to focus the effort. This focus allowed me to develop the Report Card methodology.

The use of triangulation and expectations was critical to the success of the program-partly because it gave a better picture of the answers to the root question and partly because it helped remove much of the fear, uncertainty, and doubt that normally accompany metrics.

Conclusion.

Now that I've laid the groundwork for the Report Card, it's time to finish the effort. I needed a way to make the "big picture" that the groundwork supported meaningful at the director level, for the CIO, and beyond. If the organization was to publish the metrics for its upper leadership, it would also have to be ready for other audiences, including shareholders or stakeholders. Customers might also want to see results of how well the organization serves them.

This required that the metrics be easily modified to show different views for different audiences. Of course, they also had to be meaningful to the service provider.
