Wednesday, 10 December 2008

Data Driven

At the start of any performance improvement initiative, there is a question of whether the initiative is going to have real impact on what matters to the organization. Will the retail sales training change behavior in ways that improve customer satisfaction? Will the performance support tool provided to financial advisors increase customer loyalty? Will the employee engagement intervention provide only short-term benefit, or will it have a longer-term effect on engagement and retention?

If you want to really improve the numbers via a performance improvement initiative, then you need to start and end with the data.

Using a data driven approach to performance improvement is a passion of mine. Looking back at various projects that have taken this approach, a widely applicable model emerges for data driven performance improvement initiatives. Understanding this model is important because it can be applied across many different situations to help drive behavior change that ultimately improves the metrics.

THE PROCESS AND MODEL

At its simplest, the model is based on providing metrics that suggest possible tactical interventions, supporting the creation of action plans to improve those metrics, and tracking changes in the metrics so that performers can see their progress and continually improve. Additional specifics around the model are introduced below, but it is easiest to understand through an example.
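
To make this concrete before getting into the example, here is a minimal sketch of the cycle in Python. The names and the suggestion logic are illustrative assumptions on my part, not the actual system described below.

    # Illustrative sketch of the data driven improvement cycle; all names here
    # are hypothetical, not part of the actual system described below.
    from dataclasses import dataclass, field

    @dataclass
    class Metric:
        name: str          # e.g. "knowledge of product location"
        score: float       # current survey score
        benchmark: float   # organizational benchmark

    @dataclass
    class Intervention:
        name: str           # e.g. "store layout scavenger hunt"
        target_metric: str  # the metric it is meant to move
        steps: list = field(default_factory=list)

    def suggest_interventions(metrics, library):
        """Metrics -> suggested tactical interventions. Action planning,
        execution, and re-measurement close the loop (not shown here)."""
        lagging = {m.name for m in metrics if m.score < m.benchmark}
        return [i for i in library if i.target_metric in lagging]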

This system comes out of an implementation, built by my company, TechEmpower, as a custom solution for a retailer, where the focus was improving customer satisfaction in retail stores. In this situation, customer satisfaction is the key driver of profitability, same-store sales growth, and basically every metric that matters to the organization. Customer satisfaction data is collected on an ongoing basis using a variety of customer survey instruments. These surveys focus on overall customer satisfaction, intent to repurchase, and a specific set of key contributing factors. For example, one question asks whether “Associates were able to help me locate products.”

[Figure: the custom web software system that supports the process described below]

In this case, the performance improvement process began when performance analysts reviewed the customer satisfaction metrics and conducted interviews with a wide variety of practitioners, especially associates, store managers, and district managers. The interviews were used to determine best practices, find interventions that had worked for store managers, and understand both in-store dynamics and the dynamics between store managers and district managers.

Based on the interviews, the performance analysts defined an initial set of interventions aimed at the key contributing behaviors closely aligned to the surveys. For example, four initial interventions were defined to help a store manager improve the store’s scores on “knowledge of product location.” These interventions focused on communications, associate training opportunities, follow-up opportunities, and other elements that had been successful in other stores.

Once the interventions were defined, the custom web software system illustrated in the figure above was piloted with a cross-section of stores. Significant communication and support were provided to both store managers and district managers so that they understood the system, how it worked, and how it could help them improve customer satisfaction. Because customer satisfaction was already a primary metric with significant compensation implications, there was no need to motivate them, but there was a need to help them understand what was happening.

The system is designed to run in three-month cycles, with store managers and district managers targeting improvements for particular metrics. At the beginning of a cycle, store managers receive a satisfaction report. This report shows the numbers in a form similar to existing reports and includes comparisons with similar stores and against organizational benchmarks.
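
As a rough illustration of the kind of comparison that report makes (the field names and peer-store grouping here are my own assumptions, not the retailer's actual reporting schema):

    # Hypothetical sketch of the start-of-cycle report: a store's scores shown
    # against similar stores and against an organizational benchmark.
    def satisfaction_report(store_scores, peer_scores, benchmarks):
        """store_scores: {metric: score}; peer_scores: {metric: [peer scores]};
        benchmarks: {metric: organizational benchmark}."""
        report = {}
        for metric, score in store_scores.items():
            peers = peer_scores.get(metric, [])
            peer_avg = sum(peers) / len(peers) if peers else None
            report[metric] = {
                "score": score,
                "vs_similar_stores": None if peer_avg is None else round(score - peer_avg, 2),
                "vs_benchmark": round(score - benchmarks[metric], 2),
            }
        return report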

Store managers review these numbers and are then asked to create action plans against particular metrics where improvement is needed. To do this, the store manager clicks a link to review action plan templates based on best practices from other stores. Each action plan consists of a series of steps that include things like pre-shift meetings and training, on-the-fly in-store follow-up, job aids for employees such as store layout guides, games, etc. Each item has a relative date that indicates when it should be completed. Managers can make notes and modify the action plan as they see fit. Once they are comfortable with the plan, they send it to the district manager for review. The district manager reviews the plan, discusses it with the store manager, suggests possible modifications, and then the store manager commits to the plan.
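
A minimal sketch of how such a plan might be modeled, assuming, purely for illustration, that each step carries a relative due date and the plan moves through a simple draft/submitted/approved workflow:

    # Hypothetical data model for an action plan built from a best-practice
    # template; relative_day is an offset in days from the plan's start date.
    from dataclasses import dataclass, field
    from datetime import date, timedelta

    @dataclass
    class PlanStep:
        description: str       # e.g. "pre-shift meeting using store layout job aid"
        relative_day: int      # days after plan start when this step is due
        completed: bool = False
        notes: str = ""

    @dataclass
    class ActionPlan:
        target_metric: str                         # e.g. "knowledge of product location"
        steps: list = field(default_factory=list)  # list of PlanStep
        status: str = "draft"                      # draft -> submitted -> approved -> completed
        start_date: date = None

        def submit_for_review(self):
            self.status = "submitted"   # district manager reviews and discusses

        def approve(self, start_date):
            self.status, self.start_date = "approved", start_date

        def due_date(self, step):
            return self.start_date + timedelta(days=step.relative_day)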

Once the plan is approved, the store manager is responsible for executing the plan, marking steps complete, making notes on issues, and providing status to the district manager. Most action plans last four to six weeks. Both the store manager and district manager receive periodic reminders of required actions. These email reminders include subtle coaching. For example, performance analysts have defined suggested conversations that district managers should have with the store manager, or things they might try on their next store visit, associated with the particular intervention. The district manager receives these suggestions electronically based on the planned execution of the action plan. They are not shown to the store manager as part of the action plan, and this has proven to be an important part of engaging district managers to help drive change.
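
Building on the hypothetical plan model sketched above, reminder generation with district-manager-only coaching might look roughly like this (the roles, lead time, and coaching lookup are assumptions for illustration):

    # Hypothetical reminder generation: store managers see the step itself;
    # district managers additionally see the coaching suggestion for that step.
    from datetime import date, timedelta

    def reminders_for(plan, coaching_by_step, today=None, lead_days=2):
        """Yield (recipient_role, message) pairs for incomplete steps coming due.
        coaching_by_step: {step description: suggested DM conversation or visit idea}."""
        today = today or date.today()
        for step in plan.steps:
            due = plan.start_date + timedelta(days=step.relative_day)
            if not step.completed and today >= due - timedelta(days=lead_days):
                yield ("store_manager", f"Due {due}: {step.description}")
                tip = coaching_by_step.get(step.description)
                if tip:
                    # Coaching is sent only to the district manager and is not
                    # shown on the store manager's view of the plan.
                    yield ("district_manager",
                           f"Store step due {due}: {step.description}. Suggestion: {tip}")
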
Once the store manager has marked the entire plan as completed, an assessment is sent to both the store manager and the district manager. This assessment briefly asks whether they felt they were able to implement the intervention effectively and offers an important opportunity for them to provide input about the intervention. Their ratings are also critical in determining why some interventions work and others do not.

At the next reporting cycle, the system shows store managers and district managers the before and after metrics corresponding to the timing of the action plan. We also show how their results compare with other stores that recently executed a similar action plan.
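
Conceptually, the before and after view is just averages over windows on either side of the action plan; here is a sketch under the assumption of a fixed window length:

    # Hypothetical before/after comparison for one store's action plan: average
    # the targeted metric over a window before the plan started and after it
    # completed, then report the change.
    from datetime import timedelta

    def before_after(scores_by_date, plan_start, plan_end, window_days=30):
        """scores_by_date: {date: score} for the targeted metric."""
        before = [s for d, s in scores_by_date.items()
                  if plan_start - timedelta(days=window_days) <= d < plan_start]
        after = [s for d, s in scores_by_date.items()
                 if plan_end < d <= plan_end + timedelta(days=window_days)]
        avg = lambda xs: sum(xs) / len(xs) if xs else None
        change = None if not (before and after) else avg(after) - avg(before)
        return {"before": avg(before), "after": avg(after), "change": change}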

This marks the beginning of another action plan cycle. Store managers review their customer satisfaction data and are again asked to make action plans. In most cases, we add to the new action plan a series of follow-up steps to sustain the changed behavior associated with the prior plan.

If you look at what’s happening more broadly, the larger process is now able to take advantage of some very interesting data. Because we have before and after data tied to specific interventions, we have clear numbers on the impact each intervention had on the metrics. For example, two interventions were designed to help store managers improve scores around “knowledge of store layout.” One intervention used an overarching contest: a series of shift meetings that went through content using a job aid, plus actions by key associates who would quiz and grade other associates on their knowledge, all wrapped in an overall fun contest. The other intervention used a series of scavenger hunts designed to teach associates product location in a fun way. Both interventions were found to have a positive impact on survey scores for “knowledge of store layout.” However, one of the interventions was found to be more effective. I’m intentionally not going to tell you which, because I’m not sure we understand why, nor can we generalize the result. We are also trying to see if modifications will make the other intervention more effective.

The bottom line is that we quickly found out which interventions were most effective. We were also able to see how the modifications store managers made to the pre-defined interventions as part of the action planning process affected the outcomes. Some modifications were found to be more effective than the pre-defined interventions, which allowed us to extract additional best practice information.
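
The roll-up behind this comparison is straightforward; a sketch, assuming each completed plan contributes one before/after change for its intervention:

    # Hypothetical roll-up of intervention effectiveness: group the before/after
    # changes from many stores by intervention and compare the average lift.
    from collections import defaultdict

    def intervention_effectiveness(results):
        """results: list of dicts such as
        {"intervention": "layout contest", "metric": "knowledge of store layout",
         "change": 3.1}   # change = after minus before, in survey points"""
        by_intervention = defaultdict(list)
        for r in results:
            by_intervention[r["intervention"]].append(r["change"])
        return {name: {"stores": len(changes),
                       "avg_change": sum(changes) / len(changes)}
                for name, changes in by_intervention.items()}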

Overall, this approach had a significant impact on key metrics and helped capture and spread best practices. It also produced a few surprises. In particular, we were often surprised at what was effective and what had marginal impact. We were also often surprised by tangential effects. For example, interventions aimed at improving employees’ knowledge of store layout had a positive impact on quite a few other factors, such as “store offered the products and services I wanted,” “products are located where I expect them,” “staff enjoys serving me,” and, to a lesser extent, several others. In hindsight this makes sense, but it also indicates that stores that lag on those factors can be helped by also targeting associate knowledge.

The pilot ran for nine months: three cycles of three months each. It showed significant improvement compared to stores that received the same data reports but did not have the system in place. Of course, there were sizable variations in the effectiveness of particular interventions, and in how interventions played out across different stores and with different district managers. Still, the changes in the numbers made the cost of implementing the system look like a rounding error compared to the value of improved customer satisfaction.

The system continues to improve over time. When we say “the system,” the software and the approach have not changed much, but our understanding of how to improve satisfaction keeps getting better. As we work with the system, we continually collaborate to design new and different kinds of interventions, modify or remove interventions that don’t work, and study high-scoring stores to find out how they get better results.

So why was this system successful when clearly this retailer, like many other retailers, had been focused on customer satisfaction for a long time across various initiatives? In other words, this organization already provided these metrics to managers, trained and coached store managers and district managers on improving customer satisfaction, placed an emphasis on customer satisfaction via compensation, and used a variety of other techniques. Most store managers and district managers would tell you that they already were working hard to improve satisfaction in the stores. In fact, there was significant skepticism about the possibility of getting real results.

So what did this system do that was different from what they had been doing before? In some ways, it really wasn’t different from what the organization was already doing; it simply enabled the process in more effective ways and gave visibility into what was really happening, so that we could push the things that worked and get rid of what didn’t. In particular, the system addresses gaps that are common in many organizations:
  • Delivers best practices from across the organization at the time and point of need
  • Provides metrics in conjunction with practical, actionable suggestions
  • Enables and supports appropriate interaction in manager-subordinate relationships that ensures communication and builds skills in both parties
  • Tracks the effectiveness of interventions, forming a continuous improvement cycle that determines which best practices can be most effectively implemented to improve satisfaction
From the previous description, it should be clear that the beauty of this kind of data driven approach is that it supports a common-sense model, but does it in a way that allows greater success.

ADDITIONAL DOMAINS

Data driven performance improvement systems have been used across many different types of organizations, different audiences, and different metrics. Further, there are a variety of different types of systems that support similar models and processes.

Several call center software providers use systems that take a very similar approach. You’ll often hear a call center tell you, “This call may be monitored for quality purposes.” That message tells you that the call center is recording calls so that quality monitoring evaluations can be done on each agent each month. The agent is judged on various criteria such as structure of the call, product knowledge, use of script or verbiage, and interaction skills. The agent is also evaluated on other metrics such as time on the call, time to resolution, number of contacts to resolve, and so on. Most of these metrics and techniques are well established in call centers.

Verint, a leading call center software provider, uses these metrics in a process very similar to the retail example described above. Supervisors evaluate an agent’s performance based on these metrics and can then define a series of knowledge- or skill-based learning or coaching steps. For example, they might assign a particular eLearning module to be delivered to the agent at an appropriate time based on the workforce management system. The agent takes the course, which includes a test to ensure understanding of the material. At that point, the Verint system ensures that additional calls from this agent are recorded so that the supervisor can evaluate whether the agent has improved in that specific area.

In addition to specific agent skills, the Verint system is also used to track broader trends and issues. Because you get before and after metrics, you have visibility into changes in performance associated with particular eLearning modules.

Oscar Alban, a Principal and Global Market Consultant at Verint, says, “Many companies are now taking these practices into the enterprise. The area that we see this happening to is the back-office where agents are doing a lot of data entry–type work. The same way contact center agents are evaluated on how well they interact with customers, back office agents are evaluated on the quality of the work they are performing. For example if back-office agents are inputting loan application information, they are judged on the amount of errors and the correct use of the online systems they must use. If they are found to have deficiencies in any area, then they are coached or are required to take an online training course in order to improve.” Verint believes this model applies to many performance needs within the enterprise.

Gallup uses a similar approach, but targeted at employee engagement. Gallup collects initial employee engagement numbers using a simple 12-question survey called the Q12. These numbers are rolled up into aggregate engagement scores for managers based on the survey responses of all direct and indirect reports. The roll-up also produces engagement scores for particular divisions, job functions, and other slices. Gallup provides comparisons across the organization based on demographics supplied by the company, and also with other organizations that have used the instrument. This gives good visibility into engagement throughout the organization.
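
A generic sketch of that kind of hierarchical roll-up follows; this is my own illustration, not Gallup’s actual methodology or scoring:

    # Generic engagement roll-up: average survey scores over all direct and
    # indirect reports of a manager. Illustrative only; not Gallup's method.
    def rollup(reports_of, scores, manager):
        """reports_of: {manager: [direct reports]}; scores: {employee: avg Q12 score}.
        Returns (average score, headcount) across everyone below `manager`."""
        stack = list(reports_of.get(manager, []))
        total, count = 0.0, 0
        while stack:
            person = stack.pop()
            if person in scores:
                total += scores[person]
                count += 1
            stack.extend(reports_of.get(person, []))
        return (total / count if count else None, count)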

Gallup also provides a structure for action planning and feedback sessions designed to help managers improve engagement. Gallup generally administers the surveys annually. This allows them to show the year-over-year impact of different interventions. For example, they can compare engagement scores, and changes in engagement scores, for managers whose subordinates rated their feedback sessions in the top two boxes (the highest ratings) against managers who did not hold feedback sessions or whose sessions were not rated highly. Not surprisingly, engagement scores consistently correlate positively with effective feedback sessions.
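
The underlying comparison is a simple grouped difference; a sketch, with the field names as my own assumptions:

    # Illustrative comparison of year-over-year engagement change for managers
    # whose feedback sessions were rated in the top two boxes versus the rest.
    def change_by_feedback_quality(managers):
        """managers: list of dicts such as
        {"engagement_last_year": 3.8, "engagement_this_year": 4.1,
         "feedback_top_two_box": True}"""
        groups = {"top_two_box": [], "other": []}
        for m in managers:
            change = m["engagement_this_year"] - m["engagement_last_year"]
            key = "top_two_box" if m.get("feedback_top_two_box") else "other"
            groups[key].append(change)
        return {k: (sum(v) / len(v) if v else None) for k, v in groups.items()}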

There are many examples beyond the three cited here. Just based on these examples, it is clear that this same model can apply to a wide variety of industries, job functions, and metrics. Metrics can come from a variety of existing data sources such as product sales numbers, pipeline activity, customer satisfaction, customer loyalty, evaluations, etc. Metrics can also come from new sources, as in the case of Gallup, where a new survey is used to derive the basis for interventions. These might be measures of employee satisfaction, employee engagement, skills assessments, best practice behavior assessments, or other performance assessments. In general, using existing business metrics will have the most impact and often has the advantage that the organization is already aligned around those metrics. For example, compensation is often tied to existing metrics. Using metrics that are new to the organization will, at a minimum, require communicating the connection between those numbers and the bottom line.

COMMON CHALLENGES

When you implement this kind of solution, you will encounter a variety of common challenges.

Right Metrics Collected

As stated above, a wide variety of possible metrics can be tied to particular performance interventions. However, if the metrics don’t exist or are not being collected, additional work is required not only to gather the input metrics but also to convince the organization that these are the right metrics. Assessments and intermediate factors can and often should be used, but they must be believed and must have real impact for everyone involved.

Slow-Changing Data and Slow Collection Intervals

Many metrics change slowly and may not be collected often enough to give you immediate visibility into impact. In these cases, we’ve used various data points as proxies for the key metrics. For example, if customer loyalty is the ultimate metric, you should likely focus on intermediate factors that you know contribute to loyalty, such as recency and frequency of contact, customer satisfaction, and employee knowledge. For metrics collected only on annual cycles, you may want to focus on a series of interventions over the year. Alternatively, you may want to define targeted follow-up assessments to determine how performance has changed.
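
A proxy can be as simple as a weighted composite of the faster-moving factors; the factors and weights below are assumptions for illustration, not a validated model:

    # Illustrative proxy for a slow-moving metric such as customer loyalty,
    # built from faster-moving intermediate factors (all normalized to 0-1).
    def loyalty_proxy(recency, frequency, satisfaction, knowledge,
                      weights=(0.2, 0.2, 0.4, 0.2)):
        """Weighted composite; weights are illustrative, not empirically derived."""
        factors = (recency, frequency, satisfaction, knowledge)
        return sum(w * f for w, f in zip(weights, factors))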

Data Not Tied to Specific Performance/Behavior

Customer loyalty is again a good example of this challenge. Knowing that you are not performing well on customer loyalty does not provide enough detail to know what interventions are needed. In the case of customer satisfaction at the store level, the survey questions asked about specific performance, skills, or knowledge expected of store employees – “Were they able to direct you to products?” or “Were they knowledgeable of product location in the store?” Poor scores on these questions suggest specific performance interventions.

In the case of customer loyalty, you need to look at the wide variety of performance and behaviors that collectively contribute to loyalty and define metrics that link to those behaviors. In a financial advisor scenario, we’ve seen this attacked by looking at metrics such as frequency of contact, customer satisfaction, products sold, and employee satisfaction. With appropriate survey researchers involved, you will often gain insights over time into how these behavior-based numbers relate to customer loyalty. But the bottom line is that you likely need additional assessment instruments that arrive at more actionable metrics.

CONCLUSIONS

The real beauty of a data driven model for performance improvement is that it focuses on key elements of behavior change within a proven framework. More specifically, it directs actions that align with metrics that are already understood and important. It helps ensure commitment to action. It provides critical communication support, for example helping district managers communicate effectively with store managers around the metrics and what they are doing. It helps hold the people involved accountable to each other and to taking action in a meaningful way. And the system ties interventions to key metrics for continuous improvement.

One of the interesting experiences in working on these types of solutions is that it’s not always obvious which interventions will work. In many cases, we were surprised when certain interventions had significant impact and other similar interventions did not. Sometimes we would ultimately trace it back to problems that managers encountered during implementation that we had not anticipated. In other words, it sounded good on paper, but ultimately it didn’t work for managers. For example, several of the games and contests we designed didn’t work out as anticipated: managers found that interest faded quickly and that small external rewards didn’t necessarily motivate associates. Interestingly, other games and contests worked quite well. This provided a real opportunity to modify or substitute interventions. We also found ourselves modifying interventions based on the feedback of managers who had good and bad results from their implementations.
The other surprise was that very simple interventions were often the most effective. Providing a manager with a well-structured series of simple steps, such as what we refer to as a “meeting in-a-box” or “follow-up in-a-box,” would often turn out to produce very good results. These interventions were provided as web pages, documents, templates, etc. that the manager could use and modify for their own purposes. There was, of course, plenty of guidance on how to use these resources effectively as part of the intervention. In many cases, the interventions were based on information and documents already being used in some stores but not widely recognized or adopted. Because of the system, we were then able to apply similar interventions in other cases. But because the practicality of interventions is paramount, we still had challenges with the design of those interventions.

Of course, this points to the real power of this approach. By having a means to understand what interventions work and don’t work, and having a means to get interventions out into the organization, we have a way of really making a difference. Obviously, starting and ending with the data is the key.

In 2009, I'm hoping that I will get to work on a lot more data driven performance improvement projects.
