Tris Brown asked:
It’s still hard to believe. Last year I ran a marathon – all 26.2 miles of it.
I also completed more than 40 measurement projects related to sales performance development. Which would you rather tackle?
Like running a marathon, measuring the business results of soft skills is viewed by most people with a mixture of fear and loathing. They think measurement demands vast amounts of time, effort, and expense. And though it’s supposed to be good for you, many people think bottom-line measurement is just too risky. As a result, many don’t even attempt it.
It’s time for a change in thinking. The truth is that the measurement starting line is very wide and the race is not reserved for the genetically gifted few. Ordinary people with no special training are conducting meaningful measurement and revolutionizing their organizations in the process. You can, too.
During the past several years, I’ve helped dozens of companies determine whether their performance development efforts are making a difference. Large or small, all of those organizations valued – and had attempted – some form of training measurement. A few succeeded brilliantly. Others displayed conviction and technical know-how, but stumbled short of the finish line.
What follows are five measurement lessons learned the hard way, in the trenches with primarily Fortune 500 companies. These lessons are grounded in the principles of collaboration and common sense. They have proven to be invaluable guideposts for line sales management and training functions alike. Using these lessons has enabled our firm to complete 40 to 50 measurement projects every year. By adhering to these same principles, your organization can also consistently track its progress in meeting specific business goals, challenges, and needs.
Lesson 1: Focus on the Business
You’ve heard often that effective performance development must be linked to the goals and objectives of your organization. It’s true. That principle is called “alignment,” and it also applies to measurement. Strong alignment is the genesis of all successful measurement.
Now, every measurement project we launch with our clients begins and ends with a detailed view of their business goals, challenges, and needs. That sounds deceptively simple. In practice, most failed measurement efforts lack a clear connection to the desired business outcomes. There is a critical distinction between training goals, challenges, and needs and business goals, challenges, and needs.
Both line and training functions tend to see performance measurement on their own terms. How does that happen?
One reason is that the training group will focus, often exclusively, on measuring participant reactions (smile sheets) and classroom learning (pre- and post-tests). This is familiar territory for professional educators. If, by chance, the initiative fizzles, then evidence that “learning has taken place” is a tempting defense. This approach may suffice for technical or product training, but it doesn’t fly for tracking the effects of negotiation, leadership, or consultative selling skills development.
I recently asked a group of 10 training directors from the divisions of an $80 billion corporation about their measurement efforts over the last 12 months. Most had tracked participant reaction and classroom learning (Kirkpatrick’s levels 1 and 2¹), but only two divisions had linked training to new behaviors (level 3). None had quantified the actual business results (level 4). The measurement efforts of this corporation were, in fact, typical of those I’ve encountered in other organizations.
Tried-and-true smile sheets and pre- and post-tests are valuable tools, but meaningful measurement demands greater insight into driving business issues.
Another problem is that sales executives tend to develop bottom-line myopia. They want to measure performance development solely by monthly or quarterly numbers, sometimes to the exclusion of all other indicators of performance.
One line executive at a large, high-tech company was focused primarily on tracking closed business. “To win in this market, our people need to be better negotiators,” he said. He was right. But a deeper analysis revealed a more complex picture. His division’s margins had slipped, competition had increased, and discounting had become a crutch that account executives used to close deals. Major accounts expected and got deep discounts, so the company’s competitive allowance sustained the vicious cycle of discounting.
In that case, we discovered that the critical measure of the company’s negotiation skills training was not the amount of closed business, but rather the reduction in the use of the competitive allowance.
Together, line and training functions must dig deeply into the underlying forces that affect revenue, customer or client relationships, and business results. There are no shortcuts.
How will you know your measurement project is focusing on the business results? One sure sign occurs when the director of training and the vice president of sales meet to talk about improving performance and growing the business. If that sounds unlikely, keep reading.
Lesson 2: Build a Bridge Between Line and Training
Meaningful measurement requires collaboration. A focus on your organization’s business issues provides a shared purpose and a sense of mission. It is the most fundamental reason for building a relationship between line and training functions. So, why doesn’t it happen more often?
I’ve observed an almost universal tendency: Training professionals don’t initiate enough, and line executives don’t participate enough.
For example, the training professionals at a medical equipment company asked me to help them devise a way to track the bottom-line impact of their sales training. I suggested that we get input from the vice president on what to measure, but they resisted. They said, “We want to have this done before we go to him.” Not surprisingly, the measurement project never got off the ground.
Beware of measurement in a vacuum. Often the training group is made solely responsible for measuring the effects of performance development. Training professionals may try to select specific measures and collect sensitive data on their own. Without insight and involvement from the line organization, they’re forced to guess at critical measures and cajole other departments for data and resources. Frustration is a common result.
In one major telecommunications company, the accounting department actually refused to provide the training group access to the necessary sales numbers. Measurement cannot be delegated to training departments without broad organizational ties.
What about line executives? There is a major difference between management support and management involvement.
For example, busy executives at a bio-tech company were extremely supportive of performance development. They rallied the troops and signed the checks. But they were reluctant to personally invest time and become involved in measurement efforts. The board wanted to see results, but the measurement effort stalled. How was that problem solved?
We put on a pot of coffee, brought together the line and training functions, and walked away with specific business objectives linked to the training. Measurement then focused on key business issues, such as growing revenue in the 20 top accounts and insulating them from competitive threats.
Instead of pointing fingers, training professionals must initiate aggressively, and line executives must participate actively. In every case of successful bottom-line measurement I’ve seen, both line and training functions were deeply involved in tracking progress toward common goals.
Lesson 3: Track Progress, Not Proof
Nothing keeps organizations from attempting measurement more than a proof mentality. If your objective is to track the impact of performance development in your organization, I’ve found that absolute proof is impossible – and totally unnecessary.
At a recent conference, I had the opportunity to talk with Donald Kirkpatrick. I asked, “Since you introduced the Four Levels in 1959, have you ever seen indisputable proof?” Without hesitation, he said, “No, I’ve never seen it.” But he quickly added, “I’ve seen a lot of good evidence, though.”
For pharmaceutical companies seeking FDA approval of a new drug, or for physicists splitting the atom, the search for proof is appropriate and necessary. Such empirical researchers ask “Does this work?” But those of us charged with performance improvement should ask “Will this work?” – long before the training is rolled out. We must always look for evidence that a program has worked in other organizations with similar struggles before implementing it. Then, our detailed view of business goals, challenges, and needs becomes the standard against which we collect evidence of progress after the implementation.
Throughout this article, you’ve seen the phrase “tracking progress” used to describe training measurement. The word “progress” comes from the Latin progressus, “a going forward.” Ultimately, the idea is to track the evolution of your organization from its current state of performance to a higher, more productive, more efficient future state. In measurement, we gather evidence that progress is taking place.
Listen to the discussions in your management meetings. People are asking, “Will we make our numbers this year? Are margins improving?” They’re looking for indicators of progress toward a goal. The real questions to be answered by measurement are “How has this helped?” and “In what ways?” This common-sense approach works beautifully.
For example, the direct sales force in one midsize telecommunications company was plagued with extremely high turnover (80 percent) and low performance. The vice president of operations said, “It was painfully obvious to me that we had a big problem.” Part of his company’s solution was to implement a consultative selling skills program.
Three months into the performance development effort, our tracking showed that the company’s sales reps had steadily increased their productivity by 42 percent. A group of new reps achieved their quota in just two months rather than the usual six months or longer. The turnover rate fell steadily to an acceptable 28 percent, well below industry norms. When compared to the baseline and to reps not yet trained, those were compelling signs of progress.
Along the way, the company also trained managers to coach more effectively, tweaked its compensation plan, and reinforced new skills consistently. All of those factors undoubtedly contributed to the stellar results. We never proved that the sales training worked. But, as Kirkpatrick would say, we found “a lot of good evidence.”
Tracking progress, not obtaining proof, takes pressure off the people doing the tracking and shifts it onto the people doing the performing, where it belongs.
Lesson 4: You’re Probably Already Doing Measurement
There is a widely held perception that bottom-line measurement is arduous and expensive. That’s not surprising. So often, we’ve heard that this level of measurement is the most difficult by far. But, professionals concerned with sales performance development are discovering that it’s just not true.
Recently, I was swapping notes with the person responsible for measurement at a major U.S. computer company. He had successfully completed four bottom-line tracking projects – three more than originally planned. As we talked after a meeting, he confided, “I’ve realized it’s easier than doing a survey.” I agree.
You can complete a fairly rigorous analysis of bottom-line performance before lunch, with a spreadsheet and a cup of coffee. It’s possible if you align performance development with the business, if the line and training folks work together, and if your aim is to track progress – not obtain proof. And if you tap into existing data, solid results are easily within your grasp.
Most organizations are swimming in data. These days, companies maintain tracking systems for sales activity, inventory, scheduling, accounting, and prospect management. Most field sales, support, and service teams enter and swap data using laptop computers. Additionally, there are ISO 9000 standards, sales quotas, and performance reviews. In essence, every organization under the sun is already doing measurement.
The good news is that all that wonderful data already exists. The challenge is to select a few key performance indicators that are linked to a performance development initiative. How? Here’s one example: Last year, in our work with a Big Six accounting firm, we faced a mountain of options for the bottom-line measurement of negotiation skills. To make matters worse, the firm had extremely sophisticated internal data systems. After several hours of fruitless guesswork, we set up a meeting with the director of finance for the tax practice. We asked, “What do the practice partners look at on a monthly basis to monitor the health of the business?” With his answer, we hit pay dirt.
He unveiled a list of 13 metrics requested every month by the managing senior partners. From that list, two key measures were associated directly with the firm’s negotiation skills training: rate-per-hour and percent-of-standard-rate billed. By comparing those numbers before and after the workshops, we tracked the firm’s progress toward greater profitability.
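A before-and-after comparison like this one reduces to simple arithmetic. Here is a minimal sketch in Python; the metric values are hypothetical placeholders, not the firm’s actual figures:

```python
def pct_change(before, after):
    """Percent change from a pre-training baseline to a post-training value."""
    return (after - before) / before * 100

# metric name: (average before workshops, average after workshops)
# These numbers are illustrative only.
metrics = {
    "rate per hour ($)": (180.0, 195.0),
    "percent of standard rate billed": (82.0, 90.0),
}

for name, (before, after) in metrics.items():
    print(f"{name}: {before} -> {after} ({pct_change(before, after):+.1f}%)")
```

The same percent-change calculation works for any before-and-after pair pulled from reports the business already produces.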
Moral: Always look for the data currently being used to manage the business at the executive level.
For example, I frequently ask a vice president of sales for a sample of his or her monthly reports. If that information is important to the company’s leaders, then it’s critical to performance and is most likely accurate. That’s a powerful way to develop alignment between the measurement effort and the life-pulse of an organization.
But what if the data used by the leadership team is not enough for tracking progress? There are alternatives, as you’ll see in Lesson 5.
Lesson 5: Measurement is Simply Tracking Cause and Effect
The most common question I hear when working with clients to develop bottom-line measurement is “What should we track?” The answer is cause and effect – a principle that applies to any type of performance development.
Revenue, for example, is the result of something. We consider it to be a lagging indicator – or an effect – of performance in the field. In contrast, leading indicators – or causes – of revenue are building new customer relationships, qualifying opportunities, presenting solutions, and closing business.
The powerful distinction between leading and lagging indicators pinpoints the most strategic measures of performance. When combined with deep insight into business issues and knowledge of the specific metrics used by a company’s business leaders, it takes the guesswork out of tracking progress. Here’s an example:
I met with both a director of sales and the training coordinator at a major electronics company to develop bottom-line measurement. The company’s business goals included increasing revenue by 20 percent and maintaining current levels of profitability. The business challenges included an over-reliance on demonstrations to sell products. In addition, the reps were getting trapped at the technical level and had limited influence with actual decision makers.
The company decided to implement a consultative selling skills program. Our team selected two leading and two lagging indicators from data available on the company’s contact management system and accounting system.
The leading indicators included establishing three-by-three contacts (by calling people at the executive, department, and user levels) and tracking how often an actual decision maker was present for system demonstrations. The vice president of sales lamented, “We have a tendency to demo for the janitor.” More importantly, improvements in those areas would result in progress toward the revenue goal.
The lagging indicators included tracking increases (in dollars) in the size of the systems sold and changes in the ratio of product presentations to closed deals (win rate). Those measures were both manageable and highly strategic.
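To make the leading/lagging distinction concrete, here is a minimal sketch of how a quarter’s indicators might be pulled together from contact-management and accounting data. All field names and figures are hypothetical, not the company’s actual numbers:

```python
# One quarter of illustrative tracking data.
quarter = {
    # leading indicators (causes)
    "three_by_three_contacts": 46,    # executive/department/user contacts logged
    "demos_with_decision_maker": 28,  # demos where a decision maker attended
    "demos_total": 40,
    # lagging indicators (effects)
    "presentations": 50,
    "closed_deals": 18,
    "avg_system_size_usd": 125_000,
}

# How often a decision maker was present for a demonstration.
decision_maker_rate = quarter["demos_with_decision_maker"] / quarter["demos_total"]
# Win rate: the ratio of product presentations to closed deals.
win_rate = quarter["closed_deals"] / quarter["presentations"]

print(f"Decision maker present in {decision_maker_rate:.0%} of demos")
print(f"Win rate: {win_rate:.0%}")
```

Comparing these few numbers quarter over quarter, against a pre-training baseline, is all the tracking the lesson calls for.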
Objectives such as “galvanizing,” “synergizing,” and “energizing” are well-intentioned, but nearly impossible to measure. By tracking both causes and effects, we ensure that measurement is grounded in the most tangible behaviors and outcomes.
Like running a marathon, meaningful measurement gives training and development staying power in an organization. The best approach is to simplify. Just one or two leading and lagging indicators of improved performance may be all that is needed to run the race and cross the finish line a winner.
¹Donald Kirkpatrick, an internationally recognized expert in the field of training program development and evaluation, introduced his four-level model of evaluating training in 1959. The levels, which are still used today, are 1 Reaction, 2 Learning, 3 Behavior, and 4 Results.