Sunday, March 09, 2008

The Argument Against "CASCADING"

I'll admit up front that the title of this post may be a bit misleading. But it does point to an age-old problem in implementing a Balanced Scorecard...specifically, how companies respond to a critical choice point encountered in designing the scorecard itself.

One of the most important choices companies face in the very early stages of scorecard design and architecture is whether to "cascade" KPIs, and to what level of detail.

The temptation for many is to cascade to the nth degree and build what I call the "never-ending tree structure"...one that lets companies keep breaking down measures and indicators until they can't break them down any more. In fact, some software applications actually encourage this by building that "tree structure" orientation into the administrative interface itself.

In a weird sort of way, this is a self-perpetuating prophecy. Software designers and some users are, by their very nature, analytic thinkers. You know the type- the kind of people who over-intellectualize every problem they encounter. The ones who prefer to model and analyze everything they come across, right down to their spouses if they would let them. And believe it or not, society needs these people. CSI agents, NASA scientists, golf or baseball swing coaches- all of these are great career choices for the hyper-analytic crowd. But if it's great performance management and business excellence you crave- stand clear!

What you want is enough "breakdown" analysis to make your objectives relevant to the managers and employees who are accountable for driving positive change, and little more than that. Usually that means 2-3 levels tops, with maybe a level or two of trending where necessary. But just because your system or IT solution will let you go down 10 levels ("pointing and clicking" on every bar chart data element until the cows come home) doesn't mean you should build that into your enterprise performance management solution. And just because you may need that level of detail in a custom report for one of your corporate CSI-type analysts to do his job, doesn't mean it should be a central design principle of your performance management process and supporting application.
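To make that concrete, here's a minimal sketch (in Python, with made-up names like KPI and MAX_CASCADE_DEPTH, purely for illustration) of what it looks like to build the depth limit into the scorecard structure itself, rather than letting the tree grow without end:

    # A hypothetical scorecard hierarchy with an explicit depth guard, so the
    # structure itself enforces the "2-3 levels" rule of thumb.
    MAX_CASCADE_DEPTH = 3  # level 1 = the highest-level business objective

    class KPI:
        def __init__(self, name, owner, level=1):
            if level > MAX_CASCADE_DEPTH:
                raise ValueError(
                    f"'{name}' would sit at level {level}; cascading past "
                    f"{MAX_CASCADE_DEPTH} levels breaks line of sight."
                )
            self.name = name
            self.owner = owner          # every KPI gets a single accountable owner
            self.level = level
            self.children = []

        def cascade(self, name, owner):
            """Break this KPI down one level, but never past the guard."""
            child = KPI(name, owner, level=self.level + 1)
            self.children.append(child)
            return child

    # Two levels of cascade is fine; a fourth level is rejected by design.
    on_time = KPI("On-time delivery %", owner="VP Operations")
    northeast = on_time.cascade("On-time delivery % - Northeast", owner="Regional Manager")
    albany = northeast.cascade("On-time delivery % - Albany depot", owner="Depot Supervisor")
    # albany.cascade("Lug-nut torque compliance", owner="?")  # raises ValueError

The point of the guard isn't the code; it's that no administrator can "point and click" their way down to level ten.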

Here are a few tips for when you're faced with the IT capability of "cascading to your heart's content":

  1. The level to which you drill down should be no more than 2-3 levels below your highest-level business objective- any more and you will begin to lose that critical "line of sight" I've discussed in previous posts.
  2. The lowest-level KPI or business metric you select should be both measurable and MANAGEABLE. For example, you can drill all the way down to the tire or lug nut on a truck in your delivery fleet, but that level is rarely anyone's key accountability and hence may not be as "manageable" as you think. And if by chance it is, make it part of another context, or another scorecard, tied to that specific business function- not part of your enterprise scorecard.
  3. Every KPI should be tied to one or more high-impact initiatives designed to drive business improvement, with the total number of strategic initiatives across all KPIs kept under 20-30. Beyond that, the organization begins to lose critical focus.
  4. Keep your scorecard layout simple and easy to understand, avoiding complicated multidimensional analytic graphics or causality relationships- leave those for the custom panels or your analytic core.
  5. Make EVERY view in your scorecard something that your CEO and Board COULD understand if they saw it. That's not to say they would typically view those screens, but they should be able to make a mental connection to something they care about.
In short, don't let your software capability drive your EPM process; let your EPM process drive your software. After all, it's called EPM for a reason. Said another way, if God had intended the ultimate in hyper-analytic solutions for your EPM process, it would probably be called something like Micro Analytic Performance Management- maybe a cool solution for the BI crowd, but not something that should be top of mind for your management team.

-b

Enterprise Performance Management- Getting Past the "Buzz"

Since I started writing about Enterprise Performance Management (EPM) a little over 10 years ago, we have sought to escape the "flash in the pan" buzz of the next big "management THING". Our readers and clients have appeared to embrace EPM for what it is- the cornerstone the enterprise should be built upon...the foundation of a business...much like the Balanced Scorecard taken to its logical end state.

That is, until I opened the latest BI rag (whose name I will not mention because, after all, I do like writing for them :) and saw two new articles that essentially treated EPM as synonymous with the plethora of scorecard and dashboard APPLICATIONS espoused by the likes of SAP, Oracle, and even some "bolt-on" solutions from the BI "boutiques".

OK- let's straighten this out once and for all. EPM is NOT an application; it is a business PROCESS. Not only is it A business process, it is (or should be) THE central business process of the enterprise. Sure, the two are connected, but here's the real acid test. If you are an implementer of these systems or tools, ask yourself how much time you spent (or intend to spend) on the following activities as you implemented your so-called EPM software:

  1. Affirming your strategy, and translating it into something your front-line employees can easily understand
  2. Refining your objectives and aligning your management team to them
  3. Translating your key objectives into measurable and benchmarkable KPIs
  4. Doing the requisite analysis (benchmarking, trending, forecasting) to set targets for each of these KPIs
  5. Linking your key initiatives to the KPIs they are designed to improve, and prioritizing (and de-prioritizing) them according to these linkages
  6. Defining ownership and individual accountability for each KPI
  7. Defining the reports and analysis needed by these individuals and workgroups to enable them to be successful
  8. Linking appraisals and reward systems to the achievement of KPIs and business metrics
  9. Defining and mapping the process required to MANAGE KPI achievement
  10. Training management and supervisors in the EPM PROCESS
  11. Shaping and reshaping culture by "walking the walk", and surrounding the EPM process with the required investments in change management (the people side of change)
  12. Defining the best technology solution to enable all of the above
  13. Selecting and designing the technology solution
  14. Implementing said technology
If the time, energy, and resources you spend on #13 and #14 amount to more than 1/3 of the resources spent on #1-12, you've got a very "unbalanced" EPM solution in the works. Getting the EPM system to the point of real value-add requires that degree of "footwork", and if you're not yet ready to make that investment, please don't waste your IT dollar.
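If you want a quick sanity check on that ratio, here's a back-of-the-envelope sketch- the labels and numbers below are made up, so plug in your own estimates in whatever unit you actually budget (person-days, dollars, etc.):

    # Hypothetical effort estimates for an EPM rollout.
    process_effort = 240     # items 1-12: strategy, KPIs, ownership, training, change management...
    technology_effort = 100  # items 13-14: selecting, designing, and implementing the software

    if technology_effort > process_effort / 3:
        print("Unbalanced: the software is driving the EPM process.")
    else:
        print("Balanced: the EPM process is driving the software.")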

While some linguistics experts may not agree with me technically, EPM is not a "thing" but a process- not a noun but an active VERB. It needs to be spoken about, and treated, as one. Part of changing the culture is first recognizing when that is not the case, and taking an active role in calling it out when you see it (or READ it!!!).

-b

Monday, March 03, 2008

The MEASURE of a REAL Business Partner

I am often asked by my clients what measures are most appropriate for functions that are outsourced...particularly in terms of Vendor or Business Partner measures, accountabilities, and ultimately their compensation.

Here's my take on it. This question is really NOT about measures at all per se, but far more about the NATURE of the business relationship itself.

Let's face it, there is no shortage of vendors claiming to be the "business partners" of their clients. The first part of answering the measurement and compensation question is figuring out which vendors are, and which are not, true business partners. A good acid test is to ask whether the contract you have with the partner is based on a "task list" of deliverables or on a set of real business outcomes. If it is the former, then face the reality that your contractor relationship is just that- a contractor/commodity-based/perpetually-"low bid" kind of relationship, and probably not worthy of a partner-performance/partner-pricing conversation. Just measure the vendor on a $-per-widget/widget-quality basis and be done with it. But don't expect them to do any more than produce good widgets.

On the other hand, if you genuinely do share business outcomes as the basis of your contract, then the measurement/performance question gets much easier. Why? Because if your partner is genuinely accountable for YOUR business outcomes (and you for his, as I will discuss later), then it only makes logical sense that these measures would also end up on YOUR corporate scorecard. And that means you shouldn't have to spend time coming up with a NEW or creative set of measures, but rather a delegation, if you will, of measures that you already have.

A good example of this is the partnership utilities have with their Vegetation Management (tree trimming) function, which incidentally, and surprisingly, is often a utility's #1 O&M line item. These contracts range from the rather elementary $ per man-hour, to the more sophisticated cost per tree or cost per span. But the ones in which a real "business partnership" exists are opting for measures like the number of tree-driven interruptions, or related frequency and severity measures. For you utility foresters out there, this translates into indicators like Tree-CAIDI or Tree-SAIDI. Pretty cool, huh? The real message here is that when you have a true business partnership, the measures you use to track the partner's performance are the very same measures you use to track your own. A true win-win, so to speak.
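For readers who haven't lived with these indices, here's a minimal sketch of the arithmetic- the field names and sample figures are hypothetical, but the calculation follows the usual SAIDI/CAIDI definitions, simply restricted to tree-caused interruptions:

    # Hypothetical interruption log: (cause, customers interrupted, duration in minutes)
    interruptions = [
        ("tree", 1200, 90),
        ("equipment", 800, 45),
        ("tree", 300, 240),
    ]
    customers_served = 50_000

    tree_events = [(c, d) for cause, c, d in interruptions if cause == "tree"]
    tree_customer_minutes = sum(c * d for c, d in tree_events)
    tree_customers_interrupted = sum(c for c, _ in tree_events)

    # Tree-SAIDI: tree-caused outage minutes per customer served
    tree_saidi = tree_customer_minutes / customers_served
    # Tree-CAIDI: average restoration time per tree-interrupted customer
    tree_caidi = tree_customer_minutes / tree_customers_interrupted

    print(f"Tree-SAIDI: {tree_saidi:.2f} minutes/customer, Tree-CAIDI: {tree_caidi:.0f} minutes")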

And the beauty of this is that it works both ways. A colleague of mine once told me that at each of his monthly client update meetings, his client would ask him, "How are YOU doing?" and "Is this contract making money for YOU?"- suggesting that having the vendor (my colleague/partner in this case) make money is every bit as important to the client- a truly radical thought.

Another respected peer of mine told me that the "master-slave" contractor relationship is dead because it will always produce "average" performance- that it is a model based primarily on distrust, essentially producing "just enough to get by" behavior. The partner model turns this on its head, and has the client saying to the partner, "I want to make you as wildly successful as you make me."

So the long and short of it is that this is not a question of what you should measure or pay a contractor for, but rather a question of whether the contractor is really a business partner versus a basic commodity-type vendor. If it is truly the former, then you should be spending your time ensuring that the measure of success you choose is something that shows up on BOTH of your scorecards, and is given equal attention.
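To put that in concrete terms, here's a minimal sketch (hypothetical names, nothing more) of what "delegating" an existing measure to a partner looks like- the same measure sits on both scorecards, so improving one is improving the other:

    # A single shared measure, referenced by both scorecards rather than reinvented.
    class Measure:
        def __init__(self, name, unit):
            self.name = name
            self.unit = unit

    tree_saidi = Measure("Tree-SAIDI", "minutes per customer")

    corporate_scorecard = {"Reliability": [tree_saidi]}
    partner_scorecard = {"Vegetation Management Partner": [tree_saidi]}

    # Same object on both scorecards- no new or "creative" measures required.
    assert corporate_scorecard["Reliability"][0] is partner_scorecard["Vegetation Management Partner"][0]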