Big Fail – 3 Ways To Miss The Mark When Measuring Internal Services

(and a Big Announcement)

Over the last couple of decades, the PRS partners and associates have built practical performance measurement frameworks for internal services in a number of functional areas, e.g. information and records management, IT operations, mail operations, finance and comptrollership, procurement and contracting, materiel and assets management, real property and facilities management, and human resources. We’ve also worked with some of the ‘tough to measure’ areas, like policy and advocacy.

From our experience, here are three common points of failure when measuring internal services like these:

Big Fail 1 – No logic model. Even very smart people (and I know you are one ;-) can ‘brainstorm’ performance indicators indefinitely, without necessarily hitting on the ‘critical few’ that accurately reflect organizational performance. The missing element is a logic model that presents, in a visual way, the relationships between the inputs, activities, outputs and outcomes delivered by a program.

Using a logic model, we can achieve agreement and share our understanding of the relationships between the resources we have to deliver the program, the activities we plan to carry out, and the results we hope to achieve. The logic model also helps us achieve consensus with respect to our assumptions about how our work is to be conducted to achieve success.

You need to get this consensus in place before you think about creating performance measures. You need a logic model.

Big Fail 2 – Poor measure definition. ‘Poor’ in this case can mean inadequate, incomplete, or inaccurate. You need to be practical and thorough in defining your measures, to a level of detail that will support the repeatability and defensibility of the measure. You need to document: a strong measure purpose; data sources; data formats; frequency of data availability for collection and reporting; the OPI for collection of the data; the OPI for analysis of the data (not necessarily the same OPI); and, most particularly, the manipulations and calculations necessary to determine a result.

For example, if we determine that a measure is needed for ‘Budget Variance’, here’s how it could be defined:

Measure purpose: This measure is used to gauge a unit or section’s cumulative financial performance at the end of each reporting period. 

Data Source/Lead: the data is obtained from the financial system, and is compiled monthly.  Data will be retrieved by <Data Collection Lead> and submitted to <Performance Measurement Coordinator>, with preliminary analysis (e.g. Situation-Implication-Recommendation) by the 10th of each month.

Measure definition: $ value and % variance, planned spending vs. actual.

Variances calculated as follows:

A – Planned expenditures from budget

B – Actual expenditures

$$ Variance = A-B

% Variance = (A-B)/A*100
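To make the calculation concrete, here is a minimal sketch in Python (the function name and the example figures are illustrative, not from an actual reporting system):

```python
def budget_variance(planned: float, actual: float) -> tuple[float, float]:
    """Return ($ variance, % variance) for planned vs. actual spending.

    A = planned expenditures from budget, B = actual expenditures.
    $ Variance = A - B;  % Variance = (A - B) / A * 100.
    """
    dollar_variance = planned - actual
    pct_variance = (planned - actual) / planned * 100
    return dollar_variance, pct_variance

# Hypothetical month: planned $100,000, actual $92,500
dollars, pct = budget_variance(100_000, 92_500)
print(dollars, pct)  # 7500 and 7.5 — i.e. 7.5% under budget
```

A positive result means spending is under plan; a negative result flags an overspend, which is usually the condition a Situation-Implication-Recommendation note would address.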

By the way, we use simple 1-page Indicator Definition Templates to ensure consistency and completeness of our definitions, i.e. every indicator is documented with the same kinds of metadata in the same format. We also use Indicator Reporting Templates for the same reason.
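For readers who keep their definitions in code or a database rather than a document, one way such a template could be represented is sketched below. The field names follow the metadata list above; they are an assumption for illustration, not PRS’s actual template:

```python
from dataclasses import dataclass

@dataclass
class IndicatorDefinition:
    """One record per indicator — mirrors a 1-page definition template."""
    name: str
    purpose: str
    data_sources: list[str]
    data_format: str
    frequency: str        # e.g. "monthly"
    collection_opi: str   # OPI responsible for collecting the data
    analysis_opi: str     # OPI responsible for analysis (may differ)
    calculation: str      # manipulations/calculations to derive the result

# The Budget Variance example above, captured as a record
budget_variance_def = IndicatorDefinition(
    name="Budget Variance",
    purpose="Gauge a unit's cumulative financial performance each period",
    data_sources=["financial system"],
    data_format="$ value and % variance",
    frequency="monthly",
    collection_opi="Data Collection Lead",
    analysis_opi="Performance Measurement Coordinator",
    calculation="$ Variance = A - B; % Variance = (A - B) / A * 100",
)
print(budget_variance_def.name)
```

Because every indicator is an instance of the same structure, completeness checks (no missing OPI, no undefined calculation) become trivial to automate.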

Big Fail 3 – Seeking perfection. You can waste a lot of cycle time trying to find the ‘perfect’ indicators. The right approach is to treat the whole ‘development and deployment of performance measurement’ as a learning exercise, i.e. get some data, determine what it’s telling you, repeat as necessary, and/or adjust the measures to be more useful/practical/actionable.

A corollary of this is it’s better to start with a relatively small handful of measures, and then add as your experience grows. Less to throw away!


Now the Big Announcement . . .

Huzzah! PRS is partnering with well-known and well-respected BMB Consulting to launch a new learning initiative for public sector managers – Action Working Groups.

The Action Working Groups will consist of peer-supported and consultant-facilitated workshops, intended to leverage the work that our firms have carried out over the past three to four years in relation to the development of logic models and performance frameworks for Internal Services categories as well as policy, regulatory, science and tech, and funding programs. We’ll be showcasing tools and templates that we use in our practice as well.

The working groups will also build on developmental work that has been undertaken by individual departments and agencies that might wish to participate in the initiative. In each working group, we expect to have perhaps two representatives from each of four or five departments, for each focus area. So you get both peer interaction and support, and some expert facilitation as well.

And as an aside, we’re delighted to be working with John Harrison, the principal at BMB, to make this happen. Scott and Charlie and our new Director Walter Zubrycky have known and worked with John for about 20 years.

I can honestly say we couldn’t hope to do this without John’s depth of experience in both public sector executive training and public sector performance measurement.

Right now we plan to limit each AWG to 8-10 participants, a good number for useful interaction but also for individual attention to each other’s challenges.

More details will be forthcoming before we officially launch (first workshops beginning April 2014), but feel free to contact me or Charlie Snelling 613 744-4084 if you would like to get on the list. We’ll follow up with you a little closer to launch.


Scott Kelland, President

Performance Reporting Solutions

613 302-3924

PracticalPRS - Simple, cost-effective performance reporting for the Public Sector
