Results-Based Accountability™ Advice


 

3.1

What are the basic ideas behind performance accountability?



The Short Answer

(from 1.1)

1. Choose among the many approaches to performance measurement.

Make sure it makes sense to you, make sure it is useful to managers, make sure it addresses the most important measures (those that tell you whether and to what extent clients/customers are better off), and make sure it gets you from talk to action quickly and with minimal paper.


2. Whatever system you use should:

  • Start with ends, work backward to means. What do we want? How will we recognize it? What will it take to get there?

  • Be clear and disciplined about language.

  • Use plain language, not exclusionary jargon. 

  • Keep accountability for populations separate from accountability for programs and agencies.

  • Identify end conditions of well-being for populations (results or outcomes) for children, adults, families and communities. 

  • Identify end conditions of well-being for customers or clients (customer or client results).

  • Use data (indicators and performance measures) to gauge success or failure against a baseline.

  • Use data to drive a disciplined business-like decision making process to do better.

  • Involve a broad set of partners.

  • Get from talk to action as quickly as possible.

 


3. The approach to performance measurement discussed in this guide breaks with past work in a number of ways.  

  • It skips the weeks, months and sometimes years of analysis, flow charts, program descriptions and other "preparation," and goes directly to the identification of performance measures. Most people know their program well enough to identify performance measures right away without weeks or months of preliminaries. See "Get to the Point" Planning.

  • The process sorts the measures into common sense plain English categories (How much did we do? How well did we do it? Is anyone better off?). The 3 to 5 most important "headline" performance measures are chosen from among the data you already have. 

  • And these measures are then used in a disciplined process to engage partners and get from talk to the actions necessary to improve performance. The entire process is summarized in 7 questions, and a first pass at it can be accomplished in about an hour, not weeks, months or years. Every iteration of the 7 questions improves the action plan. 

 

 

 

 



Full Answer

(from 1.1)

(1) Choices: First let it be said that there is no one right or wrong way to do performance measurement work. Many different approaches to performance measurement have been written about, presented and used over the years. As you think about which of these many approaches to use for your organization, you need to be a good consumer. Think about which approach works best for you. You have choices. Don't just take the first thing that comes along. Having said that, it is also true that not all approaches are equally good. There is a long history in this work of doing performance measurement "for show," and generating a lot of useless paper in the process. 


Here are some criteria to think about as you scan the field for what to do:


Does it make sense? First and foremost, does the approach make sense to you? Use your common sense in making this judgment. Can you explain it to others in your organization? Do you think it will make sense to them? 


Is it useful? If it is not useful, don't do it. Beware of processes that produce a lot of useless paper. The process should use concise, understandable formats that actually help managers manage programs. If the material is useful to managers, it will be useful to everyone else in the system (budget people, senior management staff, legislators, etc.). If it isn't, it won't be. 


Does it address client or customer well-being? The most important performance measures are measures of whether and to what extent your clients or customers are better off. The method you use should place this kind of measurement at the center of the work, and not take forever to get there.


Does it get you from talk to action? This should not be an academic exercise. The purpose of performance measurement is to improve performance. Does the method you choose help you do that? This means a disciplined and common sense way of getting from identifying performance measures to actually using them to do better.


(2) Performance Accountability Thinking Process:


Here is a straightforward approach to performance measurement which meets the above criteria.


Be clear about what program or agency is being measured. The first order of business in picking the right performance measures is being clear about what program or agency is being measured. This is a "fence drawing" problem. First we draw a fence around the thing to be measured. It could be a program, like a child care center, or a component of a program with some organizational identity, like infant child care. Or it could be an entire organization or agency, like a residential treatment center or a department of social services. Or it could be an entire service system, like the child welfare or child care service system, involving many agencies and their programs.


Next we ask ourselves a few questions about what's inside the fence. Who are our customers? Customers include the direct recipients or beneficiaries of the service. But they also include others who depend on the program's performance, like related programs and partners. For example, the customers of a child care program include the children in the program, but also the parents of those children, and the local elementary school where many of these children will enter kindergarten. It is important to consider the full range of customers because, just as in business, success depends on doing a good job for your customers.


Consider the different types of performance measures and choose the most important.


Not all performance measures are of equal importance. All performance measures fit into one of four categories, derived from the intersection of quantity vs. quality and effort vs. effect:


 


                QUANTITY                          QUALITY

EFFORT          How much did we do?               How well did we do it?
                How much service did we           How well did we deliver
                deliver?                          service?

EFFECT          Is anyone better off? (#)         Is anyone better off? (%)
                How much change for the           What quality of change for
                better did we produce?            the better did we produce?


See the attached chart which shows how the 4 Quadrants account for all the standard terms used in past and present performance measurement systems.
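The four-quadrant sort lends itself to a short sketch. The code below is a hypothetical illustration, not part of the RBA materials; the measure names and the effort/effect and quantity/quality tags are invented for the example.

```python
# Hypothetical sketch of the four-quadrant sort. The measure names and
# their (effort/effect, quantity/quality) tags are invented examples.

QUADRANTS = {
    ("effort", "quantity"): "How much did we do?",
    ("effort", "quality"): "How well did we do it?",
    ("effect", "quantity"): "Is anyone better off? (#)",
    ("effect", "quality"): "Is anyone better off? (%)",
}

measures = [
    ("number of children served", "effort", "quantity"),
    ("percent of staff fully certified", "effort", "quality"),
    ("number of children with basic literacy skills", "effect", "quantity"),
    ("percent of children with basic literacy skills", "effect", "quality"),
]

# File each measure under its quadrant's plain-English question.
for name, axis, dimension in measures:
    print(f"{QUADRANTS[(axis, dimension)]}: {name}")
```

Note how the same underlying measure (children with basic literacy skills) appears as both a "#" and a "%" measure; only the lower-right "%" form tells you what share of clients are better off.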


The most important measures tell us whether our clients or customers are better off as a consequence of receiving the service (quality of effect: lower right quadrant). We call these measures "client or customer results." These are measures which gauge the effect of the service on people's lives.


Usually, in programs which directly deliver services to people, client results have to do with four dimensions of "better-offness": skills/knowledge, attitude, behavior and circumstance. Did their skills or knowledge improve? Did their attitude change for the better? Did their behavior change for the better? Is their life circumstance improved in some demonstrable way? So, for example, if you are overseeing a child care program, you would want to measure such things as the percent of children with basic literacy skills (skills), the percent of children with a positive self image (attitude), the percent of children exhibiting disruptive behavior (behavior), and the percent of children who are up to date on their immunizations and the percent who go on to succeed in 1st grade (circumstance).


The second most important measures are those that tell us whether the service and its related functions are done well (quality of effort: upper right quadrant). These measures include such things as timeliness of service, accessibility, cultural competence, and the turnover rate and morale of staff. These measures can be used by managers to steer the administration of the program. If things are late, you work to make them timely. If turnover is high, you work to retain staff.


Don't accept lack of control as an excuse. Now the first thing you're going to say is "Wait a minute. What does child care have to do with whether or not children are up to date on immunizations?" This is a good example of a performance measure where child care has very little control over whether the circumstance improves. Child care can make a contribution to the immunization status of its clients. Quality child care can help parents and children understand the importance of regular preventive health care, and can help parents understand and access the health care system. But child care by itself cannot control these things. So isn't it unfair to track immunization rates for children in care?


If you look at the other measures listed for child care (literacy skills, self image, disruptive behavior, first grade success), you will notice that these measures are also beyond the capacity of the child care provider to completely control. The point is that all programs' performance measures are affected by many factors beyond the particular program's control. This lack of control is often used as an excuse for not doing performance measurement at all. Turnover rate, staff morale, you name it: everything is "beyond my control."


In fact, the more important the performance measure (e.g. children successful in 1st grade), the less control the program has over it. This is a paradox at the heart of doing performance measurement well. If control were the overriding criterion for performance measures, then there would be no performance measures at all. The first thing we must do in performance measurement is get past the control excuse, and acknowledge that we must use measures we do not completely control.


Create a performance accountability system useful to managers - one that takes this control paradox into account. We do this in three ways. First, we ask managers to assess their performance on these measures not on the basis of some absolute standard, or on how other providers are doing, but on whether they are doing better than their own history. We do this using the same technique used for cross-community indicators: the notion of a baseline. For each performance measure we ask managers to present a baseline showing the history of their program's performance and where that performance is headed. We ask them to do better than their own baseline.


This is the central way in which businesses use data: how are we doing compared to our own history? Later, when you have the sophistication and the data, you can begin to develop and use comparisons to the performance of other similar providers with similar mixes of easy and hard cases. And later still, you can compare to standards, when we know what good performance looks like.
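The "better than your own baseline" test can be sketched in a few lines. The following is a minimal, hypothetical illustration (the function names and the immunization figures are invented): it fits a straight-line trend to a program's own history and asks whether the latest value beats the projection of that trend.

```python
# Minimal sketch of "better than baseline": compare the latest value
# against a straight-line projection of the program's own history.
# Function names and data are illustrative, not part of RBA.

def linear_projection(history):
    """Least-squares trend line through the history (needs at least
    two points), projected one period past the end."""
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history))
    slope /= sum((x - mean_x) ** 2 for x in xs)
    return mean_y + slope * (n - mean_x)  # expected value at period n

def better_than_baseline(history, latest, higher_is_better=True):
    """Did the latest measurement beat the program's own trend?"""
    expected = linear_projection(history)
    return latest > expected if higher_is_better else latest < expected

# Percent of children up to date on immunizations, last four years:
history = [62.0, 64.5, 66.0, 67.5]
print(better_than_baseline(history, latest=71.0))  # prints True
```

The point of the sketch is that the comparison is to the program's own trajectory, not to an absolute standard: a value of 68.0 would be above every year in the history yet still below the projected trend of 69.5.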


In some services, like child care, we have progressed to the point where we have standards for the quality-of-effort measures described above. In child care we know what quality service delivery looks like. We have standards for staffing ratios, the percent of staff with certain qualifications, timeliness of service, safety, etc.


Next we ask managers to think about the partners who have a role to play in doing better. Programs cannot produce the most important results for customers by themselves.


And, finally, managers must ask and answer: "What works to improve performance?" Out of this thinking we ask managers to present their best thinking about what needs to be done.


This thinking process is summarized in the Seven Questions Central to Performance Accountability. These questions should be asked and answered at every intersection between a supervisor and a subordinate throughout the system.


(3) "Get to the Point" Planning
     Excerpt from "A Guide to Developing and Using Performance Measures" (revised)


"Notice how we skip right past mission, vision, values, purpose, goals, objectives, logic models, and flow charts and go right to performance measures. Now this goes against the orthodoxy of the planning and budgeting profession, but it is possible and even desirable to do this. First, it gets people into the work right away. Second, it gets us past the tyranny of planning systems which decree that the work is linear and that program measurements must somehow be derived from higher level statements of purpose. Baloney.


There is no reason to start with agency mission. It can, in fact, be argued that, by working down from results and up from programs, agency mission statements become a byproduct of this work. Mission statements and their attendants, retainers and attorneys help articulate why the agency exists - how it contributes to improving results - and generally how it goes about doing this. But there is no reason to wait for the perfect articulation of mission before getting about the business of selecting performance measures.


You can go back and do all the mission(ary) stuff later if you want. It is probably a good idea for agencies to be able to state what they are about in a few phrases. But it is unnecessarily time consuming and burdensome to try to derive performance measures from these statements, as if it were a matter of mathematical derivation. Unless you are thinking of creating a brand new agency, you most likely have programs that need performance measurement in practical form right now.


Think about it this way: Results-Based Accountability tells us whether a program should exist or not as part of our larger strategy to improve ("turn the curve" on) child and family well-being. Performance measurement picks up at this point; it takes as given that the program needs to be there, and moves on to the next step of answering whether it is working or not.


"Traditional" planning systems spend an inordinate amount of time before people actually get to talk about how to measure performance. By going straight to the business of selecting performance measures, we ease the frustration - and associated cynicism - that goes with complex planning processes. We also go to the heart of what may be the greatest benefit of performance measurement, namely a disciplined way to use data in the day-to-day management of programs. In the same way that processes can be both top/down and bottom/up, we might think of this approach as both outside/in and inside/out.


Another benefit of this four-quadrant system is its simplicity and (arguable) common sense. Many performance measurement systems suffer from the creation of so many special terms and variations on special terms that it is hard to keep them straight. (Ten or more types of performance measures are not uncommon.) Some of this problem derives from the fact that these systems often do not distinguish population results and indicators from program and agency performance measures at the beginning, and create unnecessary complexity trying to keep this straight. Another, related problem comes from attempts to strictly define how many "levels" there are to a performance system. Many performance systems call performance measures by different names at different levels of the organization. This doesn't work well because different organizations have varying numbers of organizational and programmatic layers. In the four-quadrant approach, we have effectively 3 types of performance measures (How much did we do? How well did we do it? Is anyone better off?) and a single framework which is repeated, in more or less the same way, through as many levels as exist in a given organization."


(4) The Matter of Cause and Effect:  Very often the question is asked, "How do I know how much my program contributed to improvements in client or community well-being?" The answer is complex and not entirely satisfying:


To start, chaos and complexity theory tells us that cause and effect in complex systems is difficult if not impossible to determine. Social systems, population behaviors and clients' lives are complex systems, and therefore the causes of changes in population or individual behavior (attitude or circumstance) are difficult if not impossible to know with any certainty.


The best way to know what is possible to know about cause and effect is research. The most conclusive research on cause and effect involves control groups. This kind of research can demonstrate the extent to which there is a correlation between effort and effect. Such research is a valuable tool in identifying what works and in crafting a strategy to turn a curve. 


In most cases control group research is not possible. Where control groups are not possible, it is often possible to find a comparable program, population or jurisdiction. For example, you could compare your program's performance (most importantly client results) with that of comparable programs serving similar populations. You could compare your program's performance with the results for the general population (e.g. the repeat teen pregnancy rate for young women in the program, compared to the rate for the state, county, city or neighborhood where the program resides). Or you could compare your program's performance to the performance of programs in other jurisdictions. If the comparisons are real, you have circumstantial evidence that your program contributed to the difference. The greater the difference, the greater the implied contribution.


With regard to the general population effects of a program, it is important to remember that it is rare for any program by itself to turn an indicator curve at the population level. Population effects almost always require the combined effort of many partners. It is, therefore, almost always unfair to judge a program by the extent to which general population indicators have changed (e.g. we should not judge a teen pregnancy prevention program serving 30 young women by whether the county teen pregnancy rate was better than baseline). The relationship of a program to population effects is one of contribution: what the program does for its clients is its contribution to a larger strategy.


For more on this subject, see "The Matter of Evidence," which can be read at the FPSI website.


See also the Four Types of Progress included in a prototype progress report.


1997 - 2010 by FPSI Mark Friedman
All Rights Reserved