2.7 How do we select indicators for a result?

The Short Answer

1. Start by assessing the result in terms of everyday experience: what we see, hear, or feel about children ready for school or stable families.

2. Brainstorm a list of candidate indicators. Each entry is a data statement, e.g. % of children reading at grade level, rate of foster children per 100,000.

3. Assess each entry on the basis of communication power (do people understand it?), proxy power (does it represent the result?) and data power (do you have quality data on a timely basis?).

4. Develop a three-part list: the 3 or 4 headline indicators you could use in the public square, the secondary indicators you will use for the story behind the curve and other behind-the-scenes work, and a data development agenda so you can get better data in the future.

Full Answer

(1) Indicators (or benchmarks) are measures which help quantify the achievement of a result. They answer the question “How would we recognize these results in measurable terms if we fell over them?” So, for example, the rate of low-birthweight babies helps quantify whether we’re getting healthy births or not. Third grade reading scores help quantify whether children are succeeding in school today, and whether they were ready for school three years ago. The crime rate helps quantify whether we are living in safe communities, etc.

(2) All indicators and performance measures take two forms: a lay definition and a technical definition. The lay definition (e.g. teen pregnancy rate) should rate high on communication power, something that laypersons can understand. The technical definition describes exactly how the data element is constructed and where the data comes from. For example: The teen pregnancy rate is the total number of births to teens, as reported by the largest hospitals in the county, divided by the total population of females age 12 to 17, calculated as the percent of such age group in the last census, multiplied by the current CPS estimate of total county population, adjusted for population growth.
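
To make this concrete, here is a small sketch in Python of how a technical definition like the one above turns into a single number. Every figure, and the choice to express the result per 1,000, is hypothetical and invented only for illustration; an actual calculation would follow whatever sources and adjustments your data experts agree on.

    # Hypothetical sketch of the technical definition above; all numbers are invented.
    births_to_teens = 184          # numerator: births to teens reported by county hospitals
    county_population = 950_000    # current CPS estimate of total county population
    pct_females_12_to_17 = 0.041   # share of females age 12-17 in the last census
    growth_adjustment = 1.02       # adjustment for population growth since the census

    # Denominator: estimated females age 12-17 living in the county today.
    females_12_to_17 = county_population * pct_females_12_to_17 * growth_adjustment

    # Express the result per 1,000 females age 12-17.
    teen_pregnancy_rate = births_to_teens / females_12_to_17 * 1_000
    print(f"Teen pregnancy rate: {teen_pregnancy_rate:.1f} per 1,000 females age 12-17")

Changing any element of this construction (the numerator source, the age range, the growth adjustment) produces a different indicator, which is the point made below.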

It is actually important to reach agreement on both the lay and technical definitions of the indicator (or performance measure) in the selection process itself. This may require some help from a data expert, and may take some time to completely resolve. Get as far as you can in the first session without getting hung up, and then refine the definition in future meetings of the group. Here’s why this is important:

Every time you change the technical construction of an indicator you create a new indicator! This new indicator must be considered (rated on communication, proxy and data power) against other choices. When talking about variations, most of the discussion is about data power (Do we have it? Is it any good?) and proxy power (Does it represent what we want it to represent?). Take the example of “rate of domestic violence,” the lay definition of an indicator usually used as a proxy for a result like “safe and stable families.” When it comes to the technical definition, there are a lot of choices, particularly for the numerator. Is this the total of monthly incidents reported from police records, or from requests for help from domestic violence shelters and programs, or from a household survey of prevalence conducted by the local university but produced only once? In some communities, police records are thought to be unreliable and to understate incidence. Shelter records undercount incidence because only those women who seek shelter are counted. Neither count is unduplicated. Either might still be a good proxy, provided that these problems are fully explained in the “story behind the curve,” and figured into the interpretation of the data.

(3) The plain truth is that it is often hard to find good data about the well-being of children and families. Data for young children is particularly hard to find. We often don’t count things until children enter school. Data systems for young children lag behind data systems for all children, which lag behind data systems used by government, which lag behind data systems used by business and the private sector. To compound the problem, what we count are usually things that have gone wrong: child abuse, child neglect, injury, death, hospitalizations, etc. Very rarely do we count positive situations, characteristics or events.[2]

In spite of these problems, it is possible to find indicators for child and family well-being.  It is important first to revisit the purpose of choosing indicators for a result. It is to help us know how we could recognize this desired condition of well-being, and how we can know if we are making progress. Without indicator data, we are left to argue about perceptions and anecdotes which come to our attention through the media or other sources. If we are to be business-like about improving the conditions of well-being for these children, then we must be business-like about using data to steer our decisions and assess our progress.

(4) Here is a step-by-step way of identifying indicators:

•  Start by assessing experience: How do we experience children healthy and ready for school, or children succeeding in school, or stable and self-sufficient families? Partners around the table can create a working list of “experiences” in a brainstorm session. It is possible to add to this list from consultation with community members, professionals, parents and the academic community. By experience, we mean: how do we see, hear, or feel the condition? What do we see on the street? What do we see in our everyday work and personal lives? Remember that different cultures and communities may experience health and school readiness in different ways.

There are two reasons for starting with experience. First, each experience is a pointer to a potential indicator. If we experience children absent from child care or kindergarten due to illness, we can possibly count absentee rates in child care or kindergarten. If we experience children playing safely on playgrounds, we can possibly count rates of playground injury for young children.

The second reason for starting with experience is that it grounds the work in the common sense view of everyday citizens. Too often, planning processes are the province of professionals and providers who talk in esoteric and inaccessible ways. If this work is to take hold in the community and energize the community to take action, it is necessary to build and communicate the work in clear and common sense ways. This is not an argument against rigor and discipline. Quite the opposite. It is an argument to start the disciplined thinking process where our partners and our constituents are.

Finally, the way we experience results can be used to drive the thinking and planning process where indicator data is insufficient. We may have trouble finding good data to assess whether children are well nourished or have good motor skills at school entry. This does not mean that these conditions are unimportant. We can think together about “what works” to produce these conditions and use this thinking to fashion our action plan. See 2.9 What do we do if we don’t have any good data at all?

•  Develop a set of candidate indicators: The collaborative or working group should brainstorm a list of candidate indicators. In most cases, it should not be necessary to start from scratch. Many states and counties have developed a set of results and indicators and have published report cards presenting actual data on the indicators. There are a number of resources available to help, which can be accessed online. See Resources and References for organization and website connections. The Foundation Consortium has developed a guide to indicators in California.[3] And communities may have unique resources in this area if, for example, they have commissioned surveys of families or youth.

Remember: It is important to include as many members of the community as possible in this thinking process. And be sure to tap the expertise of your partners in the academic community, some of whom have spent their whole careers thinking about these very questions.

A word about the notion of leading and lagging indicators. In economics, we have leading and lagging indicators of the health of the economy. Leading indicators are indicators which show a change of direction before the change appears in the general economy (e.g. orders for durable goods). Lagging indicators reflect the change in the economy after it has happened (e.g. unemployment rates). When it comes to the well-being of young children (prenatal to age 5), much of the data we have are lagging indicators. The percentage of 3rd graders reading at grade level is a lagging indicator of how ready those children were for school 3 or 4 years earlier. These are still valuable measures. And it is possible to gear the planning process around “What would it take to produce better 3rd grade reading scores four years from now?” Lagging indicators bring a healthy and useful perspective.

•  Choose the best of what’s available:

Shortcut Method for Choosing Indicators

HEADLINE MEASURES: Identify the candidate indicators for which there is (good) data. This means decent data is available today (or could be produced with little effort). Circle each one of these measures with a colored marker. Ask “If you had to talk about the result in a public place with just one of these circled measures, which one would it be?” Put a star by the answer. Then ask “If you could have a second measure… and a third?” You should identify no more than 4 or 5 measures. These choices represent a working list of headline measures for the result.

DATA DEVELOPMENT AGENDA: Ask “If you could buy one of the measures for which you don’t have data, which one would it be?” Mark that with a different colored marker. “If you could have a second measure… and a third?” List 4 or 5 measures. This is the beginning of your data development agenda, in priority order.

Given a set of candidate indicators, it is then possible to use criteria to select the best indicators to represent the result. Using the best of what’s available necessarily means that this will be about approximation and compromise. If we had a thousand measures, we could still not fully capture the health and readiness of young children. We use data to approximate these conditions and to stand as proxies for them. There are three criteria which can be used to identify the best measures:

Communication Power: Does the indicator communicate to a broad range of audiences? It is possible to think of this in terms of the public square test. If you had to stand in a public square and explain to your neighbors “what we mean, in this community, by children healthy and ready for school,” what two or three pieces of data would you use? Obviously you could bring a thick report to the square and begin a long recitation, but the crowd would thin quickly. It is hard for people to listen to, absorb or understand more than a few pieces of data at a time. They must be common sense and compelling, not arcane and bureaucratic. Communication power means that the data must have clarity with diverse audiences.

Proxy Power: Does the indicator say something of central importance about the result? (Or is it peripheral?) Can this measure stand as a proxy for the plain English statement of well-being? What pieces of data really get at the heart of the matter?

Another simple truth about indicators is that they run in herds. If one indicator is going in the right direction, often others are as well. You do not need 20 indicators telling you the same thing. Pick the indicators which have the greatest proxy power, i.e. those which are most likely to match the direction of the other indicators in the herd.

Data Power: Do we have quality data on a timely basis? We need data which is reliable and consistent. And we need timely data so we can see progress – or the lack thereof – on a regular and frequent basis. Problems with data availability, quality or timeliness can be addressed as part of the data development agenda.

•  Identify primary and secondary indicators, and a data development agenda. When you have assessed the candidate indicators using these criteria, you will have sorted indicators into three categories (a small sorting sketch follows this list):

Primary indicators: those 3 or 4 most important measures which can be used as proxies in the public process for the result. You could use 20 or 40, but people’s eyes would glaze over. We need a handful of measures to tell us how we’re doing at the highest level.

Secondary indicators: All the other data that’s any good. We will use these measures in assessing the story behind the baselines, and in the “behind the scenes” planning work. We do not throw away good data. We need every bit of information we can get our hands on to do this work well.

A data development agenda: It is essential that we include investments in new and better data as an active part of our work. This means the creation of a data development agenda – a set of priorities for where we need to get better data.
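
Putting the three criteria and the three categories together, the sketch below (in Python) shows one way to record the ratings and carry out the sort. The high/medium/low scoring, the cutoff of 4 headline measures, and the example candidates are hypothetical choices made for illustration; the ratings themselves remain a judgment call for the group.

    # Minimal sketch: sort rated candidates into primary (headline) indicators,
    # secondary indicators, and a data development agenda.
    # Each candidate is (name, communication, proxy, data) with H/M/L ratings.
    SCORE = {"H": 3, "M": 2, "L": 1}

    def sort_candidates(candidates, max_primary=4):
        usable = [c for c in candidates if c[3] != "L"]      # decent data available today
        needs_data = [c for c in candidates if c[3] == "L"]  # no good data yet

        # Rank usable candidates by communication plus proxy power.
        usable.sort(key=lambda c: SCORE[c[1]] + SCORE[c[2]], reverse=True)
        primary = usable[:max_primary]      # headline measures for the public square
        secondary = usable[max_primary:]    # kept for the story behind the curve

        # Data development agenda in priority order: what we would "buy" first.
        needs_data.sort(key=lambda c: SCORE[c[1]] + SCORE[c[2]], reverse=True)
        return primary, secondary, needs_data

    # Hypothetical working list, invented for illustration only.
    candidates = [
        ("% of 3rd graders reading at grade level", "H", "H", "H"),
        ("Rate of substantiated child abuse per 1,000 children", "H", "M", "H"),
        ("% of kindergartners fully immunized", "M", "M", "M"),
        ("% of children well nourished at school entry", "H", "H", "L"),
    ]
    primary, secondary, agenda = sort_candidates(candidates)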

It is a judgement call about how much to spend on such an agenda. Spending for data or any other administrative function should be carefully balanced with spending which directly benefits children and their families. As a rule such spending should not exceed 5 to 10% of a budget.  And data investments are only part of that amount. This means that other partners will have to make contributions to this effort. And it means that not all data has to be of the highest research quality. At this stage of our learning about how to use data to make decisions, it is OK to use sampling and other techniques to get usable information that may not meet strict academic research standards.[6]

(5) Do not create compound indicators: When constructing indicators, it is best if you do not combine indicators and targets. For example: Teen pregnancies will decrease to no more than 5 per 1000 births. There are several reasons for this. First, you do not want to change your indicator every time you change your target. Second, you set yourself up for a very particular kind of damaging criticism. If you define the indicator in terms of a specific level of accomplishment or increase, then you will often be asked “are you there yet?” You will be backed into reporting your status as failure if you are not yet at your defined target. Third, you inadvertently communicate that a certain level of failure is OK. An indicator of “90% high school graduation rate” suggests that it’s OK if 10% do not graduate. Finally, it is important to set targets in relationship to a baseline for them to be believable. There are many ways to do this (LINK). But putting the target into the definition of the indicator itself makes this much more difficult. The baseline will change, the sense of what is possible will change, and targets should change accordingly.
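
As a small illustration of keeping the indicator and the target separate, the sketch below (in Python) defines the indicator on its own and treats the target as a revisable judgment made against the baseline. The indicator name, the baseline figures, and the target-setting rule are all hypothetical.

    # Hypothetical sketch: the indicator carries no target; the target is set
    # separately, against the baseline, and can change as the baseline changes.
    indicator = "Teen birth rate per 1,000 females age 15-17"

    baseline = [22.4, 21.8, 21.1]     # last three years of actual data (invented)
    projected_next_year = 20.5        # where the curve is headed if nothing changes

    # One possible rule: aim to beat the projection by a modest margin.
    target_next_year = projected_next_year * 0.95

    print(f"{indicator}: baseline {baseline}, target {target_next_year:.1f}")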
