Introduction
Evaluation is an essential element of any program. It is the way you show others – funders, your Supreme Court, other stakeholders and the public – that your program is a good investment of time and money, and that you are serving the public well. Evaluation provides an in-depth analysis of the strengths and weaknesses of a program, its capacity to achieve the goals assigned to it, and its impact on those who use it at a particular point in time. It should result in a set of actionable recommendations that you can undertake to strengthen and/or promote your program. It will also help you optimize your program – that is, build on what works and improve what doesn't – making your program as effective and efficient as it can be. For these reasons, you should consider an evaluation as part of the development or reassessment of your program.
An effective evaluation will be a collaboration between court administrators, judges, ADR program staff and the evaluator. Court administrators understand court processes and what information is available for the evaluation, or is feasible to collect. Judges have their own perspectives on the goals of the program. ADR program staff understand the program and the issues they face. The evaluator knows what needs to be done in order to get the information required to achieve the evaluation goals. Thus, a good evaluator will work closely with you to be sure the correct evaluation questions are being asked, the appropriate data is being collected, and the recommendations for improvement are feasible and make sense within the context of the court and program.
Note that while you may want an evaluation to prove how well your program is doing, a reputable evaluator cannot promise any particular results. If you are concerned about the results, you may want to reach an agreement with your evaluator before the evaluation that you will have the option of not making the evaluation report public.
When to Conduct an Evaluation
Evaluation is valuable no matter when it’s undertaken, and it can be done at any time in your program’s life. Generally, though, evaluation is done:
- Soon after a program is first launched, as a way to see if it is working as planned
- When a program has substantially changed, to see if changes have had the desired effect
- When issues arise, to examine the issues and possible ways to address them
- When it’s been a while since the last evaluation, to make sure the program is still working the way it was designed
- When a funder requires it
When you do the evaluation and why you are doing it will determine the type of evaluation you will do, as well as the overall design of the evaluation.
First Steps
Determine Who Should Be Involved in the Evaluation Project
Many people have knowledge that would be helpful in planning your evaluation, as well as specific interests in the services that your program delivers. Their knowledge and vested interest in the program make them valuable partners in planning your evaluation and will result in a better evaluation that provides useful information about your program.
You may want to form an evaluation committee made up of representatives from the groups below to make these decisions, particularly if you are considering a large, comprehensive evaluation. For smaller projects, you may want to obtain input less formally, through conversations, emails, or at stakeholder meetings.
Judges
You will want to include one or more judges in the planning. Each judge will have a different perspective based on their relationship to the program. For example, a judge who refers cases will view the program differently than one who has administrative responsibilities for it. You will want to get input from judges with these different perspectives, and to have one or two of them on the evaluation committee, if you have one.
Staff
If the program is already in place, administrative staff have expertise about the program that can contribute to the evaluation design. If the program is in the design stage, input from court staff will help shape an evaluation that fits smoothly with existing court systems and with the new court ADR system. Another benefit of staff input is that the process of deciding what to evaluate may get them thinking about the program in a different way. For this reason, a program staff member should be on the evaluation committee.
Getting staff input on evaluation will also help to get their cooperation with the evaluation. It’s common for staff members to feel threatened by evaluation because they feel they’re being judged. When they’re brought into the planning process, they take ownership of the evaluation and are more likely to embrace it.
Program Neutrals
The individuals who provide the ADR services will have a unique on-the-ground perspective on the program to contribute to planning the evaluation. They may have noticed strengths and challenges that should be explored in the evaluation. You may want to include an experienced and active neutral on your committee.
Funders/Decision-Makers
If possible, get funder or decision-maker input on what is important to them, particularly if they have authority over future funding or whether to continue the program. If they think the findings and recommendations from the evaluation are useful and important, they will take the evaluation more seriously when making decisions about the program.
Users of the Program, Especially Lawyers, If They Participate
Program users also have their own perspective on what the program should do. Findings and recommendations from the evaluation should take into account what is important to them because they are the ones who will be most affected by any changes to the program. It may be useful to have one or two lawyers on your evaluation committee.
Those Who Will Be Cooperating With/Assisting The Evaluation
If there are people outside the program who will be involved to any great degree in the evaluation, such as court clerks, you should consult with them about the evaluation plan because 1) they know what the challenges are to getting the data and 2) they need to buy into the process if they are going to be asked to add to their workload.
Inform Staff and Stakeholders
When you decide to go forward with an evaluation, it will be helpful to give staff and stakeholders who weren’t a part of the planning process a heads up about the evaluation and ask for their cooperation. This will help the evaluator later when she begins asking questions and obtaining information about the program as part of the evaluation planning process. Be sure to answer any questions and remind people, particularly staff, that the idea is to improve the program, not to find fault with it or them.
Pitfalls to Avoid When Making Decisions About Your Program
Don’t dismiss the importance of evaluation to the quality of your program.
Evaluation provides you with the information you need to make good decisions about the direction of your program, as well as to make program sustainability more likely.
Don’t leave staff and stakeholders out of the loop. Staff and stakeholders are assets.
Use their knowledge and experience to ensure the evaluation is asking the right questions and making the right recommendations. Inform them of the evaluation to be sure they are on board and will cooperate with the evaluator.
Selecting the Evaluator
Choosing your evaluator is one of the first things you will do after you have decided that an evaluation is needed. The person you identify to evaluate your program should have experience with program evaluation. You may have someone on staff with that expertise, or you may have to hire an outside evaluator. An outside evaluator brings objectivity and an outside perspective to your evaluation, and keeps staff free to continue working on the program. A staff evaluator brings specific program knowledge and reduced cost.
Hiring an Outside Evaluator
Resolution Systems Institute conducts evaluations for court ADR programs. See "Program Evaluation" for more information on this service.
Selecting the right outside evaluator is important to the quality, usefulness and credibility of the evaluation. In choosing who will evaluate your program, you should consider the evaluator's past work, particularly whether she has had prior experience evaluating ADR programs and working with courts. You should also think about how closely your staff will work with her. If you want the evaluator to work largely independently, it is more important that she be knowledgeable about court systems and ADR. If, as we recommend, you plan on staff being highly involved in the evaluation design and other aspects of the evaluation, the evaluator can learn the substantive issues involved from you and your staff. If you hire someone who isn't knowledgeable about ADR, be prepared to spend a lot of time explaining how ADR works, the theory behind it and the specific issues involved.
Once you have chosen an evaluator, she should be up-front about what is possible given the constraints on the evaluation (see What Constraints Do You Face in Evaluating the Program?, below), as well as any limitations the evaluation may have because of those constraints. The evaluator should also maintain communication with the court and discuss any problems with the evaluation as they arise.
Court/Program Staff
You should not rule out self-conducted evaluation for fear that it will lack credibility. If the evaluation is done with proper methods, documentation of results, and sound analysis, the evaluation can be as credible as one conducted by an outside evaluator. The strengths that self-conducted evaluations bring are an in-depth knowledge of the program, the court and the constraints that exist. If staff is going to conduct the evaluation, they should have a good foundation in evaluation design, methods, and analysis, and a solid understanding of basic statistics. If the evaluation is comparative or involves complex groupings, sophisticated statistical analysis will be necessary in order for the findings to be valid. This would require a more advanced knowledge of statistics.
Both options require funds and staff time. Grant funding may offset the cost of an evaluator. For example, state courts may be able to find assistance from the State Justice Institute. To reduce the burden on staff, the evaluator should work with you to develop a data collection protocol that limits the amount of time staff spend on the evaluation project.
The University of Michigan's My Environmental Education Evaluation Resource Assistant (MEERA) has resources that are useful for any program on deciding on whether to use an internal or external evaluator, as well as hiring and working with an external evaluator.
The Evaluation Plan
The evaluation plan contains the why, what and how of the evaluation itself. It hinges on the goal (the why), which determines much of what will be evaluated (the what). The two together – the goal and what will be evaluated – determine how the evaluation will be carried out (the how). The evaluation plan is written by your evaluator based on your decisions about what you want to accomplish and her knowledge of how to go about it.
Goal of the Evaluation and Evaluation Questions
The first step will be to decide what you want out of the evaluation, which in turn will determine what the evaluation questions are. There are a few possibilities, such as the goals and questions listed below.
Goal: Find out if your program is doing what you want it to do.
This would lead to evaluation questions such as:
- Is the program resolving cases?
- Is the program providing parties an opportunity to voice their perspective on the dispute and to be heard?
- Is the program increasing compliance with the case outcome?
Goal: Determine if the program is functioning as it should.
Possible evaluation questions include:
- Are deadlines being met?
- Are attorneys providing the necessary documents prior to ADR?
- Are neutrals following protocol?
Goal: Learn about how the program affects participants.
This could involve the following evaluation questions:
- Are participants requesting an opportunity to participate in the program?
- How often and how long are participants waiting for mediation to start?
- How do participants rate their experience?
Type of Evaluation to Be Done
Your evaluation questions will determine what type of evaluation you want to do. In general, there are three types of evaluation: one that examines your program’s processes, one that examines its outcomes and one that looks at the impacts of the program on the case or court. Of course, an evaluation can do any combination of these or all three. Your evaluator will decide what type of evaluation is required in order to achieve the goals you have for the evaluation. This may be done within her proposal, or if the evaluator was brought on in a less formal manner, after initial meetings.
PROCESS EVALUATION
A process evaluation examines the processes you put in place to move a case through mediation. Its objective is to identify and address issues, if any. It is used when you’re interested in whether the program is functioning as it should and how it affects participants. Below are some examples of questions asked in a process evaluation.
1. Is the process being followed?
This could cover the following questions: If mediators are supposed to prepare the parties before the mediation, are they? If a particular protocol is to be followed for cases involving intimate partner violence, is it being followed? If mediators are supposed to be filing reports, are they?
2. Is the process working?
For example, if you have a 60-day deadline for mediation to take place, is that deadline easily met? If not, what is the reason? Does the deadline need to be revised? If your rule has a protocol in place for dealing with cases involving intimate partner violence, is that protocol leading to appropriate measures to keep the abused parent safe?
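To make the deadline question concrete, here is a minimal sketch (in Python, using the pandas library) of how an evaluator might check compliance with a 60-day mediation deadline. The file name and column names are hypothetical placeholders for whatever your case management system actually exports.

```python
# Minimal sketch: share of cases meeting a 60-day mediation deadline.
# "adr_cases.csv", "referral_date" and "mediation_date" are hypothetical
# and would need to match your case management system's export.
import pandas as pd

cases = pd.read_csv("adr_cases.csv", parse_dates=["referral_date", "mediation_date"])
days_to_mediation = (cases["mediation_date"] - cases["referral_date"]).dt.days

met_deadline = (days_to_mediation <= 60).mean()
print(f"{met_deadline:.0%} of cases reached mediation within 60 days")
print(f"Median days from referral to mediation: {days_to_mediation.median():.0f}")
```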
Because the intended result of an evaluation of program processes is to make improvements, it is especially effective when done early in a program’s life. However, it can be done at any time. Targeted assessment, in which only a specific part of the process is examined, can be done as well. This might be done after a change is made to the program process or in response to feedback from program participants, mediators or staff.
OUTCOME EVALUATION
An outcome evaluation is done in order to know whether your goals for the program are being met. It looks at the value your program provides to parties, the court, attorneys – anyone and any organization your program is supposed to help. Such outcomes could include settlement rate, impact on parties’ relationship, party perception of procedural justice, the number of post-mediation hearings, and so forth.
IMPACT EVALUATION
An evaluation of a program’s impacts looks not just at what the outcomes are, but at whether the program has further effects. Generally speaking, it looks at causation: does your program cause a change? For example, does the program lead to a shorter time to case closure? Does it lower costs? Does it lead parties to have a more positive opinion of the court? For this type of evaluation, cases going through ADR are generally compared to those that go through the traditional process. Comparative designs are necessarily more complicated and costly than designs without a comparative component because more cases are needed for the comparison and because getting the correct comparison groups can be challenging. You may therefore want to limit your evaluation to the process and outcome types.
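As an illustration only, the sketch below shows one comparative analysis an impact evaluation might include: testing whether time to disposition differs between mediated cases and a comparison group that followed the traditional process. The file and column names are hypothetical assumptions, and a real design would also have to justify how the comparison group was constructed.

```python
# Minimal sketch of a comparative analysis for an impact evaluation.
# "cases_with_comparison_group.csv", "group" and "days_to_disposition"
# are hypothetical placeholders.
import pandas as pd
from scipy import stats

cases = pd.read_csv("cases_with_comparison_group.csv")
adr = cases.loc[cases["group"] == "adr", "days_to_disposition"]
traditional = cases.loc[cases["group"] == "traditional", "days_to_disposition"]

# Mann-Whitney U test: one common choice when durations are skewed
stat, p_value = stats.mannwhitneyu(adr, traditional, alternative="two-sided")
print(f"Median days, ADR: {adr.median():.0f}  traditional: {traditional.median():.0f}")
print(f"Mann-Whitney U p-value: {p_value:.3f}")
```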
What Constraints Do You Face in Evaluating the Program?
The constraints you face in doing the evaluation will determine what is possible to do, and most likely will help you focus on the most important issues. The constraints will affect the questions you ask, the sample size you use and other aspects of design. Although you will consider constraints when determining what the evaluation plan will be, you will want to discuss them with the evaluator to make sure that what you want can be done within those limitations.
MONEY AND TIME
The two greatest constraints on evaluation design are money and time. Most courts have limited resources to spend on evaluating their programs. This limits the complexity and extent of any evaluation. Time is not just the amount of time that your staff may be required to spend on the evaluation, but timeliness as well. It is much better to do a limited evaluation and communicate findings and recommendations when they are still timely enough to be useful than to have a large, complex evaluation that is published too late to be of any use. However, if your evaluation needs to be more long-term, the evaluator can produce interim reports that provide either information about specific parts of the program or information about a limited time period.
Apply This Principle to Deciding What to Evaluate
Money and time may come into play when deciding what to evaluate. For example, if you want to know whether your program has any post-disposition effects, such as reduction in enforcement actions or lower recidivism, this will require that the evaluation either use older cases or be open for a longer period of time. The first may be limited by the data available at that time: the evaluator may need more information than is available in order to conduct the analyses required. The second may be limited by time and money.
STAFF
No matter who does the evaluation, some staff time will be needed to conduct it, whether for gathering data, entering data, being on hand to explain processes, etc. If your program is, like many, stretched in terms of staff time, how and what your staff will be doing in support of the evaluation is an important question to answer before deciding how to proceed with the evaluation.
DATA
Some evaluation questions might require data that is either difficult or impossible to gather. This is particularly true if the question requires baseline data for comparative purposes. Baseline data is case data from before the start of the mediation program, such as how long it was taking cases to get to resolution, how many motions were being filed per case, the outcomes of cases, and so on. You will need to take these constraints into consideration when developing your evaluation plan. Program and court staff will be particularly useful in identifying these constraints.
POLITICAL
There are times in which certain evaluation questions, or the findings from them, may bring on a backlash from individuals with conflicting viewpoints. Understanding what these could be ahead of time will help the evaluator – especially an outside evaluator who may not be aware of particular dynamics – to approach such questions with sensitivity.
All of these constraints have an effect on the evaluation goals, design and process. Decision-making regarding the evaluation should take existing constraints into consideration. For example, the overarching evaluation questions will require certain data. Each of the above constraints may make it unfeasible to collect the data to answer those questions, which will then lead to changing or eliminating the evaluation questions. In turn, that will lead back to the question of how to accomplish the goals of the evaluation.
Evaluation Methods
The evaluator will determine the best evaluation methods to use in order to achieve the goals of the evaluation. Methods include:
- Participant surveys
- Interviews of parties, neutrals, judges and court staff
- Observations of ADR sessions
- Focus groups with program users and/or stakeholders
- Collection of case data from the case management system
The constraints identified above will impact what methods can be used. Some methods are much more time consuming (and therefore expensive) than others. For example, interviews and observations take a lot of time and money at all stages of the evaluation: data collection, data entry and analysis. But those approaches also provide rich information about your program that you cannot get from less intensive methods.
LENGTH OF EVALUATION PERIOD
The time during which data is collected should be as short as possible in order to place the smallest burden possible on the program and court and to provide timely results. However, it needs to be long enough to get the data necessary for analysis. If your program has few cases going to mediation, the evaluation period will necessarily be longer than if your program has a higher volume of cases. Comparative designs or those examining differences between particular cases will require more data and thus more time than simpler designs.
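As a rough, back-of-the-envelope illustration, the sketch below estimates how many months of data collection would be needed to reach a target number of completed surveys. All of the numbers are assumptions to be replaced with your program's actual volume and the sample size your evaluator specifies.

```python
# Back-of-the-envelope sketch: how long a data collection period is needed
# to reach a target number of completed surveys. Every number below is an
# illustrative assumption, not a recommendation.
import math

cases_per_month = 40        # mediations your program handles in a typical month
survey_response_rate = 0.5  # share of participants expected to return a survey
target_responses = 200      # sample size the evaluator says the analysis needs

months_needed = math.ceil(target_responses / (cases_per_month * survey_response_rate))
print(f"Plan on roughly {months_needed} months of data collection")
```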
WHICH CASES TO INCLUDE
The evaluator will work with you to decide whether all cases sent to ADR will be included or whether a random sample will be more feasible. She will also determine if some cases are not representative and therefore should not be included.
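If a random sample is the chosen approach, drawing it can be as simple as the sketch below. The file name and the 25% sampling fraction are illustrative assumptions only; your evaluator will set the actual sampling plan.

```python
# Minimal sketch: drawing a simple random sample of cases for the evaluation.
# "all_adr_cases.csv" and the 25% sampling fraction are illustrative.
import pandas as pd

cases = pd.read_csv("all_adr_cases.csv")
sample = cases.sample(frac=0.25, random_state=42)  # fixed seed so the draw is reproducible
sample.to_csv("evaluation_sample.csv", index=False)
print(f"Sampled {len(sample)} of {len(cases)} cases")
```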
Pitfalls to Avoid After Deciding to Do An Evaluation
Don’t conduct the evaluation without the expertise to do so.
Only a well-constructed and properly analyzed evaluation can provide you with reliable information to help you address issues and demonstrate your program's effectiveness.
Don’t take the evaluation and put it on a shelf. Use it.
The evaluation process does not end with the completion of the evaluation report. The next part of the process is to make any recommended changes and/or to take steps to ensure the sustainability of your program.
The Evaluation Process
Before starting the evaluation, you should remind staff and stakeholders of the evaluation and ask them to work with the evaluator. The evaluator will explain to staff what she needs from them. Most likely, she will ask the chief or presiding judge to request that stakeholders and judges take part in surveys or interviews.
Pilot Period
Your evaluation should begin with a pilot period to be sure the evaluation will work smoothly. During this time, the evaluator will test surveys to make sure that they are reliable and valid. She will also work with you to do test runs of the data collection to clear up any issues with either the data or the process to collect it.
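One common way evaluators check reliability during a pilot is to compute Cronbach's alpha for a multi-item survey scale. The sketch below is illustrative only; the file name and the three "feeling heard" items are hypothetical, and your evaluator may well prefer other reliability and validity checks.

```python
# Minimal sketch: Cronbach's alpha for a pilot batch of survey responses,
# one way to check that a multi-item scale (e.g., several 1-5 questions
# measuring whether parties felt heard) is internally consistent.
# "pilot_survey.csv" and the item column names are hypothetical.
import pandas as pd

items = pd.read_csv("pilot_survey.csv")[["heard_q1", "heard_q2", "heard_q3"]]

k = items.shape[1]
item_variances = items.var(axis=0, ddof=1)
total_variance = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

print(f"Cronbach's alpha: {alpha:.2f} (values above roughly 0.7 are commonly considered acceptable)")
```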
Study Period
The study period is the period of time in which data is collected. During this time, data will be collected from the court’s and program’s case management systems. At the same time, the evaluator will oversee the distribution of surveys, conduct interviews, observe mediations or court processes and conduct any other activities that are designed to yield information about the program.
During the study period, the evaluator should consistently monitor the data collection and entry processes. This keeps everyone on track, makes sure that the data is being collected on a consistent basis, and ensures that any problems in data collection or entry are discovered early on and fixed. The evaluator should supervise the entire process.
Communication is also important. During the course of the study, the evaluator should maintain ongoing communication with the program and court about how things are going with the evaluation. If the evaluation is being done by someone outside the program, this communication may take a little more effort; but, it is essential.
Analysis
After the data collection time has ended, the evaluator will need time to make the data ready for analysis and then to analyze the data. This step is as important to the quality of the evaluation as the data collection is. Data analysis may include simple frequency and descriptive statistics, such as how many cases went through the program and what types of cases these were. Or it may include more advanced analyses that attempt to identify the cause of differences in outcomes (e.g., time to disposition, settlement, satisfaction) among cases. If qualitative methods are used, such as interviews and focus groups, the evaluator will need to code responses in order to identify trends and relationships among those responses.
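For the simpler, descriptive end of that analysis, a minimal sketch might look like the following. The column names and categories are hypothetical placeholders for your own case data.

```python
# Minimal sketch of descriptive analysis: case counts, settlement rate,
# and median time to disposition by case type. File and column names
# ("evaluation_data.csv", "case_id", "case_type", "outcome", etc.) are
# hypothetical placeholders.
import pandas as pd

cases = pd.read_csv("evaluation_data.csv", parse_dates=["filing_date", "disposition_date"])
cases["days_to_disposition"] = (cases["disposition_date"] - cases["filing_date"]).dt.days
cases["settled"] = cases["outcome"].eq("settled")

summary = cases.groupby("case_type").agg(
    cases=("case_id", "count"),
    settlement_rate=("settled", "mean"),
    median_days=("days_to_disposition", "median"),
)
print(summary)
```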
These analyses will be based on the evaluator’s understanding of the program and the context in which it functions. They will also require a solid understanding of statistical analysis. Both are essential to making sound assumptions about causation and interpreting the data correctly. This may require the evaluator to consult with court personnel and program staff.
Reporting
The evaluator’s report should be provided to court and program staff in draft form for feedback before it is finalized. Two things to keep in mind are that the evaluation should be credible and it should be accessible.
THE EVALUATION NEEDS TO BE CREDIBLE
Objectivity and transparency are essential for the evaluation findings to be accepted. To ensure objectivity, the evaluator will need to present both the strengths and weaknesses of the program. Further, she will need to present positive results in a manner that does not come across as a sales or advocacy piece for the program, and she will need to avoid minimizing the results that are less positive. Because the evaluation will include recommendations for the future, program weaknesses can be presented as opportunities for improvement, and strengths can be highlighted.
To ensure transparency, the report should include a thorough description of the methodology and the analyses that led to conclusions. If the evaluation was comparative, it should also include statistics that show whether any differences were significant.
THE REPORT SHOULD BE ACCESSIBLE AND ACTIONABLE
The evaluation does not mean anything unless it is read and any recommendations are acted on. This requires wide dissemination of findings and recommendations in many forms. The primary product should be a substantial written report that includes methodology, data tables and statistics. These provide the basis for the findings and recommendations, give credibility to the evaluation, and help other researchers understand the significance of the results.
However, the evaluator should provide more than just a long written report. Busy professionals do not tend to have the time to wade through pages and pages of data. Different formats for communicating findings and recommendations will help ensure that your evaluation will be useful. Some ideas for increasing use of the evaluation are:
- An executive summary: A short version of the report is much more likely to be read and, if you are printing the report, it can be disseminated to a wide audience at less expense than a full-length report.
- A one- or two-page overview: This would include the evaluation goals, questions, method, findings and recommendations, using charts and infographics to succinctly convey the information.
- An oral report to a group of diverse stakeholders: If information is often exchanged at monthly meetings, have the evaluator present a report at that time. Discussion can lead to greater understanding and use of the findings and recommendations.
- Visuals: The more the evaluator uses quality, pertinent graphics, the easier it will be for people to absorb information from the evaluation.
- Specific recommendations about how to change the program: Specific recommendations are more likely to be acted upon than vague ones. These should be included in any summarized version of the report and in any oral presentation.
To help other courts, make your report meaningful to outside audiences. Your evaluation may be prompted by the court, legislature or another entity that requested it, but others will look to it for insight, too. Other courts may use the evaluation as a way to judge whether to create a program or to justify a similar program in their own jurisdiction. Other evaluators may use it to inform them on how a program they are evaluating compares to other programs. Neutral practitioners may use it to see if their manner of practice is effective. Because others will be using the evaluation, provide detailed information on how the program is set up to function, and how it really functions. Provide detailed information about how the evaluation was conducted and any questions that could not be answered.
Use the Evaluation
An evaluation is basically a waste of time and money if it’s not used to make improvements, educate stakeholders, encourage funding or help other programs. Once the evaluation is completed, figure out how to use the findings and recommendations.
- Discuss the findings and recommendations with court personnel and program staff. Work with them to determine the best methods for implementing the recommended program improvements.
- If the evaluation points to needed policy changes, work with policy-makers to make those changes.
- Share the evaluation report with funders and other decision-makers who have authority to support your program. Point out the strengths the evaluation identifies and explain how you are improving your program based on the evaluation.
- Reach out to others working in court ADR to let them know about the evaluation. Consider presenting with your evaluator at professional conferences to spread the word about what you learned. You might be able to promote your findings via blog posts, social media, articles in local newspapers and reports on local television news.
Conclusion
An evaluation that is done well will give you invaluable information that can help to improve the quality of your program, gain further funding of the program, assure stakeholders of the program's worth, and, most importantly, enhance the experience of the program participants. Well-conducted evaluations can also add to the overall knowledge about the impact of court ADR on the courts, litigants, and even the community.