Sometimes even longstanding programs can drift off track and need help finding their way back. Such was the case with the District of Columbia Superior Court’s mandatory Child Protection Mediation Program. Although the program had been evaluated twice before engaging RSI to conduct another evaluation in 2017, the court needed help determining exactly what should be evaluated. The previous evaluations (the most recent of which had occurred 12 years prior) looked at the program’s outcomes and impacts, which were found to be positive overall. The program’s processes, however, had never been examined by an objective outsider. Meanwhile, participating attorneys were complaining to the court about some aspects of the program. So while the court wanted to know whether the program was still achieving the results reported in earlier evaluations, RSI discerned that the evaluation also needed to delve into how well the program was working for everyone involved, and whether any changes should be made to improve program processes.
To provide the answers the court needed, RSI did what we always do for a large evaluation project:
- Meet with judges, court staff, and representatives of organizations involved in the program to determine what the evaluation questions should be
- Assess what is needed to answer the evaluation questions
- Work with court and program staff to decide on data sources and data collection processes
- Supervise data collection
- Analyze the data
- Report back to the court
Meeting with Interested Parties
The first step in preparing for the evaluation was for RSI Director of Research Jennifer Shack to talk with program staff and other interested parties to get a better understanding of what prompted the evaluation and what the evaluation needed to examine. It was clear from these discussions that the evaluation goals needed to change in order to address issues with the program process. What the court needed to know was how the program was perceived by those using it and what could be done to address the participants’ complaints.
The Evaluation Plan: Developing the Evaluation Questions
After getting more information about the program from the staff and interested parties, Jen refined the evaluation plan, which now included a comprehensive examination of the program’s processes along with an analysis of its outcomes and impact, as requested by the court. The evaluation was designed with a mixed-methods approach, combining qualitative and quantitative data so that multiple sources of information could be used to examine the same aspects of the program. These sources included:
- Focus groups of mediators and the professionals who participate in mediation
- Interviews of judges and parents
- Post-mediation surveys of all mediation participants and mediators
- Court case management data from all cases filed
- Program case management data from all mediations conducted
Using multiple sources of information allowed Jen to provide the court with a more complete understanding of program functioning, participant perspectives, and the issues uncovered in the planning phase and focus groups.
Before collecting data, Jen developed surveys and interview protocols, with input from court and program staff. She also worked with program staff to develop new data fields for the spreadsheet they used to track and manage cases. These new fields would provide information on some of the process issues identified by interested parties: timing problems, reporting issues, and the amount of time participants had to wait for mediation to begin.
The first step of data collection was to conduct focus groups with the attorneys and social workers who participate in the mediations and the mediators who conduct them, and to interview the judges who hear child protection cases. These discussions not only provided important information about how the process was working for each group, but also helped to further refine the variables to include in the evaluation. For example, the issue of redundancy was clarified: when the mediation occurred too soon after the family team meeting (which is held as soon as a petition for court intervention for a child is filed), it was seen as covering the same ground, leading to an unproductive discussion.
Prior to submitting the final report, Jen presented the preliminary findings to the Abuse and Neglect Subcommittee, which provided feedback, most prominently on the recommendations for improvement. This feedback was incorporated into the final report. RSI then provided the court with a comprehensive report on the program’s performance and processes, with actionable recommendations for the court and the program. We also provided the court with a short summary report for public distribution and one-page summaries of the evaluation findings.
As a result of the evaluation, the court came away knowing that its program was achieving most of its goals. It also gained a better understanding of how parents, attorneys, and social workers viewed the program. In particular, the court learned what benefits each group gained from the program and what issues it should address to make mediation both more effective and more efficient. Most attorneys and social workers made it clear that they wanted the program to continue because it helped them exchange information and held everyone accountable for their part in moving the case forward. But they were also frustrated with the inefficiency of the mediations. Parents felt heard and gained an understanding of what next steps they should take. The court also learned that the mediators wanted more opportunities to learn and needed more instruction.
RSI provided the court with a number of recommendations on how to improve the mediation experience for all involved. These included changes to the timing of referral, steps to reduce the amount of time people spent waiting for mediation to start, ways to make the mediation more efficient, a method for enhancing understanding and accountability, and more.