Beyond the Review: Where Reports End and Results Begin
How clear findings and priorities drive lasting improvement in educator preparation
The missing link between reviews and results
If you lead an educator preparation program or oversee program reviews, you know what it means to be surrounded by data - and how difficult it can be to turn that data into real improvement. Enrollment dashboards, observation results, surveys, candidate coursework, and assessments fill your folders and, hopefully, inform your conversations with stakeholders. But how much of that information truly reveals what matters most?
The challenge is rarely a lack of data. It is that much of what is collected does not illuminate the performance of candidates or the systems and people responsible for developing their instructional practice. And even when it does, the next steps often remain unclear.
Too many continuous improvement efforts end with long inventories and detailed reports but little tangible direction about what to do next. The difference between activity and improvement is not more evidence. It is the discipline to translate evidence into findings and findings into prioritized action. Doing that well takes more than good intentions. It requires expert review teams trained in a shared and rigorous methodology - people who know educator preparation from the inside, across all parts of programming, and can distinguish isolated examples from true patterns of practice at both the systems and teacher educator levels.
Evidence is the starting point, not the destination
Evidence should capture how preparation actually operates for candidates and teacher educators, not only what is planned or intended. The goal is not to collect more data but to collect the right evidence that highlights how programs, teacher educators, and candidates typically perform.
Doing this well requires both discipline and expertise. Strong reviews depend on teams that apply consistent, evidence-based methods to analyze performance, verify patterns, and separate signal from noise. Without that structure and skill, data collection becomes a compliance exercise - or worse, a skewed reflection of program quality - rather than a driver of improvement.
An EdPrep Partners Program Performance Review examines three major types of evidence:
- Artifacts. These include syllabi, course materials and assignments, handbooks, observation tools, mentor and teacher educator training, data sets, and candidate assignment samples, among many others. Artifacts include both inputs, such as what programs design or expect, and outputs, such as what candidates or teacher educators actually produce. They reveal how preparation is structured on paper and how that design translates into candidate or faculty performance.
- Observations. These capture how stakeholders enact preparation, including live instruction by both candidates and faculty members and observation-and-feedback cycles with clinical supervisors. Observations include inputs, such as the systems and routines that make these experiences possible, and outputs, such as the observed actions, language, and feedback that both demonstrate and shift practice.
- Interviews. These surface what people understand about roles, systems, and the impact of actions, and how they describe their own practices and contributions. Faculty, clinical supervisors, candidates, mentors, and partners alike provide context that connects what is designed to what is enacted. Interviews include inputs, such as intentions, structures, and beliefs, and outputs, such as reflections on outcomes and evidence of follow-through or perceived needs to improve the system.
Collecting these three types of evidence is only the first step. High-quality Program Performance Reviews evaluate each source through two essential lenses.
- Evidence should be representative of what typically happens for most candidates and teacher educators rather than an isolated example.
- Evidence should be reinforced by multiple sources rather than driven by a single piece that tells a different story.
Together, these perspectives separate isolated examples from reliable patterns. They help reviewers determine whether the evidence reflects a single strong case or a consistent way of working across the program. When convergence confirms a pattern, that evidence becomes the foundation for a credible finding. When evidence conflicts, the gaps between sources often reveal where systems are misaligned or breaking down.
Evidence observed through these two lenses paints a clear picture of how preparation functions in practice. It shows not only what exists but how consistently and effectively it operates for the people most central to preparation - teacher educators and candidates.
Findings make meaning leaders can use
A finding is not a restatement of evidence. It is a clear explanation of a system pattern grounded in multiple sources. Interpretation is a professional discipline in educator preparation. It is the work of translating evidence into meaning that leaders, faculty, and partners can use to make decisions and take action.
Strong findings do four things:
- Identify the pattern that holds true across multiple sources
- Locate the root cause within the system
- Explain why it matters for candidate readiness and P-12 learning
- Point to the leverage that would change it at scale
This is where many reviews fall short. They inventory but do not interpret. They describe but do not prioritize. High-quality findings require expert teams with evaluative skill and deep content knowledge in teacher preparation - people who have served and led in educator preparation and who can synthesize rather than summarize, distinguishing isolated signals from recurring systemic patterns.
At EdPrep Partners, our approach to Program Performance Reviews centers on how structures operate in practice. We look at how faculty label and model methods, pedagogy, and content-pedagogy, how candidates rehearse and receive feedback, how faculty, clinical supervisors, and mentors alike coach to shared criteria, and how these actions connect back to what candidates learn in coursework and to what P-12 districts expect in classrooms.
When these pieces align, candidates progress from analysis to rehearsal to enactment with increasing skill. When they do not, even the strongest intentions to develop candidates fail to take hold.
Findings emerge only when the weight of evidence points in the same direction and any conflicting data have been resolved. Findings connect structures to experience and experience to outcomes. They give leaders a set of truths they can act on - clear, credible insights that anchor improvement in evidence and strengthen how programs develop their candidates.
Prioritized findings are most likely to lead to action
Not every finding carries the same weight, and trying to change everything at once dilutes impact. Prioritization is not about doing less. It is about focusing energy and attention first on the few actions that matter most.
To determine what comes first, we use three guiding considerations:
- Impact on candidate experience and performance
- Feasibility given time, people, and resources
- Dependencies that must be in place so early progress does not collapse
The outcome is a concise set of recommendations rooted in EdPrep Partners’ ‘14 Levers for Quality Teacher Preparation’, with clear owners, milestones, and supports. A short-cycle plan builds early momentum, and a year-one roadmap sequences the larger shifts that follow. At EdPrep Partners we “roll up our sleeves,” modeling and completing these actions alongside programs, while regular progress checks confirm whether core actions are happening with quality and consistency. Quarterly reviews test whether those changes are reflected in candidate performance and partner outcomes.
How lasting improvement takes hold
The strength of any review is measured not only by the quality of its findings but by what happens next. Many reviews stop at recommendations, leaving programs without the clarity, systems, or support to act. At EdPrep Partners, our approach is different. We connect evidence, findings, and action through four complementary approaches that we take alongside programs to sustain improvement over time.
Focus on locus of control. Reviews center on decisions that leaders, faculty, clinical supervisors, and partners can make — and on the stakeholders they can most directly influence and impact now. This keeps the work grounded, practical, and moving forward.
Begin by investing in teacher educator practice. Quality lives in the people who label, model, develop, coach, and give feedback. When faculty and clinical supervisors use shared definitions, clear look-fors aligned to a candidate’s developmental trajectory, and consistent practices for developing candidates, candidates improve faster and quality strengthens across the program.
Keep score with simple routines. Leaders do not need a new data warehouse or elaborate dashboards, though those can help. What matters most is a short set of leading indicators that confirm core practices are happening — and a few strong routines that create space to check in, validate impact on candidate development and partner outcomes, and plan whether to expand, adjust, or retire a practice.
Provide hands-on technical assistance to design, implement, and sustain change. Improvement is not sustained through reports or one-time recommendations; it requires partnership. Our technical assistance model pairs expert guidance with on-the-ground support — modeling teacher educator practices, facilitating capacity-building, and embedding the routines that ensure programs can sustain, scale, and continuously strengthen their systems. Through strategic planning, coaching, and progress monitoring, we help teams design, do, and sustain the work long after the review concludes.
Together, these approaches create system-level improvement. Coursework and clinical experiences align. Teacher educator practice strengthens. Partner expectations are met. Candidates enter classrooms ready to teach.
From findings to action
EdPrep Partners’ Program Performance Reviews produce a focused set of findings that lead to a clear set of prioritized actions - the few changes that make the greatest difference. We then work alongside leaders, faculty, and clinical supervisors to implement those actions with precision and care, building the routines that sustain and scale quality teacher preparation. Because every child deserves an excellent educator, and every candidate deserves excellent preparation. Let’s deliver both.
This approach is adaptable across pathways and designed to strengthen the systems programs already have while aligning to what partners need most.
The field does not need more reports. It needs Program Performance Reviews that lead to tangible improvements in what candidates experience and how teacher educators prepare them. When evidence becomes findings, and findings become focused action, improvement stops being a plan and becomes practice - the way preparation was always meant to operate: by teaching well.
Let’s make teacher preparation better together.
Stay Connected
If you're interested in learning more, exploring collaboration or technical assistance, or just want to catch up, we’d love to connect:
About EdPrep Partners
Elevating Teacher Preparation. Accelerating Change.
EdPrep Partners is a national technical assistance center and nonprofit that delivers a coordinated, high-impact, hands-on technical assistance model connecting diagnostics with the support needed to make change. Our approach moves beyond surface-level recommendations, embedding research-backed, scalable, and sustainable practices that most dramatically improve the quality of educator preparation—while equipping educator preparation programs, districts, state agencies, and funders with the tools and insights needed to drive systemic, lasting change.



