Anne Hyslop, an independent education consultant and former senior policy adviser with the U.S. Department of Education, answered questions by email about state plans for complying with the federal Every Student Succeeds Act. Hyslop was part of a team that reviewed state plans for Bellwether Education Partners and the Collaborative on Student Success.
SCHOOL MATTERS: Based on the review, how does Indiana’s ESSA plan match up with other states? What do you see there that looks good or bad?
ANNE HYSLOP: Indiana’s plan was a strong one in many respects, particularly in its approach to improving low-performing schools, its criteria for when those schools can exit improvement status, and its decision to value students’ academic growth as well as academic proficiency. And unlike some of the other plans reviewed by the peers in the second round, Indiana didn’t have any significant red flags.
There are some discrete issues, however, that could be addressed to strengthen the plan. For example, Indiana does not specifically incorporate subgroup data when it calculates school grades. As a result, the peers were concerned that schools that did well overall and earned As or Bs in the school rating system could be masking very low-performing individual groups of students, like English learners, low-income students, or students with disabilities. Similarly, there are some concerns with specific indicators that Indiana would like to use to hold schools accountable, such as its measure of student attendance and its graduation rate calculation.
SM: The reviewers seem to give Indiana high marks for a) its plan to provide support for low-performing schools and b) its plan for how schools will exit improvement status. How does that compare with what you’ve seen from other states?
AH: In general, states’ proposed strategies to support school improvement were often vague — consisting mostly of promised technical assistance, guidance, or resources. States made a lot of promises, but offered few details for how they’d deliver on them. And often, states refused to indicate any strong actions they’d take if schools failed to improve over time.
Indiana stood apart because its plan was much more detailed, clearly explaining what both school districts and the state would be expected to do and how interventions would be tailored and increase in intensity over time. Indiana plans to provide templates and other tools to support districts and schools in examining their data to understand why a school was identified, and in conducting a needs assessment to inform the improvement plan. There was also a concrete plan for how the state will distribute required federal funds in smart ways to support improvement efforts and prioritize strong evidence-based interventions, something many states didn’t describe.
Many states we reviewed proposed that schools should be able to exit improvement status simply by showing progress in a single year, which didn’t take into account whether those gains were sustainable. Others proposed permitting schools to exit improvement by improving their standing relative to other schools — even if their ranking improved only because other schools’ performance declined. Indiana did neither, and proposed a set of criteria that would ensure schools were on a sustained, positive trajectory by requiring schools (or an individual subgroup, if that was why the school was identified) to earn a “C” grade for two years. It hit all the right notes, and is a model for other states.
SM: A weakness for Indiana is how many components of its plan are up in the air, thanks to the legislature and State Board of Education – new assessments, changing high-school graduation requirements, a shift to SAT/ACT as a high-school accountability exam, etc. Is that just part of the landscape in K-12 at this time, or is Indiana making it hard on itself?
AH: Indiana isn’t the only state in the throes of changes to its assessments or other policies. Several states, such as Rhode Island and New Hampshire, will be implementing new tests this spring, and others will be changing in the future. This was echoed in findings released this week in a survey from the Center on Education Policy. Changing assessments definitely raised the degree of difficulty in the ESSA plan process. This holds for changes to other components of the accountability system as well. For example, other states hadn’t finalized every aspect of their school grades when they submitted their plans to the U.S. Department of Education, but Indiana could be clearer about its process to submit a revised plan once the state determines the final weighting of each component.
All of these changes make it more challenging for a state to finalize its long-term goals for achievement and school grades or other accountability system components. For Indiana, it will be critical to update the plan once these pieces fall into place — reviewing new data from new assessments to make sure its goals and expectations for schools remain rigorous, monitoring how any changes to graduation requirements necessitate different approaches to the state’s accountability indicators, and so on.
SM: The report also suggests it’s a weakness that Indiana doesn’t include subgroup performance in its A-F accountability. Is anyone getting this right? Is there any state to look to as a model?
AH: This was an area where nearly every state struggled, and one where I think the most work is needed in many state plans. It’s a huge issue. While no state is a perfect exemplar, the strongest plans included specific policies that made sure subgroup performance was reflected in the overall school grades rating system. For example, subgroup performance on some or all of the indicators was given a specific percentage weight in states like Minnesota, Ohio, and Tennessee. Others put in place a policy to ensure schools with the highest overall ratings didn’t have a consistently underperforming subgroup of students, as Rhode Island proposed.
SM: From your posts on Twitter, it sounded like you were, to put it mildly, disappointed that states didn’t reach further to draft creative plans under the ESSA framework. How were they falling short?
AH: One of the most disappointing things I saw across the board was that states submitted bare-bones plans. In many cases, they submitted the minimum amount of information that the U.S. Department of Education requested. For example, very few states talked about how they planned to spend funds that must be set aside specifically to help the lowest-performing schools. Nationally, this amounts to about $1 billion per year, and using these funds effectively will be critical to support local capacity to improve schools.
States have a lot of flexibility in how to distribute these funds, and it was a wasted opportunity to exclude any discussion of how they will do so in their plans. Similarly, many states didn’t provide sufficient detail in their plans. They might have indicated they planned to use an A-F school grading system to differentiate among schools, but then failed to give the grading scale that distinguished an A grade from a B or C, or to explain how multiple measures came together to provide an overall score for a particular indicator, like college and career readiness. Or, states didn’t explain how measures were calculated: which students would be included in the denominator of a calculation, and which in the numerator. These small details matter and determine how effective the overall accountability system will be in advancing equity for all students, but they were too often missing from the plans.
Both states and the Education Department are to blame here. Nothing prevented states from adding further detail, and some voluntarily did so in order to provide a more comprehensive picture of what they were doing to support kids and schools. But the department should have put forward a more robust template for states to submit their plans that included additional questions.
In addition, there were many plans I read that simply didn’t comply with ESSA. I saw states that proposed to identify schools with struggling subgroups based on overall school results, states that weren’t including required indicators and were weighting academic factors equally with non-academic ones, and states proposing to use a higher n-size (the minimum number of students that must be in a group in order for it to count for accountability purposes) for subgroups than for students overall. These state plans were particularly disheartening, since mere compliance with ESSA should be the floor, not the ceiling.
SM: Why does it matter if states are submitting strong plans? Wasn’t the thrust of ESSA to give states more freedom to do what they want?
AH: Without a doubt, ESSA gave more flexibility to states to determine how to evaluate school quality and support schools that were struggling to provide all students an excellent education. But having more flexibility means that more decisions will need to be made at the state level, and states will need greater capacity and thoughtful planning to carry them out effectively and take full advantage of the flexibility they have. That’s why it was so disappointing to see plans being submitted that were incomplete, or unclear, or full of vague promises without any indication of how states could deliver on them. After two years, I would have expected states to have a much clearer theory of action and set of goals to support all schools, and all students, in receiving a high-quality education.
SM: If they’re on the wrong track, what can be done to steer them right, and who will do it?
AH: I think a lot of actors could step up to help strengthen states’ efforts. For starters, the U.S. Department of Education should carefully examine each plan and note any areas that warrant revisions to meet the law’s requirements, and could go further by providing guidance and technical assistance. State and local advocates should continue to put pressure on their state officials and carefully monitor implementation.
These plans aren’t set in stone; many states indicated they still need to make final decisions to refine their plans, and many state policies are in flux. The kinds of stakeholder engagement and awareness that marked the initial development must continue, and when states are falling short on their promises, advocates should call them out for it and press for changes.
I also think there’s a huge role for journalists, researchers, and analysts to keep track of what states are doing and the results they’re seeing, so that we can share best practices and understand how states are creating conditions that lead to meaningful improvement.