Policy & advocacy posts - Teach. Learn. Grow. The education blog
https://www.nwea.org/blog/category/policy-advocacy/
Wed, 15 Jan 2025

New data: How far off is COVID academic recovery?
https://www.nwea.org/blog/2024/new-data-how-far-off-is-covid-academic-recovery/
Tue, 30 Jul 2024


The road to COVID academic recovery is far from over. That’s the sobering conclusion of a new research brief on student learning following COVID-19 school closures, from NWEA researchers including Megan Kuhfeld and me, Karyn Lewis.

The research analyzed scores from approximately 7.7 million US students in grades 3–8 who took MAP® Growth™ during the 2023–24 school year. We found that students were learning and growing, but at rates that fell short of pre-pandemic levels. Based on the latest data, the average student requires the equivalent of 4.8 months of additional schooling to catch up to pre-COVID levels in reading and 4.4 months in math.

These findings come at a particularly important moment. Over the course of 2020 and 2021, Congress invested $190 billion to help schools return to normal operations and to address student learning loss. Recent research suggests that money helped, but with the last of the money expiring this fall, our findings show that student growth rates continue to lag and that kids remain far behind their peers from just a few years ago. It will take significantly more money, focus, and interventions to get students fully back on track.

New data: How did COVID affect student learning?

In the spring of 2022, we were happy to report that student growth had returned to, or even exceeded, pre-pandemic rates. While the road to full recovery looked long, there were at least initial signs of a rebound as schools returned to their normal operating conditions.

But momentum stalled in 2023 as students struggled to sustain that higher rate of growth. This year’s results are just as alarming. Our research found that achievement gains in 2023–24 lagged pre-pandemic trends in all but the youngest cohort of students, falling short of pre-pandemic averages by 1–21% in reading and by 2–14% in math.

As noted by the Washington Post in their coverage of our release, the NWEA results are broadly in line with what other interim test scores are showing. There are slight variations across multiple measures based on grade level and magnitude of the need, but the broad story is consistent that students, particularly the most marginalized students, need much more support.

These findings present a less optimistic picture of COVID academic recovery compared to reports using state assessment data. Interim assessments and state summative tests serve different purposes and measure achievement with varying levels of specificity. Interim assessments, like MAP Growth, are designed to provide more detailed, frequent insights into student progress throughout the year, allowing for a continuous measurement of growth and achievement. This can highlight more nuanced trends and immediate impacts. In contrast, state summative tests occur once a year and categorize achievement into broader levels, like below basic, basic, and proficient. This lack of nuance may mask important changes in achievement for students who are further from benchmarks. The differing methodologies and purposes of these assessments likely play a significant role in the observed discrepancies in recovery data.

Another key difference is how our team tracks recovery. State summative tests typically examine COVID impacts cross-sectionally. For example, how does the achievement of third-graders in 2024 compare to the achievement of third-graders in 2023? In contrast, since MAP Growth is administered multiple times throughout the year, we can use longitudinal models to understand how cohorts of students are progressing toward recovery. For example, how much progress did this year’s third-graders make from the year before? This approach captures more incremental changes and trends that state assessments might miss.

COVID achievement gaps remain, especially for marginalized groups

At the end of the 2023–24 school year, across all grade levels, our research estimates that the average student will require the equivalent of 4.8 months of additional schooling to catch up to pre-COVID levels in reading and 4.4 months in math.

The averages hide differences across grade levels and student groups. For instance, middle school students have lost more ground on MAP Growth than younger students have, and low-income, Black, and Hispanic students remain the furthest from making a full COVID academic recovery.

Paying off the COVID generation’s “educational debt”

Society owes the COVID generation of students a great educational debt, and it is a debt they are now carrying with compounding interest. Achievement disparities that predate the pandemic have been starkly exacerbated over the last four years, and the latest NWEA data shows that marginalized students need much more support to get back on track.

The effects of COVID continue to reverberate, even for the youngest students entering the education system years after the initial onset of the pandemic. Instead of treating COVID recovery interventions as temporary crisis-mitigation tactics, policymakers should be thinking about how to make targeted academic supports, such as high-dosage tutoring and summer programming, a permanent part of our new normal.

COVID academic recovery efforts will get much harder going forward as the federal relief dollars expire, but providing students with evidence-based intervention strategies is the only way to make meaningful progress.

To learn more, watch our webinar Special briefing: Recovery still elusive on demand.

5 keys to effective summer programs
https://www.nwea.org/blog/2024/5-keys-to-effective-summer-programs/
Tue, 16 Jul 2024


Another school year has ended and many students across the country are enrolled in summer programs. At NWEA, we have long been champions for using the summer months effectively. Regardless of the tests used to measure it, student learning tends to slow down, or even backslide, during the summer. Some student groups—especially rural students, English learners, and students with disabilities—lose even more ground.

But implementing effective summer programs is hard. Staffing challenges, burnout (for both students and staff), and the stigma of summer school can all affect how many students have access to summer programs and the quality of their experience. Fortunately, the research team at NWEA recently dug into studies on effective summer programs and found important takeaways for school and district leaders.

Here are five keys to effective summer programs.

1. Focus: Decide what your summer program is meant to accomplish

Unlike other academic recovery strategies like high-dosage tutoring, summer school programs often have a wide range of goals. Some focus solely on academics, like reading and math, while others promote positive behavioral outcomes, like student engagement and social-emotional skills. These are all laudable goals, but district leaders may have more success if they narrow their objectives.

For example, summer literacy programs have been found to increase reading achievement by a meaningful amount, especially for students in the earlier grades. Similarly, summer math programs increase math achievement. However, the gains tend to be concentrated in the program’s focus area. This may sound obvious, but policymakers should use summertime to target specific instructional gaps for particular students. If policymakers require summer program participation for students who need additional instructional time, they should ensure those students receive support in the area or areas most important for their success.

2. Time and dosage: Prioritize more learning time to see bigger gains

Last year, NWEA researchers were part of the Road to Recovery evaluation study that found that summer learning programs boosted student learning in math, but those gains were proportional to the amount of instructional time students actually received.

In other words, if district leaders want to see consistently positive outcomes, they need to offer summer programs with sufficient instructional time. Research suggests programs should run longer than three weeks and provide at least three hours of instruction per day.

3. Targeting: Identify the students who need the most support

Districts with limited resources might be tempted to target summer programs to low-income students. But the research is not a slam dunk on this point. Summer programs are particularly beneficial to low-income students in reading, but they offer similar benefits in math to students across the income spectrum. Moreover, it’s not necessarily the case that low-income students suffer larger declines than their higher-income peers in the summer months.

Another line of research, however, suggests that students who gain the most during the school year tend to lose the most during the summer months. This may be a counterintuitive finding at first, but NWEA research has found that English learners and students with disabilities fit this pattern. These students benefit the most during the regular school year and tend to suffer the largest declines in the summer months. The lesson for district leaders, then, is to target summer programs based on academic disadvantage rather than income.

4. Curriculum: Use high-quality instructional materials

Given the short duration of summer programs, school staff will have only a limited time to prepare. As such, they need to be equipped with high-quality curricular tools and lesson plans.

The research base confirms, unsurprisingly, that summer programs are more effective when they use an evidence-based curriculum. Ideally, the curriculum educators use during the summer should be the same one they use during the regular school year. School and district leaders can leverage available tools for evaluating quality, such as approved state lists, to secure access to high-quality instructional materials in the summer as well as during the school year.

5. Family engagement: Make parents and guardians allies

Parents and guardians are not always fully aware of how their child is performing, and they don’t always know about additional learning opportunities, such as free summer programs.

District leaders worried about low participation or engagement rates should take a closer look at their family engagement efforts, such as hosting information sessions and conferences, sending daily text messages, and calling home to discuss absences. Messages emphasizing the importance of attendance can be particularly effective, and daily communication between teachers and families has been shown to increase on-time homework completion and reduce classroom behavioral issues. State leaders can help here, too, by revisiting parental engagement policies, guidance, and funding levers to better support districts in their efforts to support families.

In closing

When done well, summer programs can make a significant impact on academic and non-academic outcomes for students. The five keys outlined here are important places to start.

Read our full research brief for more practical guidance on designing high-quality summer programs.

States should support student access to advanced math courses
https://www.nwea.org/blog/2024/states-should-support-student-access-to-advanced-math-courses/
Tue, 28 May 2024


For most of the 1990s and early 2000s, the percentage of eighth-graders taking Algebra 1 rose steadily as schools and districts encouraged more students to take advanced math courses. Then, after warnings that some students had been promoted too soon to be successful, the trend began to reverse. Nationally, we’re now back to the same levels as we were two decades ago. The result is that students who are ready for algebra in eighth grade aren’t taking it, with detrimental effects on their academic trajectories.

While the research is mixed on universal “algebra for all” policies, it is clear that for students who can be successful, taking Algebra 1 by eighth grade has real advantages. Those include higher achievement scores in later years, a higher probability of additional advanced-course taking, and higher college readiness scores.

The essential question for policymakers, then, is how to tell when students are ready for algebra. How can the system nudge them into more advanced coursework—and avoid placing them into classes for which they are not yet ready? While the issue is complex, we have examples of the types of tools that may be helpful in screening students for readiness and policies that can ensure they get opportunities to enroll in rigorous courses for which they are prepared.

Enrollment trends and the policy debate

In 2012, the National Assessment of Educational Progress (NAEP) long-term trend assessment asked students about their mathematics course enrollment. The result was an all-time high of 34% of 13-year-olds nationwide taking Algebra 1. Unfortunately, when this same question was asked again in 2023, that number had dropped to 24%.


When we look at state-specific data, the numbers can be even more alarming. In Texas, for example, the percentage of students taking Algebra 1 and scoring proficient on the state assessment at the end of the year dropped from 62% in 2019 to 46% in 2022.

Alongside this sharp decrease in enrollment, the policy debate over when students should enroll in Algebra 1 has reached new intensity. Nowhere has the conflict been more heated than in California, where policymakers and advocates disagree about whether Algebra 1 should be standard in eighth grade or ninth. The disputes revolve around the tension between giving all students access to advanced math courses and enrolling students only in classes for which they are ready.

Measuring readiness for Algebra 1

At the heart of this national conversation is the essential question of how policymakers, district and school leaders, and even individual classroom teachers can tell when students are “ready” for algebra. Assessments of student progress, including MAP® Growth™, can be helpful indicators of readiness alongside other measures, and similar analyses could be done using other assessment tools.

In response to numerous requests from our partners, NWEA has released new guidance on how to use MAP Growth for placement decisions. Our guidance is based on multiple studies examining which prior spring math scores put a student on track either to receive a proficient score on an end-of-course (EoC) test of Algebra 1 content or to earn a “proficient” grade at the end of an Algebra 1 course, defined as a C or better. Our analysis found that students who received a MAP Growth RIT score in the range of 235 to 238 in the spring of seventh grade had a greater than 50% chance of passing an EoC assessment, or of earning a C or better in an Algebra 1 course. This accounts for roughly one-third of eligible students.

Placement decisions are more sound when they are based on multiple measures. Assessment data is useful in that clear linkages can be drawn between results and other outcomes of success, like future test scores and passing grades in key courses. However, whenever possible, schools should use multiple data points—including grades, other assessment results, and student interest—to get the most complete picture of likely student success in advanced math courses.
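To make the multiple-measures idea concrete, here is a minimal sketch in Python of a readiness screen. The cutoff of 235 (the low end of the reported 235–238 range), the field names, and the corroboration rule are illustrative assumptions, not NWEA's official guidance.

```python
from dataclasses import dataclass

RIT_CUTOFF = 235  # assumed lower bound of the reported 235-238 spring grade-7 range


@dataclass
class Student:
    name: str
    spring_rit_grade7: int
    math_grade: str   # most recent math course grade (assumed field)
    interested: bool  # student/family interest in advanced coursework


def algebra_ready(s: Student) -> bool:
    """Flag likely Algebra 1 readiness from multiple measures: an assessment
    score at or above the cutoff plus at least one corroborating signal."""
    score_ok = s.spring_rit_grade7 >= RIT_CUTOFF
    corroborated = s.math_grade in {"A", "B", "C"} or s.interested
    return score_ok and corroborated


roster = [
    Student("A", 240, "B", True),
    Student("B", 228, "A", True),   # strong grades, but below the score cutoff
    Student("C", 236, "D", False),  # above the cutoff, no corroborating signal
]
ready = [s.name for s in roster if algebra_ready(s)]
print(ready)  # → ['A']
```

A real screen would weigh more signals, but the shape is the same: no single measure, including the assessment score, decides placement on its own.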

The risks of poor placement policies

Though implicit in the discussion above, the risks of poor placement policies are worth stating explicitly. On the one hand, setting the bar too high shuts out students who could be successful in a more rigorous course and denies them the benefits that come from completing Algebra 1 in eighth grade. On the other hand, setting the bar too low sets students up to fail, frustrating them along with their teachers and families, and perhaps even discouraging them from pursuing rigorous academic opportunities in the future. The balance is incredibly important and worth getting right.

As an additional consideration, policies that require students (and/or their parents or guardians) to express affirmative interest in advanced math courses or that rely on teacher recommendations have been shown to result in the systematic under-enrollment of historically underserved students. So, imperfect as they are, policies that automatically screen students for likely success and enroll them in rigorous courses are more powerful in ensuring equitable access to these opportunities.

State placement policies

Given the data on declining enrollment in eighth-grade algebra, particularly for historically underserved students, policymakers have been taking action. Numerous states have enacted policies in recent years to automatically place “ready” students in Algebra 1.

In 2017, for example, North Carolina passed a law requiring that all students who score at the highest level on a state test be placed in an advanced math course the following year (unless they opt out). Known as automatic enrollment, such policies remove the need for teacher, student, or parent initiative when placing kids in advanced math courses. Since the law was implemented, there has been a steady increase in the percentage of students who are placed in advanced math courses in North Carolina, and in 2023, more than 95% of advanced-scoring eighth-graders were placed in such a course.

In 2023, Texas did something similar by requiring all school districts to enroll in advanced math courses each sixth-grader who scored in the top 40% on their fifth-grade state math assessment. This was done with the explicit goal of enabling those students to enroll in Algebra 1 in eighth grade. Notably, the focus on the top 40% matches our research findings, and perhaps other analyses, which show that approximately 40% of students are “ready” for Algebra 1 in eighth grade.

Federal legislation has been introduced in the House and Senate to incentivize such policies. Districts that receive funds under the proposed Advanced Coursework Equity Act would be required to screen students and automatically enroll them in advanced math courses with a specific focus on eighth-grade Algebra 1.

In closing

While we don’t yet have definitive research about whether these automatic placement policies result in higher student success rates, they’re clearly having the desired effect of increasing the number of students enrolled in advanced math courses.

There can—and will be—debates over exactly how to determine readiness. In the interim, all schools should proactively screen students and automatically enroll them in eighth-grade algebra if they believe students can be successful.

3 considerations on chronic absenteeism for education policymakers
https://www.nwea.org/blog/2024/3-considerations-on-chronic-absenteeism-for-education-policymakers/
Tue, 12 Mar 2024


The percentage of students displaying chronic absenteeism—defined as missing 10% or more of school for any reason—nearly doubled in the wake of the COVID-19 pandemic. The most recent data suggests those rates may be improving somewhat, but progress has been slow.

Attending school is important. So much so that in January, the Biden administration recommended increasing student attendance as one of its three evidence-based strategies for improving student learning.

What the research tells us about chronic absenteeism

In a new research paper, Jing Liu, Monica Lee, and I show that academic behaviors—including showing up to school regularly—are highly predictive of three longer-term outcomes:

  • Graduating from high school
  • Attending a four-year college
  • Persisting in college more than one year

We used detailed longitudinal data from ninth-graders in a large urban school district in California to evaluate the degree to which observable academic behaviors and student self-reported social-emotional learning (SEL) skills predict future educational attainment.

We defined “academic behaviors” as behaviors students exhibit in school, including attendance, chronic absenteeism, and rule-breaking resulting in suspension. Our definition for “SEL” is in line with the definition published by the Collaborative for Academic, Social, and Emotional Learning (CASEL): “SEL is the process through which all young people and adults acquire and apply the knowledge, skills, and attitudes to develop healthy identities, manage emotions and achieve personal and collective goals, feel and show empathy for others, establish and maintain supportive relationships, and make responsible and caring decisions.” We compared academic behaviors, including full- and part-day school absenteeism and suspensions, in ninth grade against SEL measures including self-management, self-efficacy, growth mindset, and social awareness.

Overall, the academic behaviors were much more predictive than SEL skills across the three longer-term outcomes. Specifically, we found that conditional on students’ achievement and demographic characteristics, ninth-grade academic behaviors were seven times more predictive of high school graduation, and two to three times better at predicting college attendance and persistence. Among the various academic behaviors, part-day absenteeism, or class skipping, is the most highly predictive of the longer-term outcomes.

In the sections below, I break down three important lessons for policymakers looking to understand—and ultimately improve—student outcomes.

1. Academic behaviors are easier to measure than SEL

Our research found that academic behaviors and SEL skills are related and that a student’s self-perceptions can manifest through academic behaviors such as attending school regularly and avoiding disciplinary infractions. Conversely, disengagement is commonly linked to feelings of isolation or lack of support, bullying, and a lack of sense of safety. Furthermore, schools with exclusionary discipline policies tend to have students with lower rates of academic connection and sense of belonging in their classrooms. On the positive side, programs focused on SEL development and restorative justice practices have led to reductions in absenteeism and/or suspensions.

However, academic behaviors and SEL are also not perfectly correlated, and they typically vary in terms of how the data is collected and the intended uses by districts. For example, we found that academic behaviors are far easier to collect since many states and districts already mandate the collection of most or all the necessary data.


Our study was only possible thanks to our district partner collecting survey responses on four SEL constructs:

  • Growth mindset
  • Self-efficacy
  • Self-management
  • Social awareness

The district administered an annual survey asking students to respond to a maximum of eight questions for each of the four SEL constructs, including how much they agreed with statements like the ones that follow:

  • “I am capable of learning anything.”
  • “I can do well on all my tests, even when they’re difficult.”

The district in our study was already collecting this data, but cost may be a barrier for less-resourced schools and districts. We observed a 67% response rate on the SEL survey.

In contrast, the academic behavior data we used may already be collected by many states and districts. Chronic absenteeism is easily observable and measurable for all students, making it a less challenging proxy for disengagement than other measures. Attendance is marked daily, if not multiple times a day, for high school students, and it is kept as administrative data. School staff also tend to record why a student missed school, such as for excused or unexcused reasons, which helps clarify whether an absence occurred for legitimate reasons.

In sum, academic behaviors are easier to measure than SEL skills. Little is known, however, about whether measures of SEL skills uniquely predict educational attainment; that is, whether including them in early warning systems or accountability models would provide valuable information above and beyond typically included measures like absenteeism rates.

2. Academic behaviors were more predictive than SEL skills

The research literature consistently finds that attendance, discipline rates, and completion of certain high-stakes academic coursework are the strongest predictors of high school graduation above and beyond standardized test scores. We extended these results by comparing academic behaviors and SEL skills.

We found that the variance explained was much higher for academic behaviors, regardless of which long-run outcome we used. The chart below, figure 1 in our paper, shows the results: the predictive power of SEL skills is shown in black, observable academic behaviors in dark gray, and a model combining both sets of measures in light gray.

A bar graph shows that observable ninth-grade academic behaviors were more predictive of high school graduation, college attendance, and college persistence than self-reported SEL scores alone.

As the chart shows, ninth-grade academic behaviors were more than seven times more predictive for high school graduation than the self-reported SEL scores. This contrast becomes weaker for the longer-term outcomes like college attendance and college persistence, but academic behaviors still exhibit predictive power two to three times larger than SEL skills. The chart also shows that SEL skills added very little predictive value over and above the measures of attendance and other behaviors (the light gray versus the dark gray bars). Academic behaviors were better as a standalone measure, and they largely capture any unique contributions the SEL survey added.

Unfortunately, all our measures of noncognitive skills got weaker the further out we looked. When trying to predict college persistence, models that included academic behaviors, SEL skills, or both did little better than a baseline model that used only student demographics and ninth-grade GPA data. Importantly, we did see one notable difference when we looked across student groups: academic behaviors were once again much better predictors of high school graduation for low-achieving students, but we also found suggestive evidence that SEL skills played a bigger role in the postsecondary success of lower-performing students.
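A small synthetic example can illustrate the kind of comparison behind these findings: fit separate linear-probability models and compare the share of outcome variance each explains. The data, coefficients, and resulting gap below are invented for illustration and do not come from our study; only the comparison logic is shown.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Invented ninth-grade data: part-day absences and an SEL composite (z-scored).
absences = rng.poisson(17, n).astype(float)  # ~17 part-day absences on average
sel = rng.normal(0.0, 1.0, n)                # self-reported SEL composite

# Latent graduation propensity: absences weigh heavily, SEL weakly (assumed).
latent = 1.0 - 0.08 * (absences - 17) + 0.15 * sel + rng.normal(0.0, 1.0, n)
graduated = (latent > 0).astype(float)


def r_squared(X, y):
    """Share of variance in y explained by an OLS linear-probability model on X."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()


r2_behavior = r_squared(absences[:, None], graduated)
r2_sel = r_squared(sel[:, None], graduated)
r2_both = r_squared(np.column_stack([absences, sel]), graduated)

print(f"behaviors only: {r2_behavior:.3f}")
print(f"SEL only:       {r2_sel:.3f}")
print(f"combined:       {r2_both:.3f}")
```

In-sample, a model with both predictors can never explain less variance than either predictor alone; the interesting question, as in our data, is how little one set of measures adds on top of the other.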

3. Policymakers may want to consider looking at partial attendance

As policymakers work to address the current spike in chronic absenteeism, our research suggests they should also look more closely at partial attendance. Students who miss only a class or part of the day may not show up in the chronic absenteeism numbers, but they may still be at risk of longer-term consequences.

Our data was pre-pandemic, and in our sample the average student missed about six full school days during ninth grade. (These numbers likely pale in comparison to the current absenteeism spike.) In contrast, part-day absenteeism was much more prevalent: the average student accrued about 17 part-day absences, or close to three times as many as full-day absences.

Moreover, part-day absences were a stronger predictor of longer-term outcomes than full-day absences were. The effects were especially large for our two postsecondary outcomes, which suggests that more granular measures of academic behaviors, which may already exist in school administrative data systems, could reveal information about students’ future academic trajectories that cruder, commonly used measures, such as full-day absences or suspensions, currently miss.
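As a sketch of what such granular tracking could look like, the snippet below derives full-day and part-day absence counts from hypothetical period-level attendance records. The record format is an assumption for illustration, not a standard schema.

```python
from collections import defaultdict

# Hypothetical period-level records: (student, date, period, present).
records = [
    ("S1", "2024-09-03", p, False) for p in range(1, 7)   # absent all day
] + [
    ("S1", "2024-09-04", 1, False),                       # skipped first period
] + [
    ("S1", "2024-09-04", p, True) for p in range(2, 7)
] + [
    ("S2", d, p, True) for d in ("2024-09-03", "2024-09-04") for p in range(1, 7)
]

# Group the period marks by student-day.
by_day = defaultdict(list)
for student, date, period, present in records:
    by_day[(student, date)].append(present)

full_day = defaultdict(int)
part_day = defaultdict(int)
for (student, date), marks in by_day.items():
    if not any(marks):
        full_day[student] += 1   # absent every period that day
    elif not all(marks):
        part_day[student] += 1   # absent some, but not all, periods

print(dict(full_day), dict(part_day))  # → {'S1': 1} {'S1': 1}
```

The same period-level data can feed a chronic absenteeism flag (missing 10% or more of school days) while also surfacing the class skipping that full-day counts hide.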

In closing

Our study results should not be taken as evidence that SEL skills don’t matter, and we hope practitioners continue their efforts to measure and promote SEL skills. However, our findings start to unveil the untapped potential of developing more fine-grained behavioral measures, which are already being collected by school administrative data systems.

Given how strongly partial-day absenteeism predicts long-run outcomes, policymakers should consider tracking and monitoring it more closely than most currently do. Other academic behaviors, such as tardiness, office discipline referrals, and participation in extracurricular activities, are also relatively easy to measure and potentially contain rich information about students.

5 ways to maximize high-dosage tutoring
https://www.nwea.org/blog/2024/5-ways-to-maximize-high-dosage-tutoring/
Thu, 22 Feb 2024

The post 5 ways to maximize high-dosage tutoring appeared first on Teach. Learn. Grow..

High-dosage tutoring is one of the most effective ways to help students quickly make up lost academic ground. Given the strong evidence base, largely from pre-pandemic years, showing that students who complete high-dosage tutoring post impressively large gains on test scores, many districts across the country have created or expanded tutoring programs. The Biden Administration even recommended high-dosage tutoring programs to help students recover from pandemic-related learning disruptions just last month.

Quality is critical. While the design of specific high-dosage tutoring programs varies, they typically involve one-on-one or small-group sessions that are at least 30 minutes long and take place at least two to three times per week. Prior guidance on the best ways to implement high-dosage tutoring indicates schools should schedule sessions during the school day, keep tutoring one-on-one or in small groups, and incorporate other important features, like using high-quality, aligned curricula and fostering supportive tutor–student relationships.

In a new brief, our research team reviews the evidence base around high-dosage tutoring, explains why focusing high-dosage tutoring on academically at-risk students makes the most sense, and then highlights design principles to guide districts' ongoing efforts. Here are five key takeaways from our review of the research.

1. Focus high-dosage tutoring programs on academically at-risk students

In too many parts of the country, students remain behind their pre-pandemic levels of achievement. Moreover, students who entered the pandemic with lower test scores experienced a larger drop in achievement compared to their higher-achieving peers. As a result, districts are facing an unprecedented number of students who qualify as academically at-risk, which we define as students who require intensive support outside of classroom instruction to learn grade-level skills or pass coursework necessary for later academic success and school completion.

Those academically at-risk students stand to benefit the most from intensive supports like high-dosage tutoring, which can help improve academic outcomes by personalizing instruction and fostering supportive relationships that build engagement and motivation in learning. Plus, high-dosage tutoring has consistently shown larger benefits for students’ test scores relative to other interventions targeted at low-achieving students, for example, technology-enabled programs, professional development, and curriculum reforms.

2. Use assessments and data to evaluate student skills targeted for intervention, monitor learning progress, and document other factors that affect learning

While interim and summative assessments can help identify academically at-risk students, districts can also use additional assessments—often formative and informal in nature—to evaluate the specific skills targeted for intervention and to document students' existing strengths and areas for improvement so that tutoring can be tailored accordingly. For example, assessments that can guide progress monitoring of high-dosage tutoring programs include sub-tests for literacy or math skills (e.g., decoding, word recognition, number sense), teacher input, measures of student progress through the tutoring curriculum, and researcher-developed assessments designed to capture the specific skills targeted for intervention.

3. Be open to tutors with different skills and qualifications than classroom teachers, but monitor program implementation

To meet the scale of student need for academic support without breaking the bank, districts may want to consider hiring less experienced or less credentialed tutors to reduce program costs and overcome labor shortages. Because the small-group environment of tutoring is less complex than a regular classroom, research suggests districts can hire tutors across a range of experience and qualifications (e.g., volunteers or college students instead of certified educators) without sacrificing gains in student achievement.

Still, districts that go this route should consider implementing robust systems and procedures to ensure high-quality implementation. For example, districts could supply tutors with scripted instructional materials, intensive training, and ongoing supervision and feedback from on-site tutor supervisors. Districts can also observe and rate tutor sessions for fidelity and track data on student progress through the assigned curriculum.

4. Evaluate the effectiveness of high-dosage tutoring programs on specific skills and for specific students

Not surprisingly, high-dosage tutoring programs tend to show the biggest impact on skills targeted by the tutoring sessions. Recent literature reviews of early reading interventions show larger effect sizes for phonics- and fluency-related outcomes and smaller effect sizes for reading comprehension. We also see larger effect sizes in earlier versus later grade levels.

However, that means interim and summative assessments may not be as responsive to high-dosage tutoring interventions if these assessments evaluate broader skill sets, such as the full range of grade-level knowledge. Similarly, if districts are targeting high-dosage tutoring programs to certain at-risk student groups (e.g., multilingual learners, students with disabilities, chronically absent students), they may want to partner with researchers to evaluate the effects of the programs for these student groups.

5. Expect variation in the effectiveness of high-dosage tutoring programs

Although tutoring can be an effective tool for promoting academic recovery, there’s no guarantee it will work the same in every place, especially if program elements like personnel, curriculum, or scheduling vary. Evaluations of recovery programs such as summer school have found small effects on student achievement, with districts facing many challenges related to staffing, scheduling, student absences, and school-level capacity that have hindered program implementation and effectiveness.

The same types of problems could potentially affect district high-dosage tutoring programs. For example, tutoring program uptake and results can suffer when programs depend on students to opt in. But before writing a struggling program off entirely, districts should consider ways to strengthen it, given the overwhelming evidence for high-dosage tutoring as an intervention strategy.

In closing

With the sunsetting of the federal ESSER funding, districts will need to be more strategic about how they design and sustain their high-dosage tutoring programs in the years to come. Districts that relied on the federal money to build out their tutoring programs should be thinking now about how to sustain those investments going forward, especially if their program is working but students are still behind.

As Accelerate CEO Kevin Huffman noted in The 74, high-dosage tutoring programs have spread rapidly, thanks to local and state interest and, now, national interest from the White House. Implementation details will matter immensely, but millions of young people stand to benefit if policymakers can sustain the political will. By learning from the research, they can ensure high-dosage tutoring programs effectively serve the students who are most academically at risk and help them get back on track.

3 ways leaders can improve assessment literacy for parents and families | https://www.nwea.org/blog/2023/3-ways-leaders-can-improve-assessment-literacy-for-parents-and-families/ | Thu, 21 Dec 2023

The post 3 ways leaders can improve assessment literacy for parents and families appeared first on Teach. Learn. Grow..

Declining student scores over the past year in reading, math, and civics on the Nation’s Report Card have confirmed a sobering truth that was already widely assumed by the public at large: students, already in need of extra academic support pre-pandemic, have fallen even further behind.

You wouldn’t know it, though, if you asked parents about their own child. Despite only about a third of the country’s students reading on grade level, 92% of parents believe their own child is doing so. That math, of course, doesn’t add up.

So, what’s the source of this disconnect? In some cases, it may be that teachers aren’t telling parents and guardians how their child is doing. A recent survey of 1,000 teachers across the country found that only 38% of teachers had received the training needed to effectively leverage student assessment data, and fewer than two-thirds reported using assessment data to inform families of their students’ progress. Those two things, of course, go hand in hand: communicating about assessment data is challenging if you haven’t been trained in how to do so.

Parents and other caregivers deserve transparency in their children’s learning, and their tendency to underestimate their children’s gaps makes it critical that teachers leverage objective measures of student learning in conversations with families. As two teachers with more than a decade of experience each, we’ve learned some strategies for doing so and have identified ways leaders can help as well.

1. Avoid assumptions

In our schools, we often hear the assumption that parents and guardians don’t and can’t understand assessments. The terms are too jargony, or the scores too complex, or the language or cultural barriers too wide, and so we avoid the conversation altogether.

We agree that we must be mindful of these challenges in communicating with families about their students’ progress, but they should never be used as an excuse not to explain student academic progress. We shouldn’t assume that adults can’t understand, and we should always dig deeper in conversations with them to identify where gaps in their understanding do exist and then work together to close those gaps.

2. Give families the tools and context they need

In the same way we scaffold learning for our students, we can scaffold for their adults by helping them understand the different types of assessments, what each one is designed to measure and, ultimately, what they tell us about their student’s academic progress.

When families struggle to understand their child’s assessment results, we can share examples of texts a student should be able to read at their grade level and compare them to texts the student is currently reading. We must also educate parents and other caregivers on what questions they should be asking in every interaction with their child’s teacher, like “Where is my child academically compared to where they are supposed to be?”

While teachers should play a role in sharing this information with families, it is also the responsibility of district and school leaders to provide educators the training they need to do this well. Districts must also create opportunities for families to attend workshops on assessments and what they can tell us, and be savvy about how they market and communicate these opportunities to ensure wide reach.

3. Communicate challenges transparently up front

It is human nature to want to avoid telling a parent or other caregiver that their child is struggling. But when we sugarcoat this information or offer “compliment sandwiches” to soften the blow, we deprive guardians of the ability to learn and execute the steps needed to help their child improve. Of course, we should be compassionate when talking to families, but we should also be direct.

Most adults believe their child is reading on grade level because the child has been promoted to that grade, sometimes based on report card grades that did not accurately portray their abilities. Particularly in states that require passing grades on assessments to graduate, we cannot wait and let the final exam be the way families learn their child is behind. We must talk to them early and often about their students’ progress, offering them the opportunity to be first-hand participants in their child’s learning journey.

Aligning curriculum, assessments, and professional learning to better support teacher practice and student learning | https://www.nwea.org/blog/2023/aligning-curriculum-assessments-and-professional-learning-to-better-support-teacher-practice-and-student-learning/ | Tue, 19 Dec 2023

The post Aligning curriculum, assessments, and professional learning to better support teacher practice and student learning appeared first on Teach. Learn. Grow..

The work of a teacher has infinite moving parts. There’s the upcoming required interim district assessment to prepare for. There’s the new school-wide social-emotional learning program to internalize and implement. There’s the rubric to design and the parent to call and the grade-team meeting to attend.

As teachers, we know it’s essential for these moving parts to coherently align with each other, and experts agree. Unfortunately, critical as this alignment is, it is rarely the reality teachers experience, particularly when it comes to the way assessing student academic progress aligns with the rest of their work. A 2023 survey showed that only 29% of teachers said they received formative assessments aligned to their curricular materials, and only 38% said they received professional learning to support them in aligning assessment data to their everyday practice.

As current classroom teachers, we know that when these moving parts don’t align, it creates a disconnect between our day-to-day work of teaching and learning and the training and materials we receive to assess student progress. We believe if these moving parts were better aligned, assessments could not only produce the data districts and states need to evaluate progress but also add to the day-to-day learning experience rather than subtract from it. Here’s how.

Aligned curriculum

Every educator has had to put a pause on their daily teaching to prepare students for a midyear assessment that lived outside the scope of their current curriculum. We believe in the power of having district-wide assessment data and understand how critical it is to be able to evaluate progress across schools. We’re not against this practice, but when you have to stop in the middle of your curriculum and switch to focus on something that feels wholly separate from your lessons, it disrupts the day-to-day flow of your classroom.

Assessments should be aligned to the curricular content we are currently delivering in our classrooms. They should be an integrated part of the learning experience that blends in authentically with what we’re already doing. They shouldn’t be a compliance exercise but rather tools to help us gauge exactly where our students are in their learning journey and to inform how to help them excel.

Aligned data

Because our required interim assessments rarely align with the curriculum we’re teaching, finding direct connections between the data produced and the skills we’re currently teaching is challenging. Often, when we receive data back from these assessments, they simply confirm that the students we thought were in need of extra support are. We then ask ourselves, “What do I do with all of this information?” Understanding how to support students in mastering specific skills is rarely obvious from the data we receive. We need meaningful data.

Aligning assessments to school- or district-wide curricula would make the data more actionable by making explicit the connection between what we just taught, the results of the assessment, and what we’re teaching next. Ultimately, this would support us as we support our students in mastering content and skills.

Aligned training

Equally important to aligning curriculum and assessments is aligning assessments and professional learning. Often, in our experience, the training we receive around assessments is specific to how to administer the required test or share data back with the district. While these things are important, they do not help us understand how to leverage the data in our own teaching practices.

Professional learning related to assessments should be timed for when we actually receive assessment data. Research shows that effective professional learning leverages the instructional materials teachers are already using and offers an opportunity for real-life practice. In line with this, assessment-aligned training should walk us through how to leverage our assessment results in real time in order to improve student learning in our classrooms. This will not only support teachers in acting on data in the moment but also bolster their ability to do so independently in the future.

Change is needed

By aligning the moving parts of delivering instruction and measuring its success, the education field can help teachers paint a clearer big picture to work within. If we do this, assessments will become an integral part of that picture, rather than the compliance exercise many teachers currently see them as.

3 teacher recommendations for teaching with the state test | https://www.nwea.org/blog/2023/3-teacher-recommendations-for-teaching-with-the-state-test/ | Thu, 14 Dec 2023

The post 3 teacher recommendations for teaching <em>with</em> the state test appeared first on Teach. Learn. Grow..

Two things are simultaneously true and inherently contradictory: Having a standardized source of student achievement data across districts is critical for evaluating progress and identifying student subgroups in need of extra support, and yet learning is an acutely individualized and personal experience. When we step into the classroom each day, we tailor our instructional practices to the needs and backgrounds of students whom we know and for whom we care deeply. When the state asks our students to choose a, b, c, or d, it doesn’t know them—not even superficially.

This challenge helps explain why, despite 90% of teachers reporting that students should have a summative measure of their learning from the beginning to the end of the school year, public perception of state tests, and teacher perception in particular, is overwhelmingly negative.

The American K–12 education system has struggled to reconcile this paradox for decades. How can we objectively measure student learning, without confining our students to a box? How can we shift focus away from the product (the test score) and to the process (the learning experience) without losing the ability to evaluate our progress objectively? How can we improve assessments to better serve students and teachers? How can assessments provide the information necessary to help teachers understand students’ progress and how to help them improve, while avoiding the punitive atmosphere that is often associated with state tests?

Can we find a way to bridge the disconnect between the classroom and the test, a way to standardize authenticity? We—four educators from four different districts and three different states who teach across grade levels, subject areas, and school types—have some suggestions.

1. Focus on skills over content knowledge and depth over breadth

The standards-based reform movement aimed to push K–12 education toward a focus on developing skills rather than regurgitating content knowledge. Sometimes, though, we feel that state assessments don’t align with this spirit. In fact, only 56% of teachers report that their state summative assessments in math and English accurately measure students’ mastery of standards.

State assessments should ask deeper questions about fewer topics, focusing on skills over content knowledge and depth over breadth. For example, instead of 30 questions that cover the span of US history, an assessment could focus squarely on federalism, asking questions in a way that allows students to demonstrate their ability to apply their transferable analytical skills to a single political concept.

2. Make state tests adaptable

In our classrooms, we provide multiple access points for each student to show proficiency. This practice is encouraged; we recognize it as a critical piece of differentiating learning for our students. This is particularly important for our English learners, who cannot possibly demonstrate their proficiency in math or science on a test written in English without being able to read it. And yet, given their standardized nature, state assessments rarely allow this.

State assessments should be adaptable. They should meet our students where they are, adjusting and adapting to their abilities as they move through the test, while still providing a numeric score that allows us to objectively determine their proficiency, assess their needs, and support them in ultimately reaching grade-level standards for college and career readiness. And every test should be available in any language that a student might speak, so that we can confidently say we are assessing subject-specific skills, not language capability.

3. Issue more frequent, shorter tests

The end-of-year state assessment is an anxiety-inducing drumroll moment for both teachers and students: the single moment in the year when students show what they learned, after all teaching and learning is already said and done. This prevents teachers from effectively leveraging the data the assessments produce. Even if the data from this assessment is returned quickly, it is not useful to students’ current teachers, who do not have enough time left in the school year to address their needs. This timing leaves little opportunity to learn from the data or to shift our instructional practice for that cohort, and it places too much emphasis on a single moment in time. Think of it this way: if your GPS doesn’t course-correct you along the way, how can you possibly figure out that you’re lost before it’s too late?

We want tests to be more frequent, shorter, and connected to what students learn in their classrooms. Administering shorter tests multiple times during the school year would allow teachers to make informed decisions about curriculum and teaching practices with their current students and could, in the aggregate, paint a picture of what a group of students learned over the course of the year. Essential to this, though, is ensuring that these shorter tests truly are significantly shorter, that they are quick, varied ways of measuring student learning that happens right in the classroom, and that they enhance the learning process without disrupting learning time or punishing schools.

Change is possible

Implementing the changes we have suggested will require thoughtful planning. There are big questions to answer: How will this affect instructional time? How will we ensure teachers have a voice in designing these assessments? How can these assessments support teachers’ instructional practice? What are the budget implications? We think, though, that it’s not only possible, but also absolutely essential to answer these questions to pave this path forward.

The instructional coach among us often encourages his educators to teach with the test, not to the test, allowing the assessment to work hand-in-hand with their instructional pedagogy. We believe in a world in which all teachers are given the tools—including effective state assessments—to do just that.

Listening to teachers to fix the state testing quagmire | https://www.nwea.org/blog/2023/listening-to-teachers-to-fix-the-state-testing-quagmire/ | Tue, 12 Dec 2023

The post Listening to teachers to fix the state testing quagmire appeared first on Teach. Learn. Grow..

Across the country, teachers are beginning to plan for instruction after winter break. At some point during the second half of the school year, their calendars are marked “state assessments.” They’ve blocked off several days—maybe a week or more—for lessons to prepare students for the content on those tests. For some, it will be a race to the finish; for others, the tests will be a distraction from the curricular focus that has engaged students so far this school year.

We’ve heard from teachers that, though they believe deeply in measuring student learning, many existing state testing models can disrupt the flow of their instruction and don’t always deliver actionable information they can use to improve the learning experience for the students in their classroom. They want innovative assessment models that provide results sooner, limit the disruption in their classroom, and can still be used to ensure schools are serving all students.

An old problem

It is no secret in American education that our approach to summative assessment needs to change. Federal law requires schools to assess students annually in grades 3 through 8, and once in high school, to measure their achievement against state standards. Test developers try to design those summative tests so the results will help teachers and principals identify students who are falling behind and how to help them recover. But to transform a teacher’s instructional approach, the data needs to be more precise, more aligned to instruction, less time-consuming to gather, and quicker to arrive.

Test developers have created innovative assessments that are shorter, more frequent, and able to deliver results on a quicker timetable. However, federal rules and procedures must change before states can pilot and adopt these creative solutions, and the assessments must be implemented in a way that ensures they still provide a valid end-of-year summative score that can be compared across schools and districts.

In the meantime, teachers must build their schedules around assessments they don’t consider helpful. As the summative assessments near, they may have to pause a series of lessons that engage their students deeply and switch to less exciting content on the test. When the results come back too late to have an impact, the testing feels like an exercise in frustration.

Exploring solutions

In the coming months, teachers who are members of Educators for Excellence’s (E4E) National Teacher Leader Council (NTLC) will explain to Teach. Learn. Grow. readers how assessments affect their classroom practice and how to improve them so they can use assessments to help their students. They will explain that they want a summative measure of student learning but don’t currently fully trust the data that comes back, can’t use that data to support their students, and find that the time to prepare and administer these tests is too disruptive.

The teachers’ articles will illuminate key findings in E4E’s national teacher survey, Voices from the Classroom. In the 2023 survey of 1,000 teachers in districts or public charter schools, E4E found the following:

  • Teachers believe deeply in the value of measuring student learning, as evidenced by 90% of teachers believing that students should have a summative measure of their learning from the beginning to the end of the school year, and 83% believing that teachers should be responsible for students’ academic progress.
  • Teachers say they need the tools to measure students’ progress through the school year. Only 29% of teachers said they received formative assessments aligned to their curricular materials, and only 38% said they received professional learning to support them in aligning assessment data to their everyday practice.
  • Teachers need to be taught how to use or communicate about test results. Only 38% of teachers had received the necessary training to leverage student assessment data effectively, and fewer than two-thirds reported using assessment data to inform parents and guardians of a student’s progress.

These teacher authors will provide practical advice. They will suggest making assessments adaptable to provide a comprehensive report on a student’s progress. They will describe the types of tools that parents and guardians use when reading their child’s assessment scores. They will explain how training would help them understand test results and apply what they learn to improve their instruction.

Sharing these ideas provides education leaders at every level with specific tasks that respond to what teachers want. We can make that happen with the right mix of innovative assessments, policies that support those assessments, and investments in teachers’ professional development and growth. Once we respond, teachers will have more effective tools to help their classroom practices; parents and guardians will get the information they need to support their children’s educational journey; and policymakers will have better measurement tools to track the success of education policies.

Once these suggestions become standard practices, teachers will be able to plan a calendar where their instruction is aligned with the statewide assessments their students take. They will be able to set aside time to review the assessment results and adjust their teaching to address students’ needs. We are ready to collaborate with policymakers, educators, and teachers to make this vision a reality.

3 academic interventions policymakers can support to help get students back on track | https://www.nwea.org/blog/2023/3-academic-interventions-policymakers-can-support-to-help-get-students-back-on-track/ | Thu, 05 Oct 2023

According to the latest NWEA data, students last spring were, on average, 4.1 months behind pre-pandemic achievement levels in reading and 4.5 months behind in math. Students in middle school grades, students attending high-poverty schools, and students of color were the furthest behind, especially in math. These are big gaps, and they represent a daunting challenge: How, exactly, are schools supposed to help students get back on track? Which academic interventions will work best?

Recent research out of NWEA explores what teachers can do to support students. Our collaboration with researchers outside of our organization has also helped us develop recommendations for policymakers. They are presented in our new brief with EdResearch for Action, a project of the Annenberg Institute at Brown University and Results for America.

The brief digs into the research to highlight the most promising interventions policymakers can support for accelerating student learning in math and reading. We focus on three interventions with the strongest research base—tutoring, summer school, and double doses of math instruction—before discussing ways schools can create the conditions for academic acceleration practices to succeed. What follows is a short summary of our findings.

1. Tutoring

High-impact tutoring is widely considered to be one of the most effective ways to help students quickly make up lost academic ground. A review of almost 200 rigorous studies found that high-impact tutoring—defined as tutoring delivered two to three times per week for at least 30 minutes per session with four or fewer students in a group—is one of the few school-based interventions with large positive effects on both math and reading achievement.

The research on tutoring suggests that effective programs tend to include the following design principles:

  • Tutoring is conducted during the school day, not outside of school
  • Students have three or more sessions per week
  • Sessions are led by skilled tutors
  • Groups include no more than four students
  • Tutoring is aligned to the classroom curriculum

However, not all high-impact tutoring programs are effective. Challenges with staffing, scheduling, and student engagement can prevent school systems from implementing a tutoring program as designed, which can diminish its effects. Optional on-demand tutoring programs may be popular because they require fewer district resources to implement, but they are unlikely to be used by the students who need them most. Recent evidence from a California charter district found that, without nudges to participate, only 19% of middle and high school students ever accessed the district’s tutoring platform. Students most in need of support were even less likely to log on, raising concerns that opt-in resources may exacerbate gaps rather than reduce them.

2. Summer school

Schools and districts have also had success offering intensive, short-term interventions during summer vacations and other school breaks. Reviews of the effects of summer learning programs across multiple studies find positive impacts on student test scores in math. The impacts on reading are more mixed, with some studies showing no impact and others finding positive gains consistent with those in math.

Summer learning programs with the strongest gains tend to feature high levels of student attendance (i.e., at least 20 days) and highly effective teachers. Devoting more time to instruction was also associated with more noticeable gains in both math and reading.

Learning programs during school year breaks that are typically shorter in duration than summer school, often called “vacation” or “acceleration” academies, can also be effective strategies for boosting student achievement. For example, week-long acceleration academies targeted at students as part of turnaround efforts in Lawrence, Massachusetts, led to gains in math and reading. The programs provided small-group instruction (10–12 students per group) in either subject and were taught by a carefully selected group of talented teachers. In total, students received about 25 hours of extra instruction at a cost of $800 per student, which included incentives for students with perfect attendance.

3. Double-dose math

Double-dose math classes have been shown to be effective in helping students increase their math proficiency. Double-dose courses can be offered as an additional, separate class period in the school day or as an extended period. In either case, they typically replace an elective and serve all students or target a subset of students needing extra support.

The best evidence on the effects of double-dose math classes comes from a study out of Chicago. All ninth-grade students with low math test scores were enrolled in a double-dose algebra support class in addition to their regular algebra class, which was typically taught by the same teacher. Students participating in the double-dose period increased their spring algebra test scores significantly. In contrast, double-dose courses and interventions in English, reading, and vocabulary have not yielded test score gains beyond those of business as usual.

Other options

The brief also includes a discussion of the pros and cons of other potential academic interventions, including after-school programs, computer-assisted learning programs, and extending the school day or year. Each of these initiatives has promise, but the evidence is mixed and includes important cautions and caveats.

In general, academic interventions that address specific opportunity gaps and provide scaffolding for grade-level content lead to larger gains in student achievement compared to merely reteaching content from previous years. Interventions also tend to have larger positive effects when they attend to students’ social and emotional needs alongside academic learning. For example, tutoring programs with the largest impacts have consistent tutor–student matches over time, with one plausible factor being sustained tutor–student relationships focused on clear academic and social-emotional goals. Research also suggests that greater alignment between academic interventions and classroom instruction will amplify the efficacy of interventions.

Ultimately, many students still require sustained, targeted interventions to help them get back on track academically. Districts may have particular reasons for choosing one specific program over another, but our brief can help them identify which to pursue and what to consider to maximize the impact of their efforts.

Ayesha Hashim contributed to this post.

Summer programs can be an effective treatment, but districts need to get the “dosage” right
https://www.nwea.org/blog/2023/summer-programs-can-be-an-effective-treatment-but-districts-need-to-get-the-dosage-right/
Tue, 26 Sep 2023

The new school year is well underway nationwide, and it’s worth pausing a moment to reflect on the investments school districts made in summer learning. To help students recover from large opportunity gaps, many districts bet on new or expanded summer programs. In a national survey from 2022, 70% of them reported providing new or expanded summer programming because of the pandemic. How well are those programs working? Are they enough to get students back on track?

A new study found that summer school programs in its sample districts helped students make gains in math but not in reading. Moreover, because participation was far from universal, the gains made up just two to three percent of a district’s total learning loss in math, and none in reading.

This research is part of a unique and ongoing partnership between CALDER at the American Institutes for Research (AIR), the Center for Education Policy and Research (CEPR) at Harvard, and my organization, NWEA. Out of our 11 partner school districts, eight provided data on their summer 2022 programs. These eight districts collectively enroll approximately 400,000 students who are disproportionately Black, Hispanic, and/or low-income.

Our findings on the impacts of these summer programs largely come down to what readers can think of as the “dosage” of the summer school treatment. Student gains were broadly in line with what might have been expected given prior research and the amount of added instructional time students actually received. But this points to a clear lesson for policymakers: students who fell behind during the pandemic will need much more support to catch up.

About the study

To measure the “dosage” of summer programs, we looked at participation, attendance, and the number of instructional hours students actually received in the program. For the purposes of this study, we focused on programs that offered at least some formal academic support in math and/or ELA, either alone or in conjunction with other enrichment activities. That means we excluded programs that focused exclusively on enrichment activities. While our study was focused on academic outcomes, it would be useful for future research to study the effects these programs have on non-academic outcomes.

The districts in our sample offered summer programs that were between 15 and 20 days long. They offered anywhere from 45 minutes up to two hours of daily academic instructional time in math and reading. All told, total academic instructional time ranged from 23 to 67 hours across the programs.

In the districts where we could observe attendance, the overall participation rates varied substantially, ranging from 5 to 23 percent. Across our full sample, about one out of eight eligible students (13 percent) enrolled in an academic summer program.

Students who participated in summer programs tended to score substantially lower on the MAP® Growth™ interim assessment than students who did not. Participation rates were also higher for historically underserved student subgroups, including students receiving special education services, English language learners, economically disadvantaged students, and Black and Hispanic students.

The proportion of days students actually attended also varied across the districts, from 58 to 80 percent, with an average of 69 percent. Across the districts, this translates to students attending about 10 to 14 days of summer school. Accounting for attendance and 60 to 120 minutes of instruction per subject, students received approximately 14 to 27 hours of additional instructional time per subject.
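The dosage arithmetic above is simple: hours of added instruction equal days actually attended times daily instructional minutes, divided by 60. A quick sketch (the specific day/minute pairings below are hypothetical; each district’s actual combination of attendance and daily minutes differs, which is why the reported range is roughly 14 to 27 hours rather than a simple pairing of minimums with minimums and maximums with maximums):

```python
def added_instructional_hours(days_attended: float, minutes_per_day: float) -> float:
    """Convert attendance and daily instructional minutes into total added hours."""
    return days_attended * minutes_per_day / 60.0

# Hypothetical district profiles within the ranges reported in the study:
# 14 days attended at 60 minutes/day per subject -> 14 hours
print(added_instructional_hours(14, 60))
# 13.5 days attended at 120 minutes/day per subject -> 27 hours
print(added_instructional_hours(13.5, 120))
```

Framing dosage this way makes it easy to see why attendance matters as much as program length: a 20-day program attended only half the time delivers the same dose as a 10-day program with perfect attendance.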

Summer programs were effective in math, but not reading

Students who attended summer programs tended to make small but statistically significant improvements in math achievement, but they made no statistically significant gains on reading tests relative to similar peers who did not attend. Our results are important not only for adding evidence on the use of summer programs to improve student learning but also for clarifying whether these programs can make noticeable headway in addressing COVID-19 learning loss.

The positive effects we observe in math are not far off from what would have been expected from research on pre-pandemic estimates of the impact of summer school attendance. These estimates also account for the number of hours of instruction students actually received.

Our district results came out remarkably close to what might have been expected. The chart below (Figure 1 in our paper) compares the gains on math test scores that might have been expected given prior research (the red diamonds) versus the actual gains (in gray). While the estimates across districts vary, outside of Districts 7 and 8, all are positive for math test scores and quite close to the estimated effects.

Figure 1. A graph shows positive math test scores in most of the districts studied. The scores are close to the estimated scores for six of the eight districts studied.

In other words, students tended to benefit from the added instructional time in math. But the gains were relatively small, and that’s almost entirely a function of the “dosage” of the summer programs. Without giving students more added instructional time, we shouldn’t expect them to make much larger gains absent some dramatic improvements in the quality of instruction they receive.

The results were less positive in reading, where the overall effects were indistinguishable from zero. We speculate that one reason reading gains may be harder to achieve than math gains for summer school participants relative to non-participants is that non-participants may also practice reading over the summer but may be less likely to practice math. This explanation would align with evidence showing larger effects of school inputs on math achievement than on reading.

The road to recovery remains long

Based on the latest NWEA research, students nationally remain far behind where their peers were before the pandemic. The average eighth-grade student, for example, was the equivalent of nine months behind in math and seven months behind in reading at the end of the 2022–23 school year.

There are some potential kernels of good news in this new report. We found that adding additional days of programming (with the same or more instructional time) resulted in additional, proportional gains for students, at least in math. The gains were also consistent across different student subgroups, suggesting that, when summer program space is limited, increasing the targeted recruitment and attendance of students who would most benefit may be an effective strategy for boosting achievement among students with the greatest academic needs. This is a particularly important finding given prior research showing that students with disabilities and English learners tend to suffer larger academic losses during the summer than their peers. Expanding summer school offerings may help address these gaps.

Notably, one of our sample districts highlights a potential path forward to boost participation rates in summer programs. It offered extended operating hours, provided childcare for working parents, and framed its summer programs as “summer camp”—an exciting learning and enrichment program—as opposed to “summer school.” Although the evidence is not definitive, the district that structured its summer programs this way had the highest participation rate in our sample.

That said, as we conclude in the paper, our findings underscore the need for a continued commitment from policymakers at all levels to deliver recovery interventions at the scale and intensity needed to address the pandemic’s academic impact. Failing to do so will have dire consequences for students and for our wider society.

Assessment security: Who—or what—are we really protecting?
https://www.nwea.org/blog/2023/assessment-security-who-or-what-are-we-really-protecting/
Thu, 10 Aug 2023

There are many misconceptions blocking innovation in state summative assessments. One is security. Existing policies related to assessment security may be doing more harm than good.

An antiquated system

Testing programs use a variety of security protocols and procedures primarily to reduce cheating and protect test content. While the purpose of security practices is to ensure the results are reliable, valid, and trustworthy, many common testing protocols were designed under twentieth-century testing conditions—that is, for paper-and-pencil testing, and in the context of the No Child Left Behind era. We have been applying those protocols to computer-based assessments.

Old security protocols often required intensive management in the classroom and throughout school and district offices. For example, it was common for test proctors to take posters and other items off classroom walls during state testing. They also enacted strict rules around what kids could and couldn’t do; those who finished early or weren’t testing often had to sit quietly and do very little while their peers completed their test. Test materials had to be painstakingly accounted for at all times and kept under lock and key when not actively being used for testing, and teachers and administrators were required to sign affidavits affirming secure materials handling and appropriate test administration practices.

Security protocols were upheld by the belief that as long as the protocols were followed, the integrity of the test and results would remain intact.

Assessment security gone too far

The transition to online testing has offered some inherent security improvements. There are no physical test booklets to photocopy now, access to test content and student answers is restricted through online assignment and permissions, and it can be more difficult for students to see each other’s work as they are focused on their own screens. Also, with adaptive testing, individual students see completely different items.

Technology now also offers the possibility of rapid and expansive cheating detection capabilities, but this has progressed to ever more invasive methods of ensuring security at the expense of student privacy. Yet content still leaks, and preventing those intent on cheating carries an increasing cost.

It’s time to revisit assessment security

Accountability testing options have become more flexible under the Every Student Succeeds Act, and educators are demanding assessments that work for them, creating evolution in the industry. As states rethink their assessment programs, it’s clear security protocols also need reevaluating.

We’ve seen evidence that change is possible. At the height of the pandemic, for example, the College Board allowed AP testing to take place at home, unproctored and using an open-book approach. While we would not recommend states follow that exact approach with their summative assessments, it shows we can think creatively about how to balance assessment security protocols with student and teacher needs.

One place states can start is by taking stock of their assessment model. A through-year model, for example, can create multiple data points, reducing end-of-year pressure on a single measure of student achievement. Computer adaptive tests are also less likely to lead to cheating since students see different test items.

It’s also important for states and assessment developers to ensure tests provide valuable information for students and teachers. Providing helpful information that supports teaching and learning in robust ways is perhaps the best way to prevent cheating and promote more authentic, engaging, and meaningful assessment practice.

As states look for ways to create more rigorous and meaningful state summative assessments, it’s well past time to reevaluate assessment security. The protocols currently in place tend to burden students and teachers and do very little to make tests better or even more secure.

What are your ideas on how we can shift state assessment security protocols to align with new technology and assessment models? Let us know. We’re @NWEAPolicy on X.

Computer adaptive assessment: A proven approach with limited uptake
https://www.nwea.org/blog/2023/computer-adaptive-assessment-a-proven-approach-with-limited-uptake/
Thu, 03 Aug 2023

As my colleagues have discussed in earlier Teach. Learn. Grow. posts, there are common misconceptions holding state summative assessments back from innovating at scale. I’d like to look at the use of computer adaptive assessment and obstacles to using it for state summative tests. These kinds of tests are widely respected, yet many states still don’t have a computer adaptive summative assessment.

About computer adaptive assessment

Computer adaptive tests aren’t new and have been around since the 1980s, when the first adaptive assessment, the Armed Services Vocational Aptitude Battery (ASVAB), came into use.

Computer adaptive tests tailor the difficulty of test items to student performance as they take the assessment. Questions generally get easier if a student is struggling and harder if a student is excelling.

There are different types of computer adaptive assessment, too. Computer adaptive assessment used for federal accountability can be configured to adapt on and off grade level, providing grade-level performance information by constraining the amount of off-grade level adapting. Federal policy only allows state tests to adapt one grade above the tested level and one grade below, but I don’t see that as a constraint because computer adaptive assessments can adapt their difficulty levels within a grade level, asking deeper and more complex questions as is appropriate.

Due to a large item bank with items covering the whole ability continuum, adaptive tests more accurately measure student ability than fixed-form assessments, in which all students see the same test items. They also tend to be quicker to administer, requiring fewer questions to measure student achievement. Since the questions are tailored toward student levels, they are generally considered more engaging for students.
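The core mechanic described above—pick the most informative item for the current ability estimate, then nudge that estimate based on the response—can be sketched in a few lines. This is a deliberately simplified illustration, not NWEA’s or any operational engine: the item bank, the Rasch-style probability model, and the stochastic-approximation update are all stand-ins for the far more sophisticated psychometrics real programs use.

```python
import math

# Hypothetical item bank: difficulties spaced across the ability continuum (logits).
ITEM_BANK = {f"item_{i}": b / 4.0 for i, b in enumerate(range(-8, 9))}

def p_correct(theta: float, difficulty: float) -> float:
    """Rasch model: probability a student at ability theta answers correctly."""
    return 1.0 / (1.0 + math.exp(difficulty - theta))

def next_item(theta: float, administered: set) -> str:
    """Pick the unseen item whose difficulty is closest to the current
    ability estimate -- roughly where the item is most informative."""
    candidates = [i for i in ITEM_BANK if i not in administered]
    return min(candidates, key=lambda i: abs(ITEM_BANK[i] - theta))

def update(theta: float, difficulty: float, correct: bool, step: float = 0.5) -> float:
    """Simplified update: nudge the estimate up after a correct answer,
    down after an incorrect one, scaled by how surprising the response was."""
    residual = (1.0 if correct else 0.0) - p_correct(theta, difficulty)
    return theta + step * residual

# Simulate a short adaptive test for one student.
theta, seen = 0.0, set()
for response in [True, True, False, True, False]:
    item = next_item(theta, seen)
    seen.add(item)
    theta = update(theta, ITEM_BANK[item], response)
print(f"final ability estimate: {theta:+.2f} logits")
```

The sketch shows why adaptive tests converge quickly: every item lands near the student’s current estimate, so each response carries close to maximal information, which is why fewer items suffice compared with a fixed form.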

The resistance to computer adaptive state tests

Obstacles holding states back from shifting to computer adaptive assessment generally include misguided fears about the degree to which these tests provide comparable results between students, since kids aren’t seeing the same questions. In practice, the test blueprint ensures the items selected by the adaptive engine sufficiently cover the reporting categories for all students. Fixed-form assessments, moreover, normally have multiple test forms with different questions anyway, to address concerns about cheating.

Computer adaptive assessment does require a large item bank to provide students with the right item based on their previous responses. It is costly for states to build up an item bank for computer adaptive summative tests due to the amount of time and number of reviews it takes to develop a high-quality item, but there are ways to economize. States can pool resources or have educators in a state review and approve test items from another state’s test, rather than creating entirely new ones. There is also potential, with more research, for automated item generation to help create items, with teachers in the state reviewing all items.

A worthwhile change

If a state is looking to improve its assessment system, it should consider taking a close look at computer adaptive assessment, which has advantages over traditional, fixed-form assessments: fewer items, which leads to shorter tests, and more precise information on student knowledge. Many of the perceived barriers to adopting computer adaptive assessments can be addressed fairly easily.

What are your ideas on how we can use computer adaptive assessments to improve statewide assessments? Let us know. We’re @NWEAPolicy on Twitter.

Student proficiency: The “by” vs. “at” year’s end debate
https://www.nwea.org/blog/2023/student-proficiency-the-by-vs-at-years-end-debate/
Tue, 25 Jul 2023

As I mentioned in “Misconceptions preventing innovation and improvement in state assessments,” there’s more we can do to better meet the needs of students, educators, teachers, and policymakers. I’d like to look at key questions surrounding emerging through-year assessments, how and when to come up with a summative score, and how we evaluate student proficiency.

About a dozen states have developed or are developing through-year assessment models in which tests are administered at multiple times during the school year, instead of just once in the spring. This can lead to richer and more actionable data related to student progress, shorter assessments, and even fewer assessments administered overall.

But there is debate over how to handle the summative results and whether students still need to be tested at the end of the year if, along the way, they meet the expectations for proficiency on an early administration of the assessment. Should states consider when a student shows what they know and can do by the end of the year or only at the end of the year? This distinction affects the options available—and the decisions made—for state testing systems.

The advantages of looking at data by year’s end

There are several advantages if states look to ensure students are proficient by the end of the year. Students who show proficiency early could:

  • Advance to other topics deeply within and even off grade level, potentially leading to more growth for advanced students
  • Sit out later test administrations
  • Take other types of assessments that can provide additional information about student learning
  • Complete a “check-in” at the end of the school year to ensure continued progress

If states are locked into a model of measuring on-grade proficiency at the end of the year, all kids need to take spring assessments, regardless of their earlier performance. This is the status quo because we assume students may forget material and not maintain an earlier level of performance. Yet we also assume that a student’s spring performance remains valid the following fall, even though ample research shows students experience summer learning loss, and we don’t require students to retest in the fall just in case.

Where we can see “by the end of the year” in action

Because of the current rhetoric around accepting only springtime performance, most states leveraging through-year models count only the spring administration for accountability purposes, with two exceptions. In Louisiana, all three test administrations are used to inform a student’s final summative scores. Six other states leverage one of the earliest through-year assessment designs, the Dynamic Learning Maps (DLM). DLM instructionally embedded (IE) assessments combine results from assessments administered in fall and spring to produce a summative student score. This system has passed the assessment portion of peer review for its ELA and math assessments.

Outside the traditional accountability assessments, competency-based education has long valued student proficiency by the end of the year. In a competency-based education system, students demonstrate proficiency based on when they are ready to show mastery, and assessments are meant to provide timely information to inform a student’s learning along the way. Nearly every state has policy supporting competency-based education, and more districts and schools are implementing competency-based education practices. An assessment model that prioritizes proficiency by, rather than at, the end of the year would also help support states, districts, and schools implementing these practices.

Policy must lead

Changing assessment systems that currently prioritize student proficiency at the end of the school year is, ultimately, a policy decision. There are many defensible measurement models to leverage earlier performance. State leaders should work in partnership with their educators, school leaders, and community members to consider how policy decisions might impact their current accountability models, including how growth might be considered, testing logistics, and more.

We believe states should choose what works best for them, while keeping at the forefront an assessment that is reliable, is valid, and holds students to a high standard. We’re concerned, however, that debate over the two approaches and whether they are equally worthwhile is slowing the adoption of innovative through-year models and encouraging states to only produce assessments with traditional, end-of-year summative scores.

What are your thoughts? Do you have ideas on how we can better design assessments to allow students to show proficiency throughout the school year? Let us know. We’re @NWEAPolicy on Twitter.

COVID-19 impacts: New data shows older students’ recovery needs attention
https://www.nwea.org/blog/2023/covid-19-impacts-new-data-shows-older-students-recovery-needs-attention/
Tue, 11 Jul 2023

As our nation’s educators remain hard at work trying to move the needle for students who have not yet rebounded from COVID-19 impacts on schooling, we’re hit with difficult news: older students’ pandemic recovery has stalled.

Last year, NWEA researchers reported encouraging signs that the nation’s education system was bouncing back; students showed gains on MAP® Growth™ in reading and math at a rate that was comparable to pre-pandemic times. This year, though, new research shows that while students are still learning and growing, their rate of academic growth has not been as fast as before the start of COVID-19.

Schools have been working hard to address students’ needs with high-dose tutoring, enrichment classes, and after-school learning opportunities. Given the devastating impact of the pandemic on students and families, we knew recovery was going to take time, though we had hoped for continued progress.

Schools, with the support of federal and state policy leaders, will continue their hard work and their commitment to helping students catch up after a difficult three years.

Students are learning, but at a slower pace

The federal pandemic emergency declaration is over, but COVID-19 impacts on students’ reading and mathematics achievement persist.

NWEA researchers Karyn Lewis and Megan Kuhfeld recently analyzed the MAP Growth scores of 6.7 million US students in grades 3–8 in about 22,000 public schools. They compared the achievement and growth of students from the beginning of the 2020–21 school year to the end of the 2022–23 school year to the achievement and growth of a similar group of 11 million students who tested in the 2016–17, 2017–18, and 2018–19 school years. The big-picture finding is that in nearly all grades, the achievement gains during 2022–23 were less than in the pre-pandemic period.

The good news is that the growth rate trends in 2022–23 for the youngest students exceeded or mirrored the trends of the pre-pandemic cohorts. In reading, third graders’ learning gains exceeded typical growth by 4%, and fourth-grade growth rates slipped slightly by 1%. In mathematics, third graders’ growth rates exceeded typical rates by 2%, while fourth graders’ gains lagged pre-pandemic growth trends by 7%. Unfortunately, middle school achievement gains lagged furthest behind, falling short of pre-pandemic averages by 16% to 19% in reading and by 6% to 10% in mathematics.

Lewis and Kuhfeld estimate that the kids who were eighth-graders in the 2022–23 school year will need an extra 9.1 months of learning in mathematics and 7.4 months of learning in reading to catch up to pre-pandemic achievement levels. These students are entering high school needing to accomplish almost five years’ worth of learning in the four years before they graduate.
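The “almost five years” figure follows from simple arithmetic. Here’s a rough check, assuming an instructional year of about nine months (our assumption; the brief doesn’t state its conversion):

```python
# Rough check of the "almost five years of learning in four years" claim.
# Assumes a ~9-month instructional year (our assumption, not from the brief).
MONTHS_PER_SCHOOL_YEAR = 9
extra_math_months = 9.1  # extra learning eighth-graders need in mathematics

# Four typical years of high school learning plus the catch-up gap:
total_months = 4 * MONTHS_PER_SCHOOL_YEAR + extra_math_months
years_of_learning = total_months / MONTHS_PER_SCHOOL_YEAR
print(round(years_of_learning, 1))  # 5.0 — about five school years' worth of learning
```

In other words, these students must compress roughly five school years of learning into four.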

Seventh-graders will need an estimated 5.9 months of school to recover learning losses in both subjects, and sixth-graders will require four additional months in reading and 3.5 months in mathematics. Even with growth rates in 2022–23 that were more consistent with pre-pandemic trends, third- and fourth-graders will still need between 1.6 and 2.6 months of extra instruction.

The research also found that marginalized students’ pandemic recovery has been slow, especially in middle school. In mathematics, Black and Hispanic middle schoolers will need more additional months of schooling (6.2 and 6.4, respectively) than Asian and white students (4.3 and 5.3, respectively). In reading, white, Black, and Hispanic students need more additional time (4.9, 4.9, and 6.7 months, respectively) than Asian students (1.4 months). The disparities are not as stark at the elementary school level.

Where do we go from here?

This new research on COVID-19 impacts is surprising and disheartening. Since the beginning of COVID-19, districts and schools have confronted staffing challenges, intervention implementation delays, and political debates. Given this context, recovery efforts have been understandably slow to start and scale. Schools are working hard to aid students’ academic recovery, but addressing the gaps from the findings detailed above will take sizable and sustained resources and efforts in the coming years, from all levels of the education ecosystem.

The federal government has invested billions of dollars that have been essential in addressing recovery efforts. When those vital funds expire, schools will need continued investments to intensify and sustain consistent momentum toward academic recovery. Federal policymakers must do more to ensure those funds continue and districts have the resources to continue to scale interventions and programs that support student recovery. At the state and district levels, here are three considerations for policymakers and education leaders as they refine or jump-start new recovery efforts.

1. Use local data to guide recovery and invest in what works

Data exists in many forms. It comes from assessment results at the school and district levels. Data is also available on key factors, such as enrollment, attendance, and mental health. It’s essential that all data is timely and provides insight into how to improve classroom practice and address students’ specific in-school and out-of-school needs.

States and districts can set up processes and tools that give schools the capacity to gather data and track the implementation of interventions. This information can help them understand how interventions are improving student outcomes and how implementation can shift to better support student recovery. Gathering data on student learning and recovery will allow states, districts, and schools to maintain the most effective interventions and provide the necessary resources to do so until students are fully back on track.

2. Expand instructional time by deploying evidence-based interventions and programs to the students who still need additional support

Interventions and programs must be scaled to the size of the challenge, and students in need of additional support may require multiple interventions to fully recover from COVID-19 impacts. Gaps for older students as documented in our research—and replicated elsewhere by other researchers using different assessments—will require a significant suite of interventions that match the magnitude of the crisis.

State networks have identified how to improve student learning in after-school programs. The RAND Corporation has identified the characteristics of successful summer learning programs. These programs can create engaging and enriching learning opportunities in out-of-school time to put students on the path to recovery.

Last school year, we saw that many districts faced challenges when implementing interventions. Some of the challenges districts faced included:

  • A disconnect between district identification of students in need of interventions and which students actually received interventions
  • Hiring enough staff to provide the interventions
  • Finding the time or space to deliver interventions

State and district leaders can work with schools to develop policies and practices that ensure schools are able to implement interventions efficiently and effectively, such as by providing data coaching to schools, developing a menu of interventions, and building intervention programs for long-term sustainability.

3. Communicate the importance of academic recovery, sharing timely and relevant information with families

Educators see COVID-19 impacts in their classrooms every day, but families often don’t understand the magnitude of the pandemic’s influence on students’ academic progress. A recent Pew poll found that over half of families believed the pandemic had only a temporary effect on a child’s education. Closing that “perception gap” between schools and families would aid district recovery efforts that in too many places are falling short of their goals in terms of reach, intensity, and impact.

Additionally, states, districts, and schools can provide families with timely information about their child’s progress and achievement compared to grade-level standards. They should also share resources and tools families can use to support learning recovery at home.

Continue to focus on academic recovery

Data from the 2022–23 school year is concerning and shows that COVID-19 impacts may be longer lasting than expected. We must work quickly to address the gaps that exist for students. Today’s students are our future workforce, and if current recovery trends continue, some kids may not recover before they end their public school education.

The task ahead appears to be daunting, but we’re confident that our nation’s schools have the tools, know-how, and commitment to overcome the pandemic’s impact on student achievement. But they need significant and sustained support from state and federal policymakers to ensure our nation’s students can make a full academic recovery.

The post COVID-19 impacts: New data shows older students’ recovery needs attention appeared first on Teach. Learn. Grow..

Assessment subscores: Why we have them and what they can—and can’t—do https://www.nwea.org/blog/2023/assessment-subscores-why-we-have-them-and-what-they-can-and-cant-do/ https://www.nwea.org/blog/2023/assessment-subscores-why-we-have-them-and-what-they-can-and-cant-do/#respond Thu, 06 Jul 2023 12:00:00 +0000 https://www.nwea.org/blog/?p=20169 Many common misconceptions and barriers are holding states back from innovating and improving summative assessments, including the value of subscores. Assessment subscores, as they’re used right now, […]

Many common misconceptions and barriers are holding states back from innovating and improving summative assessments, including the value of subscores. Assessment subscores, as they’re used right now, prevent states from adopting smarter, faster testing systems, but there are alternative visions for helping families and educators diagnose student learning needs.

Why we rely on subscores in the first place

The Every Student Succeeds Act (ESSA) requires states to provide diagnostic information for individual students. The intention behind providing such information makes sense; it’s important to understand how students are doing in nuanced ways and to evaluate how well instructional practices and programs in schools are working. Diagnostic information should, by definition, reveal the cause or nature of a problem.

Assessment subscores are currently the primary approach states use to provide diagnostic information intended to inform how well students are doing in specific areas of learning, like algebraic thinking in math and informational reading in English language arts. There are serious drawbacks to how the ESSA policy is currently implemented, and there are challenges to ensuring the policy has a stronger impact.

The trouble with looking only at subscores

The problem with limiting diagnostic information to assessment subscores is that the number of items needed to provide reliable information about student subdomain knowledge is far greater than what is typically or reasonably included on one test, at least when using traditional psychometric methods.

Assessments used for accountability are currently developed so that a score from one student or point in time can be compared to another. One way assessments have historically ensured those comparisons are accurate is by following a similar structure. As with a house, this structure is sketched out in a blueprint.

A test blueprint ensures each test has a similar structure by defining approximately how many items will be on the test in total (think of this as the size of the house) and about how many of those items will measure certain areas. In this metaphor, those areas, and the subscores that report on them, are the rooms in the house.

Summative tests simply aren’t always long enough to really provide useful diagnostic information from subscores.

Say we are building a mathematics test that measures the overall domain of mathematics. The blueprint will tell us what will be included in that test, such as questions on numbers and operations, fractions and decimals, algebraic operations, geometry, and data. The number of subdomains and how many questions address each determine the overall size of the test. What’s missing, however, is a more detailed view. We know how big our house is and how many rooms are in it, but not how many doors and windows there are, for example.

Building assessments like these that have comparability—at least in terms of face validity through common blueprints—has been the tradition. Naturally, there is a desire to interpret student performance in the subdomains for more diagnostic information. The challenge, then, is getting sufficient information from each domain without making tests longer. Yes, the more we observe how a student is doing, the better we know what they do or don’t know; yet most assessments using traditional methods require only five or six items per subdomain. While such a small number is appropriate to ensure a balanced representation of targeted subdomains for comparability, no one would argue that a five-item test would be reliable or valid enough to provide a score or inform important decisions. Summative tests simply aren’t always long enough to really provide useful diagnostic information from subscores.
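The intuition that five or six items are too few can be made concrete with the classical Spearman-Brown prophecy formula, which predicts how reliability changes with test length. The figures below are purely illustrative, not drawn from any particular state test:

```python
def spearman_brown(full_reliability: float, length_ratio: float) -> float:
    """Predicted reliability of a test shortened (or lengthened) by length_ratio,
    per the classical Spearman-Brown prophecy formula."""
    r, k = full_reliability, length_ratio
    return (k * r) / (1 + (k - 1) * r)

# Hypothetical 40-item math test with overall reliability 0.90,
# reduced to a 5-item subdomain subscore:
subscore_reliability = spearman_brown(0.90, 5 / 40)
print(round(subscore_reliability, 2))  # 0.53 — far below the full test's 0.90
```

On these illustrative assumptions, a five-item slice of an otherwise strong test yields a subscore far too noisy to support important decisions, which is the core of the argument above.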

How to get a more complete picture

ESSA doesn’t require diagnostic information to come solely from assessment subscores. In fact, many in the field have rightly warned against trying to eke instructionally useful information out of subscores.

Rather than simply adding more questions and making tests overly long, states have other options:

  • Include other kinds of data sources, including performance-based assessments, portfolios, and teacher-provided student evaluations. Those, of course, would require supportive and extensive professional development and process standardizations, at minimum.
  • Try adaptive assessment, like the state summative assessments in Nebraska and Alaska. Their first priority in adapting is to ensure each student receives a test aligned to an overall blueprint for comparability and to provide a defensible overall score. The assessments then also find out more about student knowledge in subdomains and produce a more reliable subscore. By using a constraint-based assessment engine, the tests have the potential to fully personalize a student’s assessment experience by adapting even more diagnostically.
  • Extract more information from assessment items. Items are developed to determine what a student knows, but they also include valuable information we can use to infer what a student doesn’t know. Multiple-choice-item distractors are developed to model common mistakes, misunderstandings, and misconceptions, for example. Extended-response rubrics also highlight what a student doesn’t know in the lower score levels of a rubric.
  • Review and calibrate items to each state’s detailed achievement level descriptors, or ALDs. Teachers can see how a student’s overall score relates to detailed ALDs and explore what’s expected for getting to the next level or what concepts they need to review in prior levels, even across grades. Teachers can also see the standard and achievement level for each item each student received. This level of diagnostic information allows teachers to look at the data through the lens of what students know based on standards, achievement expectations, and what teachers have taught.

Change is possible

Providing meaningful information about how students are doing on state assessments is an important goal. If we truly want to make progress in this area, it’s vital we look at current policies and practices related to assessment subscores, consider advances in item development and assessment design, and even leverage information outside a singular test event. We believe we can do better.

What are your ideas on how we can improve diagnostic information from assessments? Let us know. We’re @NWEAPolicy on Twitter.

Thomas Christie contributed to this post. He is the senior director of learning and assessment engineering at NWEA, and his work focuses on maximizing the usefulness of educational data for students and teachers in the classroom.

The post Assessment subscores: Why we have them and what they can—and can’t—do appeared first on Teach. Learn. Grow..

Misconceptions preventing innovation and improvement in state assessments  https://www.nwea.org/blog/2023/misconceptions-preventing-innovation-and-improvement-in-state-assessments/ https://www.nwea.org/blog/2023/misconceptions-preventing-innovation-and-improvement-in-state-assessments/#respond Tue, 27 Jun 2023 12:00:00 +0000 https://www.nwea.org/blog/?p=20143 The purpose of state assessment systems has been hotly debated over the past 20 years. State assessments are designed to ensure every child has access to an […]

The purpose of state assessment systems has been hotly debated over the past 20 years. State assessments are designed to ensure every child has access to an equitable and excellent education, but are they meant solely for transparency and accountability purposes? Or can they be more than that, by providing timely, actionable information to families and educators?

We think the answer is all of the above. But let’s be honest: state assessment systems are not fully serving the needs of families and educators today. To do that, state assessment systems will need to be reimagined and redesigned with students, families, educators, and policymakers in mind. Bold progress is possible, but first we’ll need to address some misconceptions getting in the way of assessment innovation and improvement.

The misconceptions about state assessments

States administer summative assessments annually in reading and math in grades 3–8, once in high school and once in each grade span (grades 3–5, 6–9, and 10–12) in science. Most are deeply anchored in traditions of assessment design and implementation, but it is worth looking at those traditions and examining how well they’re working today.

Over the years, educators, school leaders, and parents have raised concerns about statewide summative assessments, including how long the tests take, the way they can disrupt instruction, the usefulness of the information they yield, and delays in getting results, among other things. States are showing interest in changes aimed at addressing such challenges (e.g., through-year assessments, performance assessments, and competency-based assessments), but creating new statewide assessments isn’t an easy task. Barriers to change have included financial constraints, the perception of rigid federal policies or interpretations of those policies, and a general resistance to change within education.

More specifically, the following key policy and technical issues stand in the way:

  • Subscores. These are meant to provide diagnostic information to students and teachers, but they are not providing sufficient information as is. Alternatives could provide more helpful diagnostic tools with less testing time required.
  • Proficiency determinations “by” vs. “at” the end of the year. Policymakers and assessment experts are debating when students should be deemed proficient in a course or grade. This decision has important implications for testing design and innovation.
  • Computer adaptivity. The technology to adapt assessments based on the test taker’s responses has been around for decades. Adaptivity can provide highly precise, personalized tests, but misconceptions about the barriers to developing adaptive tests have prevented states from shifting away from fixed-form assessment designs.
  • Security. Current test security protocols are based on a model in which all students take the same test at the same time on paper. Updating security protocols based on modern testing systems would give policymakers more space to create new assessments that are secure, equitable, and more meaningful.

These are just some of the issues that warrant attention if we are to ensure state assessments fulfill their purpose: to measure student achievement and evaluate whether schools are serving all children well.

One size does not fit all

State assessments play a vital role in American public education, and we must modernize and improve them. States are making substantial investments in their assessment systems, both financially and in the amount of time it takes to administer them. As such, we believe assessment developers can and should partner with states to develop better assessments. We are actively partnering with several state education leaders as they consider ways to reimagine and improve their assessments.

As with all education policies, there is no single approach to improvement. What’s right for one state may not be what’s needed in another state. States need a range of options to best meet the needs of the communities they serve.

Do you have ideas on how to improve state assessments? Let us know. We’re @NWEAPolicy on Twitter.

The post Misconceptions preventing innovation and improvement in state assessments  appeared first on Teach. Learn. Grow..

State test results must be released more quickly to benefit kids https://www.nwea.org/blog/2023/state-test-results-must-be-released-more-quickly-to-benefit-kids/ https://www.nwea.org/blog/2023/state-test-results-must-be-released-more-quickly-to-benefit-kids/#respond Fri, 12 May 2023 12:00:00 +0000 https://blog-prd.cms-dev.nwea.org/?p=18816 Over the next two months, at least 25 million elementary, middle, and high school students nationwide will sit down for state exams in reading, math, and science. […]

Over the next two months, at least 25 million elementary, middle, and high school students nationwide will sit down for state exams in reading, math, and science. Most caregivers will want to know how their child did. Are they performing on grade level? Did they make a year’s worth of growth since last year?

If the results come fast enough, caregivers could invest in extra learning supports this summer, or even consider a different school for their child for the fall. Teachers could use the summer to review their students’ performance and adjust their lesson plans for next year. Principals could use the results to assign students in need of extra support to their best teachers next year.

All of these actions depend on getting test results back quickly. But if history is any guide, we probably won’t have the results of spring 2023 testing for months.

Just as it did in creating the state testing mandate, Congress needs to step in and require states to send preliminary score reports back to families and teachers very quickly. Such a shift would be good policy at any point, but it’s especially urgent right now. Due to COVID-19 and the long periods of remote learning it necessitated, today’s students are far below where they otherwise would be. Students who needed extra help before the pandemic have fallen even further behind.

Many school district leaders are working to develop interventions to help address these gaps, but students aren’t signing up for the summer-school, after-school, or tutoring programs that are being offered. Caregivers could be receptive audiences and engaged as partners to help boost participation rates, but only if they’re given time and space to do so.

The problem: State test results are too slow

Most state tests are administered in April or May. The graph below shows when states released the results of their 2022 tests to the public. Each dot represents one state and the date they released their results. As a reference point, the chart also includes dotted horizontal lines for August 1 and September 1. Although school start dates vary across and within states, these lines are intended as rough proxies for when the following school year starts.

As the graph shows, in 2022, only five states released their results in June or July, in advance of the new school year. Another 10 released results in August, just before or right around the start of the new year. But that means 35 states and the District of Columbia all released their results in September or later.

Source: 2022 data comes from the author’s scan of state department of education websites.

The chart above shows when states released their official results to the public. A handful of states do share preliminary data with parents and educators in advance of the official public release; however, that’s far from standard practice.

Why aren’t states releasing their results faster? It’s likely not a technical problem. States have been administering annual tests for two decades, and the tests themselves are now routinely taken on computers.

This suggests the delays are mostly a function of political processes. NWEA provides results on MAP® Growth™ interim assessments to schools within 24 hours of a student completing a test. Students who take the ACT, SAT, GRE, and AP exams can all expect their results online within about two weeks. Those turnaround times apply to the easy-to-score multiple-choice components; the makers of the ACT and SAT warn that written items may take another few days to two weeks to score. And unlike state tests, which typically have no stakes attached for individual students, these latter examples are all vetted results that carry high stakes for students.

What’s different is that these tests are sold on the private market and must be responsive to end users. In contrast, states have configured their testing systems more as a compliance exercise in response to the federal testing mandate, at the expense of timely and actionable information to parents and educators.

The feds have not been helpful in this regard. Congress imposed a long list of data points that must be included and disaggregated on school report cards, but it is silent about how fast the results must be relayed to parents, teachers, or the public.

The U.S. Department of Education has layered on its own technical specifications for state tests. Its most recent assessment peer review process focused much more on technical concerns around the test itself than on the timeliness and usability of the results. For example, it included a single question on reporting, asking states to provide evidence of a “timeline that shows results are reported to districts, schools, and teachers in time to allow for the use of the results in planning for the following school year.” The same process, however, evaluated state tests on 29 other “critical elements” related to the test itself and the processes by which tests are administered and monitored. Taken as a whole, the peer review process nudges states to adopt more technically complex tests at the expense of simplicity and speed.

In short, we need a new thumb on the scale to make the state tests timely and actionable for the intended users.

The solution: Require states to release results to caregivers and educators within two weeks

Congress should amend the testing rule to require states to send preliminary results to caregivers and educators within two weeks of a test’s administration. States could take more time to produce vetted results for public accountability purposes, but the preliminary results would provide immediate, actionable information to the people in the best positions to act. Some states already choose to send score reports back to caregivers and educators earlier than the public release, but a quick return of results should be the standard operating practice across the country, and that will only happen with congressional action.

There’s a case to be made that even the vetted school- and district-level results should be released much faster than they are today. School and district leaders could also make much better use of their summer time if they had faster results.

But if nothing else, states should get preliminary results back to caregivers and educators quickly because they can take immediate actions. If caregivers knew their child’s results by the end of May, they might be able to make different decisions for their child for the summer, such as finding a tutor or signing up for summer school. They would also have time to consider alternative schooling options for their child for the following school year.

Processing the results quickly would also provide teachers and school leaders the time to actually look at and reflect on their students’ performance. Given enough lead time, a teacher might be able to change their instructional practices for the following year. A principal might be able to respond to schoolwide challenges, such as problems with early reading skills, or they might consider a student’s test score when making classroom assignments for the following year. District leaders might adopt different curricula—and provide staff the time and training to adapt—or change school-level staffing levels.

Policymakers and advocates have touted all of these use cases for state tests before, but they’re impossible to do well with the current slow pace of the results. A key portion of the theory of action behind state tests depends on processing the results quickly. State administrators need to be nudged to focus on speed and getting test results back to caregivers and educators as quickly as possible.

The post State test results must be released more quickly to benefit kids appeared first on Teach. Learn. Grow..
