Assessment for personalized learning: A race? An art? Insights from VirtualSC's deep dive into strategic change

by Diana Sharp

VirtualSC, South Carolina's state-sponsored virtual school, already provides the state with important levers for its statewide personalized learning initiative. By offering students access to courses not available in their home brick-and-mortar schools, VirtualSC expands their opportunities to create flexible, personalized learning pathways that better match their career interests. By allowing students to work on their courses outside the school walls, VirtualSC also increases the flexibility of their learning environments. The leadership team at VirtualSC hasn't stopped with these levers, however, and its explorations into personalized assessment have the potential to bring new answers (and new questions) into the dialogue on how virtual schools can transform traditional assessment practices and achieve the twin personalized learning goals of increased learning engagement and increased learning success.

Borrowing from networked improvement communities

In 2017, the Regional Educational Laboratory Southeast (REL Southeast) began working as a thought partner with VirtualSC Director Bradley Mitchell and his leadership team in their efforts to determine which area of personalized learning offered the best opportunities for change within their program. After selecting student assessment as their priority area, VirtualSC and REL Southeast staff planned a deep dive meeting to clarify VirtualSC's assessment challenges and map out next steps. To help accomplish these goals, REL staff borrowed and adapted strategy-planning techniques from Networked Improvement Communities (Bryk, Gomez, Grunow, & LeMahieu, 2015), along with recommendations from other RELs using these techniques (Proger, Bhatt, Cirks, & Gurke, 2017). In January 2019, VirtualSC staff and REL Southeast staff met in Columbia, South Carolina, and, fueled by some homemade peanut butter bars, set to work.

Categories and causes

The goal of the January meeting was to produce a fishbone diagram that would shed light on the categories of VirtualSC's challenges surrounding assessment for personalized learning, along with ideas about the root causes underlying those challenges. Over the course of three hours, VirtualSC staff brainstormed specific challenges and grouped them into six categories, among them:

  • Producing assessments

  • Grading assessments

  • Making decisions based on data

These challenge categories were expanded into problem statements and placed on the fishbone diagram. Next, because of time limitations, participants voted on two categories to explore further. Then, using techniques such as "The Five Whys" (repeatedly asking why a problem occurs in order to trace it back to a root cause), participants expanded the chosen categories (Producing assessments and Grading assessments) into hypothesized root causes to address.

The resulting fishbone diagram is shown below. Discussion around the root causes highlighted a key concept and tension: Flexibility vs. Standardization.

[Figure: VirtualSC's fishbone diagram of assessment challenges and hypothesized root causes]

The challenge of flexibility in personalized assessment

Flexibility has become a hallmark of personalized assessment. Ideally, students should have multiple options for showing what they know, via drawings, models, tests, quizzes, performances, presentations, and projects. Students should also have the option to redo assessments if they initially perform poorly.

However, flexibility brings with it a host of challenges that schools have traditionally avoided through standardization of assessment. As the VirtualSC staff noted, producing the greater number of options called for in an ideal personalized assessment environment automatically increases the workload, compared with producing one assessment for all students, and that workload is not easily shouldered by staff already at capacity with other tasks. In addition, many of the performance-based options called for by personalized learning advocates leave inherently more room for variation in how different teachers might grade them than traditional tests and quizzes do.

In the weeks following the deep dive, as we searched for a metaphor for this flexibility challenge, we thought about similar flexibility challenges in assessment during the Olympic Games. Some people, for example, only like watching Olympic events where performance is strictly standardized, such as races where the clock sets the standard and the fastest person to the finish line wins. Some people even object to including events where the winner is selected by judges' ratings of artistic performance, as in figure skating, gymnastics, or diving. "It's too subjective," they complain.

Similarly, performance-based assessments for learning will always have less standardization around grading, with more flexible interpretations of success, than standard multiple-choice or fill-in-the-blank tests. And yet proponents of personalized learning environments continue to challenge the mindset that learning is a race, arguing instead that learning is more like an art. In the "race" mindset, there is often only one way to determine success. In the "race" mindset, what matters is whether you reach the finish line ahead of everyone else.

In contrast, personalized learning advocates call for more of an "art" mindset. With this mindset, individual judgment will always play a larger role, resulting in a greater variety of ways of defining success. With this mindset, some students will take longer than others to produce their art.

If we want to adopt the "art" mindset of learning, then we need to allow more flexibility in assessments. However, like the judges in Olympic performance-based sports, we can seek strong guidelines. Olympic-level judges of skaters, gymnasts, and divers are highly trained on the guidelines for awarding points and ratings, in an effort to reduce subjectivity as much as possible. Ratings often combine point values awarded for indisputable successes (three flips) or obvious missteps (a fall onto the ice) with rubrics that require some subjectivity in assigning point values. Overcoming the challenges of flexibility within personalized learning assessments will likely require increased attention to developing guidelines and training the judges (teachers), just as sports organizations have devoted great attention to assessment in sports that demand more than a clock.
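To make the analogy concrete, here is a minimal sketch (in Python) of what such a mixed rating might look like: checklist-style points for indisputable elements combined with a weighted, judged rubric rating. The element list, weights, and ratings below are hypothetical illustrations, not an actual VirtualSC or Olympic scoring scheme.

```python
# Hypothetical sketch: a score mixing objective checklist points with a
# judged rubric component, echoing how Olympic judging blends indisputable
# point values with trained, subjective ratings.

from statistics import mean

def score_performance(completed_elements, judge_ratings,
                      points_per_element=2.0, rubric_weight=4.0):
    """Combine objective checklist points with an averaged judged rating.

    completed_elements: list of bools, one per required element
                        (e.g., "cites sources", "includes a model")
    judge_ratings: 0-10 ratings from graders trained on a shared rubric
    """
    objective = points_per_element * sum(completed_elements)  # indisputable points
    subjective = rubric_weight * mean(judge_ratings) / 10     # judged component, rescaled
    return objective + subjective

# A project that completes 4 of 5 required elements and averages 8/10
# across two trained graders scores 8.0 + 3.2 = 11.2 points.
print(score_performance([True, True, True, True, False], [7, 9]))
```

The design point, as with Olympic judging, is that training and shared rubrics shrink the spread of the subjective component; they do not eliminate judgment.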

Next steps: Guidelines on the challenge of flexibility

VirtualSC's insights about their challenges led to the decision that guidelines are needed to address the flexibility challenges of personalized assessment. For example, those charged with producing assessments need guidelines on how many assessment options are feasible, given limitations on teacher and student workloads, along with guidelines on which options are best for particular types of courses. Guidelines are also needed to establish acceptable levels of consistency in grading across teachers for assessments that require higher levels of judgment and interpretation, including guidelines on how to help students understand what they need to do to be judged successful in their learning.
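One way to make "acceptable levels of consistency" operational is to quantify how often two teachers assign the same grade to the same work, beyond chance agreement. The sketch below computes Cohen's kappa, a standard inter-rater agreement statistic; the rubric levels and grade data are hypothetical, not VirtualSC's, and a real guideline would also need to specify sample sizes and thresholds.

```python
# Hypothetical sketch: measuring grading consistency between two teachers
# with Cohen's kappa (agreement corrected for chance).

from collections import Counter

def cohens_kappa(grades_a, grades_b):
    """Return Cohen's kappa for two raters' grades on the same student work."""
    assert len(grades_a) == len(grades_b), "Raters must grade the same items."
    n = len(grades_a)

    # Observed agreement: fraction of items graded identically.
    observed = sum(a == b for a, b in zip(grades_a, grades_b)) / n

    # Agreement expected by chance, from each teacher's grade distribution.
    dist_a, dist_b = Counter(grades_a), Counter(grades_b)
    expected = sum((dist_a[g] / n) * (dist_b[g] / n)
                   for g in set(grades_a) | set(grades_b))

    return (observed - expected) / (1 - expected)

# Hypothetical rubric levels two teachers assigned to ten student projects.
teacher_1 = ["exceeds", "meets", "meets", "developing", "meets",
             "exceeds", "meets", "developing", "meets", "exceeds"]
teacher_2 = ["exceeds", "meets", "developing", "developing", "meets",
             "meets", "meets", "developing", "meets", "exceeds"]

print(f"kappa = {cohens_kappa(teacher_1, teacher_2):.2f}")  # kappa = 0.68
```

In this made-up example the teachers agree on 8 of 10 projects, giving a kappa of about 0.68; a consistency guideline might, for instance, ask teacher pairs to calibrate on shared sample work until their agreement clears an agreed threshold.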

Many other groups are already tackling these challenges, but the emerging nature of the personalized learning movement means that existing guidelines can be difficult to find, and guidelines specific to learning in virtual environments are more difficult still. As a next step in supporting VirtualSC's work, REL Southeast has begun seeking out existing guidelines for personalized assessment in virtual learning programs and connecting VirtualSC with other groups working to develop similar guidelines. If your group is interested in joining and collaborating on this effort, please contact us, and stay tuned for more on our journey (not a race!) with VirtualSC as we jointly and thoughtfully tackle the challenges of assessment in personalized learning.

References

  • Bryk, A. S., Gomez, L. M., Grunow, A., & LeMahieu, P. G. (2015). Learning to improve: How America's schools can get better at getting better. Cambridge, MA: Harvard Education Press.

  • Proger, A. R., Bhatt, M. P., Cirks, V., & Gurke, D. (2017). Establishing and sustaining networked improvement communities: Lessons from Michigan and Minnesota (REL 2017-264). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Midwest. Retrieved from http://ies.ed.gov/ncee/edlabs.

About the Author

Diana Sharp is a Senior Research Associate at RMC Research Corporation and manager of the VirtualSC Partnership for Student Success in Online Learning at the Regional Educational Laboratory Southeast.