Playful Assessments: 3 Ways to Inject Fun into Measuring Outcomes

Friday, April 19, 2019

Standardized testing is a given in public schools throughout the United States, especially in the wake of the federal No Child Left Behind law of 2001, which mandated annual reading and math testing for all students in grades three through eight and periodic testing in the same subjects for high school students. The hope was that this would shine a spotlight on inequities in the school system and help ensure that all students became proficient in math and reading by 2014.

However, that didn’t happen. When 2014 came around, barely half of all students were proficient in either reading or math, according to the National Assessment of Educational Progress. There are a host of explanations for this, from the suggestion that the concept of "proficiency" is fluid (and, thus, subject to human manipulation) to the allegation that No Child Left Behind raised the bar for schools but failed to ensure adequate support measures would follow.


Unintended effects

In addition, some No Child Left Behind detractors argued that the testing mandates crowded out other important facets of education. For example, science educator and advocate Francis Eberle spoke up in 2011 about how the emphasis on math and reading tests had a negative effect on science instruction. In an interview for the journal Science, Eberle described teachers being told not to focus on science under No Child Left Behind—a move that he said caused a significant drop in science skills for students.

Although No Child Left Behind was due for reauthorization in 2007, it was not replaced until 2015, when then-President Barack Obama signed the Every Student Succeeds Act into law. The new federal education policy retained many aspects of its predecessor, including the requirement that all students take annual standardized tests in reading and math, but it also opened a new door by allowing more local control over assessments. A 2016 post on the Edutopia website that outlined the changes noted that, “for the first time, states must use more than academic factors in their accountability system.” The main point is this: Annual test scores in reading and math are no longer the only way to measure student progress and growth.


Looking to the future

And now, researchers from MIT are playing around—literally—with the question of whether standardized assessments should exist at all.

In 2018, Emily Tate summed up MIT’s work for the education technology news site EdSurge, arguing that assessments simply have not “evolved at the same pace” as teacher-driven classroom lessons, which are often built around creating “fun, engaging, and formative” experiences for students. Tate went on to note that much of what teachers want to nurture in their students is simply not captured by traditional, multiple-choice standardized tests, creating a “disconnect between what schools value and what they measure.” With this in mind, a group of MIT education researchers has been working on how to implement “playful assessments” that measure outcomes within the dynamic environment that many teachers and students work to uphold.

Here’s what that looks like:
  • Rubrics: Although using rubrics to evaluate learning is nothing new in education, what is new is trying to design rubrics that measure tougher-to-assess indicators of learning, such as creativity and collaboration. To facilitate this, MIT researchers have developed the MetaRubric card game, which asks teachers to participate in and then evaluate a creative project. The goal is to generate discussion and knowledge about the value of rubrics and how to apply them to things such as personalized and project-based learning.

  • Student input: A key element of MIT’s work on assessments involves an emphasis on including both teachers and students in the evaluation process. Self-reflection and peer evaluation are key elements of this for both adult and child participants, along with the use of student-driven projects with agreed-upon outcomes. This places MIT’s work along a continuous thread of efforts to better understand what students should be learning and how best to assess this, as evidenced by a deep dive from Focus on Inquiry, a project of the Galileo Education Network.

  • Embedded assessments: MIT is working in partnership with both the National Science Foundation and a nonprofit partner called MakerEd to put MIT’s budding research around playful assessments to use at two very different schools: a charter school in Virginia and a public K–8 school in California. At the Virginia location, middle-school students engage in “projects that encourage designing, building and tinkering” in a much less traditional environment. Meanwhile, at the California site, students and teachers operate in a more standard setting. As the EdSurge report explained, both schools are working on implementing “embedded assessments” that catch and monitor learning as it is taking place. This can involve using something called “Sparkle Sleuths,” which are notes passed from teacher to student that offer precise feedback, encouragement, etc.

The teachers and researchers featured in the EdSurge reports acknowledged that their embrace of playful assessments amounts to a work in progress, and, it seems, that is precisely the point. Like the learning process itself, the effort to try out alternative assessment strategies means embracing the unknown, along with the idea that progress may come all at once or in tiny steps. Rather than adhering to the idea that annual standardized testing is the best way to indicate student success or failure, these playful assessment pioneers are working to create and monitor dynamic teaching and learning activities that put students at the center of their own progress.


____________________________________

Featured White Paper:

Assessment Competency: How to Obtain the Right Information to Improve Data-Driven Instruction

When assessments are properly administered and integrated into instruction, the resulting data can provide valuable information. To be effective, though, teachers and administrators must first understand the purpose of each assessment, since different assessments yield different kinds of data. Read the white paper by Lexia’s Chief Learning Officer, Dr. Liz Brooke, to learn about the types of assessments and how to create a purpose-driven assessment plan.
