# Grouped, Practical Assessments

I’ve been working through some ideas about assessment in high school physics. The goal is to assess students in a way that is more meaningful, more engaging, more effective at analyzing a student’s ability to do real physics tasks, and more likely to result in useful learning experiences. At the same time, with my IB classes, I cannot take my eye off the inevitability of high-stakes standardized exams and the concomitant need to prepare students for these.

I’ve been curious about authentic assessment for a while, but this specific work is inspired by Joss Ives’s work on two-stage collaborative exams and by conversations and collaboration with Kelly O’Shea, with whom I will be presenting a workshop on the topic this summer.

In this post, I will outline and analyze one assessment I have attempted.

First, the students worked in groups to build background by creating some review notes about double-slit interference, which was the topic of this assessment. I encouraged them to “use their resources”, which in most cases meant their notes and the textbook, although a couple also used the internet for translations and definitions. Below are the prompt and a typical response.

Next, I regrouped the students (always groups of 3) and gave them their tools: a laser, a double-slit slide, a ruler, and a tape measure. I told them that their task was to determine the wavelength of the laser. Here, there were a couple of minutes of uncertainty: one group launched into a debate about whether the laser was emitting light, another forgot about their notes from a minute before and tried thinking of ways to measure the laser’s frequency (planning to use the wave equation). Without nudges or hints, however, all the groups converged on the same idea: shine the laser through the double slits onto a distant wall, and measure the various distances. Below is a typical example of their work.
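The underlying relation here is the small-angle double-slit formula, Δy = λL/d, rearranged to solve for the wavelength. As a quick sketch of the calculation the groups were doing (all the numbers below are illustrative, not actual student data):

```python
# Double-slit wavelength determination via the small-angle approximation:
#   fringe spacing  dy = lambda * L / d   =>   lambda = d * dy / L
# where d is the slit separation and L is the distance to the wall.
# The measurements below are made up for illustration.

def wavelength(slit_separation_m, fringe_spacing_m, distance_to_wall_m):
    """Return the laser wavelength in metres (small-angle approximation)."""
    return slit_separation_m * fringe_spacing_m / distance_to_wall_m

# Hypothetical measurements: d = 0.25 mm, fringe spacing = 3.8 mm, L = 1.5 m
lam = wavelength(0.25e-3, 3.8e-3, 1.5)
print(f"{lam * 1e9:.0f} nm")  # prints "633 nm"
```

A result in the 600–700 nm range is what one would expect for a typical red classroom laser, which makes the calculation a nice self-check for the students.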

Finally, I asked the students to reflect on their experience. The first question aims to get them to think about their role in the group. Most of the answers here were descriptive (“I held the laser”), and few tackled the second part meaningfully. The second question aims to get them to reflect on their experimental design. The majority discussed something related to random error and the need for repeated trials.
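Since repeated trials came up so often in the reflections, here is a minimal sketch of what that looks like quantitatively: averaging repeated fringe-spacing measurements and reporting the spread. The values are hypothetical, not taken from student work.

```python
# Reducing random error by repeating a measurement: report the mean
# fringe spacing with its sample standard deviation as the spread.
# The trial values are made up for illustration.
from statistics import mean, stdev

trials_mm = [3.7, 3.9, 3.8, 4.0, 3.6]  # hypothetical repeated measurements

avg = mean(trials_mm)
spread = stdev(trials_mm)
print(f"fringe spacing = {avg:.2f} \u00b1 {spread:.2f} mm")
```

Carrying that ± through to the wavelength is a natural extension, and a good prompt for discussing which of the three measured quantities contributes the most uncertainty.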

The third question is borrowed from Kelly and ultimately inspired by Ilana Horn: different ways of “being smart” or “doing science” in our class. Here are the results:

I like the diversity of approaches that the students used and recognized as valuable. Working systematically seems to have been viewed by most of the students as key in this assessment, which is something I agree with.

Finally, there is the question of whether the students preferred this type of assessment to a traditional test. Overwhelmingly, they preferred this grouped, practical approach. Even the disadvantages they suggested were quite positive. Here are some of the responses:

The educational idea of authentic assessments dates back at least to the 1990s, and of course the theory-vs-practical debate in science education predates the work of the Committee of Ten, which laid the foundation for public education in the USA back in the 1890s. For me, the challenge is finding a way to do meaningful, practical assessment that upholds the rigor of our contemporary courses while also being more engaging and meaningful for students.

If this is of interest to you, then please check out Kelly’s blog post and stay tuned for our workshop this summer. I’d also appreciate hearing any ideas, feedback, or experiences if you have a story to tell.