My grade 11 physics class is a collection of students from a handful of different education systems and backgrounds, so I wanted to use Lawson’s Classroom Test of Scientific Reasoning (CTSR) to get a sense of the students’ reasoning skills. Higher CTSR scores have also been correlated with improved course performance, so I wanted to see whether I could target reasoning skills with my instruction.
Unfortunately, like most standard assessment tools, the CTSR is a bit wordy. That is intimidating to my students, all of whom speak English as a second, third, or fourth language. To help students understand what the questions are asking, I assembled replica demonstrations of the scenarios described in the CTSR: two balls of clay with equal masses (one pressed into a pancake), two graduated cylinders of different widths demarcated with 1, 2, 3, etc. instead of units of volume, and so forth. Since our school has a strict animal experimentation policy, I had to skip 6 of the questions. I also skipped the last four, since they are so wordy that even most native English speakers don’t fully read them!
- The Lawson CTSR was administered as specified.
- After 30 minutes, the students were instructed to turn over their answer sheet and use a second answer sheet on the back.
- Students were informed that they would write the test again, this time with some visual aids, but given no feedback about their original performance.
- After this explanation, I said as little as possible, merely demonstrating the apparatus.
- After the second application of the test, students were asked whether they thought they had the same answers the second time. 12 out of 14 claimed they did (in reality, only 6 had exactly the same answers).
During the second iteration, 8 of 14 students had at least one answer that was different, including the 7 lowest-scoring students. Of the changed answers, 27 went from wrong to right and only 2 from right to wrong. This impressive record suggests that most changes were made because of an improved understanding of the situation. Among the students whose answers changed, the average increase was 3.375 points per student (out of 20, so about 17%).
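For anyone who wants to check the percentage figure, here is a quick sketch of the arithmetic, using only the aggregate numbers reported above (3.375 points average gain on a test scored out of 20):

```python
# Average score increase among students whose answers changed,
# expressed as a fraction of the 20-point trimmed test.
avg_increase = 3.375   # points per student, from the results above
max_score = 20         # questions remaining after the skipped items

percent_gain = avg_increase / max_score * 100
print(f"{percent_gain:.1f}%")  # ≈ 17%
```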
The results in this graph show a different classroom dynamic once we begin to account for language difficulties. Misdiagnosing language challenges as conceptual misunderstandings can lead to misdirected teaching and frustration on both sides. If you are using the CTSR, the FCI, or any other baseline instrument, be careful about your audience. And for those who create standardized tests, especially for students in international settings (*cough* IB *cough*), it is essential that the gist of each problem can be understood without students getting caught up in the language.