A couple of days ago I posted about my grade 6 class developing a model for scientific investigations, which we subsequently used as a template for our lab notebooks. Now that we have that under control (more or less!), we can start to focus on the process of designing an experiment.
I have opted to focus on four factors that guide experiment design:
1. The independent variable
2. The dependent variable
3. The controlled variables
4. A hypothesis
However, since the students have not yet done experiments in which they test multiple values for a variable, this is a bit of a bootstrapping process. I opted to give the students written descriptions of simple experiments, asking them to identify items 1 through 4 for each. Unfortunately, despite my best efforts to engage them in the task, many did not even attempt to read for comprehension. I will need to follow up on this with our ESL support group.
My grade 11 physics class is a collection of students from a handful of different education systems and backgrounds, so I wanted to use Lawson’s Classroom Test of Scientific Reasoning [CTSR] to get a sense of the students’ reasoning skills. Higher CTSR scores have also been correlated with improved performance in physics courses, so I wanted to see whether targeting reasoning skills in my instruction might be worthwhile.
Unfortunately, like most standard assessment tools, the CTSR is a bit wordy. That is intimidating to my students, all of whom speak English as a second, third, or fourth language. To help students understand what the questions are asking, I assembled replica demonstrations of the scenarios described in the CTSR: two balls of clay with equal masses (one pressed into a pancake), two graduated cylinders of different widths demarcated with 1, 2, 3, etc. instead of units of volume, and so forth. Since our school has a strict animal experimentation policy, I had to skip 6 of the questions. I also skipped the last four, since they are so wordy that even most native English speakers don’t fully read them!
- The Lawson CTSR was administered as specified.
- After 30 minutes, the students were instructed to turn their answer sheets over and continue on the second answer sheet printed on the back.
- Students were informed that they would write the test again, this time with some visual aids, but were given no feedback about their original performance.
- After this explanation, I said as little as possible, merely demonstrating the apparatus.
- After the second administration of the test, students were asked whether they thought they had given the same answers the second time. 12 out of 14 claimed they did (in reality, only 6 had exactly the same answers).
During the second administration, 8 of the 14 students changed at least one answer; this group included the 7 lowest-scoring students. Of the changed answers, 27 went from wrong to right and only 2 went from right to wrong. This impressive record suggests that most changes reflected an improved understanding of the situations rather than random second-guessing. Among the students whose answers changed, scores rose by an average of 3.375 points out of 20 (about 17%).
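In case the bookkeeping is useful to anyone running a similar before-and-after comparison, here is a minimal sketch of how the tallying could work. The data layout (per-student lists of (given answer, correct answer) pairs) and the function names are hypothetical, just for illustration; only the counting logic matters.

```python
def score(answers):
    """Number of correct responses on one administration."""
    return sum(given == correct for given, correct in answers)

def tally_changes(first, second):
    """Compare two administrations of the same test, student by student.

    `first` and `second` map each student to a list of
    (given_answer, correct_answer) pairs, one per question.
    """
    wrong_to_right = right_to_wrong = 0
    gains = []  # score change for each student who changed an answer
    for student in first:
        a1, a2 = first[student], second[student]
        changed = False
        for (g1, c), (g2, _) in zip(a1, a2):
            if g1 == g2:
                continue  # same answer both times
            changed = True
            if g2 == c:
                wrong_to_right += 1
            elif g1 == c:
                right_to_wrong += 1
        if changed:
            gains.append(score(a2) - score(a1))
    avg_gain = sum(gains) / len(gains) if gains else 0.0
    return wrong_to_right, right_to_wrong, len(gains), avg_gain

# Tiny made-up example: one student fixes their first answer.
first = {"s1": [("A", "B"), ("C", "C")]}
second = {"s1": [("B", "B"), ("C", "C")]}
print(tally_changes(first, second))  # -> (1, 0, 1, 1.0)
```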
These results reveal a different classroom dynamic once we begin to account for language difficulties. Misdiagnosing language challenges as conceptual misunderstandings can lead to problems and frustration. If you are using the CTSR, the FCI, or any other baseline instrument, be careful about your audience. And for those who create standardized tests, especially for students in international settings (*cough* IB *cough*), it is essential that the gist of a problem can be understood without students getting tripped up by the language.
The Force Concept Inventory (FCI) is a test designed to assess a student’s ability to think in Newtonian terms. Unfortunately, it is written with a lot of physics jargon, which makes the questions tough for English Language Learners (ELLs) to understand. I wanted to see whether a simplified version of the FCI would give different results than the original, when administered to a group of ELLs.
Here’s the paper.