# Points & Sets: Lab Reasoning

As I pursue my PhD in physics education research, I have found time to explore a number of interesting questions. In this post, I’ll explore an approach to thinking about student reasoning in the introductory physics lab that we decided not to pursue at this time.

### The Physics Measurement Questionnaire

Building on more than 20 years of development, use, and research, the Physics Measurement Questionnaire (PMQ) is a staple in introductory physics lab research. Its home is the research group of Saalih Allie. This 10-question assessment is built around a single experimental scenario.

Students are asked a series of questions that seek to determine the extent to which they agree with a set of principles about how lab-work is done (at least, in the abstract). The principles don’t appear explicitly in the relevant literature, but seem to be:

1. More data is better
2. When collecting data, it is good to get the largest range possible for the independent variable
3. The average represents a set of repeated trials
4. Less spread in the data is better
5. Two sets of measurements agree if the averages are similar compared with the spread in their data

These axioms feel true and, as a teacher, I see value in getting my students to understand fundamentals about the nature of scientific work. However, while they are broadly applicable, the reality of science is that none of these axioms are exactly true. There is the question of whether science has a universal “method” — Paul Feyerabend argues it doesn’t. When I sat down with a distinguished professor to look at the PMQ, the professor could identify the “right” answer, but often didn’t agree with it.

This reminds me of the “Nature of Science” that was brought into the International Baccalaureate (IB) physics curriculum in 2014. It appears in the course guide as a 6-page list of statements about how science works, culminating in a mind-bending summary image.

So maybe attempting to define how science works isn’t a productive approach. Fortunately, the PMQ isn’t just a test of whether students agree with certain axioms of experimental physics.

### Question-Level Analysis

In addition to evaluating students on their agreement with the above axioms, the PMQ also asks students to justify their reasoning. An example question follows:

After “Explain your choice”, the PMQ includes several blank lines for the student response. The instructions suggest that students should spend “between 5 and 10 minutes to answer each question”.

A thorough analysis of student responses is possible using the comprehensive categorization scheme in Trevor Volkwyn’s MSc thesis. Volkwyn (and the PMQ authors) view the PMQ as an assessment that aims to distinguish two types of student reasoning about experimental data collection:

Point-Like Reasoning is that in which a student makes a single measurement and presumes that this single (“point”) measurement represents the true value of the parameter being measured.

Set-Like Reasoning is that in which a student makes multiple measurements and presumes that the set of measurements, taken together, represents the true value of the parameter in question.

Alternatively, we could view the point-like paradigm as that in which students don’t account for measurement uncertainty or random error.

Examples of responses that conform to the point-like reasoning paradigm, for the example above, include:

• Repeating the experiment will give the same result if a very accurate measuring system is used
• Two measurements are enough, because the second one confirms the first measurement
• It is important to practice taking data to get a more accurate measurement

Examples of responses that match the set-like paradigm include:

• Taking multiple measurements is necessary to get an average value
• Multiple measurements allow you to measure the spread of the data

Thus, it is possible to conduct a pre/post analysis on a lab course to see whether students’ reasoning shifts from point-like to set-like over the course. For example, Lewandowski et al. at CU Boulder have done exactly this, and they see small but significant shifts.

### My Data

I subjected a class of 20 students to the PMQ, and coded the responses to two of the prompts using Volkwyn’s scheme. The categorization was fairly straightforward and unambiguous.

For question 5, which asks students which of two sets of data is better (one has larger spread than the other), 17 students provided a set-like response and 3 gave the point-like response.

For question 6, which asks students whether two results agree (they have similar averages and comparatively-large spreads), only 1 student gave a correct set-like response, and the other 19 provided point-like reasoning.

I think this indicates two things:

1. Different types of prompts may have very different response levels. Similar results are found in the literature, such as Lewandowski’s paper, above. This suggests that the set-like reasoning construct is complex, either with multiple steps to mastery or with multiple facets. Thus, it might not make sense to talk about it as a single entity with multiple probes, but rather as a collection of beliefs, skills and understandings.
2. Some of the reasoning on question 6 seemed shallow. This suggests, for me, a bigger take-away message: my students aren’t being provoked to think critically about their data collection and analysis.

Going forward, we’ve decided not to use the PMQ as part of our introductory lab reform efforts. However, by trying it out, I was able to see clearly that our new curriculum will need to include time dedicated to getting students to think about the relevant questions (not axioms) for the experiments they conduct:

1. How much data should I collect?
2. What range of data should I collect?
3. How will I represent the results of my data collection?
4. How will I parameterize the spread in my data, and how will I reduce measurement uncertainties and random error?
5. For different types of data, how can we know if two results agree?

Interestingly, some of Natasha Holmes’ work on introductory labs starts with the 5th question by asking students to develop a modified t-test, and then uses that tool to motivate cycles of reflection in the lab. That’s another approach we’ve agreed doesn’t quite work for us, but that is likewise a huge source of inspiration.
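For reference, the comparison tool at the heart of that approach can be sketched in a few lines. This is a minimal illustration of a modified t-score that divides the difference between two results by their combined uncertainty; the example numbers and the agreement thresholds in the comments are my own illustrative assumptions, not Holmes’ exact prescription.

```python
import math

def t_prime(a, da, b, db):
    """Modified t-score comparing two results a +/- da and b +/- db.

    As a rough illustrative rule of thumb: values near or below 1
    suggest the results agree; values well above ~3 suggest they don't.
    """
    return abs(a - b) / math.sqrt(da ** 2 + db ** 2)

# Two hypothetical measurements of g, in m/s^2:
print(t_prime(9.81, 0.05, 9.73, 0.04))  # ≈ 1.25, borderline agreement
```

The appeal for a lab course is that this single number gives students a concrete prompt for reflection: a large value forces the question of whether the discrepancy is real or whether an uncertainty was underestimated.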

# Urination and Physics

The Times Educational Supplement (TES) is known as a fairly conservative British publication, focusing on policy news, endorsements of the teaching profession, and op-eds by teachers. So it was surprising to see a click-bait headline relating to physics education research: “Taking the pee out of physics: How boys are getting a leg-up“. Unlike many submitted posts, this one is not identified as being written by a blogger, and comments are disabled; we are intended to treat this as real research news.

The crux of the argument is this: we have a gender gap in physics scores on standardized assessments. That gap seems to be most pronounced on tasks involving 2-dimensional motion. One explanation for the discrepancy is that boys have more experience with balls, rockets, cannons, and so forth because of the social conditioning they experience as children. However, the authors note that female students in the “ultra-masculine environment” of a military school show the same gender gap. Thus, they conclude that ball sports and play-acting at war aren’t the explanation. Instead, they propose that boys playfully urinate, and thus gain experience with projectile motion in a way that girls don’t.

There is a lot about the article that is objectionable.

1. This article isn’t based on published scientific work, it doesn’t refer to a submitted manuscript, and the authors don’t have any related publications in the literature. This isn’t an idea that has been vetted by peer review. More importantly, it isn’t a mature scientific idea: the authors have proposed a hypothesis, but haven’t actually carried out the experiment.

It would be easy to test: survey men about their childhood urination habits, and about their proficiency with physics. Maybe throw a tricky physics problem at them, too. But the authors didn’t do this, preferring to write about the idea as if it were too obvious to need verification. This sort of speculative science is troubling, and popularizing ideas that haven’t been vetted empirically has become a recurring problem in physics in recent years. It is particularly bad in the field of physics education research, which is struggling to be recognized as proper science by a dubious physics community.

2. Since the authors didn’t conduct a study, I did. I asked 25 people (THANK YOU!!) to answer four questions: were they sports fans as children, did they playfully urinate as children, and were they good at physics in school? The fourth question asked which angle would optimize the range of a projectile in the real-world case where air friction cannot be neglected. Someone familiar with projectile motion, either experimentally or theoretically, should know that slightly decreasing the angle from 45 degrees (the theoretical optimum) increases the range once air friction is considered.

The results of the survey show that neither urination nor sports were strong predictors for physics ability. The strongest relationship was between sports and success on the physics problem, but this did not reach an adequate level of confidence*. In short, had the authors actually tested their hypothesis, they would have found it incorrect.
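For anyone wanting to run a similar check, here is a minimal sketch of how the association between two yes/no survey answers could be quantified with a phi coefficient on a 2x2 table. The counts below are invented for illustration only; they are not my actual survey data.

```python
import math

def phi(t):
    """Phi coefficient for a 2x2 contingency table: a measure of
    association between two binary variables (0 = none, ±1 = perfect)."""
    a, b = t[0]
    c, d = t[1]
    num = a * d - b * c
    den = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return num / den if den else 0.0

# Hypothetical counts (NOT my survey data): rows = played pee-based
# games as a child (yes/no), cols = solved the physics problem (yes/no).
table = [[6, 6],
         [6, 6]]

print(phi(table))  # 0.0, i.e. no association in this illustrative table
```

With a sample this small, even a moderate phi value would need a significance test before claiming anything, which is exactly the kind of vetting the TES article skipped.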

3. The language used in the article makes it clear that this is click-bait rather than a serious attempt to introduce a new idea. Consider the following lines: “those sparkling arcs of urine”, “pee-based-game-playing”, and “…despite the surface layer of toilet humour, and the implication that physics may be little more than a pissing contest, we’re making a serious point.”

Unfortunately, with phrasing like that, the authors are not.

4. Another point is made by Brett Hall: projectile motion isn’t a topic that occurs at the start of the curriculum, yet the gender gap is apparent from early in the physics course. Likewise, the authors suggest focusing on energy conservation first, rather than projectile motion, but this is something that is already done in many classrooms.

5. Research by Zahra Hazari and others points to socio-cultural factors (identity, home and school support) as the most relevant explanation for why girls opt out of physics. I wouldn’t argue that the gender gap is an understood problem, but the authors present it as wholly unsolved (perhaps to increase the audience’s willingness to accept their unorthodox idea) when it isn’t.

6. [addition 18 September] On further reflection, it is clearer to me that the phrasing and positioning of this idea are damaging and troublesome, in addition to being incorrect click-bait. A phrase like “why don’t young women perform as well in physics?” presupposes that the cause is a deficiency in the women, rather than the sexist culture in which they are raised and on whose assessments they are being found wanting. I hope no teenage girl hears of this incorrect hypothesis, reads this article, or absorbs the various ripples it is making in the news media.

Lastly, a note about ad hominem rebuttals. I think that most men would look at this idea and disagree because of their personal experience. I’ve seen some rejection of this hypothesis because the primary and secondary authors are female. However, there is value in the perspective of an outsider: we do a lot of things unconsciously, and only an external viewer would be able to make connections we might otherwise miss. Dismissing this work about male urination because the authors are female is incorrect.

* The n=24 study I did was enough to show that the urination=physics ability hypothesis cannot be the primary explanation for the gender gap. However, it is possible that there is still a small correlation. As pointed out by Steve Zagieboylo, however, this pathway likely goes boy-sports-physics rather than boy-urination-physics, given the strong social differentiation that boys face. The results from my study suggest this but, since the effect is smaller, I cannot claim to have discovered anything with the small sample I used.

# Studying StudyIB

The wonderful Chris Hamper has been working on a new educational idea over the last year. Housed on StudyIB, the Virtual Tutor is an attempt to recreate the experience of working one-on-one with a tutor as you go through a multi-step physics problem. There is a network that draws in resources and reminders for students, depending on their progress. It’s a good idea and, given the web technologies now available, one whose time has just about come. Here is a video where Chris explains how the Virtual Tutor works.

I introduced the Virtual Tutor to my students as a way to study for their IBDP Physics exams. The response was generally along the lines of “this is interesting”. However, it wasn’t clear whether or not this approach was effective, so my students and I devised a small study to try to answer that question.

First, the students wrote a pre-test on a particular topic. Second, they worked through one of the learning networks on the Virtual Tutor (we did Forces 3). Third, they wrote a post-test on the same topic (but with slightly different questions). The pre- and post-tests each have three questions.

The first question is about something we have practiced extensively, and that they should know how to do: drawing a free-body diagram. The average pre-test score was 2.45 (out of 3), and the post-test average was 0.27 points higher. This corresponds to a small number of students forgetting or misdrawing one of their force vectors. It seems that the Virtual Tutor was an adequate reminder. Below is a sample of pre- and post-test work that shows this.

The second question is about something we have not practiced very much: drawing force diagrams, where the forces are drawn at the place where they originate, rather than applied to a hypothetical center of mass. Here, the Virtual Tutor helped some students (as shown in the pre/post examples below), while two students had a lower score on the post-test for this question. The overall effect was an increase of 0.46 points to 1.37 (out of 3).

The third pre/post question is something more akin to what the students saw on the Virtual Tutor: a standard physics problem where students need to move through several steps, doing mathematics, in order to find a numerical answer. This is the type of question the Virtual Tutor was designed for, and here it was most effective: the average student score increased from 1.00 (out of 4) to 2.82. The below work is typical: a student was able to start the problem, but got “stuck”: the Virtual Tutor reminded or taught him the necessary steps for this type of problem, and he was able to transfer that knowledge and finish the problem.
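For anyone repeating this kind of pre/post comparison with per-student scores, a paired t-statistic is a natural check on whether a gain is more than noise. The scores below are invented for illustration (only class averages appear above), and `paired_t` is a hypothetical helper, not part of any published analysis.

```python
import math

def paired_t(xs, ys):
    """Paired t-statistic on per-student gains (post minus pre)."""
    diffs = [y - x for x, y in zip(xs, ys)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical pre/post scores out of 4 for eight students:
pre = [1, 0, 2, 1, 1, 2, 0, 1]
post = [3, 2, 4, 3, 2, 4, 2, 3]

print(paired_t(pre, post))  # 15.0: a uniformly large gain
```

The paired form matters here: comparing each student with themselves removes the large student-to-student spread that would swamp an unpaired comparison in a class of twenty.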

I will follow up with my students after their mock exams next week, to see if and to what extent they found the Virtual Tutor useful. From this small study, however, a few conclusions emerge:

1. The Virtual Tutor probably does about as well as any sort of studying for reminding students about fundamentals that they already know.
2. The Virtual Tutor isn’t particularly effective as an expository tool. If students need to learn some new ideas or facts, their textbooks, videos, or classroom learning experiences are better (I should add that the rest of the StudyIB site is quite good for this).
3. The Virtual Tutor is effective at reminding students of the difficult, complicated processes involved in solving multi-step problems. As seen on the third question of this study, one session with the virtual tutor was sufficient to get about half the students in this study from a low score to a high score on the problem.

I’m pretty impressed with the Virtual Tutors. If you’re a physics student reviewing for exams, consider giving it a try.

Here’s the (averaged) data:

# Latvian Physics Exam

Today was the pilot administration of Latvia’s new 12th-grade physics exam. Next year, students will be required to write either the physics or the chemistry exam as part of their secondary school leaving requirements (for those students who pursue the academic track of studies). In this post, I will take a close look at the exemplar, which can be found (in Latvian) here.

### Part 1.1 (multiple choice): Assumptions and Memory

The exam opens with a question about the applicability of the accelerated motion model. It’s a promising sign, but all of the answers — a launched javelin, a parachutist in the first minute of fall, a person in an elevator, and a hammer dropped on the moon — are possible, and the question is really about the degree to which the model can be applied to these situations. For example, a javelin has a very narrow cross section, so it would be reasonable to assume it experiences very little drag force. In fact, the aerodynamics of javelins are interesting, and although their motion is non-parabolic, the reasons are not obvious (an example).

The fifth question is identical to one from an IB physics exam from a couple years ago, where the motion of a piston is presumed to be uniquely responsible for the temperature increase of a compressed ideal gas. The twelfth lists four core equations about electricity and asks which explains the need for high voltage in long-distance power lines. These types of questions are frustrating because multiple answers are reasonable, and the students are forced to try to guess what the exam-writer is thinking.

Another type of question that appears is simple recall. Question 6 requires that students remember that isochoric processes have constant volume. The 13th question seems to ask that students remember how the technology of electromagnetic inductive charging works. Question 20 is about the origin of the atoms that make up our bodies.

The third type of common question is about relationships between variables. Students are expected to recall how electric fields depend on the distances from point charges, how photon energy is related to wavelength, and how the energy of photoelectrons is related to the frequency of incident light.

These multiple choice questions are pretty standard, with questions similar to IB Physics and the GRE. The difficulty level is probably appropriate, in that students will show a nice Gaussian distribution of responses. But as a tool for educational assessment, I think these multiple-choice questions have only marginal utility.

### Part 1.2 (short answer): Not For The Meticulous

The second part of the exam consists of 10 short-answer questions. In reality, it is a mix of question types that is sure to frustrate students. There are 6 multiple-choice-style questions where students must choose all the correct answers, 2 ranking tasks, and 2 calculations. Problematically, some of the distractor options on the multiple-choice-style problems could conceivably be relevant. For example, problem 27 shows possible light paths refracting and reflecting around a glass block. A light path that is not visibly refracted by the block is possible, depending on the index of refraction (and the refraction is exaggerated compared with what would be observed in the lab).

The biggest problem with questions like this is that, while they are accessible to very weak students and straightforward for students with a teacher’s level of understanding of physics problems, they will not be very successful at differentiating between a student who has 90% of a robust understanding and a student who can intelligently guess their way to a similar score.

I’m happy that the test-writers are attempting to expand beyond multiple-choice questions, but this section will be hard for the students, and will provide little feedback about students’ ability to use their physics knowledge and skills to meaningfully create knowledge.

### Part 2 (long answer): Physics Problems, With a Dash of Discussion

The long-answer section begins with the ominous note that this section will involve judging pace but, with 135 minutes to solve 5-7 problems, the challenge likely will not be pace so much as sustained concentration.

I want to like these problems: they combine calculations (kinetic and gravitational energies for the first, circular motion for the second) with diagrams and comparison of quantities. This fixes the problem with the first half of the exam, and allows students the opportunity to explain their answers.

However, the structures for the problems seem set up to greatly restrict the students from any sort of individual actions, knowledge construction, or creativity. The first problem requires that students draw the velocity and acceleration vectors for a pendulum — something that is usually recalled, rather than figured out.

The rest of the problems mix standard physics things-that-must-be-remembered with little calculations. There is a clear effort to make the problems meaningful and/or relevant: one concludes by asking students for a conclusion about heat loss in a house, and another is about the laser on a Mars rover. However, the overwhelming sense with these questions is that of standard physics regurgitation: draw a voltage divider, calculate the diffraction angle, identify this range of the EM spectrum, etc.

The final question (at last!) asks that students work with some data, but this is an illusion: after finding the slope from a graph, it’s the same deal as before: find the efficiency of the electric kettle.

By the time you get to the end of these problems, it is clear that they are merely standard physics problems, with just a few “explain why” questions. This is better than the mechanistic rigamarole of the IB physics exams, but not by much. Even here, there is no chance for students to construct their own meaning or use genuine critical thinking, and I suspect that most students will get most of their points by reiterating standard physics solutions to these standard physics problems.

### Implications

There is a suspicion that the new physics and chemistry exams are an attempt by the Ministry of Education to add more rigour to a national curriculum that recently came under fire after Oxford University decided not to recognize the Latvian grade 12 diploma as adequate preparation for undergraduate studies. Thus, this exam will probably give universities a better way to sort students. I’ve heard that, after promising the results wouldn’t be required, many universities are asking to see students’ results from this pilot year of the exam.

However, like all standardized testing of this sort, what we will primarily find is that students who attend better-funded schools, live in Rīga, or come from higher socioeconomic classes will do better on these exams. Additionally, the effects of these exams will be felt backward into the 11th grade and earlier, as physics and chemistry teachers come under increasing pressure to prepare students for the exam, and are thus forced to sacrifice good science teaching in favour of test preparation. The quality of meaningful science education will fall, resulting in weaker critical thinking and scientific reasoning skills across society, while universities increasingly rely on these test scores to make admissions decisions, and disadvantaged students are left behind.

If I’m wrong about some or all of this, or if you have perspective or insight to share, please reply or email me (danny.doucette at gmail).

# Social Justice in Physics

Moses Rifkin does a superb 6-day unit on social justice in his physics class. Here, by arguing against it, a Fox News correspondent makes it clear why social justice is needed.

I wanted to do something similar to Moses, but I had two constraints:

1. Since I teach IB physics, and already don’t get enough contact hours, I couldn’t devote more than a class period to it.
2. Since I teach at an international school in Northern Europe, the social justice issues experienced by my students and in our culture will not necessarily be racial in nature.

Thus, I tried to lift out my favourite parts from Moses’ curriculum, and recontextualize everything to be more universal in nature. Our discussions ended up primarily focusing on sexism, with class, religion and disabilities as other sources of examples and discussion.

We started with some ground rules, directly pilfered from Moses.

Second, I introduced the idea of stereotype threat. Two students had studied this in a psychology class, but had difficulty explaining it. I gave an example (as a North American in Europe, I fear being seen as monolingual, and am disinclined to practice the languages I am struggling to learn, thus learning them less well). The students brainstormed examples in pairs, then shared out. This took about 15 minutes.

Third, I had students randomly select from a list of social groups. They used their computers to quickly find and research two physicists from that social group. In a circle, they shared who they found and I probed with questions like “how did you find this person?”, “how did you choose this physicist?”, “had you heard of this person before today?” and “was it hard to find physicists in this social group?” Our list of social groups (the last two were suggested by students during our discussions):

women, men, heterosexual, homosexual, black, white, young, old, disabled, able-bodied, Christian/Muslim/Jewish, Eastern religion, European/American, not European/American, upper class, lower class

This led fairly naturally to a discussion of why some of these social groups are under-represented among physicists. I asked the students to make hypotheses to explain the under-representation, and then to offer counter-examples for the hypotheses, if they could think of any. Our hypotheses were that the distribution of physicists:

• represents the population
• is determined by the geographical location of universities and research institutions
• is determined by social expectations
• is determined by history/politics

These were all seen to be unsuccessful as complete explanations. Next, we switched directions and looked at the barriers facing people of under-represented social groups. Some good arguments were presented here, including the effect of expensive university tuition, the impact of stereotypes, and the role of religion. I was able to cap off these arguments by labeling these effects as the essence of institutional sexism, racism, classism, ableism, ageism, homophobia, etc.

We finished with the Implicit Association Test about gender and science. I told the students that they need not share their scores, but many were keen to talk about it, so I know that we got a variety of results that approximately conform to what one would expect from a mixed group.

Before we left, I tried to introduce the idea of privilege, and especially of white privilege, but I think this fell flat, like everything does when you’ve got two minutes until lunchtime.

# PE Effect Lab

I tried to use the photoelectric effect as a paradigm lab for our unit on atomic, nuclear, and particle physics. Ultimately, it was a failure. This post will do two things: (a) explain how to create and use apparatus to make this lab work, and (b) analyze why it didn’t work for me.

Two sources need to be acknowledged first. As always, Arons did a fantastic job of explaining why the photoelectric effect is difficult for students, and he also provides a good plan if you wish to introduce it. Secondly, the design of my apparatus was inspired by one designed by Rice, Garver, and Schober.

### Background

The photoelectric effect is a simple idea with devastating consequences. Essentially, light shining on a metal causes electrons to be knocked off the surface of the metal. If two electrodes are placed close together and connected with a metal wire, these electrons cause a current to flow in the wire, which can be detected with a microammeter.

Thus, we should be able to detect the photoelectric effect with just four pieces of equipment: a pair of closely-spaced electrodes, a microammeter, some wire to connect them, and a light source.

The first of these is the toughest: the electrodes should be in a vacuum, so the electrons are not absorbed or scattered by air molecules. The 1P39 vacuum tube is perfect. There are usually a few available as “new old stock” on eBay, at a price of about USD 40 each. I’d suggest getting stands for the tubes, too: the part number is 27E122, and the active sockets are 4 and 8.

A decent multimeter can accurately read down to 0.1 microamperes, so that’s no problem. I put little wire loops on my stands so that we can use wires with alligator clips. Ambient light is enough to produce about half a microamp of current.

I demoed this with the students, and they were able to draw the relevant simple circuit diagram and construct their own simple devices pretty easily. I had them spend about 15 minutes investigating the impacts of light intensity and light colour (frequency) on the current. We didn’t spend much time analyzing this, but the results were clear and straightforward: more light means more current, and different colours seemed to give different currents as well, but in a less obvious way.

### Stopping Potential

Okay great, we’ve got light creating electricity. So what? The clever trick is to apply a (variable) potential difference across the electrodes. If you adjust the potential difference so that a large positive charge is on the “receiving” electrode, the electrons will accelerate to that electrode, and the current should increase to a certain saturation point. This is where all the electrons are reaching the “receiving” electrode.

On the other hand, if you reverse the potential difference and apply a negative charge to the “receiving” electrode, the electrons will be repelled. However, at low reverse potentials, some of the electrons will still have enough kinetic energy to overcome the electrostatic repulsion and reach the “receiving” electrode. Thus, the game is to slowly increase the reverse potential until all the electrons are stopped from reaching the receiving electrode.

At this point, the maximum kinetic energy of the electrons is equal to the electric potential energy associated with the potential difference (ie: KE = q * V). Thus, given the charge of the electron, we can use the stopping potential to calculate the maximum kinetic energy the electrons have when they exit the metal.
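As a quick sanity check of the arithmetic, here is the KE = qV conversion in a couple of lines; the 0.80 V stopping potential is a made-up example value, not a measurement from my apparatus.

```python
# Maximum kinetic energy of the photoelectrons from the stopping
# potential: KE_max = q * V_stop.
E_CHARGE = 1.602e-19  # electron charge, coulombs

def ke_max_joules(v_stop):
    """Kinetic energy (J) corresponding to a stopping potential (V)."""
    return E_CHARGE * v_stop

# A hypothetical stopping potential of 0.80 V:
print(ke_max_joules(0.80))  # ≈ 1.28e-19 J, i.e. 0.80 eV
```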

The range of potentials needed is about 0 to 1.5 V. I used some 10 kOhm potentiometers and 56 kOhm resistors to make potential dividers that would give appropriate voltages from a 9V battery. I like working with 9V batteries, but there’s no reason not to use a AA directly on a potentiometer instead.
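A quick check that these component values give the right range for the divider output:

```python
# Sanity-check the divider: a 9 V battery across a 56 kOhm resistor
# in series with a 10 kOhm potentiometer, with the output taken
# across the potentiometer.
V_BATT = 9.0      # volts
R_FIXED = 56e3    # ohms
R_POT = 10e3      # ohms

# Maximum output, with the wiper at the top of the pot:
v_max = V_BATT * R_POT / (R_POT + R_FIXED)
print(round(v_max, 2))  # 1.36 V: roughly the 0 to 1.5 V range needed
```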

When you add the potential difference, you also need a voltmeter and another pair of wires — things start to get a bit messy on the lab table. This was where my students got caught. Although they were able to digest the idea of a stopping potential (with a bit of prompting, the students put together the idea without my help), actually constructing the circuit on the lab bench proved to be too much.

A few (top) students were able to recall what they knew about voltmeters, and were able to draw appropriate circuit diagrams. Most were not, which means that (a) our study of that topic last year did not have lasting outcomes, and (b) I wasn’t scaffolding the circuit-building well enough.

Worse, though, a design flaw popped up that took me a while to diagnose and repair. If you apply the potential difference to the vacuum tube backwards, and turn the potentiometer, you’ll get a variety of currents. When the potential difference is zero, the current is nearly zero, and when the potential difference is increased, the current increases as well. To the students, this behaviour didn’t seem abnormal. Thus, I had to check with the groups one at a time, check that they’d wired up correctly, and do a bunch of plugging/unplugging if they had it backwards. Murphy’s Law stepped in here, and so I ended up spending a couple minutes fixing all of the apparatuses before we could even begin experimenting.

I ought, thus, to have labelled the tube stands with + and – signs, and asked students to ensure they were connecting things correctly.

By this point, of course, the lab had come off the rails. The experiment was no longer theirs, and after asking the students to sit patiently for about 20 minutes while I troubleshot their circuits, behaviour issues were cropping up too.

### Frequencies

The goal of the lab is to get a graph of electron energy (i.e. 1.6e-19 C times the stopping potential) against the frequency of the light causing the photoelectric effect. This graph has a negative vertical-axis intercept whose magnitude is the work function of the metal (i.e. the energy required to liberate a valence electron), and a slope representing the energy a quantum of light carries per Hz of frequency (i.e. Planck’s constant).
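
To make the analysis concrete, here is a sketch of the straight-line fit. The data points below are synthetic — generated from E = hf − W with assumed values, not taken from the lab — purely to show how the slope and intercept are extracted:

```python
# Synthetic (f, E) data from E = h*f - W, with assumed h and W values,
# standing in for the students' measured points.
h_assumed = 6.63e-34   # J s
W_assumed = 3.0e-19    # J, an assumed work function
freqs = [4.6e14, 5.2e14, 6.1e14, 6.9e14, 7.4e14]       # Hz
energies = [h_assumed * f - W_assumed for f in freqs]  # J

# Ordinary least-squares line: slope ~ Planck's constant,
# intercept ~ minus the work function.
n = len(freqs)
mean_f = sum(freqs) / n
mean_E = sum(energies) / n
slope = sum((f - mean_f) * (E - mean_E) for f, E in zip(freqs, energies)) \
        / sum((f - mean_f) ** 2 for f in freqs)
intercept = mean_E - slope * mean_f

print(slope)      # recovers roughly 6.63e-34 J s
print(intercept)  # recovers roughly -3.0e-19 J
```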

LEDs work quite well for this, since they emit a fairly narrow range of wavelengths. I made some multi-LED sources using surface-mount super-bright LEDs. I first laid down some adhesive copper tape, then super-glued the LEDs in place. Soldering wasn’t too bad, but, problematically, the plastic base started to melt if I wasn’t really quick with the soldering gun. A 6-position rotary switch allowed the students to choose between LEDs, and I used 5V wall-wart adapters for this (my love affair with 9V batteries aside, I really don’t like using batteries in the lab).

Easier is to get a selection of through-hole LEDs and 3V coin-size batteries. You can hold the battery between the leads of the LED and make a complete circuit by holding the LED leads against the battery faces with thumb and forefinger.

I had some blue and near-UV LEDs that my students used in this manner, and they didn’t have much trouble with it. I’d suggest making a small hole in the side of a cardboard box for this approach. The cardboard box covers the vacuum tube, blocking out stray light.

The weakest students had trouble figuring out the frequencies of light from the wavelengths, so I sat down with the worst offender and we worked it out; I asked him to write his answers on the whiteboard at the front.
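
The conversion he needed is just f = c/λ; a minimal sketch (the 470 nm wavelength is a typical blue-LED value, used here only as an example):

```python
SPEED_OF_LIGHT = 2.998e8  # m/s

def frequency_from_wavelength(wavelength_m):
    """Frequency (Hz) of light with the given wavelength (m): f = c / lambda."""
    return SPEED_OF_LIGHT / wavelength_m

print(frequency_from_wavelength(470e-9))  # roughly 6.4e14 Hz for a blue LED
```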

### Putting It Together

We had some trouble with LoggerPro. The students typed in their numbers in the format 1.3*10^(-19), but LoggerPro interpreted those as text labels rather than numbers. Worse, when they tried to correct the problem by typing 1.3e-19, the cell displayed 0. Surely that cannot be correct! (It is.) I did a second tour of the room, showing each group this strange notation and LoggerPro’s strange behaviour.
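
For anyone else puzzled by it: the e-notation LoggerPro wants is ordinary scientific notation, where 1.3e-19 means 1.3 × 10^(-19). Python, for instance, parses the same string the same way:

```python
import math

value = float("1.3e-19")  # "e-19" means "times ten to the minus 19"
print(math.isclose(value, 1.3 * 10 ** -19))  # True
```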

At this point, it became clear that a lot of the groups had been quite careless with their data collection. Their points had a lot of scatter, and some had points that couldn’t have been correct. I noticed two differences between how I conducted the experiment, and how they did:

1. I was careful to zero out the current with the light source off, then turn the LED on, and only then begin to increase the potential difference. The students usually didn’t check their light shields, and didn’t return the potential difference to zero between trials.
2. I increased the potential difference slowly, carefully, and deliberately. The potentiometers are sensitive enough to get 0.01 V accuracy, which I could justify in my own trials of the experiment, but the students were not careful enough to get consistent and reliable stopping potentials.

We whiteboarded our results. Three groups had unusable data: two because of spurious results, and the third for reasons I cannot fathom but surely related to their being “done” with data collection after about 3 minutes (my suggestion of multiple trials was not adopted — this is probably related to the breakdown of discipline I mentioned earlier). Of the others, all but one made fairly-obvious calculation errors.

The single group with decent data and error-free analysis came up with a result of about 4e-34 J s, which isn’t bad — but not nearly as good as the robust result I was able to get when I did the lab myself, using the same equipment, of 6.3e-34 J s.
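
Against the accepted value of about 6.626e-34 J s, that puts the students roughly 40% low and my own run within about 5%. The comparison, sketched:

```python
PLANCK_ACCEPTED = 6.626e-34  # J s, accepted value of Planck's constant

def percent_error(measured):
    """Percent deviation of a measurement from the accepted value."""
    return 100 * abs(measured - PLANCK_ACCEPTED) / PLANCK_ACCEPTED

print(percent_error(4e-34))    # the students' best result: ~40% off
print(percent_error(6.3e-34))  # my own run with the same kit: ~5% off
```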

### Conclusion

Summer rustiness? Surely. A tricky lab that needed more scaffolding and support. Yep. A lab that allowed us to see that E = hf and that light comes in quanta, without needing a teacher to point and explain? Nope.

Overall, my attempt to use this lab as a paradigm to anchor our unit on modern physics was not successful. We spent about 3 hours developing it, and yet came away with only a sketchy model. After our whiteboard session, I needed to decide between two alternatives: re-do the lab, hopefully getting better results but accepting a week-long setback, or move on and try to use the Bohr atom as a model instead. There was so much frustration about the lab, and our IB curriculum is so unforgiving, that we could only move on.

This lab could work. It could be really good. It brings together ideas from earlier studies of electricity and waves, and it provides a clear basis for further studies of atomic physics. However, if you decide to use it, go slow, be deliberate, and ensure the students do the same.

Edit: check out the good comment by Andy, below. On Twitter, Frank asks, “Have you tried using the PhET simulation on the photoelectric effect? Could possibly using in tandem with hand-on lab, similar to how folks use the circuit sim in lab.”

# Themes from AAPTsm15

The summer meeting of the American Association of Physics Teachers [AAPT], along with the subsequent Physics Education Research [PER] Conference, has just concluded. It’s given me a lot to think about. Here are a few themes that have emerged for me:

1. The need to address the affective dimension of learning, and thinking skills, while trusting that the content will follow. The PER group at Maine had great success doing this with their middle-school physical science teacher support programme, MainePSP.

Ainissa Ramirez related the importance of students seeing role models among their teachers. Deepika Menon’s talk about self-efficacy among teacher trainees pointed out that self-belief follows from both competence and social factors. Tammie Visintainer’s work with children in a summer school emphasized the importance of belonging for children, with some feeling “shut down” or “invisible” when their contributions to the group were not recognized or valued.

2. The social atmosphere of the classroom, especially the relationships between the teacher and the students, was another important thread. Scot Hovan’s fascinating dissection of discourse inspired me to want to re-examine the speech patterns in my own classroom.

Henry Suarez’s rapid-fire talk suggested that student epistemic agency increased when teachers asked fewer converging and curricular questions, avoided reifying student responses, and encouraged student discussion about a more-“puzzling” curriculum. May Lee’s talk/poster about student positioning in group work emphasized the importance of establishing and maintaining a healthy social dynamic.

A slide from Scot Hovan’s talk. Click the image to see more.

3. Students’ intuitive responses, p-prims, and cultural predispositions cannot be altered by instruction. The aim is to give students the tools to effectively make correct judgements when they engage in “wait, let me think about that” thoughts. These ideas come from Eugenia Etkina’s keynote at the Physics Teacher Camp.

Multiple representations are a good way to do this. So are peer instruction and differentiated/individualized learning. Stephen Collins gave a great talk reminding us how difficult this can be without effective systems in place.

Natasha Holmes at PERC

4. The role of uncertainties and measurement was a cornerstone of the PER conference, and Natasha Holmes’ talks (plenary and workshop, as well as her poster) emphasized the difficulty of teaching critical thinking skills and lab skills. Duane Deardorff pointed out that we need to do a better job adopting the ISO Guide to the Expression of Uncertainty in Measurement, in order to achieve standardization across the disciplines and even between labs and schools.

In my workshop group, I was provoked to wonder if we can teach students to expect that measures of physical quantities should have uncertainties attached. We also tried to conceive of an experiment that would compel students to need uncertainties. The roll-a-ball-off-a-ramp-and-place-the-cup practical (with successive prizes for a selection of smaller target cups) is a first step, but doesn’t motivate the use of uncertainties powerfully enough.

Ruth Howes tells a story.

5. Physics has a rich cultural heritage, which should be mined to help our students understand the discipline. Ruth Howes gave an amazing 10-minute talk about Marietta Blau and Alice Hall Armstrong, for example.

However, I feel uncomfortable at times with the privilege afforded to white men in the discipline, and with how readily most people in it ignore that privilege. I missed Melissa Dancy’s talk suggesting that the deficit model (summer science camp for girls to make them love science!) is treating the symptom, while a cultural paradigm shift needs to happen in the discipline.

Ainissa Ramirez reminded us that women and minorities are often presumed incompetent. I heard (more) stories about departmental fights between crabby old men. And, poignantly, there was a couple who brought their young daughter along to the PER conference, a reminder that women in physics face parenthood questions that their male peers do not. Seminars, office hours, and labs have been established as zones where families are unwelcome — another legacy of male domination.

Students face cultural battles too, and usually much less visibly (I am pursuing this question).

Ainissa Ramirez: “Everything on this list is something someone said I couldn’t do”

6. The importance of community, and community-building, among physics educators. We are often isolated, we have a difficult task, and there is no easy solution. The best teachers we can be are teachers who constantly seek to improve, and communities are essential for that.

Kelly O’Shea and Andy Rundquist emphasized this aspect in their talks about scientific learning communities. The Physics Teacher Camp and the Modeling course I pursued earlier in the summer were great examples of this (and so are the Global Physics Department and #iteachphysics). I think next year I will try to do something to encourage folks to use twitter, gDocs, or whatever other tool is available in order to enrich the conference(s) and make more connections.

# Whirly Tube Physics

At a Modeling workshop earlier this summer, I was given a “whirly tube”. When you swing it in a circle, it makes a low-pitched whistling sound. There are overtones as well. It’s a pretty interesting instance of the classic standing wave in a tube scenario.
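
For a rough prediction of the pitches, one common model treats the tube as a pipe open at both ends, so f_n = n·v/(2L). This is only a sketch under that assumption, with a hypothetical 75 cm tube length; I'm not convinced the simple standing-wave picture fully explains the whirly tube:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C

def open_pipe_harmonics(length_m, n_harmonics=4):
    """Resonant frequencies (Hz) of a pipe open at both ends: f_n = n*v/(2L)."""
    return [n * SPEED_OF_SOUND / (2 * length_m) for n in range(1, n_harmonics + 1)]

print(open_pipe_harmonics(0.75))  # roughly [229, 457, 686, 915] Hz
```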

The product is available from Steve Spangler’s store (click the image above).

I wrote up a bit of a physical exposition of what (I think) is going on with the whirly tube.

I’m not convinced this explanation is correct. Andy’s suggestion below seems good. I’d appreciate hearing from anyone with insight.

# Whose model?

I think my favourite part of the modeling approach is that students discover, and thus gain ownership of, the core ideas of physics. When they extract a formula from an experiment they conducted themselves, it is more real to them.

For the energy model, I made a point of emphasizing student ownership of the model. I rarely orate, but recall saying, “This is not Newton’s kinetic energy equation. It doesn’t belong to dead white men. This is YOUR formula for kinetic energy.”

As an entrance activity, I asked students to write down THEIR equations for three types of energy, and THE equations for three types of force. The performance difference on this proximate test of model understanding isn’t because of phrasing. Rather, it comes from how we learned: forces were wrapped up in Newton’s laws, while energies were MINE.

I think this is a good way to approach the cultural dimension of physics education. It encourages students to build understandings that reconcile their home and school worldviews, while allowing the empiricism of Western science a chance to perform and earn respect at a personal level. Whose equations are these?

In a class of 14, the students got 36 out of a possible 42 for “their” equations about energy, and only 21 out of 42 for “the” equations about force. After some silly statistics, that gives me p=0.33 on the null hypothesis that this test distinguishes something other than how recently they learned the idea. So, let’s call it an interesting subgroup analysis.

# “Physics of” Books

I have been collecting books that explore the physics of things. The aim is to have some references ready when my students start their individual projects (and maybe some extended essays) for the new IB physics curriculum.

The Physics of Hockey, by Hache, is probably my favourite (I am Canadian, after all!) but a student has it at the moment so it didn’t make the photo. Several of these books are written at a level to be quite accessible to a high school student, while also doing a good job with calculations, graphs, and tables of data.

I would like to write a book like this one day, but I don’t know what it would be about.