Tag Archives: physics

Studying StudyIB

The wonderful Chris Hamper has been working on a new educational idea over the last year. Housed on StudyIB, the Virtual Tutor is an attempt to recreate the experience of working one-on-one with a tutor as you go through a multi-step physics problem. There is a network that draws in resources and reminders for students, depending on their progress. It’s a good idea and, with the current web technologies available, just about due. Here is a video where Chris explains how the Virtual Tutor works.


I introduced the Virtual Tutor to my students as a way to study for their IBDP Physics exams. The response was generally along the lines of “this is interesting”. However, it wasn’t clear whether or not this approach was effective, so my students and I devised a small study to try to answer that question.

First, the students wrote a pre-test on a particular topic. Second, they worked through one of the learning networks on the Virtual Tutor (we did Forces 3). Third, they wrote a post-test on the same topic (but with slightly different questions). The pre- and post-tests each had three questions.

The first question is about something we have practiced extensively, and that they should know how to do: drawing a free-body diagram. The average pre-test score was 2.45 (out of 3), and the average post-test score was 2.72, an increase of 0.27 points. This corresponds to a small number of students forgetting or misdrawing one of their force vectors. It seems that the Virtual Tutor was an adequate reminder. Below is a sample of pre- and post-test work that shows this.

[Photos: sample pre- and post-test student work]

The second question is about something we have not practiced very much: drawing force diagrams, where the forces are drawn at the place where they originate, rather than applied to a hypothetical center of mass. Here, the Virtual Tutor helped some students (as shown in the pre/post examples below), while two students had a lower score on the post-test for this question. The overall effect was an increase of 0.46 points to 1.37 (out of 3).

[Photos: sample pre- and post-test student work]

The third pre/post question is something more akin to what the students saw on the Virtual Tutor: a standard physics problem where students need to move through several steps, doing mathematics, in order to find a numerical answer. This is the type of question the Virtual Tutor was designed for, and here it was most effective: the average student score increased from 1.00 (out of 4) to 2.82. The below work is typical: a student was able to start the problem, but got “stuck”: the Virtual Tutor reminded or taught him the necessary steps for this type of problem, and he was able to transfer that knowledge and finish the problem.
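To keep the numbers straight, here is a quick sketch of the averages reported above (the pre-test mean for the second question is inferred from the stated increase of 0.46 points):

```python
# Averaged pre/post-test scores from this small study.
# (question, pre-test mean, post-test mean, max score)
results = [
    ("free-body diagram", 2.45, 2.72, 3),      # post = 2.45 + 0.27
    ("extended force diagram", 0.91, 1.37, 3), # pre inferred from +0.46
    ("multi-step problem", 1.00, 2.82, 4),
]

for name, pre, post, top in results:
    gain = post - pre
    print(f"{name}: {pre:.2f} -> {post:.2f} (+{gain:.2f} of {top})")
```

The third question shows by far the largest gain, which matches the conclusions below.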

[Photos: sample pre- and post-test student work]

I will follow up with my students after their mock exams next week, to see if and to what extent they found the Virtual Tutor useful. From this small study, however, a few conclusions emerge:

  1. The Virtual Tutor probably does about as well as any sort of studying for reminding students about fundamentals that they already know.
  2. The Virtual Tutor isn’t particularly effective as an expository tool. If students need to learn some new ideas or facts, their textbooks, videos, or classroom learning experiences are better (I should add that the rest of the StudyIB site is quite good for this).
  3. The Virtual Tutor is effective at reminding students of the difficult, complicated processes involved in solving multi-step problems. As seen on the third question of this study, one session with the virtual tutor was sufficient to get about half the students in this study from a low score to a high score on the problem.

I’m pretty impressed with the Virtual Tutors. If you’re a physics student reviewing for exams, consider giving it a try.

Here’s the (averaged) data:

[Screenshot: averaged pre/post-test data]

Latvian Physics Exam

Today was the pilot administration of Latvia’s new 12th-grade physics exam. Next year, students will be required to write either the physics or the chemistry exam as part of their secondary school leaving requirements (for those students who pursue the academic track of studies). In this post, I will take a close look at the exemplar, which can be found (in Latvian) here.


Part 1.1 (multiple choice): Assumptions and Memory

The exam opens with a question about the applicability of the accelerated motion model. It’s a promising sign, but all of the answers — a launched javelin, a parachutist in the first minute of fall, a person in an elevator, and a hammer dropped on the moon — are possible, and the question is really about the degree to which the model can be applied to these situations. For example, a javelin has a very narrow cross section, so it would be reasonable to assume it experiences very little drag force. In fact, the aerodynamics of javelins are interesting, and although their motion is non-parabolic, the reasons are not obvious (an example).

The fifth question is identical to one from an IB physics exam from a couple years ago, where the motion of a piston is presumed to be uniquely responsible for the temperature increase of a compressed ideal gas. The twelfth lists four core equations about electricity and asks which explains the need for high voltage in long-distance power lines. These types of questions are frustrating because multiple answers are reasonable, and the students are forced to try to guess what the exam-writer is thinking.

Another type of question that appears is simple recall. Question 6 requires that students remember that isochoric processes have constant volume. The 13th question seems to ask that students remember how the technology of electromagnetic inductive charging works. Question 20 is about the origin of the atoms that make up our bodies.

The third type of common question is about relationships between variables. Students are expected to recall how electric fields depend on the distances from point charges, how photon energy is related to wavelength, and how the energy of photoelectrons is related to the frequency of incident light.

These multiple choice questions are pretty standard, with questions similar to IB Physics and the GRE. The difficulty level is probably appropriate, in that students will show a nice Gaussian distribution of responses. But as a tool for educational assessment, I think these multiple-choice questions have only marginal utility.

Part 1.2 (short answer): Not For The Meticulous

The second part of the exam consists of 10 short-answer questions. In reality, it is a mix of question types that is sure to frustrate students. There are 6 multiple-choice style questions where students must choose all the correct answers, 2 ranking tasks, and 2 calculations. Problematically, some of the distractor options on the multiple-choice style problems could conceivably be relevant. For example, problem 27 shows possible light paths refracting and reflecting around a glass block. A light path that is not visibly refracted by the block is possible, depending on the index of refraction (and the refraction shown is exaggerated compared with what would be observed in the lab).

The biggest problem with questions like this is that, while they are accessible to very weak students and straightforward to students who have a teacher’s level of understanding of physics problems, they will not be very successful at differentiating between a student who is at 90% of a robust understanding, and a student who can intelligently guess 10%.

I’m happy that the test-writers are attempting to expand beyond multiple-choice questions, but this section will be hard for the students, and will provide little feedback about students’ ability to use their physics knowledge and skills to meaningfully create knowledge.

Part 2 (long answer): Physics Problems, With a Dash of Discussion

The long-answer section begins with the ominous note that this section will involve judging pace, but with 135 minutes to solve 5-7 problems, the challenge likely will not be pace so much as sustained concentration.

I want to like these problems: they combine calculations (kinetic and gravitational energies for the first, circular motion for the second) with diagrams and comparison of quantities. This fixes the problem with the first half of the exam, and allows students the opportunity to explain their answers.

However, the structures for the problems seem set up to greatly restrict the students from any sort of individual actions, knowledge construction, or creativity. The first problem requires that students draw the velocity and acceleration vectors for a pendulum — something that is usually recalled, rather than figured out.

The rest of the problems mix standard physics things-that-must-be-remembered with little calculations. There is a clear effort to make the problems meaningful and/or relevant: one concludes by asking students for a conclusion about heat loss in a house, and another is about the laser on a Mars rover. However, the overwhelming sense with these questions is that of standard physics regurgitation: draw a voltage divider, calculate the diffraction angle, identify this range of the EM spectrum, etc.

The final question (at last!) asks that students work with some data, but this is an illusion: after finding the slope from a graph, it’s the same deal as before: find the efficiency of the electric kettle.

By the time you get to the end of these problems, it is clear that they are merely standard physics problems, with just a few “explain why” questions. This is better than the mechanistic rigamarole of the IB physics exams, but not by much. Even here, there is no chance for students to construct their own meaning or use genuine critical thinking, and I suspect that most students will get most of their points by reiterating standard physics solutions to these standard physics problems.


There is a suspicion that the new physics and chemistry exams are an attempt by the Ministry of Education to add more rigour to a national curriculum that has recently come under fire after Oxford University decided not to recognize the Latvian grade 12 diploma as adequate preparation for undergraduate studies. Thus, this exam will probably give universities a better way to sort students. I’ve heard that, after promising they wouldn’t be required, many universities are asking to see students’ results for this pilot year for the exam.

However, like all standardized testing of this sort, what we will primarily find is that students attending better-funded schools, in Rīga, and with higher socioeconomic status will do better on these exams. Additionally, the effects of these exams will be felt backward into the 11th grade and earlier, as physics and chemistry teachers come under increasing pressure to prepare students for the exam, and are thus forced to sacrifice good science teaching in favour of test preparation. The quality of meaningful science education will fall, resulting in weaker critical thinking and scientific reasoning skills across society, while universities will increasingly rely on these test scores to make admissions decisions, and disadvantaged students will be left behind.

If I’m wrong about some or all of this, or if you have perspective or insight to share, please reply or email me (danny.doucette at gmail).

Social Justice in Physics

Moses Rifkin does a superb 6-day unit on social justice in his physics class. Here, by arguing against it, a Fox News correspondent makes it clear why social justice is needed.

I wanted to do something similar to Moses, but I had two constraints:

  1. Since I teach IB physics, and already don’t get enough contact hours, I couldn’t devote more than a class period to it.
  2. Since I teach at an international school in Northern Europe, the social justice issues experienced by my students and in our culture will not necessarily be racial in nature.

Thus, I tried to lift out my favourite parts from Moses’ curriculum, and recontextualize everything to be more universal in nature. Our discussions ended up primarily focusing on sexism, with class, religion and disabilities as other sources of examples and discussion.

We started with some ground rules, directly pilfered from Moses.


Second, I introduced the idea of stereotype threat. Two students had studied this in a psychology class, but had difficulty explaining it. I gave an example (as a North American in Europe, I fear being seen as monolingual, and am disinclined to practice languages as I struggle to learn, thus learning less well). The students brainstormed examples in pairs, then shared out. This took about 15 minutes.

Third, I had students randomly select from a list of social groups. They used their computers to quickly find and research two physicists from that social group. In a circle, they shared who they found and I probed with questions like “how did you find this person?”, “how did you choose this physicist?”, “had you heard of this person before today?” and “was it hard to find physicists in this social group?” Our list of social groups (the last two were suggested by students during our discussions):

women, men, heterosexual, homosexual, black, white, young, old, disabled, able-bodied, Christian/Muslim/Jewish, Eastern religion, European/American, not European/American, upper class, lower class

This led fairly naturally to a discussion of why some of these social groups are under-represented among physicists. I asked the students to make hypotheses to explain the under-representation, and then to offer counter-examples for the hypotheses, if they could think of any. Our hypotheses were that the distribution of physicists:

  • represents the population
  • is determined by the geographical location of universities and research institutions
  • is determined by the population's access to education
  • is determined by social expectations
  • is determined by history/politics

These were all seen to be unsuccessful as complete explanations. Next, we switched directions and looked at the barriers facing people from under-represented social groups. Some good arguments were presented here, including the effect of expensive university tuition, the impact of stereotypes, and the role of religion. I was able to cap off these arguments by labeling these effects as the essence of institutional sexism, racism, classism, ableism, ageism, homophobia, etc.

We finished with the Implicit Association Test about gender and science. I told the students that they need not share their scores, but many were keen to talk about it, so I know that we got a variety of results that approximately conform to what one would expect from a mixed group.

Before we left, I tried to introduce the idea of privilege, and especially of white privilege, but I think this fell flat, like everything does when you’ve got two minutes until lunchtime.

PE Effect Lab

I tried to use the photoelectric effect as a paradigm lab for our unit on atomic, nuclear, and particle physics. Ultimately, it was a failure. This post will do two things: (a) explain how to create and use apparatus to make this lab work, and (b) analyze why it didn’t work for me.

Two sources need to be acknowledged first. As always, Arons did a fantastic job of explaining why the photoelectric effect is difficult for students, and he also provides a good plan if you wish to introduce it. Secondly, the design of my apparatus was inspired by one designed by Rice, Garver, and Schober.


The photoelectric effect is a simple idea with devastating consequences. Essentially, light shining on a metal causes electrons to be knocked off the surface of the metal. If two electrodes are placed close together and connected with a metal wire, these electrons cause a current to flow in the wire, which can be detected with a microammeter.

Thus, we should be able to detect the photoelectric effect with just four pieces of equipment: a pair of closely-spaced electrodes, a microammeter, some wire to connect them, and a light source.


The first of these is the toughest: the electrodes should be in a vacuum, so the electrons are not absorbed or scattered by air molecules. The 1P39 vacuum tube is perfect. There are usually a few available as "new old stock" on eBay, at a price of about USD 40 each. I'd suggest getting stands for the tubes, too: the part number is 27E122, and the active sockets are 4 and 8.

A decent multimeter can accurately read down to 0.1 microamperes, so that’s no problem. I put little wire loops on my stands so that we can use wires with alligator clips. Ambient light is enough to produce about half a microamp of current.

I demoed this with the students, and they were able to draw the relevant simple circuit diagram and construct their own simple devices pretty easily. I had them spend about 15 minutes investigating the impacts of light intensity and light colour (frequency) on the current. We didn't spend much time analyzing this, but the results were clear and straightforward: more light means more current, and different colours seemed to produce different currents as well, but in a less-obvious way.

Stopping Potential

Okay great, we’ve got light creating electricity. So what? The clever trick is to apply a (variable) potential difference across the electrodes. If you adjust the potential difference so that a large positive charge is on the “receiving” electrode, the electrons will accelerate to that electrode, and the current should increase to a certain saturation point. This is where all the electrons are reaching the “receiving” electrode.

On the other hand, if you reverse the potential difference and apply a negative charge to the “receiving” electrode, the electrons will be repelled. However, at a very low potential, some of the electrons will have enough kinetic energy to overcome the electrostatic repulsion and enter the “receiving” electrode. Thus, the game is to slowly increase the potential until all the electrons are stopped from reaching the receiving electrode.

At this point, the kinetic energy of the electrons is equal to the electric potential energy provided by the potential difference (ie: KE = q * V). Thus, given the charge of the electron, we can use the Stopping Potential to calculate the amount of kinetic energy the electrons have when they exit the metal.
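As a sketch of that calculation (the 1.2 V stopping potential here is a hypothetical value, not one of our measurements):

```python
e = 1.6e-19   # electron charge, C
V_stop = 1.2  # hypothetical stopping potential, V

# KE = q * V: the maximum kinetic energy of the photoelectrons
KE = e * V_stop
print(KE)  # ≈ 1.92e-19 J, i.e. 1.2 eV
```

Note that the stopping potential in volts is numerically equal to the electron energy in electron-volts, which makes for a quick sanity check in the lab.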


The range of potentials needed is about 0 to 1.5 V. I used some 10 kOhm potentiometers and 56 kOhm resistors to make potential dividers that would give appropriate voltages from a 9V battery. I like working with 9V batteries, but there’s no reason not to use a AA directly on a potentiometer instead.
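A quick check of the divider arithmetic (assuming the output is taken across the potentiometer, in series with the fixed resistor):

```python
# Potential divider: 9 V battery across a 56 kΩ resistor in series
# with a 10 kΩ potentiometer; output taken across the potentiometer.
V_batt = 9.0    # V
R_fixed = 56e3  # Ω
R_pot = 10e3    # Ω

# Maximum output, with the wiper at the top of the potentiometer
V_max = V_batt * R_pot / (R_pot + R_fixed)
print(round(V_max, 2))  # ≈ 1.36 V, neatly inside the 0 to 1.5 V range
```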

When you add the potential difference, you also need a voltmeter and another pair of wires — things start to get a bit messy on the lab table. This was where my students got caught. Although they were able to digest the idea of a stopping potential (with a bit of prompting, the students put together the idea without my help), actually constructing the circuit on the lab bench proved to be too much.


A few (top) students were able to recall what they knew about voltmeters, and were able to draw appropriate circuit diagrams. Most were not, which means that (a) our study of that topic last year did not have lasting outcomes, and (b) I wasn’t scaffolding the circuit-building well enough.

Worse, though, a design flaw popped up that took me a while to diagnose and repair. If you apply the potential difference to the vacuum tube backwards, and turn the potentiometer, you’ll get a variety of currents. When the potential difference is zero, the current is nearly zero, and when the potential difference is increased, the current increases as well. To the students, this behaviour didn’t seem abnormal. Thus, I had to check with the groups one at a time, check that they’d wired up correctly, and do a bunch of plugging/unplugging if they had it backwards. Murphy’s Law stepped in here, and so I ended up spending a couple minutes fixing all of the apparatuses before we could even begin experimenting.

I ought, thus, to have labelled the tube stands with + and – signs, and asked students to ensure they were connecting things correctly.

By this point, of course, the lab had come off the rails. The experiment was no longer theirs, and after asking the students to sit patiently for about 20 minutes while I troubleshot their circuits, behaviour issues were cropping up too.


The goal of the lab is to get a graph of electron energy (ie: 1.6e-19 C times the stopping potential) over the frequency of light that is causing the photoelectric effect. This graph will have a negative vertical axis intercept representing the work function of the metal (ie: the amount of energy required to liberate the valence electron) and a slope representing the amount of energy a quantum of light will have, for each Hz of frequency (ie: Planck’s constant).
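A minimal sketch of the intended analysis, using made-up LED frequencies and a hypothetical 2 eV work function (every number here is illustrative, not a measurement):

```python
import numpy as np

h_true = 6.63e-34  # Planck's constant, J s (used to generate fake data)
W = 3.2e-19        # hypothetical work function, J (about 2 eV)

# Hypothetical LED frequencies (Hz) and electron energies E = h*f - W
f = np.array([5.2e14, 5.7e14, 6.4e14, 7.4e14])
E = h_true * f - W

# Least-squares slope and intercept (the centred form is numerically
# safer than np.polyfit with numbers this small)
slope = np.sum((f - f.mean()) * (E - E.mean())) / np.sum((f - f.mean()) ** 2)
intercept = E.mean() - slope * f.mean()

print(slope)       # ≈ 6.63e-34 J s: Planck's constant
print(-intercept)  # ≈ 3.2e-19 J: the work function
```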

LEDs work quite well for this, since they emit a fairly narrow range of wavelengths. I made some multi-LED sources using surface-mount super-bright LEDs. I first laid down some adhesive copper tape, then super-glued the LEDs in place. Soldering wasn’t too bad, but problematically the plastic base started to melt if I wasn’t really quick with the soldering gun. A 6-position rotary switch allowed the students to choose between LEDs, and I used 5V wall-wart adapters for this (my love affair with 9V batteries aside, I really don’t like using batteries in the lab).


Easier is to get a selection of through-hole LEDs and 3V coin-size batteries. You can hold the battery between the leads of the LED and make a complete circuit by holding the LED leads against the battery faces with thumb and forefinger.

I had some blue and near-UV LEDs that my students used in this manner, and they didn’t have much trouble with it. I’d suggest making a small hole in the side of a cardboard box for this approach. The cardboard box covers the vacuum tube, blocking out stray light.


The weakest students had trouble figuring out the frequencies of light from the wavelengths, so I sat down with the worst offender and we worked it out; I asked him to write his answers on the whiteboard at the front.
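The conversion the students struggled with is just f = c/λ; for instance (the 630 nm value is an illustrative red-LED wavelength):

```python
c = 3.0e8  # speed of light, m/s

wavelength = 630e-9         # 630 nm, m
frequency = c / wavelength  # Hz
print(frequency)            # ≈ 4.8e14 Hz
```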

Putting It Together

We had some trouble with LoggerPro. The students typed in their numbers in the format 1.3*10^(-19), but LoggerPro interpreted those as text labels rather than numbers. Worse, when they tried to correct the problem, by typing 1.3e-19, the cell was replaced with 0. Surely that cannot be correct! (It is.) I did a second tour of the room, showing each group about this strange notation, and the strange behaviour from LoggerPro.
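The two notations really do name the same number, and a display that rounds to a couple of decimal places genuinely shows such a tiny value as zero; in Python, for instance:

```python
# "1.3e-19" is scientific (e-)notation for 1.3 × 10^-19
x = float("1.3e-19")
assert x == 1.3e-19

# Rounded to two decimal places, a number this small displays as zero,
# which is what the students saw in the cell
print(f"{x:.2f}")  # 0.00
```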

At this point, it became clear that a lot of the groups had been quite careless with their data collection. Their points had a lot of scatter, and some had points that couldn’t have been correct. I noticed two differences between how I conducted the experiment, and how they did:

  1. I was careful to zero out the current with the light source off, then turn the LED on, and only then begin to increase the potential difference. The students usually didn’t check their light shields, and didn’t return the potential difference to zero between trials.
  2. I increased the potential difference slowly, carefully, and deliberately. The potentiometers are sensitive enough to get 0.01 V accuracy, which I could justify in my own trials of the experiment, but the students were not careful enough to get consistent and reliable stopping potentials.

We whiteboarded our results. Three groups had unusable data: two because of spurious results, and the third for reasons I cannot fathom but surely related to their being “done” with data collection after about 3 minutes (my suggestion of multiple trials was not adopted — this is probably related to the breakdown of discipline I mentioned earlier). Of the others, all but one made fairly-obvious calculation errors.

The single group with decent data and error-free analysis came up with a result of about 4e-34 J s, which isn’t bad — but not nearly as good as the robust result I was able to get when I did the lab myself, using the same equipment, of 6.3e-34 J s.


Summer rustiness? Surely. A tricky lab that needed more scaffolding and support. Yep. A lab that allowed us to see that E = hf and that light comes in quanta, without needing a teacher to point and explain? Nope.

Overall, my attempt to use this lab as a paradigm to anchor our unit on modern physics was not successful. We spent about 3 hours developing it, and yet came away with only a sketchy model. After our whiteboard session, I needed to decide between two alternatives: re-do the lab, hopefully getting better results and accept a week-long setback, or move on and try to use the Bohr atom as a model instead. There was so much frustration about the lab, and our IB curriculum is so unforgiving, that we could only move on.

This lab could work. It could be really good. It brings together ideas from earlier studies of electricity and waves, and it provides a clear basis for further studies of atomic physics. However, if you decide to use it, go slow, be deliberate, and ensure the students do the same.

Edit: check out the good comment by Andy, below. On Twitter, Frank asks, “Have you tried using the PhET simulation on the photoelectric effect? Could possibly using in tandem with hand-on lab, similar to how folks use the circuit sim in lab.”

Themes from AAPTsm15

The summer meeting of the American Association of Physics Teachers [AAPT], along with the subsequent Physics Education Research [PER] Conference, has just concluded. It’s given me a lot to think about. Here are a few themes that have emerged for me:


1. The need to address the affective dimension of learning, and the thinking skills, while trusting that the content will follow. The PER group at Maine had great success doing this with their middle-school physical science teacher support programme, MainePSP.

Ainissa Ramirez related the importance of students seeing role models among their teachers. Deepika Menon’s talk about self-efficacy among teacher trainees pointed out that self-belief follows from both competence and social factors. Tammie Visintainer’s work with children in a summer school emphasized the importance of belonging for children, with some feeling “shut down” or “invisible” when their contributions to the group were not recognized or valued.



2. The social atmosphere of the classroom, especially the relationships between the teacher and the students, was another important thread. Scot Hovan’s fascinating dissection of discourse inspired me to want to re-examine the speech patterns in my own classroom.

Henry Suarez’s rapid-fire talk suggested that student epistemic agency increased when teachers asked fewer converging and curricular questions, avoided reifying student responses, and encouraged student discussion about a more-“puzzling” curriculum. May Lee’s talk/poster about student positioning in group work emphasized the importance of establishing and maintaining a healthy social dynamic.


A slide from Scot Hovan's talk.

3. Students' intuitive responses, p-prims, and cultural predispositions cannot be altered by instruction. The aim is to give students the tools to effectively make correct judgements when they engage in "wait, let me think about that" thoughts. These ideas come from Eugenia Etkina's keynote at the Physics Teacher Camp.

Multiple representations are a good way to do this. So are peer instruction and differentiated/individualized learning. Stephen Collins gave a great talk reminding us how difficult this can be without effective systems in place.


Natasha Holmes at PERC

4. The role of uncertainties and measurement was a cornerstone of the PER conference, and Natasha Holmes’ talks (plenary and workshop, as well as her poster) emphasized the difficulty of teaching critical thinking skills and lab skills. Duane Deardorff pointed out that we need to do a better job adopting the ISO Guide to Uncertainties and Measurement, in order to achieve standardization across the disciplines and even between labs and schools.

In my workshop group, I was provoked to wonder if we can teach students to expect that measures of physical quantities should have uncertainties attached. We also tried to conceive of an experiment that would compel students to need uncertainties. The roll-a-ball-off-a-ramp-and-place-the-cup practical (with successive prizes for a selection of smaller target cups) is a first step, but doesn’t motivate the use of uncertainties powerfully enough.

Ruth Howes tells a story.


5. Physics has a rich cultural heritage, which should be mined to help our students understand the discipline. Ruth Howes gave an amazing 10-minute talk about Marietta Blau and Alice Hall Armstrong, for example.

However, I feel uncomfortable at times with the privilege afforded to white men in the discipline, and with how readily most ignore that privilege. I missed Melissa Dancy's talk suggesting that the deficit model (summer science camp for girls to make them love science!) treats the symptom, while a cultural paradigm shift needs to happen in the discipline.

Ainissa Ramirez reminded us that women and minorities are often presumed incompetent. I heard (more) stories about departmental fights between crabby old men. And, poignantly, there was a couple who brought their young daughter along to the PER conference, a reminder that women in physics face parenthood questions that their male peers do not. Seminars, office hours, and labs have been established as zones where families are unwelcome — another legacy of male domination.

Students face cultural battles too, and usually much less visibly (I am pursuing this question).


Ainissa Ramirez: "Everything on this list is something someone said I couldn't do"

6. The importance of community, and community-building, among physics educators. We are often isolated, we have a difficult task, and there is no easy solution. The best teachers we can be are teachers who constantly seek to improve, and communities are essential for that.


Kelly O’Shea and Andy Rundquist emphasized this aspect in their talks about scientific learning communities. The Physics Teacher Camp and the Modeling course I pursued earlier in the summer were great examples of this (and so are the Global Physics Department and #iteachphysics). I think next year I will try to do something to encourage folks to use twitter, gDocs, or whatever other tool is available in order to enrich the conference(s) and make more connections.


Whirly Tube Physics

At a Modeling workshop earlier this summer, I was given a “whirly tube”. When you swing it in a circle, it makes a low-pitched whistling sound. There are overtones as well. It’s a pretty interesting instance of the classic standing wave in a tube scenario.

The product is available from Steve Spangler's store.

I wrote up a bit of a physical exposition of what (I think) is going on with the whirly tube.
Read the PDF

I’m not convinced this explanation is correct. Andy’s suggestion below seems good. I’d appreciate hearing from anyone with insight.

Whose model?

I think my favourite part of the modeling approach is that students discover, and thus gain ownership of, the core ideas of physics. When they extract a formula from an experiment they conducted themselves, it is more real to them.


For the energy model, I made a point of emphasizing student ownership of the model. I rarely orate, but recall saying, “This is not Newton’s kinetic energy equation. It doesn’t belong to dead white men. This is YOUR formula for kinetic energy.”


As an entrance activity, I asked students to write down THEIR equations for three types of energy, and THE equations for three types of force. The performance difference on this proximate test of model understanding isn’t because of phrasing. Rather, it comes from how we learned: forces were wrapped up in Newton’s laws, while energies were MINE.


I think this is a good way to approach the cultural dimension of physics education. It encourages students to build understandings that reconcile their home and school worldviews, while allowing the empiricism of Western science a chance to perform and earn respect at a personal level. Whose equations are these?



In a class of 14, the students got 36 out of a possible 42 for “their” equations about energy, and only 21 out of 42 for “the” equations about force. After some silly statistics, that gives me p=0.33 on the null hypothesis that this test distinguishes something other than how recently they learned the idea. So, let’s call it an interesting subgroup analysis.