The textbook version of formative versus summative assessment is clean: formative is assessment for learning (ongoing, low-stakes, used to adjust instruction), summative is assessment of learning (end-point, graded, used to record what a student knows). Most teachers can recite this.
The gap between understanding the distinction and actually having a classroom system where formative data changes what you do next week is wider than the theory suggests.
Why formative assessment often doesn’t function as formative
The most common problem: formative assessment is conducted, but its results don’t feed back into instruction quickly enough to matter.
You give an exit ticket on Thursday. You collect it, look through it Friday morning, note that eight students don’t seem to understand the concept you covered. Monday, you move to the next unit because the schedule says to. The formative data informed your awareness of the gap; it didn’t change anything.
This isn’t laziness or indifference — it’s the structure of most teaching schedules. The curriculum keeps moving. The unit test is when it is. Formative data that arrives on a schedule that doesn’t allow for adjustment isn’t functioning as formative, even if it’s called that.
The question that matters is practical: when you collect formative evidence, does it have any chance of changing what you do in the next class period? If the answer is usually no, the system needs a different design, not a different name.
What has to be true for formative data to change instruction
Three conditions, all of which have to be in place:
The assessment has to happen early enough. Exit tickets on day twelve of a fourteen-day unit are too late. You need to know about the gap when you still have time to address it. This means front-loading formative checks — a quick concept check on day two, not day ten.
The results have to be readable fast. If formative assessment takes thirty minutes to process, it won’t be processed. The formats that work are ones where you can know, in two minutes, whether the class understands: a show-of-hands, a quick written response with a clear right or wrong, a thumbs-up/down/sideways, a three-problem set you can sort into two piles (got it / didn’t get it) while students are packing up.
Your schedule has to have flex. Even small flex — the ability to spend ten more minutes on a concept because you saw confusion yesterday — is enough. What doesn’t work is a schedule so tight that the assessment results can’t change anything. If that’s your situation, the most valuable form of formative assessment is probably individual flagging (noting which students need a follow-up conversation) rather than whole-class reteaching.
The low-stakes grade trap
A lot of “formative assessment” in practice means “graded homework” or “graded participation.” The problem: when students know formative work is graded, they stop being honest with you about what they don’t understand.
A student who doesn’t follow an example in class won’t raise their hand if participation is graded — they’ll nod and fake it. A student who doesn’t understand the homework will copy answers rather than turn in a blank. The grade incentive corrupts the information you’re trying to collect.
True formative assessment — the kind that gives you useful data — needs to feel low-stakes to students. “I’m checking in to understand where we are, not to score you” only works if it’s true. If there are five participation points on the line, it’s not true.
This doesn’t mean nothing informal gets graded. It means you have to be deliberate about which tasks are for your information versus which are for the record.
What to actually do with formative results
Assume you have useful data: you’ve checked for understanding and know that roughly a third of your class hasn’t grasped a key concept. Now what?
The options, roughly in order of effort:
In-class correction: Before moving on, address the gap directly. “I saw that a lot of people are still not sure about X — let me show one more example from a different angle.” This requires noticing the gap before the class period ends.
Small-group reteach: During the next class’s independent work time, pull together the students who showed confusion. Eight minutes with four students who need the concept explained differently is often enough.
Partner work designed around the gap: Pair students who demonstrated understanding with students who didn’t, with a task designed to surface the concept. Peer explanation often works where direct instruction doesn’t.
Adjust the next assignment: If you were planning a complex task that builds on the concept, simplify it or add a scaffolded entry point. This is quieter than reteaching but still responsive.
Note it and target it at the next natural checkpoint: Not every gap needs to be addressed immediately. Some concepts spiral — they come back. Knowing that ten students are shaky on a foundational idea tells you to watch for it at the next major assessment and intervene before that.
Summative assessment as a different thing
Once summative assessment is understood as the endpoint of learning rather than just “the test that counts for more,” that understanding changes how you design it.
Summative assessments should be measuring what students can do at the end of the learning — not whether they retained what was taught in week two of six. If you covered a concept, found that half the class didn’t understand it, reteached it, and they got it by week five, the summative should be able to reflect the week-five understanding. An assessment that locks in the week-two confusion and averages it into the final grade isn’t measuring what they know at the end; it’s measuring a composite of the whole learning arc.
This is what drives some teachers toward the standards-based grading (SBG) approach of allowing retakes and updating scores when mastery is demonstrated. You don’t have to implement that fully to absorb the principle: summative assessment should try to capture current understanding, not historical performance.
The practical first step
If you want to actually run formative assessment as formative, start with one unit where you commit to one formative check per week, along with an explicit commitment to yourself that you’ll do at least one responsive thing with the results.
Not a new lesson. Not a reteach from scratch. Something: an extra example at the start of the next class, a targeted conversation with two students you flagged, a problem added to warm-up that specifically addresses the confusion you saw.
After four weeks of that, you’ll have a much clearer picture of what formative assessment can realistically look like in your specific teaching context — and what changes in your schedule would make it work better. Start there.