A student with a 504 plan for processing speed sits down to take the same forty-question test as everyone else. She knows the material — you’ve seen her explain it clearly during discussion. She hands in a paper that’s two-thirds complete.
What’s the fair grade?
Most gradebook software answers that question before you even ask it: she gets whatever percentage of correct answers she managed to finish. Clean, consistent, defensible. Also, arguably, a measurement of her processing speed rather than her knowledge of photosynthesis.
This is the tension at the center of differentiated grading — and it’s worth sitting with before we get into the methods.
What differentiation in grading actually means
There’s a conflation that happens in a lot of conversations about this: differentiated grading gets treated as synonymous with lowering standards or giving some students easier paths to an A. That’s not what it is.
The core idea is simpler: a grade should measure what a student learned, not just how they performed on one particular task on one particular day. When you adjust how you assess — extended time, modified format, fewer questions covering the same concepts — you’re trying to get a more accurate read on the learning, not a more generous one.
That distinction matters. A rubric that gives full credit for “demonstrates understanding of the nitrogen cycle through written explanation or labeled diagram” is not easier than one requiring only written explanation. It’s broader. The standard is the same; the demonstration pathway isn’t.
Three approaches that don’t require rebuilding everything
1. Modified rubrics with the same criteria
The cleanest version of this: keep your rubric criteria identical, but adjust what “meets standard” looks like for a given student based on their documented accommodations.
For a student with a language processing IEP, “clearly explains cause and effect” might be assessed on a shorter written response, or through a verbal check-in. The criterion is the same. You’re just accounting for the fact that the length of the written response wasn’t the thing you were trying to measure.
In practice: keep a note in your records for each modification. What was adjusted, for whom, which assessment. This protects you in any parent or administrator conversation and keeps your own grading consistent across the term.
2. Separate growth columns — not blended in
One pattern that tends to cause more problems than it solves: blending an “effort” or “participation” score into a student’s academic grade. The intention is good — you want to recognize improvement and engagement. But what you end up with is a grade that represents neither learning nor behavior particularly clearly.
A cleaner approach: track effort, growth, and engagement as separate columns that inform your understanding of the student but don’t get folded into the academic percentage. Some schools have separate behavior or growth grades; if yours doesn’t, keeping these notes in a parallel system still helps you have more grounded conversations with students and parents.
The academic grade stays cleaner. The qualitative picture stays visible. They’re just not in the same cell.
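If you track grades in a spreadsheet or a small script, the separation can be literal. Here is a minimal sketch of the idea — all field names, scores, and the simple averaging are hypothetical, not a feature of any particular gradebook system:

```python
# Illustrative sketch only: field names and scores are made up.
# The point is structural: growth and engagement are recorded,
# but they never enter the academic average.

def academic_grade(assessment_scores):
    """Average of academic assessment scores only."""
    return sum(assessment_scores) / len(assessment_scores)

student = {
    "name": "A. Student",
    "assessments": [82, 74, 90],  # academic evidence only
    # Tracked in parallel, never folded into the percentage:
    "growth_notes": "Lab write-ups noticeably stronger since October",
    "engagement": "Consistently contributes in discussion",
}

grade = academic_grade(student["assessments"])
print(round(grade))  # 82 -- growth and engagement notes unchanged
```

The design choice is the whole point: the qualitative columns exist and inform conversations, but no code path mixes them into `academic_grade`.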
3. Tiered assessments with a shared floor
This one takes more upfront work but pays off in both accuracy and classroom culture. The idea: design assessments with a base-level section that all students complete (covering core grade-level concepts), and an extension section that any student can attempt for additional points.
The base section determines whether a student has met the standard. The extension section gives a pathway for students who’ve gone beyond it. Students with modifications focus on the base; students who want a challenge go further.
What tends to happen with well-designed tiered assessments is that more students attempt the extension than you’d expect — because it’s optional, it doesn’t carry the pressure of the base questions, and curiosity takes over.
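The scoring logic above can be sketched in a few lines. Note that the 70% floor, the ten-point bonus cap, and the scaling are all hypothetical choices for illustration — the source doesn’t prescribe specific numbers, so pick thresholds that fit your own grading scale:

```python
# Illustrative sketch: the 70% floor and 10-point bonus cap are
# hypothetical values, not a recommendation from any standard.

def tiered_score(base_earned, base_total, ext_earned, ext_total,
                 max_bonus=10):
    """Base section decides whether the standard is met; the
    extension adds bonus points but never substitutes for the base."""
    meets_standard = base_earned / base_total >= 0.70
    base_pct = 100 * base_earned / base_total
    bonus = max_bonus * ext_earned / ext_total if ext_total else 0
    return meets_standard, min(100, base_pct + bonus)

# Student who did well on the base and attempted half the extension:
print(tiered_score(24, 30, 3, 6))   # (True, 85.0)

# Student who skipped the extension entirely -- no penalty:
print(tiered_score(27, 30, 0, 0))   # (True, 90.0)
```

Because the bonus divides only by the extension’s own total (and is zero when the extension isn’t attempted), skipping it can’t drag a grade down — which is what keeps the extension genuinely optional.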
“But won’t other students think it’s unfair?”
Usually, when students raise this — and they do — what they’re really asking is whether you’re applying some secret easier scale that they’re not getting access to. The honest answer to that is no, and you can say so directly.
“Different students have different needs, and I’m making sure everyone gets assessed on what they’ve learned. The standard is the same.” Most students in middle school and up accept this when it’s said plainly. The ones who push back are often the ones who’ve absorbed the myth that sameness equals fairness — which is worth gently challenging anyway.
What students notice more than differentiated formats is inconsistency: when the same accommodation seems to appear and disappear, or when it’s obvious that some students are getting much more generous marking rather than adjusted methods. Being transparent that you’re differentiating (without going into specifics about individual students) goes a long way.
A note on records
Whatever approach you use, document it. Not elaborate documentation — just: which students have which accommodations active, what modifications were applied to which assessments, and approximately when. A simple notes column in your grade tracking is enough.
This isn’t about covering yourself legally, though it does that. It’s that differentiated grading only works if it’s consistent. Without records, it’s easy to drift — applying accommodations some weeks and forgetting them others, adjusting one assessment format but not a similar one. The record keeps your own practice honest.
The goal, ultimately, is grades that mean something. That tell you and the student what they know. That’s harder than applying one scale uniformly — but it’s also the point.