Assessment: Every Step Informs the Next

Overview

Key things covered in this article

  • What assessment really means in learning design beyond final quizzes and pass/fail scores.
  • The three core types of assessment—diagnostic, formative, and summative—and when to use each.
  • Common assessment mistakes that frustrate learners and leaders, and how to avoid them.
  • How to factor in quality, accessibility, and follow-up so assessment supports real performance.
  • One guiding question you can use to design more intentional assessments in your next project.

Rethinking Assessment: Not Just the Final Score

When most people hear the word assessment, they picture final scores, high-stakes exams, or pass/fail checkboxes. In learning design, assessment is much more than an endpoint. It is a series of intentional check-ins that help you and your learners see what is really happening.

Assessment can help you:

  • Understand where learners are starting
  • Support them as they move through the experience
  • Confirm whether they can actually do what matters at the end

Like stepping stones across a river, each assessment—diagnostic, formative, and summative—plays a different role in helping people make progress. When you design those steps with care, assessment becomes less about judgment and more about momentum.

From a design thinking perspective, assessment is also a way to learn about your learners:

  • You listen to what they can do right now
  • You test your assumptions about what they understand
  • You adjust the course, examples, or support based on what you learn

In the next section, you’ll see how the three core types of assessment—diagnostic, formative, and summative—work together across the beginning, middle, and end of a learning journey.

The Three Essential Types of Assessment

Before you pick tools or formats, it helps to be clear about the kind of question you are trying to answer. Most assessment moments fall into one of three types:

  • Diagnostic: Where is the learner starting?
  • Formative: How are they progressing?
  • Summative: Did they reach the goal?

Each type supports a different moment in the learner journey. Diagnostic assessment helps you understand who is in the room. Formative assessment gives you and your learners feedback along the way. Summative assessment checks whether they can do the things that matter most at the end.

The cards below give a quick view of when to use each type, what it is best for, and common mistakes to watch for as you design.

Type 1

Diagnostic assessment – “Where are we starting?”

  • When: Before learning
  • Purpose: Identify prior knowledge, readiness, and gaps
  • Examples: Pre-tests, skills inventories, self-checklists

Use diagnostic assessment to understand who is in the room before you design or assign the full experience. It helps you avoid over-teaching what people already know and overlooking key gaps that matter.

Common mistake: Skipping pre-assessment and designing for the “average” learner instead of the actual group in front of you.

Type 2

Formative assessment – “How is it going?”

  • When: During learning
  • Purpose: Monitor progress and provide feedback
  • Examples: Knowledge checks, polls, practice tasks, peer feedback

Formative assessment gives learners and designers small, low-risk check-ins along the way. It shows whether people are on track, where they are stuck, and where they may need more practice or clarity.

Common mistake: Giving feedback only at the end—or making it so vague that learners don’t know what to change next.

Type 3

Summative assessment – “Did we get there?”

  • When: After learning
  • Purpose: Measure mastery and final outcomes
  • Examples: Final quizzes, simulations, performance tasks, certification tests

Summative assessment is where you check whether learners can do the things that matter most at the end of the journey. It should feel as close as possible to real work, not a separate school-like exercise.

Common mistake: Building a final quiz that doesn’t match the learning objectives or the real decisions people make on the job.

How the Three Types Work Together in One Program

In a single learning journey, you will often use all three types of assessment. Each one supports a different moment, and together they create a clearer picture of how learners are doing.

  • Diagnostic: New hires complete a brief pre-check on key policies before onboarding starts, so you can see who needs more support and who can move faster.
  • Formative: During onboarding, they practice scenarios, answer quick questions, and get feedback from a coach, so they can adjust before they make mistakes on the job.
  • Summative: At the end, they complete a realistic task or simulation that mirrors their actual job, so you can confirm they are ready to perform.

Seeing the types as a sequence, not separate events, helps you design assessment as part of the whole experience, rather than a single quiz at the end. It also makes it easier to explain to stakeholders why you are recommending different kinds of check-ins at different stages.

Choosing the Right Assessment for the Moment

A helpful way to choose the right assessment is to start with the decision you are trying to support. Different moments in the learning journey call for different types of information about your learners.

Ask yourself:

  • What decision are we trying to support? Do we need to place learners, coach them along the way, or certify that they are ready?
  • What do we need to learn about learners right now? Their starting point, their progress, or their end result?
  • What would this look like in their real work? Can we design the assessment to look more like that, and less like a generic quiz?

In practice, this often looks like:

  • If you are trying to place people into the right path, you are in diagnostic territory.
  • If you are trying to coach them while they are learning, you are using formative assessment.
  • If you are deciding whether they can handle real work on their own, you are in summative territory.

Instead of defaulting to “10 multiple-choice questions at the end,” align the format with the moment and the job. When the assessment matches the real decisions learners make, it feels more useful to them and more trustworthy to you.
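To make the placement idea concrete, here is a minimal sketch of how a diagnostic pre-test score might route learners into a path. The thresholds, path names, and maximum score are all hypothetical, chosen for illustration; your own cut points would come from your content and your learners.

```python
# Illustrative sketch: using a diagnostic pre-test score to place
# learners into a path. Thresholds and path names are hypothetical.

def place_learner(pretest_score, max_score=20):
    """Map a pre-test score to a suggested learning path."""
    pct = pretest_score / max_score
    if pct >= 0.85:
        return "fast track"        # already knows most of the material
    if pct >= 0.5:
        return "standard path"     # solid base, fill specific gaps
    return "foundations first"     # needs the fundamentals up front

print(place_learner(18))  # fast track
print(place_learner(12))  # standard path
print(place_learner(6))   # foundations first
```

The point is not the code itself but the design choice it encodes: the diagnostic exists to support a placement decision, so its scoring rule should be explicit and easy to explain to stakeholders.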

Are you measuring what matters?

As you think about your own courses, notice where you are already using diagnostic, formative, and summative assessment—even if you have not named them that way. The next step is to check whether those assessments are doing a good job of measuring what matters and giving learners fair, clear results.

Quality Check: Validity and Reliability

Before you finalize any assessment, it helps to pause and ask two simple questions about quality: Are we measuring the right thing, and can people trust the results?

Two core ideas guide this check:

  • Validity: Does this assessment measure what it is meant to measure?
    Example: If your goal is to test decision-making, are you actually checking the decisions learners make, or just whether they remember facts and definitions?
  • Reliability: Would different instructors, reviewers, or raters score this in the same way?
    Note: This matters most for presentations, role-plays, and portfolios, where judgment can vary.

You do not need to make every assessment perfect, but even a short validity and reliability check can prevent confusing, unfair, or misleading results. A simple starting point is to review your main assessment and ask:

  • “Is this task a good match for what learners need to do on the job?”
  • “If two different people scored this, would they likely give the same result? If not, what guidance or rubric is missing?”

These quick questions help you strengthen the quality of your assessments without adding a lot of extra work to your process.
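One way to make the “would two people score this the same?” question concrete is to compute a simple percent-agreement score across a small sample of shared ratings. The sketch below is illustrative only: the rubric labels and rater data are made up, and more formal measures (such as Cohen’s kappa, which corrects for chance agreement) exist if you need them.

```python
# Illustrative sketch: percent agreement between two raters.
# The rubric labels and the scores below are hypothetical ratings
# ("exceeds", "meets", "below") for the same ten learners.

def percent_agreement(rater_a, rater_b):
    """Share of learners both raters scored identically."""
    if len(rater_a) != len(rater_b):
        raise ValueError("Raters must score the same learners")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

rater_a = ["meets", "exceeds", "below", "meets", "meets",
           "exceeds", "meets", "below", "meets", "exceeds"]
rater_b = ["meets", "meets", "below", "meets", "exceeds",
           "exceeds", "meets", "below", "meets", "exceeds"]

print(f"Agreement: {percent_agreement(rater_a, rater_b):.0%}")  # 8 of 10 match -> 80%
```

If agreement comes back low, that usually points to a missing or ambiguous rubric rather than a problem with the raters.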

Accessibility and Inclusion: Designing Assessments for All Learners

Great assessments are designed with inclusion in mind, not as an afterthought. When assessments are hard to read, hard to access, or full of hidden assumptions, the results tell you more about those barriers than about what learners can do.

As you design or review an assessment, consider:

  • Clarity of language: Use clear, direct wording for questions and instructions. Avoid unnecessary jargon or double negatives that make items confusing.
  • Technical accessibility: Ensure screen reader compatibility, keyboard navigation, and sufficient color contrast for text and buttons.
  • Time and flexibility: Be thoughtful about time limits and allow reasonable alternatives when processing speed, reading needs, or connection quality may differ.
  • Scenario design: Avoid making assumptions about culture, background, or access to specific technologies that some learners may not share.

Small changes—like clearer wording, better contrast, or more realistic scenarios—can make assessments more accurate and more humane. They also send a clear message that you want every learner to have a fair chance to show what they know and can do.
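The color-contrast point above is one of the few accessibility checks you can verify with a formula. The sketch below applies the WCAG 2.x relative-luminance and contrast-ratio definitions (4.5:1 is the minimum for normal body text); the sample colors are arbitrary examples.

```python
# Illustrative sketch: checking text/background contrast against the
# WCAG 2.x formula. 4.5:1 is the minimum for normal body text.

def relative_luminance(rgb):
    """WCAG relative luminance for an sRGB color (0-255 channels)."""
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colors, from 1:1 up to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black on white passes easily; light gray on white does not.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))      # 21.0
print(round(contrast_ratio((200, 200, 200), (255, 255, 255)), 1))
```

In practice you would use an automated checker rather than hand-rolled code, but seeing the formula makes it clear why “looks fine to me” is not a reliable test.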

What Happens After the Assessment?

This is where assessment becomes truly useful. Once you have results, the value comes from what you do next. Rather than treating scores as an endpoint, use them as a starting point for action.

You can use assessment data to:

  • Analyze patterns: Look beyond individual scores. Are there topics where many people are struggling or excelling?
  • Give feedback: Make feedback timely, specific, and focused on what learners can do next, not just what they got wrong.
  • Adjust your design: Revise content, pacing, or activities based on what you learn, rather than repeating the same approach each cycle.
  • Support next steps: Recommend targeted resources, coaching, practice tasks, or stretch assignments based on where learners are now.
  • Track impact over time: Check whether behavior, quality, safety, or performance improves after the learning—and adjust again if it does not.

In a design thinking mindset, assessment is not the end of the story. It is a loop back into understanding your learners, refining your solution, and improving the experience over time.

Is your assessment helping learners move, or just checking a box?

When you look at one course you’re working on right now, how could you use diagnostic, formative, and summative assessment more intentionally—so each one gives you (and your learners) information you can actually act on?

If you’re an ID or L&D pro reading this

Use this article as language you can bring into your next kickoff or review conversation, so stakeholders see assessment as part of designing for performance—not just a quiz at the end.

  • “What do we need to know about learners before they start this?”
  • “Where could we build in small check-ins so learners get feedback before the final quiz?”
  • “If we say they’re ‘certified’ at the end, what should they actually be able to do—and how can we test that in a realistic way?”