Direct and Indirect Assessment of Student Learning

Programs typically collect both direct and indirect evidence of student learning. Using both kinds of assessment supports a more complete picture of what students are learning, how well they are learning it, and how they perceive their learning. Direct measures identify specific skill or knowledge gaps, while indirect measures help explain why those gaps might exist.

Direct Assessments

Direct assessments provide evidence of actual student learning or achievement through direct examination or observation of student work. They show what students can do or produce.

Common Direct Methods:

  • Portfolios of student work
  • Capstone experiences scored with a rubric
  • Written work, performances, or presentations
  • Standardized exams (e.g., licensure or certification exams, other national tests)
  • Observations of student behaviors
  • Classroom response systems (clickers, etc.)
  • Ratings of student skills by field experience supervisors/employers

Indirect Assessments

Indirect assessments collect information about students' perceptions of their learning, attitudes, or opinions. They provide contextual information for understanding the learning environment and how students respond to learning opportunities.

Common Indirect Methods:

  • Surveys or questionnaires
  • Focus groups
  • Interviews
  • System of record data
  • Course grades and DFW (D, F, withdrawal) rates
  • Participation/attendance rates
  • Student ratings of their knowledge/skills or reflections on what they have learned
  • End-of-semester evaluation questions focused on the course

One assessment method may address multiple learning outcomes. For example, a written project may be evaluated on written communication, knowledge of research methods, and content knowledge. One or more tools (e.g., rubrics) may be used to evaluate the project.
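To make this concrete, below is a minimal sketch of how a single written project might be recorded against several outcomes using one rubric. The outcome names, rubric levels, and scores are hypothetical, not a prescribed format.

```python
# Minimal sketch: one written project scored on several learning outcomes
# with a single four-level rubric. All names and values are hypothetical.

RUBRIC_LEVELS = {1: "Beginning", 2: "Developing", 3: "Proficient", 4: "Advanced"}

# One rater's rubric scores for a single written project.
project_scores = {
    "written_communication": 3,  # Proficient
    "research_methods": 2,       # Developing
    "content_knowledge": 4,      # Advanced
}

# Because scores are recorded per outcome, results can later be
# disaggregated by outcome (something a single course grade cannot support).
for outcome, score in project_scores.items():
    print(f"{outcome}: {score} ({RUBRIC_LEVELS[score]})")
```

Recording scores per outcome, rather than as a single overall mark, is what allows one artifact to serve as evidence for several learning outcomes at once.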

Using both direct and indirect assessments makes space for multiple voices and serves as a triangulation strategy for understanding learning outcomes. When information from multiple sources of evidence aligns, conclusions are strengthened; when there are gaps or discrepancies, those differences can yield important insights or point to areas for further investigation. Combining data in this way helps evaluate both intended and unintended program impacts and supports evidence-based program changes.

Considerations for Using Common Methods

Surveys
  • Benefits: Collect a lot of information quickly; responses can be compared across populations
  • Drawbacks: Response rates are often low; nuance is difficult to capture; an indirect assessment method

Student Work
  • Benefits: Authentic assessment; embedded in the learning experience
  • Drawbacks: Takes time to assess properly; requires agreement across raters (see the agreement-check sketch after this table)

Course Grades
  • Benefits: Grades are already required and collected; demographic data are available
  • Drawbacks: Cannot be disaggregated by outcome; grading is not consistent across time or instructors (see the DFW-rate sketch after this table)

Rubrics
  • Benefits: Inclusive strategy; focused on student development; broken down by outcomes or skills
  • Drawbacks: Good rubrics take time to develop; raters must be normed to produce reliable data

Institutional Data
  • Benefits: Can be effective when linked to other performance measures and to the results of direct assessments of student learning
  • Drawbacks: Available data may be limited or protected; not a direct assessment of learning; requires data-analysis expertise
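Two of the quantitative checks mentioned above, a DFW rate and inter-rater agreement, are simple to compute. The following is a minimal sketch; the grade values, rater scores, and function names are illustrative assumptions, not a prescribed tool.

```python
# Minimal sketch of two quick checks related to the table above.
# The grade values and rater scores are illustrative, not real data.

def dfw_rate(grades):
    """Share of course grades that are D, F, or W (withdrawal)."""
    dfw = sum(1 for g in grades if g in {"D", "F", "W"})
    return dfw / len(grades)

def percent_agreement(rater_a, rater_b):
    """Exact-match agreement between two raters scoring the same artifacts.

    A simple first check when norming a rubric; chance-corrected statistics
    (e.g., Cohen's kappa) give stronger evidence of reliability.
    """
    matches = sum(1 for a, b in zip(rater_a, rater_b) if a == b)
    return matches / len(rater_a)

grades = ["A", "B", "C", "D", "F", "W", "B", "A", "C", "B"]
print(f"DFW rate: {dfw_rate(grades):.0%}")  # 30%

rater_a = [3, 2, 4, 3, 1]
rater_b = [3, 2, 3, 3, 1]
print(f"Rater agreement: {percent_agreement(rater_a, rater_b):.0%}")  # 80%
```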

Best Practices for Using Evidence

  • Use multiple types of evidence for a fuller picture
  • Prioritize direct evidence for primary assessment
  • Use indirect evidence to provide context and support
  • Ensure evidence aligns specifically with learning outcomes
  • Document collection methods and rating criteria
  • Include both quantitative and qualitative measures when possible