Dr. Bryan Hall (Senior Academic Consultant, Mindstream)
Executive Summary
Higher education institutions have invested time and treasure in developing assessment infrastructure, yet most struggle to build authentic “cultures of assessment” where data meaningfully drives improvement. This white paper argues that this failure stems from misaligned incentives: when assessment serves primarily compliance functions, it remains peripheral to institutional life. By repositioning assessment as central to faculty evaluation, particularly for contingent faculty, institutions can transform assessment from a bureaucratic burden into a strategic asset that genuinely improves teaching effectiveness and student learning outcomes.
The Assessment Paradox
The irony of contemporary higher education assessment is striking. Eighty percent of campuses have identified institution-wide learning outcomes. Assessment coordinators occupy offices across the country. Yet faculty frequently view assessment as a compliance exercise disconnected from their real work.
Study after study reveals the same finding: even after extensive professional development, faculty report “meeting institutional expectations” as their primary motivation—not improving student learning. Assessment data often sits unused. “Closing the loop” remains more aspiration than reality.
When assessment lacks institutional traction, valuable insights about student learning go untapped. Most critically, institutions fail to leverage assessment’s potential as a powerful tool for evaluating teaching effectiveness across their faculty, especially the 73% of instructional staff who hold contingent appointments.
The Validity Crisis in Teaching Evaluation
While institutions struggle to build assessment cultures, they simultaneously rely on evaluation systems of questionable validity for high-stakes personnel decisions. Student evaluations of teaching (SETs) suffer from well-documented problems:
- Students frequently fail to distinguish teaching quality from unrelated factors such as course difficulty, instructor personality, or their own performance level.
- Students exhibit individualized rating dispositions that vary considerably, threatening reliability.
- Finally, students often lack the expertise to evaluate course content and design accurately.
These validity concerns become particularly acute when SETs drive reappointment decisions for contingent faculty (adjunct, contract, or untenured tenure-track). Institutions need better mechanisms for evaluating teaching effectiveness, mechanisms grounded in evidence of actual student learning rather than student perceptions.
A Strategic Solution: Assessment as Evidence of Teaching Effectiveness
Here lies an elegant solution to both problems: use student learning outcome assessment as the primary evidence for evaluating contingent faculty teaching effectiveness.
Imagine an assessment system where:
- Faculty have direct stakes in assessment outcomes because data influences reappointment decisions
- Assessment focus shifts from compliance to authentic measurement of teaching effectiveness
- Contingent faculty become deeply engaged because employment depends on demonstrating student learning
- Institutions measure actual learning rather than student satisfaction
- Assessment data receives serious attention because it drives consequential personnel decisions
This framework transforms assessment from peripheral compliance to central institutional process by fundamentally realigning incentives around documented student learning.
Why Cultures of Assessment Fail to Develop
Research identifies several critical barriers:
The Compliance Trap: When assessment serves primarily to satisfy accreditor requirements, faculty view it as external imposition. Compliance motivation rarely generates sustained engagement.
Abstracted Benefits: The relationship between assessment effort and teaching improvement remains indirect and delayed. Faculty invest significant time but may not see immediate payoff.
Opportunity Costs: Assessment requires time that could be spent on teaching, research, or service. Without clear benefits, faculty rationally allocate effort elsewhere.
Psychological Distance: When assessment data serves institutional purposes, individual faculty lack personal stakes in outcomes.
Adjunct Exclusion: Contingent instructors typically lack service obligations and therefore participate minimally in assessment processes.
These barriers share a common feature, viz. misaligned incentives. Faculty bear assessment costs while benefits primarily accrue to institutions. Creating genuine assessment culture requires giving faculty direct, personal stakes in outcomes.
The Mechanism: From Perception to Learning
Current practice relies heavily on student evaluations measuring student perceptions and satisfaction. The strategic alternative is to evaluate contingent faculty based on student learning outcome assessment data measuring actual learning achievement.
Instead of asking “Did students like the course?” institutions ask “Did students learn what they were supposed to learn?”
For Teaching Evaluation Validity: Assessment provides direct evidence of teaching effectiveness (whether students achieved intended learning outcomes) rather than proxy measures based on satisfaction.
For Assessment Culture: When faculty careers depend on assessment data, assessment transforms from bureaucratic requirement to professional necessity. Faculty develop genuine expertise and engage seriously with results.
For Student Learning: When employment security depends on demonstrated student learning, faculty incentives align perfectly with institutional mission.
Implementation Pathways
Who Does the Assessment?
Individual Faculty Model: Instructors assess their own courses. Maximizes ownership and minimizes workload but creates potential bias.
Community-Based Model: Faculty teams collectively assess student work across sections. Increases reliability and reduces bias but requires significant service time.
AI-Augmented Model: Artificial intelligence tools assess student work against rubrics with human oversight. Offers objectivity and scalability while reducing workload.
The optimal choice depends on institutional context. Many institutions may benefit from hybrid approaches.
What Gets Assessed?
Standardized Assessments: Common instruments provide comparability but risk encouraging teaching to the test.
Rubric-Based Assessment: Faculty use common rubrics (like AAC&U VALUE rubrics) to evaluate authentic work. Better captures complex learning but requires careful norming. May not be flexible enough for all disciplinary contexts.
Portfolio Assessment: Students compile evidence across multiple contexts. Provides rich evidence but can be resource-intensive.
Strategic leaders will match assessment method to learning outcome and disciplinary norms.
How Are Results Used?
Threshold Standards: Faculty must demonstrate that a specified percentage of students meets benchmarks. Provides clear expectations but may not account for variation in students’ incoming preparation.
Value-Added Models: Assessment focuses on learning gains from beginning to end. Better accounts for incoming variation but requires more complex design.
Comparative Analysis: Individual performance compared to departmental norms. Contextualizes results but may discourage innovation.
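To make these three approaches concrete, the following sketch shows how each would score the same set of rubric results. All scores, benchmarks, and the departmental norm are hypothetical illustrations, not values drawn from any institutional policy:

```python
# Illustrative sketch of the three result-use models described above.
# All data, benchmarks, and norms below are hypothetical examples.
from statistics import mean

# Rubric scores (0-4 scale) for one instructor's students:
# pre-course and post-course assessments of the same learning outcome.
pre_scores = [1.0, 1.5, 2.0, 1.0, 2.5, 1.5]
post_scores = [2.5, 3.0, 3.5, 2.0, 3.5, 3.0]

# 1. Threshold standard: share of students meeting a benchmark score.
BENCHMARK = 3.0
met = sum(s >= BENCHMARK for s in post_scores) / len(post_scores)

# 2. Value-added model: average gain from pre- to post-assessment.
gain = mean(post - pre for post, pre in zip(post_scores, pre_scores))

# 3. Comparative analysis: instructor mean relative to a departmental norm.
DEPT_MEAN = 2.8
relative = mean(post_scores) - DEPT_MEAN

print(f"Threshold: {met:.0%} of students met the {BENCHMARK} benchmark")
print(f"Value-added: mean gain of {gain:.2f} rubric points")
print(f"Comparative: {relative:+.2f} points vs. departmental mean")
```

Even this toy example surfaces the trade-offs noted above: the threshold view ignores where students started, while the value-added view rewards growth regardless of the final benchmark.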
Addressing Resistance
“Assessment will be gamed or manipulated.” The solution lies in design: using community-based or AI-augmented assessment, employing external evaluators, and monitoring for problematic patterns.
“This places too much emphasis on quantifiable outcomes.” Current systems emphasize student satisfaction, a measure with even weaker validity. Assessment can incorporate both quantitative and qualitative dimensions when appropriately designed.
“Faculty will teach to the assessment.” If assessments are well-designed and aligned to meaningful outcomes, teaching to the assessment means teaching what we want students to learn.
“This is too expensive.” Strategic implementation can control costs by building on existing infrastructure and selectively deploying technology solutions.
Change Management Framework
Phase One – Foundation Building (Months 1-6): Audit practices, identify pilot programs, develop implementation plans, initiate faculty development.
Phase Two – Piloting (Months 7-18): Implement in pilot areas, collect data, close the loop, build case studies.
Phase Three – Scaling (Months 19-36): Expand based on results, develop institution-wide policies, continue faculty development.
Phase Four – Institutionalization (Months 37+): Integrate into standard processes, refine approaches, assess cultural transformation.
The Strategic Payoff
This framework achieves multiple objectives through coordinated intervention:
- Improving Teaching Evaluation Validity: Direct, reliable evidence of teaching effectiveness based on student learning
- Building Assessment Culture: When careers depend on assessment, it becomes central to institutional life
- Strengthening Academic Quality: Focus on demonstrated learning improves instructional quality
- Optimizing Resource Use: Existing infrastructure serves multiple purposes
- Enhancing Faculty Development: Engagement with assessment provides actionable insights
- Engaging Contingent Faculty: The largest instructional population becomes invested in assessment
These benefits compound over time as the system matures and cultural transformation deepens.
Conclusion: From Compliance to Culture
The persistent failure to build genuine cultures of assessment stems from misaligned incentives. When assessment serves primarily compliance functions, it remains peripheral. When assessment data determines career outcomes, it becomes central.
This white paper has argued for a bold reframing: position student learning outcome assessment as the primary evidence for contingent faculty evaluation. This single intervention simultaneously addresses teaching evaluation validity concerns and creates the conditions for an authentic assessment culture by realigning incentives around student learning achievement.
Implementation will require courage, sophistication, and persistence. Yet the alternative—continuing with evaluation systems of questionable validity while leaving assessment culture unrealized—becomes increasingly untenable as stakeholder pressure for accountability intensifies.
The infrastructure largely exists. The methodology is established. What remains is strategic leadership willing to recognize that the solution to building assessment culture lies in making assessment matter for what faculty care about most, viz. their professional standing and employment security.
The opportunity before us is institutional transformation—from compliance cultures to cultures of evidence, from questionable evaluation methods to direct measurement of teaching impact, from peripheral assessment to central accountability.
Mindstream stands ready to help your institution examine your existing assessment practices, design solutions that meet your specific needs, and work with administrators, faculty, and staff to implement these solutions.