Assessment Resources
Peggy L. Maki is internationally recognized for her work in helping faculty construct assessment practices that support enhanced student learning. Her work guides much of how CSUMB has designed its assessment process. A brief excerpt from Chapter 5 of her book _Assessing for Learning: Building a Sustainable Commitment Across the Institution_ is summarized here.
The purpose of the rubric is to translate an outcome into a set of criteria (elements of the outcome) and standards of performance that can be used to assess how well the student work demonstrates the outcome. As Maki writes, "[r]esults of applying these criteria provide evidence of learning patterns. At both the institution and program levels these patterns identify students’ areas of strength as well as weakness. Interpreting patterns of weakness leads to adjustments or modifications in pedagogy; curricular, co-curricular, and instructional design; and education practices and opportunities" (Maki, p. 121). At the same time, patterns of strength point us to practices that can be adopted and adapted more widely. Maki provides a step-by-step approach in the excerpt (pp. 124-126). TLA has an archive of sample rubrics from a range of disciplines and the AAC&U VALUE rubrics are great starting points for Essential Learning Outcomes.
Once faculty members have agreed on a rubric, it’s important to spend time working together to "calibrate" or "norm." This generally involves having participants read and score a sample of student work, share their scores, and discuss how they arrived at them. The goal is to ensure a shared understanding of the rubric and establish relative agreement on how to apply the standards to specific work. If there is substantial disagreement on the initial sample, the process should be repeated with another sample. Agreement will never be perfect, but establishing a reasonable level of inter-rater reliability is crucial, not only to serve the purposes of the assessment but also to ensure that program faculty agree about what they expect of students. Once the norming is completed, at least two readers score each student sample, with the person managing the reading monitoring scores for inter-rater reliability. Where there is a pattern of substantial disagreement between the first and second reader (e.g., the first reader is consistently higher than the second), the manager can ask the readers to consult and come to agreement. This should bring readers back into line with the group norm. Again, the excerpt from Maki has a step-by-step approach (pp. 126-127).
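How agreement is monitored will vary by program, but a minimal sketch of the monitoring step may help. The Python snippet below assumes each sample is scored by two readers on a four-point rubric scale and treats scores more than one point apart as needing a consult or a third reading; the score data, the scale, and the threshold are illustrative assumptions, not prescribed values.

```python
# Minimal sketch: monitoring agreement between two readers on a 4-point rubric.
# The scores, the 4-point scale, and the "more than 1 point apart" threshold
# are illustrative assumptions, not part of the documented procedure.

from collections import Counter

# (sample_id, reader_1_score, reader_2_score) for one rubric criterion
scores = [
    ("S01", 3, 3), ("S02", 2, 3), ("S03", 4, 2),
    ("S04", 1, 1), ("S05", 3, 4), ("S06", 2, 2),
]

DIVERGENCE_THRESHOLD = 1  # scores more than 1 point apart trigger a consult

exact = sum(1 for _, a, b in scores if a == b)
adjacent = sum(1 for _, a, b in scores if abs(a - b) <= DIVERGENCE_THRESHOLD)
divergent = [(sid, a, b) for sid, a, b in scores if abs(a - b) > DIVERGENCE_THRESHOLD]

print(f"Exact agreement:    {exact / len(scores):.0%}")
print(f"Adjacent agreement: {adjacent / len(scores):.0%}")

# Look for a systematic pattern (e.g., reader 1 consistently scoring higher),
# which would suggest re-norming rather than sample-by-sample consultation.
direction = Counter("r1_higher" if a > b else "r2_higher" if b > a else "tie"
                    for _, a, b in scores)
print("Disagreement pattern:", dict(direction))

print("Samples needing a consult or third reading:", divergent)
```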
- Designation of a student artifact (or artifacts) that calls upon students to use the knowledge, skills, and abilities identified in one or more of the GE, MLO, or core competency learning outcomes. The artifact(s) may be an assignment already integrated into a course or may be designed specifically for this purpose. Whichever option is chosen, the task assigned should be aligned with the rubric being used and should be authentic (i.e., work assigned and graded within each section, so that students approach it seriously).
- The faculty developing the assessment will adopt assessment artifacts and rubrics consistent with the program’s academic orientation and the structure of its courses.
- Assessments will be conducted by a group of faculty members who participate in group calibrating/norming sessions. These sessions are designed to ensure that faculty have a shared understanding of the rubric’s criteria and standards and are reasonably comfortable applying them to student work.
- Each student submission will be assessed by two faculty members and, if their assessments diverge substantially, raters will consult in order to reach consensus or the submission will receive a third assessment.
TLA is available to provide support for developing and implementing assessment activities.
- TLA is available to help facilitate work on the development of the assignment, the assessment instrument and the assessment sessions.
- TLA is available to collaborate with faculty to develop a sampling method that provides appropriate representation of course sections and of students (a sketch of one such approach follows this list).
- TLA will hire and supervise a student assistant to support the process (e.g., pulling samples, stripping identifiers, entering into database, running reports).
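The sampling method itself will be worked out with TLA, but a minimal sketch of one common approach, a stratified draw that represents every course section in proportion to its submissions, may be useful. The roster fields, the 20% sampling fraction, and the minimum of two artifacts per section below are illustrative assumptions.

```python
# Minimal sketch: drawing a sample of student artifacts so that each course
# section is represented in proportion to its submissions. Roster fields,
# the 20% fraction, and the minimum of 2 per section are assumptions.

import random
from collections import defaultdict

random.seed(42)  # reproducible draw so the sample can be documented

# One record per submitted artifact: (student_id, course, section)
roster = [
    ("A101", "GE-3", "01"), ("A102", "GE-3", "01"), ("A103", "GE-3", "01"),
    ("A104", "GE-3", "02"), ("A105", "GE-3", "02"),
    ("A106", "GE-3", "03"), ("A107", "GE-3", "03"), ("A108", "GE-3", "03"),
]

SAMPLE_FRACTION = 0.20
MIN_PER_SECTION = 2

by_section = defaultdict(list)
for record in roster:
    by_section[record[2]].append(record)

sample = []
for section, records in by_section.items():
    n = max(MIN_PER_SECTION, round(len(records) * SAMPLE_FRACTION))
    sample.extend(random.sample(records, min(n, len(records))))

print(f"Sampled {len(sample)} of {len(roster)} artifacts "
      f"across {len(by_section)} sections")
```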
The student work will be coded for purposes of analysis, identifying course and section as well as individual student. Data will be analyzed using files stripped of identifiers. TLA will retain the coding data in a password-protected file. Faculty will be able to request data on their sections and compare those to the total aggregated dataset, but no other party will be given access to course-level data and no party will be given access to student-level data.
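One minimal way to implement this coding, assuming a simple CSV workflow, is to assign codes for students and sections, write the code-to-identifier key to the file that TLA keeps password protected, and use a second file stripped of identifiers for all analysis. The file names and field names below are hypothetical.

```python
# Minimal sketch: coding submissions so analysis files carry no direct
# identifiers, while a separate key file (retained only by TLA in a protected
# location) preserves the mapping back to student, course, and section.
# File names, field names, and the use of CSV are illustrative assumptions.

import csv

submissions = [
    {"student_id": "A101", "course": "GE-3", "section": "01", "score": 3},
    {"student_id": "A104", "course": "GE-3", "section": "02", "score": 4},
]

key_rows = []        # retained only by TLA; links codes back to identifiers
analysis_rows = []   # stripped rows used for scoring reports and analysis

for i, row in enumerate(submissions, start=1):
    student_code = f"STU{i:04d}"
    section_code = f"SEC-{row['course']}-{row['section']}"
    key_rows.append({"student_code": student_code,
                     "section_code": section_code, **row})
    analysis_rows.append({"student_code": student_code,
                          "section_code": section_code,
                          "score": row["score"]})

with open("coding_key.csv", "w", newline="") as f:      # kept by TLA, protected
    writer = csv.DictWriter(f, fieldnames=list(key_rows[0]))
    writer.writeheader()
    writer.writerows(key_rows)

with open("analysis_file.csv", "w", newline="") as f:   # used for analysis
    writer = csv.DictWriter(f, fieldnames=list(analysis_rows[0]))
    writer.writeheader()
    writer.writerows(analysis_rows)
```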
As Peggy Maki writes, assessment is conducted in order to examine "the efficacy of collective educational practices" (_Assessing for Learning_, p. xvii). The most carefully crafted assessment activity is meaningless if we don't use the results to evaluate and improve our practices. So the crux of the process begins once the assessment is completed and faculty sit with the results to interpret them, appreciate what is working, and formulate responses where results are disappointing.
The first part of this process is interpreting the results. The initial reporting of results will provide a general description of the data: how student performance on the rubric is distributed for each criterion assessed. Further analysis can break the results down by course level (i.e., upper and lower division) or by discipline for comparison. Making sense of the patterns that emerge from this level of analysis requires bringing faculty together with the data and the curriculum map to begin looking for explanations, identifying further questions, and beginning to propose interventions that will improve outcomes.
To move beyond this kind of disaggregation (e.g., to look at characteristics of the student population, like major, race, class standing) requires working with Institutional Assessment and Research (IAR) and TLA to access and combine data from the student database.
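For programs that want a concrete picture of what this first level of analysis looks like, here is a minimal sketch using pandas: it describes how scores are distributed for each rubric criterion, breaks them down by course level, and then merges in student characteristics (keyed on the same coded identifier) for further disaggregation. The column names, values, and merge key are illustrative assumptions, not the actual data layout.

```python
# Minimal sketch: score distribution per rubric criterion, breakdown by course
# level, and disaggregation after merging IAR-supplied student characteristics
# on the coded student ID. All data and column names are illustrative.

import pandas as pd

scores = pd.DataFrame({
    "student_code": ["STU0001", "STU0002", "STU0003", "STU0004"],
    "course_level": ["lower", "lower", "upper", "upper"],
    "criterion":    ["analysis", "analysis", "analysis", "analysis"],
    "score":        [2, 3, 4, 3],
})

# Distribution of scores for each criterion (share of students at each level)
distribution = (scores.groupby("criterion")["score"]
                      .value_counts(normalize=True)
                      .unstack(fill_value=0))
print(distribution)

# Breakdown by course level (upper vs. lower division)
print(scores.groupby(["criterion", "course_level"])["score"].mean())

# Further disaggregation requires merging student characteristics supplied by
# IAR, keyed on the same coded identifier used in the stripped analysis file.
student_info = pd.DataFrame({
    "student_code": ["STU0001", "STU0002", "STU0003", "STU0004"],
    "major": ["BIO", "PSY", "BIO", "PSY"],
})
merged = scores.merge(student_info, on="student_code", how="left")
print(merged.groupby(["criterion", "major"])["score"].mean())
```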
Some of the interventions to consider include:
- exploring areas of particular success for practices that may be useful in making improvements elsewhere;
- looking at the curriculum map for evidence of adequate scaffolding through prerequisites and course sequencing;
- considering the outcomes themselves to confirm that they are properly articulated;
- providing faculty with more support and/or models for high-impact instruction;
- providing students with more tutoring and/or supplemental instruction opportunities;
- enhancing scaffolding by previewing concepts in prior courses;
- rewriting assignments to better align with outcomes and rubrics; and
- changing textbooks or other course resources.
Interventions can, and usually should, begin small. Don’t move to a curricular overhaul to address something that might be improved with a modest change.
As you initiate changes, think about how you will assess their impact on a small scale (e.g., by having the faculty making the changes consult with each other or with faculty whose courses follow). This will help you make adjustments as you implement the interventions and before you repeat the systematic assessment cycle for the outcome(s) you’re addressing.
Continuing degree programs are asked to conduct program review every seven years. The program review process occurs over four semesters: 1) fall of year 1 for creating the self-study; 2) spring of year 1 for the external review; 3) summer of year 1 for a subcommittee of the Senate Curriculum Committee Council to provide recommendations based on the self-study and external review; and 4) fall of year 2 for the program to develop its program improvement plan. During these two years, programs are not expected to conduct annual assessment activities. Instead, they are asked in the self-study to summarize the annual assessment work conducted since the last program review, synthesize the results, and report on changes made.
You will be guided through the program review template to report briefly on each of the outcomes you’ve assessed. Then you have the opportunity to stand back and look at these activities as a whole. Appendix A, Section B of the program review Procedure Manual offers some questions that can inform this process. Your synthesis can address trends or patterns that have emerged over the series of assessments or discoveries you’ve made about effective teaching/learning practices. You can also report on the impact that engaging in annual assessment has had on the faculty and program. Have your assessment activities served to enhance your program’s evidence-based decision making? Have you altered your faculty mentoring practices to better serve student learning?