Student Learning Outcomes (SLOs) at College of San Mateo - Assessment Toolkit - Part II: Assessing Learning Outcomes

What you will find here


To recap: Learning outcomes clearly state learning goals. Assessment uses these outcomes as a framework for exploring whether students (and, therefore, whether we) are achieving those goals.

In other words: How are students doing – in our courses, services, programs, and institutional goals?

An assessment activity attempts to address some version of this question.

The big picture: Asking the question helps us take the long view of our courses and programs – to look at the “big picture.” Faculty tend to know how last year’s students did in a specific course. What we don’t always know, however, is how well all this coursework helps students achieve their larger degree or certificate goals, or how effectively we’re contributing to a student’s general education.

Drilling down: While assessment is about looking at the bigger picture, a generic question (i.e., “How are students doing?”) is often not very helpful. Instead, the assessment process gives us a chance to ask a specific question about the bigger picture of student learning.

Some examples of possible inquiries:

  • Looking for patterns in student learning: to see if there are specific areas of weakness in student learning, at the course, service or program level, that need addressing. (“Which concepts in our Political Science courses are students struggling with most?”)

  • Exploring ways to address issues we already know exist, and assess what is working: closing achievement gaps, supporting student learning in discipline-specific or institutional learning goals. (“How can we support students of color to help them achieve important learning outcomes in our discipline?”)

  • Evaluating the effectiveness of some new teaching strategy or initiative. (“Which of our pedagogical strategies have helped students improve their sentence-level writing?”)

Examples. Here are some examples of what might be considered “assessment” activities – activities intended to clarify or address aspects of student learning.

  • What aspect of the degree program needs reinforcement?
    - Faculty administer a survey to recently graduated students, soliciting input; AND/OR
    - Faculty create a capstone assignment in capstone course; grade collaboratively, using a rubric aligned with PLOs; identify where students need more support; design and implement interventions (new teaching strategies, changes to curriculum, more targeted use of the Learning Center, etc.).

  • How can I better prepare students to apply what they learn in my class to their general knowledge work?
    - Faculty collaborate across disciplines to create a shared assignment, focusing on a core competency in their discipline, and grade with a shared rubric; OR
    - Faculty collaborate with faculty and staff from other disciplines and services to host a student activity (discussion, debate, team game, movie viewing); OR
    - Faculty participate in a flex day ILO assessment workshop, to re-envision assessment in an interdisciplinary context.
(For more about assessment, visit “Assessment and Program Review,” “Assessment – An Institutional Overview,” and “Resources.”)



Standard definition:  A program is a series of courses culminating in a degree or certificate.

Defining "program" for the purposes of assessing student learning: The most obvious way to use SLO assessment is to see whether students are achieving the learning goals of a degree or certificate program. 

In many institutions, the SLO assessment process focuses on degree or certificate programs. Where the majority of students are focused on achieving a degree or completing a certificate (for instance, in four-year institutions, or in CTE programs), the program outcomes offer a good vantage point from which to evaluate the effectiveness of the courses. In other words, where lots of students take a degree, it's worth ensuring that the courses in that degree really impart the knowledge, skills or abilities defined in the outcomes. 

However, it is not always practical or useful to focus on the learning goals of a degree or certificate program. Not all disciplines offer a degree or certificate, and in many disciplines, few students complete the degree or program. There may be other end-points from which faculty want to evaluate the effectiveness of their courses or services. And there may be issues in student learning that require urgent attention, which relate only indirectly to the degree or certificate goals. 

Faculty can therefore prioritize whatever aspect of their discipline or service area needs attention. This could be:

  • the learning goals of a degree or certificate program;
  • the learning goals of a sequence of courses (for instance, ESL courses);
  • the learning goals of a disciplinary core (for disciplines with neither sequenced courses nor a degree/certificate);
  • the learning goals of a learning community.

Even where faculty do wish to assess the effectiveness of degrees and certificates (as in, for instance, CTE courses), they may not always be able to assess all their degrees and/or certificates in a two-year period, or they may need to prioritize a degree, certificate, or sequence of courses for a particular reason. Good reasons to prioritize might include:

  • to assess a very popular degree or certificate ("Is our Certificate in Teaching Yoga giving students what they need?")
  • to assess a new degree / certificate / sequence of courses ("Does our new Enology AA give our students the right skills?")
  • to focus on improving student learning in one particular program or course outcome ("Program Outcome #2 looks weak; what can we do to improve this?")
  • to focus on improving one particular skill area ("Students are struggling with reading - let's find ways to improve")
  • to evaluate the success of a change in pedagogy or curriculum ("Are the new course requirements improving students' grasp of the material?")
  • to evaluate the success of a particular student body in the program ("Have we improved outcomes for our Pacific Islander students?")
  • to scrutinize a problematic degree, certificate or course sequence (low rates of success, retention, etc.) ("Where exactly are students struggling in ENGL 110?")
In short: 

If your discipline offers popular degree and/or certificate programs, these are good targets for assessment.

However, focus your assessment where it will do the most good for student learning. If your discipline does not offer a degree or certificate program, or if other course sequences or priorities emerge, direct your assessment efforts there instead.



The starting point for any assessment of student learning is: What do we want to find out?

Some questions can always apply. For example, faculty can always ask the kinds of questions that help establish the health of student learning in a chosen program:

  • Is the curriculum in the program cohesive, current, complete?
  • Do students walk away with the knowledge, skills and abilities promised in the relevant learning outcomes?
  • Are students retaining their knowledge/skills and applying it to other classes?
  • Where are there achievement gaps, and what can we do about them?

But some questions may be more precise, and may reflect specific priorities. For example, faculty might want to look at a particular student population, or at the health of a new curriculum, or at some particular interest or concern.


  • Where do international students need the most academic support in our program?
  • We dropped a pre-requisite last year. Has this impacted student learning in the class?
  • Since complying with AB 705, we’ve opened up our freshman classes to students with varied levels of preparedness. Are they achieving the same learning outcomes as past freshman students? And if not, which skills specifically need most support, and what interventions are most effective?
  • Three instructors tried three different methods of getting students to read effectively. Which worked best?


Suit the method to the question: The best choice of assessment method depends on what you are trying to find out.

Direct methods involve direct observations of student work, and typically (though not necessarily) draw on course assignments. These are best suited to questions about improved student performance (“Did this newly created module on active reading help students understand their assigned readings better?”).

Indirect methods may be better suited to questions about student perception, or confidence, or reflection (“Did this program adequately prepare students for transfer work?”).

Success in external exams or employment is typically counted as an indirect measure of student learning, because looking at these results does not involve faculty looking directly at student work. However, in a program intended to prepare students for licensure or employment, it’s probably the most relevant measure of the program’s effectiveness.

Methods of assessment include, but are not limited to:

Direct methods:
  • Final exams/capstone assignments
  • Extra post-course quiz
  • Pre- and post-tests
  • Assignments aligned to specific outcomes
  • Portfolios
  • Extra-curricular student activities involving demonstration of an SLO (e.g., debates, discussions, presentations)

Indirect methods:
  • Course grades
  • Surveys of currently enrolled students
  • Surveys of alumni/graduates
  • Success in licensure exams
  • Success in employment

Capstone courses / assignments

Definition: A capstone is the summative course or assignment in a course of study.

  • A capstone assignment requires students to demonstrate all the knowledge, skills and abilities in a given course.
  • A capstone course requires students to demonstrate all the knowledge, skills and abilities in a given program.

Where there is a capstone course, a key assignment (or assignments) in that course can also give an overview of student learning in the program outcomes.

An embedded assignment can use language that deliberately evokes the program outcomes. Here is an example from ENGL 110 (the relevant part italicized):

  • Analyze and respond critically to a variety of literary and expository texts
  • Demonstrate knowledge of a variety of authors, literary genres, and literary devices.
ESSAY: Many of the stories and poems we've read focus on language barriers inside families. What happens to family relationships when parents and children speak a different language? What problems arise, and how do people overcome them? In a well-developed essay (about 2000 words), discuss this question. Your essay should present a coherent thesis, and your support should include a convincing literary analysis of at least one poem and one short story we've read, as well as evidence and/or illustrations from non-fiction works such as Judith Harris's The Nurture Assumption and Amy Tan's "My Mother's English," and at least two other sources you find yourself. Your paper should be laid out and cited in MLA format.

Aligned rubric:  A graded assignment can easily yield an SLO score with a rubric aligned to the outcomes. You can find an example here. 
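The tallying step behind "yield an SLO score" can be sketched briefly. Here is a minimal, purely illustrative Python example; the outcome names, the 4-point rubric scale, and the 70% proficiency threshold are all assumptions for the sketch, not part of the toolkit:

```python
# Illustrative sketch: turning rubric scores into per-outcome SLO results.
# The column names, 4-point scale, and 70% proficiency cutoff are
# hypothetical, not College of San Mateo policy.

def slo_summary(scored_papers, threshold=0.7, max_points=4):
    """For each outcome, report the share of students scoring at or
    above `threshold` (as a fraction of the rubric's maximum points)."""
    summary = {}
    for outcome in scored_papers[0]["scores"]:
        proficient = sum(
            1 for paper in scored_papers
            if paper["scores"][outcome] / max_points >= threshold
        )
        summary[outcome] = proficient / len(scored_papers)
    return summary

# Each record: one student's rubric scores, one row per aligned outcome.
papers = [
    {"student": "A", "scores": {"literary analysis": 4, "MLA citation": 2}},
    {"student": "B", "scores": {"literary analysis": 3, "MLA citation": 4}},
    {"student": "C", "scores": {"literary analysis": 2, "MLA citation": 3}},
]

print(slo_summary(papers))
```

The point of the sketch is simply that a rubric aligned to outcomes makes the aggregation mechanical: once each rubric row maps to one outcome, per-outcome proficiency rates fall out of the grading you are already doing.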



Assessment is framed around your research question. So you should look for the data that will answer your specific question. 

Here is a (by no means exhaustive) list: 

  • student work (from in-class assignments, or specially organized student activities);
  • student surveys (enrolled students, or those recently graduated, employed or transferred);
  • focus groups;
  • student achievement data;
  • college-wide surveys;
  • licensure results;
  • follow-up with target programs or employers.  

Who can help you find this data?

Yourselves: Faculty and staff working with students can use in-class assignments, or design their own surveys, focus groups, student activities, or follow-up (for instance, contacting target programs or employers for feedback, or drawing on licensure results). 

PRIE: The Office of Planning, Research, Innovation and Effectiveness supports small-scale faculty assessment research in a number of ways:

  • Research requests: PRIE has access to College student achievement data, as well as other resources. 
  • Trainings: PRIE can offer workshops in research basics such as survey design. 
  • Campus Climate and Satisfaction Surveys: These are administered by PRIE, and may include questions relating to outcomes assessment, such as student satisfaction with Learning Centers, support services, and institutional learning outcomes.

    (To find out more, visit PRIE's website.)

Marketing: The College's Marketing Office can help with outreach to CSM alumni.

The Student Learning Outcomes Coordinator: Feel free to contact the SLOAC with any questions.



Qualities of a good assessment activity: Here are some questions to bear in mind:

  • does your assessment activity involve collegial conversations? Think of the SLO requirement as a mandate to have regular discussions about student learning, inside and across disciplines. The Assessment Plan should be a departmental discussion item; and institutional outcomes assessments create a forum for interdisciplinary conversations about student learning. 
    Some departments have only one faculty member, so not all discipline-level assessments can be part of a collegial process. However, wherever possible, assessment planning should emerge from collegial discussions.
  • does your assessment activity focus on something specific? Tackle some specific aspect of learning, or some specific research question.
  • does your assessment activity address student learning? Raising grades, improving enrollment, boosting productivity, and so on are important, but not necessarily issues of student learning (although improved grades may be the way you measure a proposed improvement in learning).
  • does your assessment activity tell you something that you don't already know? If your SLO activity only repeats what you know already from grading, and from working with students, try something new. 
  • does your assessment activity involve evidence and observation? Learning outcomes assessment should be grounded in data, which can take the form of surveys, experiments, in-class assignment grades, course grades, completion rates, or whatever other instrument suits your purpose.
  • does your assessment activity get you somewhere? Faculty should be able to do something about what they discover. 
  • is your assessment activity manageable? Faculty do not have time to launch elaborate research projects. Start with something you have questions about, and on which you can take some action.

Some types of assessment activity. Assessment activities might include:

  • Pedagogical experiments: Assessment processes can evaluate a new teaching strategy, a curricular innovation, or participation in a new initiative or small-scale learning community aimed at promoting student learning within or across disciplines.
  • Curricular review: Faculty can review SLO maps (for program or institutional outcomes) to ensure degree and certificate curricula are still coherent, current, and comprehensive.
  • Research: Faculty can research a particular question about student learning that can't be answered in class (for example, whether an accelerated class has the same success rate as a full-semester class, and why).
  • Norming / standardizing faculty assessment of student learning: Faculty can use the assessment process to ensure that grades reflect true student learning, or to minimize subjectivity in grades through norming. 
  • Improving assessments: Do your assignments truly gauge student learning? Faculty can use the process to reflect on and improve their own assessment instruments. 
  • Promoting and assessing interdisciplinary learning: Small-scale learning communities; shared activities (movies, debates, etc.) between classes; participation in interdisciplinary workshops or student activities are all excellent ways to promote and assess student learning beyond the classroom.  

Examples: Here are some examples of assessment activities:

"Students are doing poorly in sentence-level writing. Three of our faculty tried three different strategies for improving it, then compared notes at the end of the semester to see which one worked best. None was perfect, but one was noticeably less useful than the others. We're discussing this at our next meeting."

"The Biology department is trying culturally relevant approaches to teaching BIOL 100, to tackle achievement gaps in STEM education. We will monitor grades to see if students improve, and administer a short survey at the end of the semester to see if students have a more positive experience."

"A biology teacher came to an English 165 class as part of a unit on bad science reporting. Students completed a short survey afterwards to report on whether this enhanced their grasp of critical thinking across disciplines."

"We reviewed the alignment of our course-to-program outcomes; turns out no one remembers who wrote the original outcomes, and we don’t like them, so we’re rewriting them to reflect current program needs."

"Two of our faculty participated in the Fall ILO workshop, creating a rubric for Effective Communication, which we used to norm some assignments in our department in Spring to see if our assignments aligned clearly with the ILO."

"With AB705, we're going to have a lot more students repeating our ENGL 105 class. What will these students need? We will survey students repeating classes to find out what they think they need more help with, what got in their way, and what would most help them as they retake the class." 

"We wanted to know if the students who completed our AA-T in Culinary Arts felt they were well-prepared for careers or transfer. Working with Marketing, we got hold of 23 recent graduates and surveyed them. We discovered that we are doing much better at preparing students for transfer than for careers. We are adding a unit to two of the required courses directly aimed at workplace issues."

For more ideas, visit "Resources."



The goal of Learning Support and Student Services assessment, as with instructional disciplines, is to ask questions that will help improve student outcomes; to gather data that will answer those questions; and to analyze and act upon the findings.

Student Services: SLOs or SAOs? 
Not all student services outcomes are learning outcomes. Which sort of outcome will best help you evaluate the success of your service?

SLOs = Student learning outcomes. A learning outcome describes the knowledge, skills or abilities students should take away with them as a result of an interaction with the college. For student services, this might be a workshop, information session, or other engagement intended to leave the student with specific knowledge, skills or abilities. 

Where faculty or staff are imparting information, and where the success of the activity rests on whether or not students retain the information, an SLO is appropriate. Examples:

    • A workshop on instructional material (e.g., grammar, math)
    • An orientation to communicate the resources of a center
    • A workshop to inform students about financial aid, campus resources etc.

Methods typically include exit quizzes, surveys, etc.

Where a student support service offers instructional courses (e.g., Counseling), the course SLOs are assessed like any other instructional course.

SAOs = Service area outcomes. Service area outcomes describe, not so much what students have learned, but what students have been able to do as the result of an interaction with the college. Examples:

    • Complete an application or form
    • Get access to a health or wellness counselor
    • Complete their registration entirely online

Here’s a side-by-side comparison:

SLO: Did this service succeed? = Did students learn something?
  • New F-1 students will be apprised of and knowledgeable about the educational counseling and course selection workshop
  • Students will know about center resources

SAO: Did this service succeed? = Were students able to get something done?
  • Students completed their registration online
  • 90% of GI Bill-eligible veterans will apply for financial aid through FAFSA

Research question

The generic question, as with instructional faculty, is simple: Is the service delivering the promised outcomes for students?

Depending on their priorities, faculty and staff may wish to apply a more specific question to their service or learning center.  For instance, a center or service might want to explore whether a change in procedure was improving the experience for specific student populations, whether a new online service was delivering effective service, or whether students were largely satisfied with the help they received.  A service or center might also want to look at its work in the context of larger institutional goals (see “A word on interdisciplinary alignment” below).

Alignment with institutional outcomes

Learning centers and student services are not required, in Program Review, to discuss participation in any institutional assessment.

However, like all College constituencies, centers and services support institutional learning outcomes (ILOs), both directly and indirectly.  

The first ILO, especially, addresses skills and abilities directly fostered by engagement with centers and services:

Independent learning and development - The ability of students to develop, evaluate, and pursue personal, academic, and/or career goals. Students will be able to:

  • Demonstrate effective study strategies;
  • Articulate realistic and achievable academic and/or career goals;
  • Identify and make use of college and community resources (academic and student support services).

The language of this outcome can provide a framework for center and service assessment, or offer an opportunity for assessment activities that involve collaboration with disciplines and services.

Methods of assessment

Learning centers and student services typically use:

  • Student surveys (e.g., after usage of a center or service)
  • Pre- and/or post-quizzes (e.g., for workshops)
  • Embedded assignments (e.g., for instructional courses)
  • Best practices
  • Other as determined
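Pre- and post-quizzes, in particular, lend themselves to a simple paired comparison. A hypothetical Python sketch of that arithmetic follows; the 10-point quiz scale and the sample scores are illustrative assumptions, not data from any College workshop:

```python
# Illustrative sketch: average improvement on a workshop pre-/post-quiz.
# The 10-point quiz scale and the sample scores are hypothetical.

def average_gain(pre_scores, post_scores, max_score=10):
    """Mean per-student gain, expressed as a fraction of the quiz maximum."""
    if len(pre_scores) != len(post_scores):
        raise ValueError("need one pre and one post score per student")
    gains = [(post - pre) / max_score
             for pre, post in zip(pre_scores, post_scores)]
    return sum(gains) / len(gains)

pre = [4, 6, 5, 7]    # scores before the workshop
post = [7, 8, 6, 9]   # scores after the workshop
print(average_gain(pre, post))
```

Pairing each student's two scores, rather than comparing class averages, keeps the measure tied to individual learning and is easy to compute by hand or in a spreadsheet as well.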



Faculty in the classroom, and student services faculty and staff in learning centers and offices, rarely have a chance to look at student learning from perspectives outside their own. Institutional assessment activities can provide a chance to think outside the classroom.

Interdisciplinary assessment matters. Student learning is enhanced when knowledge, skills and abilities presented in the classroom are reinforced or otherwise referenced across the curriculum. To this end, the more faculty have an opportunity to collaborate, cross-reference, or otherwise make connections between disciplines (and between disciplines and services), the better we will support our students towards their educational goals. In addition, many aspects of student learning touch on all disciplines (for instance, critical thinking or reading). 

Faculty and staff have a number of ways to participate in interdisciplinary assessment activities: 

Participate in institutional outcomes flex day workshops: Every year, three of the six institutional outcomes are assessed by a team of faculty participating in flex-day workshops in Fall and Spring. All six institutional outcomes are assessed in the two-year Program Review cycle. 

Organize an interdisciplinary class or activity: Faculty can create an activity or shared project that aligns with a shared institutional learning outcome. Some ideas include

  • inviting guest speakers from other disciplines or services;
  • interdisciplinary norming of student work;
  • teaming up with a colleague from another discipline to share an assignment, develop a course theme, team-teach a concept etc.; 
  • creating an interdisciplinary student activity.  

Interdisciplinary learning activities like these do indeed constitute institutional assessment activities, provided they

  • address a shared institutional learning outcome (e.g., effective communication, critical thinking, etc.)
  • involve an attempt to improve student learning in that outcome
  • involve documented discussions and/or activities between faculty.

Research question: Interdisciplinary activities can focus on a generic assessment question, such as: “Are students learning to communicate effectively?”

However, faculty can and should create their own priorities. Some examples of more specific assessment questions might be

  • “How can we better support student reading?”
  • “How can we get students to think critically about numbers in different contexts?”
  • “If non-science majors can make a connection between good citizenship and scientific awareness, will this help them apply what they learn in science class in their daily lives?”
  • “How can I make ethical reasoning figure more explicitly in my curriculum, and will this make a difference to students?”
  • “Are the students in my English comp classes really using their skills in other essay-writing classes? Where are they falling short?”