Student Learning Outcomes (SLOs) at College of San Mateo - Faculty Toolkit: Assessing Learning Outcomes


A useful, sustainable assessment cycle should have the following qualities:


Make sure your assessment will tell you what you want to find out. Whether you want to know about gaps in the program, specific weaknesses, consistency of standards, or trends in achievement, make sure that your choice of task, student population, and method will serve that purpose.


Wherever possible, it's good to compare apples with apples. If students are responding to a variety of different assignments, the data may be hard to interpret.


Wherever possible, learning outcomes assessment should be unfussy, an integral part of a wider assessment cycle. Complex or unwieldy assessment methods won't be sustainable.


We can't anticipate everything we'll want to know from our data, nor can we always guess what the accreditation bodies will want us to find out. We do know that we must disaggregate our data, which means that we will need to associate learning outcome results with student G-numbers. It's helpful, therefore, to come up with a process that is as comprehensive as possible, and that faculty can revisit or amend as needed.
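Because results must eventually be disaggregated, it helps to record each outcome result alongside a student identifier from the start. The sketch below illustrates the idea in Python; the field names, G-numbers, and demographic groups are all hypothetical, not a prescribed format:

```python
from collections import defaultdict

# Each assessment record ties an SLO result to a student G-number
# (field names and values here are invented for illustration).
records = [
    {"g_number": "G00123456", "slo": "SLO1", "result": "Yes"},
    {"g_number": "G00234567", "slo": "SLO1", "result": "No"},
    {"g_number": "G00345678", "slo": "SLO1", "result": "Yes"},
]

# Hypothetical roster mapping G-numbers to a demographic group,
# included only to show how disaggregation becomes possible.
roster = {
    "G00123456": "Group A",
    "G00234567": "Group B",
    "G00345678": "Group A",
}

def disaggregate(records, roster):
    """Count Yes/No results per demographic group."""
    counts = defaultdict(lambda: {"Yes": 0, "No": 0})
    for rec in records:
        group = roster.get(rec["g_number"], "Unknown")
        counts[group][rec["result"]] += 1
    return dict(counts)

print(disaggregate(records, roster))
```

Because each result is keyed to a G-number, the same records can later be regrouped by any student attribute the college needs to report on.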


Learning outcomes are not grades. You can treat them as grades – or you can assess them quite differently. Think about the various options below, and see which one suits your purposes best.

ABC option

For each piece of student work that you assess, you can assign a "grade" on its mastery of a specific learning outcome.

Example: If you are reading a quiz that shows a student's understanding of human anatomy, you can grade that understanding as you would grade a quiz: A, B, C, D.

Yes/No or Yes/No/Developing option

For each piece of student work that you assess, you can decide simply whether it does or does not show mastery of a learning outcome. Or, if that seems too bald, you can decide also whether it shows that a student is developing mastery of a learning outcome.

Example: If you are assessing whether a student can use appropriate evidence by reading an essay, you can simply record whether this student does or does not, on the whole, use appropriate evidence. Or you can also add the "developing" category for students who haven't quite achieved mastery, but who seem to be on the way there.

In addition, you must consider whether you want to set targets for your department, discipline, service or program. As noted above, your targets will depend on what you're trying to find out.

Align your target success rate to the college target rate

If the College wants to aim for an 80% success rate in learning outcomes assessment, you might want to set your department or service target rates to the same level.

Align your target success rate to your passing rates

If you want to know whether standards are consistent and rigorous, you might set your target level at your passing rate, and see whether randomly selected students show mastery of learning outcomes at the same rate at which students pass the class. If 75% of your students pass their classes, but only 35% of randomly selected students appear to have mastered the learning outcomes, you may have a problem.
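The comparison above is simple arithmetic, sketched here in Python. The enrollment and sample counts are hypothetical figures chosen to echo the 75%/35% example:

```python
def success_rate(successes, total):
    """Proportion of students who succeed."""
    return successes / total

# Hypothetical counts echoing the example above:
pass_rate = success_rate(150, 200)   # 75% of enrolled students pass
mastery_rate = success_rate(7, 20)   # 35% of a random sample show mastery
gap = pass_rate - mastery_rate

print(f"pass rate {pass_rate:.0%}, mastery rate {mastery_rate:.0%}, gap {gap:.0%}")
```

A large gap between the two rates suggests that grading standards and outcome standards are out of step.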

Align your target success rates to your ideals

The goal of assessment is to work towards "continuous improvement." Perhaps the only acceptable target is 100% student success – not because we'll achieve it, but because it will keep us looking.

Align your target success rates to practical possibilities

In many cases, notably in many student services, it should be possible to achieve very high rates of SLO success. Most students who use the Financial Aid office should leave knowing how to complete a financial aid application, the first of that office's two learning outcomes. In other cases – in second-semester transfer classes, for instance, or in classes intended to prepare students for transfer – the learning outcomes will test the capacities of many students. It would be highly unrealistic to expect every student in an advanced Chemistry or English class to show mastery of complex learning outcomes.

Align your target success rates to previous target success rates

If you are more interested in identifying trends than in hitting a particular target, you will learn most by comparing this year's success rate with last year's.

Support Services / Instructional Support Services Methods

Most support services, whether instructional or general, have fairly clear and definable goals.


Questionnaire

The learning outcomes for many academic and other support centers require simply that students know what resources the service center maintains, and how to use them. This is well served by administering a short quiz on available services and resources, or simply by asking students whether they feel satisfied that they know the available services and resources.

Pros and Cons: Questionnaires are easy to administer. However, students may misrepresent or misunderstand what they know.

Pre/Post Test

Students may be given a brief test at the beginning and end of a unit, which permits faculty to assess the difference.

Pros and Cons: Pre- and post-tests are clear and straightforward to administer. However, many academic labs offer open-ended support, and their effects might not be measurable in one semester. And many support services offer ongoing support to students; there may be no convenient "exit" point.

Course Assessment Methods

For virtually all classroom courses, the best way to gauge student competencies in learning outcomes is through direct methods – namely, a direct appraisal of student work that shows the knowledge, skills and abilities required by the course.

Exit Exam

A course might culminate in one single exam, taken by students in all sections, that requires students to demonstrate the skills, knowledge and abilities identified in the learning outcomes.

Pros and Cons: A single exit exam gives both students and teachers a very clear focus and sense of purpose. And if every student in every course yields a possible data point for learning outcomes assessment, the data pool will become very large, very quickly.


On the other hand, not all students do their best in exams; and not all teachers want to give exams. Working towards a specific exit exam can undermine academic freedom, and limit instructor flexibility. 


Creating such exams might also be onerous. Some departments or courses are taught by only one faculty member, who would have to create and assess the exam. Exams would likely need to be changed each semester, to avoid cheating. And it would likely require more work, and a lot of collaboration, for faculty to assess these exams as well as give them an overall grade.

Capstone Assignment

Faculty can administer a single homework or in-class assignment, common to all sections in a course, that requires students to demonstrate all the skills, knowledge and abilities that students are supposed to leave with. Or each faculty member in a given section can create his or her own capstone assignment. Each completed assignment can be assessed for student competence in the learning outcomes.

Pros and Cons: Assignments – whether quizzes, essays, performances or tasks – represent the most direct and meaningful way of gauging whether students have acquired the knowledge, skills or abilities identified in the learning outcomes. A shared capstone assignment, like the exit exam, can give a shared purpose and focus to a course. Meanwhile, individual capstone assignments can offer flexibility.


Faculty can create and assess such assignments individually in their own sections. However, where the language of the outcome is subjective (for instance, where there are references to "critical thinking" or where students are supposed to show "effective" or "college-level" skills), it's important to make sure that faculty are on the same page.


Thus, faculty might consider adopting a single capstone assignment common to all sections of the same course.


Capstone assignments are also best assessed in small groups, where the assessment includes a norming around the outcome(s) to be assessed.

Multiple Assignments

Some faculty prefer to isolate different learning outcomes, assess each through a separate task, and accumulate the data as the semester progresses.

Pros and Cons: Many courses have learning outcomes that can't be effectively measured through one single assignment. Faculty might prefer to assess each outcome separately, through different kinds of task (a written quiz for one, an oral performance for another) and at different times of the semester. 


Multiple assignments work best when assessed by individual instructors teaching their own sections, since organizing norming sessions at several different points of the semester is likely to be too difficult. If norming is an issue, however, multiple assessments may not yield meaningful data.

Program (Degree /Certificate) Assessment Methods

Assessing program outcomes through direct methods is more complicated. Community college students tend to come and go; many don't stay here for a clear two-year stint working towards a specific degree. They may not declare their major until they've already taken many of the courses required for it. 

Here are some options.


ePortfolio

Students can keep an online portfolio of work throughout their academic career. This can include capstone assignments from their courses, as well as reflections on their work, or examples of their best work. If and when they register for a degree or certificate, their ePortfolio can be assessed.

Pros and Cons: ePortfolios can be valuable to students as well as instructors. Students use them to reflect on their work as they go, connect ideas and skills across disciplines, and build up a body of material that can be shared with colleges or prospective employers. The ePortfolio also permits instructors to conduct direct assessments at the program, certificate and general education level. Uniquely, it allows instructors to see how well students retain skills, knowledge and abilities beyond individual classrooms, services or labs – whether they really take skills and knowledge away with them, and use them to enrich other learning.

However, the ePortfolio may produce a biased sample of students – those who succeed in maintaining an ePortfolio throughout their course of study. These students are likely to be more successful, so the sample will screen out many students whose difficulties need our attention. The ePortfolio also requires wide collaboration among instructors, technical support staff and students. And assessing the ePortfolios is time-consuming and requires collaboration.

Capstone Course

Faculty can create a course that requires students to demonstrate all the knowledge, skills and abilities that the program is supposed to equip them with.

Pros and Cons:
Student success in the course would be one measure of learning outcomes assessment. Along with giving an overall grade, faculty would need to assess the different learning outcomes in the course separately.

Capstone courses don't fit every program. They must, of course, be taken last, and it's not always easy to predict in which order students will take courses. Also, capstone courses can effectively create an added graduation requirement, which is both a bureaucratic headache and a burden for students. Finally, student work in the capstone course would likely need to be assessed in collaboration, which requires extra time.

The Roll-Up Method

Because each course or service-level SLO is connected to specific program-level or general education SLOs, it's possible to gauge the health of a program by looking at how students perform at the course level.  Thus, if Business faculty wanted to assess (for instance) students' ability to "prepare and analyze financial statements" (the first PSLO for the Business Administration degree), they could look at how students had performed in course-level SLOs connected to that specific PSLO, and infer the overall picture from there.
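The roll-up is essentially a pooling of course-level results for every SLO mapped to a given PSLO. A minimal sketch in Python; the course codes, mapping, and counts are all invented for illustration, not actual CSM data:

```python
# Hypothetical mapping from course-level SLOs to a program-level SLO.
pslo_map = {
    "BUS100-SLO2": "PSLO1",
    "BUS120-SLO1": "PSLO1",
}

# Course-level results: (students assessed, students showing mastery).
# These counts are invented for illustration.
course_results = {
    "BUS100-SLO2": (40, 30),
    "BUS120-SLO1": (25, 20),
}

def roll_up(pslo, pslo_map, course_results):
    """Pool course-level results for every SLO mapped to the given PSLO."""
    assessed = mastered = 0
    for slo, target in pslo_map.items():
        if target == pslo:
            a, m = course_results[slo]
            assessed += a
            mastered += m
    return mastered / assessed if assessed else None

print(roll_up("PSLO1", pslo_map, course_results))
```

The pooled rate (here 50 of 65 students) stands in for a direct program-level measurement, with the caveats noted below.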

Pros and Cons: This is a relatively painless way to gauge PSLOs, which are difficult to capture.  It should serve to identify some areas of weakness.

However, since program-level outcomes are supposed to capture what students take away from the program as a whole, a programmatic assessment that derives from course-by-course assessments might miss the point. Students may not carry skills over from one course to another, or they might not be getting all the skills they need at one level to succeed at the next.


Questionnaires and Surveys

Students registering for a degree can be asked to complete a questionnaire asking them to assess their own knowledge, skills and abilities. Additionally, some of the questions in the Student Campus Climate and Satisfaction Survey ask participants to reflect on their skills in critical thinking and effective communication – and these responses can thus be used to assess some of the General Education outcomes.

Pros and Cons: The Student Campus Climate and Satisfaction Survey is administered annually and has wide participation, generally around 1,000 students. Thus, this is a relatively painless method of collecting information, and one that is already in place. Also in place is the questionnaire that appears when students register for a degree or certificate. This is particularly useful, since it captures students at the important point: when they have more or less completed all the requirements for the program.

On the other hand, self-reported questionnaires have an obvious drawback: they are indirect measures. Students may not be the best judges of their own skills, knowledge and abilities, especially when asked outside the context of a classroom. Nor does an indirect measure tell us about the specifics of our programs, courses or services that might help us get more out of our assessments. And in the case of degrees and certificates, graduation numbers are so low that the data are meaningless. Usually, questionnaires intended to ask students about program SLOs capture perhaps two to six respondents.