Writing Rubrics for Program Assessment

Writing rubrics is no easy task, and it’s especially difficult if you want to write a broad, general rubric for program assessment.  Writing rubrics for assignments is much easier because the instructor can simply list everything it takes to create an A-level project.  It’s much harder to step back and think about the general things a student must do to show competency in a certain area across a program’s curriculum.  For an assignment, it may be necessary to reference four publications, but is that always an appropriate requirement for the curriculum as a whole?  Probably not.

When writing a program rubric I suggest the following:

  1. Think broadly.
  2. Use other program rubrics as a guide.
  3. Don’t write alone.

Think Broadly

Your rubrics should be tied to your broad program learning outcomes.  List the skills that are necessary to competently achieve an outcome.  For instance, if one of your outcomes addresses oral communication, think about the things that students must do in order to be good speakers all of the time (not just for one speech in one class).  Possible categories for evaluation might include: topic selection, pace and timing, organization, mechanics of delivery, and language.  For program assessment, I would recommend not focusing on things such as the length of the presentation or the number of references.  It may be valuable to note these things for a specific assignment, but at the program level it would be difficult to determine which length is the right length for a speech and exactly how many references are necessary.

Use Other Program Rubrics as a Guide

The wheel doesn’t need reinventing and neither does a program rubric.  As stated earlier, program rubrics are notoriously difficult to write, so why bother?  It’s far easier to borrow rubrics that have already been successfully implemented at other universities and tweak them to suit your own program’s needs.  A valuable resource is the AAC&U VALUE rubrics: fifteen rubrics created by faculty, academic administrators, and staff to reflect various general learning expectations.  These rubrics were tested at 100 universities and colleges across the United States and are now being used, in some form, at hundreds more.

The VALUE rubrics are not your only resource, though.  A simple internet search will likely provide you with many more rubrics for your outcome.  You’ll probably find more resources for K-12 education than for higher education.  This is only logical; K-12 has used rubrics to assess student performance for many years, while this is still a fairly new way to assess in higher education.  But new resources are being added daily, so don’t hesitate to continue your search.  You’ll probably encounter new resources every time you look.

Don’t Write Alone

No, you won’t fall into some dark, rubric abyss if you write alone, but you probably won’t get a great rubric, either.  It’s nice to hash out the intricacies of an outcome with others in your field.  The nuance of one word over another can make a big difference in a rubric, and your peers can help think through these things.  Luckily, we live in a time when it is easy to collaborate via the internet, so there’s no need to be in the same room.


Aligning Program Curriculum with Learning Outcomes: Curriculum Mapping

After program outcomes are written, the program should begin the mapping process.  Mapping is vital because it allows those in the program to see how the courses that they teach are linked to the overall purpose of the program.  Every course in the curriculum should link to at least one program outcome.  If it doesn’t, there are two possible reasons: either the course is superfluous and doesn’t need to be in the curriculum, or the outcomes are not correct and the faculty should rethink the general goals of the program.

If an outcome is important, a single exposure is not enough.  Let’s assume that a program expects students to be proficient speakers.  Requiring oral presentations in one class will simply not offer enough exposure to oral communication in order to create good speakers.  Ideally, a program will offer exposure to outcomes so that students will first be introduced to the skill, then receive more in-depth exposure, and, finally, have their application of the skill reinforced.

Begin by listing all the courses in the curriculum.  Many choose to map curriculum from program entry to program exit.  While mapping, think about the ways that beginning experiences differ from the students’ experiences near the end of the program.  How do your expectations of the students change from course to course? Have students had the proper entry experiences to prepare them for the end experiences in your program?  In other words, has your program prepared them in a way that will allow them to successfully complete the tasks that you present?


Most choose to create a matrix for their curriculum map.  To do this, list all the courses in the program’s curriculum in the left column and each outcome across the top row.  Then mark where each course aligns with the various outcomes.  You may find that one course maps to only one outcome, but it is also possible for a single course to map to many outcomes.

After you’ve aligned the courses in your program to each outcome, think about how each outcome is addressed in those courses.  Are outcomes introduced (I), emphasized (E), or reinforced (R)?  Then look for patterns.  Are there areas where an outcome gets too little exposure, over-exposure, or, perhaps, no progression of exposure?  Are there any courses that have no links to outcomes?
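For programs that keep their curriculum map in a spreadsheet or similar file, this pattern-checking step can be automated.  Below is a minimal sketch of the idea; the course numbers, outcome names, and I/E/R codes are hypothetical examples, not a prescribed format:

```python
# Curriculum map: each course lists the outcomes it addresses and how.
# "I" = introduced, "E" = emphasized, "R" = reinforced.
# Course and outcome names below are hypothetical.
curriculum_map = {
    "COMM 101": {"Oral Communication": "I", "Critical Thinking": "I"},
    "COMM 250": {"Oral Communication": "E"},
    "COMM 330": {"Written Communication": "I"},
    "COMM 480": {"Oral Communication": "R", "Critical Thinking": "R"},
}
outcomes = ["Oral Communication", "Written Communication", "Critical Thinking"]

# A course linked to no outcome signals either a superfluous course
# or an incomplete set of outcomes.
unlinked = [course for course, links in curriculum_map.items() if not links]
print("Courses with no outcome links:", unlinked or "none")

# For each outcome, collect its exposure levels to spot too little
# exposure or a missing I -> E -> R progression.
for outcome in outcomes:
    levels = [links[outcome] for links in curriculum_map.values()
              if outcome in links]
    print(f"{outcome}: {len(levels)} exposure(s), levels {levels}")
```

In this hypothetical map, the check would flag Written Communication, which is introduced once and never emphasized or reinforced.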


Use what you learned from the mapping process to examine your curriculum more closely.  Are there courses that no longer serve a purpose?  Are there courses that need to be added to the curriculum to satisfy the needs of a particular outcome? Would it be possible to rework a course so that it more closely fits the goals of the program?

In order to successfully complete this process, a program will need its entire faculty involved.  The faculty teaching the courses are the ones who can best judge whether a course matches an outcome and to what extent it does so.  But be aware that this can also be a tumultuous process.  It’s human nature to be protective of your property, and that is exactly how many faculty think of their courses.  Whoever leads the curriculum mapping process should be aware that this is not an easy task and it may put some people on the defensive.  Be cognizant that some people might be emotionally tied to their courses and the way those courses are currently structured.

And finally, good luck.  Of all the assessment processes that I’ve led, faculty seem to find curriculum mapping the most painful and difficult.  It’s not easy to think objectively about a course that you created and have taught for the last eight years.  And yet, curriculum mapping is consistently the process most often noted for its usefulness.  No matter how painful it is, I am often told that mapping is the single thing that most helped build a better program.


Creating a Program Assessment Plan: Part II, Finding the Right Assessment Points to Measure Student Learning Outcomes

Now that you’ve written your program’s Student Learning Outcomes (SLOs) it’s time to assess them.  Through assessment your program can determine if the students are meeting or exceeding your goals, or if their performance is below your expectations.  To do this, you need to gather data.  Generally, there are two types of assessment data: Direct Evidence and Indirect Evidence.

Direct Evidence: Consists of student work or performances.  It shows what a student can do with their knowledge.
Indirect Evidence: Gives an approximation of a student’s knowledge.

Examples of direct evidence include student papers, presentations, portfolios, and exam responses; examples of indirect evidence include surveys, grades, and pass rates.  For a more thorough list, visit Skidmore College’s webpage on Direct and Indirect Evidence.

Ideally, direct and indirect evidence will be used together to make decisions about students’ achievement of program goals.  It’s ill-advised to use only indirect evidence to determine the level of achievement towards SLOs because indirect evidence can only approximate students’ success.  Direct evidence tells evaluators exactly how well students are performing.

Where Can I Gather Evidence in My Program?

  1. Begin with one SLO.  Ask yourself, “What classes directly link to this outcome?”   For instance, if one of your SLOs pertains to writing, which course or program activity emphasizes student writing?  You may find that there are many courses that do this.
  2. Once you’ve narrowed down the most suitable course or courses for assessing the SLO, think about the one assignment that best requires students to demonstrate their progress towards that outcome.  Obviously, if you’re measuring the students’ ability to write, you’ll want to choose an assignment that asks students to write.
  3. Think about the feasibility of assessing this assignment.  Is the product something that can easily be gathered?  How will you coordinate the assessment of live performances?  Managing the assessment cycle will be discussed in a later blog entry, but these are questions that should be thought about at this point, as well.
  4. Determine if you need more evidence.  Are you only interested in measuring students at the end of the program (summative assessment)?  Or, would you like to determine if students progress as they move through your program?  If the latter is the case, then you’ll want to measure students’ performance when they first demonstrate their abilities (formative assessment) and again at a point where they should have mastered the outcome.  This type of assessment helps your program prove that it adds to the students’ knowledge and skill set.
  5. Find an indirect evidence source.  What surveys, grades, pass rates, or other data might you use to help you determine if students are achieving as expected?  Don’t reinvent the wheel – use evidence that your program or department is already gathering, if possible.
  6. Repeat this process for all SLOs on your list.  At a minimum you should have at least two evidence sources for each SLO – one indirect and one direct.  Ideally, you’ll have at least three sources of evidence – one indirect, one formative direct, and one summative direct.  You can always use more data points, but be careful to not overwhelm yourself with data gathering.
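The checklist above can be captured in a small data structure so a plan can be sanity-checked before data collection begins.  Here is a sketch under that assumption; the SLO names, courses, and instruments are hypothetical:

```python
# Assessment plan: each SLO maps to its evidence sources.
# SLO names, courses, and instruments here are hypothetical examples.
plan = {
    "Writing": {
        "direct": [("ENGL 210 research paper", "formative"),
                   ("ENGL 490 capstone thesis", "summative")],
        "indirect": ["senior exit survey"],
    },
    "Oral Communication": {
        "direct": [("COMM 480 final presentation", "summative")],
        "indirect": [],  # gap: no indirect source identified yet
    },
}

# Minimum per step 6: at least one direct and one indirect source per SLO.
for slo, sources in plan.items():
    ok = bool(sources["direct"]) and bool(sources["indirect"])
    print(f"{slo}: {'OK' if ok else 'needs more evidence'}")
```

Running the check flags Oral Communication, which still needs an indirect source before the plan meets the minimum described above.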

There Is No Way to Gather Direct Evidence for My SLO

If the outcome is important, then it needs to be emphasized in the program’s curriculum.  Ideally, the program’s faculty agreed on the SLOs, which means that they agreed that each of these things is important for students to learn and be able to perform by the time they leave the program.  If you discover that there is no good assessment point for an SLO, then your curriculum has a gap, and this needs to be addressed.  Ask yourself, is this outcome really important?  If the answer is “Yes,” then you and your program’s faculty need to review the curriculum and determine a way to address the outcome appropriately.  We’ll discuss curriculum mapping at a later date, but if you’re interested in information right now, I suggest the following resources:

Direct Assessment


Ideally, assessment data will be taken from a variety of sources, most importantly, direct assessment.  Direct assessment is assessment that evaluates a student’s performance.  It allows us to appraise not only what the student knows, but also what they can do with what they know.


Creating a Program Assessment Plan: Part I, The Student Learning Outcomes

The heart of any assessment plan is the student learning outcomes (SLOs). Good SLOs detail the things that a student should know or be able to do when they exit a program.  By establishing clear and concise SLOs, a program’s assessment will be more focused, and program faculty can more accurately determine if they are fulfilling their goals and identify those areas that need improvement.

What are Student Learning Outcomes?
Generally speaking, student learning outcomes are broad statements of what students should know or be able to do when they exit a program.  Program outcomes are different from course-level objectives, which are more specific and smaller in scope.  For instance, a program-level outcome might generally state that, “Students will be able to write a publish-worthy research paper using appropriate references and APA style.”  There are many smaller, course-level objectives that must be met before students can successfully produce work of this nature.  For instance, students must first learn how to document sources using APA style.  In addition, they must be able to analyze reference sources and determine the adequacy and appropriateness of their references.  Both of these would be appropriate course-level skills, but they are likely too small in scope to be program-level outcomes.

Well-written SLOs are specific, measurable, meaningful, and manageable.  They describe broad goals for the program and require higher-level thinking abilities.  They use action verbs (Bloom’s taxonomy is a good place to start) and avoid covert verbs like understand, appreciate, and know.  And they can easily be mapped to a student product: if research is the primary focus of an outcome, then examples of student research should be assessed.

Writing the Outcomes
This is the hard part because it requires faculty consensus.  The first step is agreeing on the 3-5 outcomes that are most important.  I suggest keeping the number of outcomes small; the more outcomes a program has, the more complicated the assessment process becomes.  A program might also benefit from researching the program SLOs of peer institutions, professional organizations, and accrediting bodies.  Don’t be afraid to step out of the box and write something different from what you’ve read on various websites and in publications.  Think about your model graduate.  What skills and abilities would that student have?  Write a list that speaks to that ideal graduate.

Once you have narrowed down your outcomes, perfect them by making them concise and complete.  Every outcome should contain four parts: the subject, an action verb, the learning statement, and the criterion/benchmark.

Example: Graduates of the program will articulate the predominant concepts, policies, laws, and ethical standards that govern nonprofit organizations in the United States, scoring a 4 out of 5 on a program-generated rubric.

  • Subject:  Graduates
  • Action Verb:  Articulate
  • Learning Statement:  The predominant concepts, policies, laws and ethical standards that govern nonprofit organizations in the United States
  • Criterion/Benchmark:  Scoring a 4 out of 5 on a program-generated rubric
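Because every outcome should contain these same four parts, a drafting session can include a quick completeness check.  Below is a sketch of that idea, using the example outcome above; representing SLOs as a dictionary like this is my illustrative choice, not a standard format:

```python
# An SLO broken into its four required parts; an empty part flags an
# incomplete outcome. The example text comes from the outcome above.
slo = {
    "subject": "Graduates",
    "action_verb": "articulate",
    "learning_statement": ("the predominant concepts, policies, laws and "
                           "ethical standards that govern nonprofit "
                           "organizations in the United States"),
    "criterion": "scoring a 4 out of 5 on a program-generated rubric",
}

# Covert verbs to avoid, per the guidance earlier in this post.
covert_verbs = {"understand", "appreciate", "know"}

missing = [part for part, text in slo.items() if not text.strip()]
uses_covert_verb = slo["action_verb"].lower() in covert_verbs

print("Missing parts:", missing or "none")
print("Uses a covert verb:", uses_covert_verb)
```

For the example outcome, all four parts are present and the action verb is an observable one, so both checks pass.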

Before you finalize your SLOs, reread them as a student would.  Will your program’s students find these outcomes clear and comprehensible?  If not, I’d suggest that you rewrite them.  These outcomes aren’t just for your use; they’re for your students’ use, too.  Write them so that they make sense to both parties.

Other Resources

For those interested in writing or editing their program’s Student Learning Outcomes, many great resources can be found with a simple Google search.  A few resources that I’ve found handy are: