
Program Assessment FAQ

Assessment is the process of gathering and discussing information from multiple and diverse sources in order to develop a deep understanding of what students know, understand, and can do with their knowledge as a result of their educational experiences. Assessment occurs at the course, department, college, and institution levels. The University of Tennessee, Knoxville, is accredited by the Southern Association of Colleges and Schools Commission on Colleges (SACSCOC), yet assessment practices at UT extend beyond SACSCOC to include programmatic accreditation agencies as well as our own practices of assessment. However, assessment is important not just to our accreditors but also for student learning and continual improvement of our university’s programs.

Every academic program on campus undergoes an assessment reporting process each year. All academic programs are covered by the institution's SACSCOC regional accreditation, and some academic programs also have programmatic accreditation from an external accrediting body. SACSCOC requires that UT report on departments' assessment progress in its decennial and fifth-year interim reports. The next interim report, due in March 2021, will need to include multiple years of assessments. This means that all departments must report on their assessments every year. Once those reports are completed, they are reviewed by the respective colleges and by members of the Assessment Steering Committee.

All reports should have the following:

  1. Student learning outcomes
  2. A description of the direct and indirect methods used to assess those learning outcomes
  3. An analysis and discussion of the results of the assessments and a plan for use of the results to improve student learning (that is, what the department will do, based on the assessment data, to improve the program)

The Assessment Steering Committee uses a rubric to determine the strength of annual assessment reports. Each department should have learning outcomes that describe the competencies students in the program should master by the time they graduate. The learning outcomes for the following term are usually discussed, developed, and revised in late spring. Once they have been established, the faculty in the department must decide how they will measure student performance in these areas. This is generally also decided in late spring or during the summer semester.

In the subsequent fall and spring semesters, data from the chosen assessments are collected. During the spring semester, faculty discuss the results and, if the data reflect a need for improvement, develop a plan for what they will do as a department to improve the program. If the data reflect adequate performance, no action may be needed for the following year. Irrespective of the results, all information is reported by the program in the Planning module of Compliance Assist by September 15 (or earlier, depending on the college).

A student learning outcome (SLO) is a statement that describes what a student should know once they complete a course or a program. A good SLO describes an observable behavior that can be measured within a specific time frame (e.g., by the end of a course or by the time the student graduates). Generally, SLOs use active verbs from a level of Bloom’s Taxonomy or a similar taxonomy.

Direct assessment is used to determine the level of student learning achieved against established learning outcomes. Activities in this category usually have a direct impact on measures of student performance (e.g., grades in a course). Some examples of direct assessment may include exams, quizzes, oral presentations, dissertations, theses, essays, and portfolios. Indirect assessment is typically used to evaluate the quality of student learning experiences. For instance, students might be given a survey to gauge their perceptions of their growth in a skill as a result of a class or a study abroad experience. They might also evaluate the quality of instruction in a course or during a service-learning experience. Some examples of indirect assessments include self-efficacy surveys, end-of-course evaluations, focus groups, and questionnaires for alumni regarding program effectiveness and retention.

Both forms of assessment should be completed as part of the curricular or co-curricular programs. For more information about direct and indirect measures, see this chart on Cornell University’s Center for Teaching Excellence website.

Conference papers and presentations cannot be considered a form of direct assessment because they are not requirements for all students and they are usually not evaluated by program faculty. Such work is generally categorized as an indirect assessment of student learning because it is reflective of the quality of the student learning experience in a program. However, if program faculty decide to score or evaluate conference papers or presentations as part of a course, they can consider the student work a direct assessment.

The terms formative and summative assessment refer to types of assessment. Direct and indirect assessments refer to methods of assessment.

A formative assessment can be direct or indirect because it is a low-stakes activity used to guide instruction as the teacher leads students toward fulfilling a learning outcome he or she has set. Having students complete a midterm self-assessment of their mastery of an outcome is an example of an indirect formative assessment because the activity gives the instructor information about the quality of the students' learning experience while providing feedback that can be used to improve course delivery.

A summative assessment is always direct because it compares a student’s performance to an established learning outcome. Therefore, an exam for a class is a direct summative assessment.

REVIEW AND UPDATE

Read last year's report for the program, available in Compliance Assist. The report will tell you what the program's outcomes are and how the faculty plan to assess them this year. If you are unable to access the system, contact Janelle Coleman, faculty consultant for assessment, at jcolema1@utk.edu, the Office of Accreditation at SACS_Liaison@utk.edu, or the Office of Institutional Research and Assessment at assessment@utk.edu.

Annual reports are submitted via the Planning module of Compliance Assist no later than September 15 unless the college has established an earlier deadline.

As a basic rule, faculty should strive to assess at least three student learning outcomes (SLOs) for a program each year. For programs with enrollments below 20 students, the number of outcomes assessed can be smaller. It is recommended that departments do not assess more than five SLOs per year to keep the reporting process manageable. If you are unsure how many outcomes your department should assess, or need assistance in revising or writing your learning outcomes, please contact us. For more information about the reporting process and the requirements for programmatic assessment, see the question “What should be included in the yearly reports, and how does the reporting process work?” earlier on this page.

If an academic program has a small number of students, the assessment coordinator should continue to assess the few students enrolled, and data can be stored in the Planning module; however, the assessment coordinator should put the outcomes on extended cycle to indicate a rolling review of students. Here is an example:

“The individualized nature of the certificate program and small enrollments require combining evaluations across academic years. Data will be collected annually, but for most objectives, assessment will take place as a rolling review of students who have completed the course in the most recent three years” (Women’s Studies Certificate).

To place an outcome on extended cycle, follow the instructions for entering results and, after hitting the Edit tab, go to the AY End Date field and adjust the year. Provide a justification for extended cycle under the Notes section of the report. Then, under Progress, select Extended Cycle from the dropdown menu. After making all the necessary changes, scroll down and click the Save and Close button to save all changes.

Course grades cannot be used as an assessment method because what they measure goes beyond a single outcome (i.e., they may also take into account attendance, quality of writing, etc.). For the purpose of assessing student learning outcomes, the method must be outcome-specific. A course grade provides little information about what could be enhanced to help students more effectively master the outcome. An alternative to a course grade could be a grade on an assignment whose focus is to demonstrate the outcome. Another example would be to submit a sample of student work focused on the outcome from a select group of courses, and for the assessment group to examine the artifacts using a rubric or criteria list.

If the sole purpose of the test is to measure one specific student learning outcome, the grade on the test can be used as a measure. If the test measures several outcomes, subscores for relevant questions should be used for each outcome.

The university currently uses several modules offered by Campus Labs to collect assessment data. Annual programmatic outcome reports are entered into the Planning module. The new Outcomes module can be used to enter course-level data. The Baseline module can also be used to collect survey and rubric data. In addition to the Campus Labs modules, several programs use the survey tool Qualtrics to collect data for the assessment of student learning.

While there is no set scale for program rubrics, it is generally acceptable to have a scale of four to five levels. Three levels provide a baseline for student performance. For example, it is not uncommon for departments to use program rubrics with the levels excellent, proficient, and beginner. In most cases, it is useful to start with a three-point scale, grade a small sample of student work to check the validity and user-friendliness of the rubric, and then add levels as needed. For instance, the excellent-proficient-beginner categories can be extended to excellent, proficient, developing, and beginner. On the other hand, rubrics with fewer than three levels or more than five levels will almost always have validity issues and are often difficult to score. For assistance with creating a rubric, please contact us. For additional resources about rubrics in general (e.g., how to make them, samples, and types of rubrics), check out the following websites:

There are three main benefits to using a rubric or checklist—but, no, it is not a requirement. First, rubrics make grading easier. Because the requirements are explicitly included on the actual document, instructors do not have to spend as much time writing feedback. Moreover, a rubric created with the student learning outcomes (SLOs) in mind facilitates the reporting process. For example, if faculty want to assess student performance in the areas of oral presentation and writing proficiency in one assignment, they may create one rubric that measures both. However, in their report, they may discuss oral presentation and writing proficiency as two different SLOs. Having a rubric isolates specific data about each outcome so that reporting is easier for departments and programs. Finally, rubrics are an effective means of communicating expectations to students.

Appropriate sample size varies according to the academic program. To determine the appropriate sample size for an assessment report, it is helpful to look at trends of student involvement in the program over time. In larger departments, it is not uncommon to have a sample size of 30 to 100 students; in smaller departments, a sample size of five to 10 students is common. Any sample size below five students may be considered too small, and it is recommended that the outcomes be put on extended cycle so that faculty can continue to collect data until the sample size is sufficient for analysis. Generally, a good sample size is at least 20 percent of student enrollment in the program, with a minimum of five students. Therefore, given a total program enrollment of 200 students, the sample size would be 40 students; a short sketch of this calculation follows the table below. The following table includes some examples of sample sizes in assessment reports across disciplines.

Department | Sample Size
General Education | 20% of enrolled students
Global Studies, BA | 17 students
Mathematics, BA | 11 students
Women's Studies, BA | 15 students
Sociology, BA | 39 students
Computer Engineering, BA | 18 students
Nursing, BA | 51 students
Social Work, PhD | 5 students
Accounting, MA | 108 students
Food Science and Technology, MS | 6 students
Music, BA | 19 students
Graphic Design, BFA | 31 students
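
To make the 20 percent guideline concrete, here is a minimal sketch of the calculation, assuming only the rule stated above (at least 20 percent of program enrollment, with a minimum of five students). The function name and example enrollments are illustrative, not an official tool.

```python
import math

def recommended_sample_size(enrollment: int) -> int:
    """Illustrative only: at least 20 percent of program enrollment,
    with a minimum of five students, per the guideline above."""
    if enrollment <= 0:
        return 0
    return max(5, math.ceil(0.20 * enrollment))

# Hypothetical enrollments:
print(recommended_sample_size(200))  # 40
print(recommended_sample_size(18))   # 5 (the minimum of five applies)
```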

Interpret the results and compare them to last year's results. Present the data and explain any improvement.

Evidence of improvement involves any positive change from one year to the next. To determine whether there has been improvement, compare the results from the current evaluative year to the results from the previous cycle. For example, note the following outcome from Veterinary Medicine:

Learner Outcome 2: Students will perform at or above the national mean on the North American Veterinary Licensing Examination (NAVLE).

If 55 percent of students performed at or above the national mean on the NAVLE in spring 2014, and 65 percent of students scored at or above the national average in spring 2015, there is evidence of a 10-percentage-point improvement from one year to the next. These data should be reported and explained in the report. Let's say, though, that the outcome is changed to the following:

Learner Outcome 2: 75 percent of students will perform at or above the national mean on the North American Veterinary Licensing Examination (NAVLE).

Although the benchmark is not met, the previously stated data (55 percent of students in spring 2014 and 65 percent in spring 2015) would still show some evidence of improvement. This growth should be explained in the Assessment Analysis and Results section of the annual report.
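
As a minimal sketch of the comparison described above, the following illustrates the year-over-year arithmetic using the NAVLE figures from the example (55 percent in spring 2014, 65 percent in spring 2015) and the 75 percent benchmark from the revised outcome. The function names are illustrative and not part of any reporting system.

```python
def improvement_in_points(previous_rate: float, current_rate: float) -> float:
    """Year-over-year change in the rate of students meeting the target,
    expressed in percentage points."""
    return current_rate - previous_rate


def meets_benchmark(current_rate: float, benchmark: float) -> bool:
    """True if the current rate meets or exceeds the benchmark."""
    return current_rate >= benchmark


# NAVLE example from the text: 55% of students at or above the national mean
# in spring 2014, 65% in spring 2015, measured against the revised 75% benchmark.
change = improvement_in_points(55.0, 65.0)   # 10.0 percentage points of improvement
met = meets_benchmark(65.0, 75.0)            # False: the 75% benchmark is not yet met
print(f"Improvement: {change:.0f} percentage points; benchmark met: {met}")
```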

In Compliance Assist there is an option to upload files in the report to provide evidence of student mastery or nonmastery of the established student learning outcomes (SLOs) and to demonstrate that faculty are making strides toward improving their programs. Compliance Assist will support only the following file types: .pdf, .doc or .docx, .htm, .html, .ppt, .pptx, .xls, .xlsx. Some possible documents that might be helpful to upload include the following:

  • Samples of student work. These can be exams, papers, artwork, videos, or any other projects that are used to directly assess established SLOs. All names on student work must be redacted before uploading. It is recommended that faculty upload a small sample of work that is representative of the range of student ability (e.g., a report may include one example each of excellent, average, and poor work) to show how the assignments were evaluated. If a video is used, it is best to attach a link to a website instead of trying to upload the file.
  • Rubrics, checklists, or criteria sheets used to score assessments. These can be completed and the results summarized in a table to demonstrate students’ overall performance, or they can be uploaded without the results to show the criteria by which the assessments were evaluated.
  • Minutes of faculty meetings. These notes are useful to document when program faculty meet to discuss the results of the assessment completed during the academic year and make decisions about what changes, if any, should be made for the following assessment cycle. These files are typically uploaded either in the Actions Taken or the Assessment Analysis sections of the report.
  • Self-assessments, alumni questionnaires, and senior exit surveys. These documents can be uploaded as examples of indirect assessments used to evaluate program effectiveness and the quality of student learning experiences.
  • Exams, capstone projects, writing prompts. These are examples of direct assessments that can demonstrate that a specific learning outcome or set of learning outcomes was assessed during an assessment cycle. When uploading this type of evidence, it is necessary to point out which parts of the assignment were used to assess the particular outcome.

In Compliance Assist, click the My Dashboard tab, then the Academic Assessment tab (next to the role tab). Click the outcome to open the form, then click the Edit tab at the top of the form. Fill in all fields marked required. See the Planning Module Guide on the UTK SACSCOC Accreditation website for a step-by-step guide to assessment reporting. If you need additional assistance working in the system, email us at assessment@utk.edu.

Email your department head and the dean of your college to inform them that your report is complete. The report will be reviewed by your college and then by a member of the Assessment Steering Committee. You will receive comments and suggestions via the Compliance Assist feedback form. Once the feedback has been received, you can update the report and resubmit it for review.

Once the results have been collected, the next step is to define the data. This involves asking yourself and your colleagues the following questions:

  • What was the benchmark? A benchmark is a quantifiable means of determining whether or not students have satisfied a learning outcome. For example, note the following outcome: "Graduates of the Chemical and Biomolecular Engineering program communicate effectively in writing, speaking, and listening in a variety of contexts."

In the Direct Assessment Method(s) description box of the report, the reporter identified the benchmark: “80 percent of students are expected to maintain a 70 percent average in all graded components of the course, including written and oral reports.”

Setting a benchmark allows departments to quantify the student success rate in meeting an outcome while clearly defining areas where growth is needed.

  • Once a benchmark has been established, what do the data tell us? Now that a benchmark has been set to determine what success looks like in terms of fulfilling the outcome, faculty can begin to organize and report their findings in the Assessment Results and Analysis portion of the report. In keeping with the previous example, the department might look at the graded components of the course and find that 85 percent of students maintained an average of 70 percent or higher in their written and oral reports. Therefore, the analysis section might say something like this: "85 percent of students maintained an average of 70 percent or higher in their written and oral reports. This indicates that students are meeting the learning outcome." (A short sketch of this kind of benchmark check follows this discussion.)

However, if the outcome is not met—imagine that only 60 percent of students met the learning outcome—the faculty must not only state this but will also want to discuss possible factors that may have contributed to the students not meeting the outcome.

  • What factors might have contributed to student failure or success in meeting an outcome? In addition to communicating the results, faculty should also think about what might have caused the results. Was there a change in the curriculum? Were students lacking in a certain skill? Was there a change or a loss in personnel? This discussion will also go in the Assessment Results and Analysis section of the report.
  • Now that we have discussed the results, how do we move forward? Assessment is an ongoing process where the ultimate goal is improvement. Therefore, after looking at the data and hypothesizing about what worked and didn’t work in terms of curricular activities, it is important to think about what should be done to enhance student learning and to improve the program curriculum. Imagine that 80 percent of students in the Department of Chemical and Biomolecular Engineering met the learning outcome. The department might respond in one of two ways:
  • Faculty may decide that, because students met the established SLO, no actions should be taken to alter the curriculum.
  • Faculty may decide to change the benchmark to say "85 percent of students are expected to maintain a 70 percent average in all graded components of the course, including written and oral reports."

Should the faculty decide to change the learning outcome, they will need to indicate this in the Actions Taken section of the report. Actions to change the SLO based on results can be documented in minutes or notes from a faculty meeting where the changes were discussed.

Conversely, if students did not meet the SLO, the faculty will want to explore what they can do to help students reach the benchmark they set. An effective strategy might involve a change in the curriculum—in this case, creating a technical writing course for engineers—or providing students with extra tutoring opportunities. Irrespective of the decision, the actions explored should be reported in the Actions Taken section of the report.
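
The benchmark check discussed above can be illustrated with a short sketch. The per-student averages below are invented for illustration; the 70 percent target and 80 percent benchmark simply restate the Chemical and Biomolecular Engineering example.

```python
# Hypothetical per-student course averages (percent) on written and oral reports.
student_averages = [92, 71, 68, 85, 74, 79, 66, 90, 73, 81]

TARGET_AVERAGE = 70.0   # each student is expected to maintain a 70 percent average
BENCHMARK = 80.0        # 80 percent of students are expected to reach that target

# Share of students (as a percentage) who met the 70 percent target.
share_meeting_target = (
    100.0 * sum(avg >= TARGET_AVERAGE for avg in student_averages) / len(student_averages)
)

if share_meeting_target >= BENCHMARK:
    print(f"{share_meeting_target:.0f}% of students met the target; the outcome is met.")
else:
    print(f"Only {share_meeting_target:.0f}% of students met the target; "
          "discuss contributing factors and plan actions for improvement.")
```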

Once the data have been collected and analyzed, there are a number of actions that faculty can take to address the needs of the students in their programs. These actions must be reported in the Planning Module of Compliance Assist in the Actions Taken section of the report. The following are some examples of actions taken as derived from other reports:

  1. Course Revision

Use: For changes made to course content, such as adding a new unit, revising a required assignment, changing a required textbook, adding a practicum rotation, or adopting a common syllabus for a multi-instructor course.

 Example 1: Revised persuasive speech evaluation rubric to include intercultural component; initiated textbook revision to include intercultural content; collect baseline data for review of delivery and content components of informative speech. (Communication Studies, BA)

 Example 2: Faculty instructors and clinical mentors who supervise CFS 470 students will . . . begin instruction on the assessment of young children earlier in the semester. (Child and Family Studies, BS)

  2. Curriculum Revision

 Use: To reflect curricular changes, including adding a new course, modifying the sequencing of courses, changing prerequisites, and dropping a course.

 Example 1: As a faculty, we have approved the creation of HIST 299, which will be a requirement of the major beginning in 2015–16 and a prerequisite for HIST 499. HIST 299 will place emphasis on teaching and learning historical thinking in small-format seminar-like settings. This course will have many different iterations, depending on the particular subject specialty of the faculty teaching it. But the emphasis in each case will be on intensive reading, problem solving, modeling critical analysis, and lab-like exercises designed to offer hands-on training in historical methods, research, and writing. (History, BA)

Example 2: Instructors of all 100- and 200-level classes must assign at least one piece of formal writing which will include instructions requiring students to discuss a play or performance through a broader historical, social, or theatrical context. (Theatre, BA)

  3. Faculty Development and Training

 Use: For activities aimed at preparing faculty more effectively to teach or assess a learning outcome, including training practicum supervisors, convening norming sessions for faculty using a program rubric, etc.

 Example 1: We are also planning to offer workshops in how to teach HIST 299 beginning in summer 2015. Faculty who choose to participate in these workshops will work together to develop standards for the course (length of writing assignments, basic skills to be learned, etc.) and to exchange ideas about how to teach it. (History, BA)

Example 2: To continue to improve critical thinking scores, a faculty workshop will be created during fall 2014 to help faculty understand how to incorporate critical thinking into the curriculum. (Hotel, Restaurant and Tourism, BS)

For examples of possible actions taken for both graduate and undergraduate programs and how they can be worded in the system, see Examples of Actions Taken.
