New York University

Choose Methods and Measures

As with the other steps, there is a great deal of flexibility in developing this section of the assessment plan. The measures used to collect data (evidence of student learning) and the methods by which data are collected will vary by department. These decisions will depend on the particular goals established by the department, the needs and preferences of faculty, the structure of the curriculum, the discipline, and other considerations.

The method for obtaining direct measures of student learning can be course-embedded (i.e., the measure is a regular course assignment, such as a final exam or paper) or add-on (such as an exit exam or project that is external to a specific course). Whatever method is chosen, however, Middle States expects evidence that all students are achieving the goals set by the program. That is, evidence of student learning should be collected for all majors (or a representative sample), not just for a specific subset of majors (e.g., honors students). Some examples of acceptable approaches:

  • All majors participate in a senior seminar which includes a course-embedded assessment (e.g., a substantial research paper assessed using a rubric).
  • In their final semester, all majors enroll in one (of many) advanced departmental courses (which may also include juniors). All students complete the assignment; however, only graduating seniors are assessed.
  • A wide array of advanced departmental courses are designated as “W” (writing) or “C” (culminating) courses. Each major is required to complete at least one of these courses, in which their learning is assessed directly (via rubric or other detailed assessment technique).
  • All majors enroll in several advanced core courses. Each course addresses a different learning goal (e.g., statistics, theory, and writing). A separate direct measure of student learning is used in each course so that all goals are addressed.
  • Majors take a licensing exam (add-on method), and the department receives specific feedback on each item or section. Items are aligned with one or more departmental goals. The specific feedback allows the department to identify aggregate student strengths and weaknesses which can then be addressed at the level of educational opportunities (curriculum, instruction, academic supports).

  • All majors are required to pass an exit exam composed of items that are aligned with the program’s major learning goals (add-on method).

Departments must determine which type of assessment measures will give them information that addresses their student learning goals (i.e., provides evidence that students are learning what is expected of them). Evidence obtained to measure student learning can be either direct or indirect. While both types of evidence have a place in an assessment program, best practices suggest (and Middle States requires) at least some collection of direct evidence. Direct measures of student learning assess specifically what a student has learned, as demonstrated by his or her performance on a task (e.g., papers, exams, performances). Indirect measures of student learning give a general indication that students have probably learned something, but results may not be directly aligned with departmental goals (e.g., admission to graduate school, performance on standardized exams, student self-ratings of learning). Indirect measures also include methods that allow students to give feedback regarding their learning experiences (e.g., surveys, exit interviews, focus groups).

When deciding on a direct measure of student learning, many programs find that an effective and convenient assessment measure for majors is a culminating project or experience (a capstone or capstone-like project) that entails demonstration of mastery of the most important program goals. An alternative approach to the capstone measure is the use of multiple smaller assessment measures (shorter papers and/or exams), each of which may address a different goal. A combination of these two approaches may also be used. Regardless of the measure chosen, it must be detailed enough to clearly demonstrate alignment with learning goals. This most often requires the use of a detailed scoring guide called a rubric or exam blueprint. Most faculty already use scoring criteria, though these criteria are not always explicitly expressed. Furthermore, faculty within the same department will often find that they share (albeit implicitly) the same criteria for assessing student success on an assignment.
