Glossary

The following glossary was developed from research and feedback gathered from faculty and researchers within the California community colleges. It was created in response to ASCCC Resolution 09.01 S17, which asked the Academic Senate for California Community Colleges to address confusion in the field by researching and updating the 2009 glossary of common terms for student learning outcomes and assessment.

This glossary does not dictate terminology, nor is it meant to be exhaustive or comprehensive. As collaboration between researchers and faculty grows, enhanced dialogue about terminology strengthens our ability to serve our students and promote student success. This document has also been updated to include references and resources.

In an attempt to help clarify terminology, this glossary groups terms into several categories:

  • Assessment of courses, programs, and institutions
  • Outcomes
  • General terminology

Assessment Glossary Topics

Assessment is perceived by many faculty as a burden and an obligation, when in truth, faculty routinely perform assessment of courses and programs even when they do not think of what they do as such. Curriculum development and offerings need assessment, and assessment drives modification of existing curriculum as well as development of new curriculum. Development and assessment go hand-in-hand and should be looked at through the prism of good practice in what faculty normally do in the course of their work.

The following terminology is most often associated with course, program, or institutional assessment discussions.

Assessment

In education, the term “assessment” refers to the wide variety of methods or tools that educators use to evaluate, measure, and document the academic readiness, learning progress, skill acquisition, or educational needs of students. Assessment efforts provide faculty with the opportunity to look honestly at courses and programs, the relevance of course content, self-evaluation of teaching and evaluation methodology, and whether the vision of a course or program is resulting in its success. Assessment is the way in which faculty ensure curriculum effectiveness and relevance, and it allows for self-reflection that encourages enhancement or revision of curriculum when appropriate.

Assessment Artifact

An assessment artifact is a student-produced product or performance used as evidence for assessment. For example, an artifact in student services might be a realistic and achievable student educational plan.

Assessment Cycle

"Assessment cycle" refers to the process of collecting data from assessment, using that data to develop or modify curriculum, and then assessing the new or modified curriculum to collect data for ongoing modification or development. Such a cycle is graphically represented below. As with any cycle, it has no beginning, and no end. The dynamic nature of curriculum includes matters such as curricular development, measurement of success, and modifications based on assessment leading to modifications of curriculum.

Assessment of Learning

Assessment of learning is a process in which methods are used to generate and collect data for evaluation of courses and programs in order to improve educational quality and student learning. This term refers to any method used to gather evidence and evaluate quality and may include both quantitative and qualitative data in instruction or student services.

Assessment for Accountability

Assessment for accountability is an assessment process conducted not as much for development and evaluation of a program, course, or other area, but more for the purpose of justifying or proving the effectiveness of the area or program being assessed. The primary drivers of assessment for accountability are external, such as legislators or the public, and the concept usually entails indirect or secondary data. Application of accountability data for educational improvement requires careful analysis of the alignment of the data and the ramifications of the actions.

Authentic Assessment

Traditional assessment sometimes relies on indirect or proxy items such as multiple-choice questions focusing on content or facts. In contrast, authentic assessment simulates a real-world experience by evaluating a student’s ability to apply critical thinking and knowledge or to perform tasks that may approximate those found in the workplace or other venues outside of the classroom setting.

Classroom Assessment Techniques

Often referred to as CATs, classroom assessment techniques are a collection of “simple tools for collecting data on student learning in order to improve it.” CATs are short, flexible classroom activities that provide rapid, informative feedback to improve classroom dynamics by monitoring learning from a student’s perspective throughout the semester. Data from CATs can be evaluated and used to facilitate continuous modification and improvement in the classroom.

Classroom-based Assessment

Classroom-based assessment is the formative and summative evaluation of student learning within a specific classroom, in contrast to institutional assessment that looks across courses and classrooms at student populations.

Course Assessment

Course assessment evaluates the curriculum as it is designed, taught, and learned. It involves the collection of data aimed at measuring successful learning in an individual course and improving instruction with a goal of enhancing learning.

Criterion-based Assessment

Criterion-based assessment evaluates or scores student learning or performance based on explicit criteria developed by student services or instructional staff and measures proficiency at a specific point in time.

Direct Assessment

Direct assessment data can provide evidence of student knowledge, skills, or attitudes for the specific domain in question and actually measure student learning, not perceptions of learning or secondary evidence of learning in the way that a degree or certificate does. For instance, a math test directly measures a student’s proficiency in math. In contrast, an employer’s report about student abilities in math or a report on the number of math degrees awarded would be indirect data.

Embedded Assessment

Embedded assessment occurs within a regular class or curricular activity. Class assignments linked to student learning outcomes through primary trait analysis, such as common test questions, CATs, projects, or writing assignments, serve as both grading and assessment instruments. Specific questions can be embedded on exams in classes across courses, departments, programs, or the institution. Embedded assessment can provide formative information for pedagogical improvement and student learning needs.

Formative Assessment

Formative assessment is a diagnostic tool implemented during the instructional process that generates useful feedback for student development and improvement. The purpose is to provide an opportunity for a student to perform and receive guidance, such as through in-class assignments, quizzes, discussions, or lab activities, that will improve or shape a final performance. This practice stands in contrast to summative assessment, in which the final result is a verdict and the participant may never receive feedback for improvement, such as on a standardized test, a licensing exam, or a final exam.

Homegrown or Local Assessment

A homegrown or local assessment is developed and validated by a local college for a specific purpose, course, or function and is usually criterion-referenced to promote validity. This form of assessment stands in contrast to standardized state or nationally developed assessments. In student services, homegrown student satisfaction surveys can be used to gain local evidence, in contrast to commercially developed surveys that provide national comparability.

Norm-referenced Assessment

In norm-referenced assessment, an individual’s performance is compared to that of another individual or group of individuals. Individuals are commonly ranked to determine a median or average. This technique measures relative standing within a group rather than overall mastery to an expected level of competency, and it provides little detail about specific skills.

Such assessment yields an estimate of the position of the tested individual in a pre-defined population with respect to the trait being measured. This practice is often used in standardized testing.
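
A percentile rank illustrates the relative nature of this technique: it reports where one student’s score falls within a group, not whether the student has mastered the material. The following minimal sketch (Python, with hypothetical exam scores) computes one:

```python
def percentile_rank(score, group_scores):
    """Percentage of scores in the group falling below the given score."""
    below = sum(1 for s in group_scores if s < score)
    return 100 * below / len(group_scores)

# Hypothetical exam scores for a class of ten students.
scores = [55, 62, 68, 70, 74, 75, 81, 84, 90, 95]

# A score of 81 says nothing about mastery by itself; norm-referencing
# places it relative to the group, here at the 60th percentile.
print(percentile_rank(81, scores))  # 60.0
```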

Summative Assessment

A summative assessment is a final determination of knowledge, skills, and abilities. Such an assessment can be exemplified by exit or licensing exams, senior recitals, capstone projects, or any final evaluation that is not created to provide feedback for improvement but rather is used for final judgments.

Outcomes Glossary Topics

Whether referring to student learning outcomes, program learning outcomes, or institutional outcomes, the term “outcome” has become synonymous with the work of justifying what faculty do. Most of the work in assessment is designed specifically around the outcomes of a course, a program, or the institution as a whole. Much like assessment, which faculty routinely perform as part of their usual instruction, outcomes should be viewed as an essential component of course and program development rather than an obligation and a burden.

The following terminology is most often associated with course, program, or institutional outcome discussions.

Affective Outcomes

Affective outcomes relate to the development of values, attitudes, and behaviors and are often associated with feelings rather than knowledge or skills. These outcomes include learning to accept an idea or concept or learning to appreciate a point of view. The affective domain is one of the three domains within Bloom’s Taxonomy.

General Education Student Learning Outcomes

GE SLOs are the knowledge, skills, and abilities a student is expected to be able to demonstrate following a program of courses designed to provide the student with a common core of knowledge consistent with that of a liberally educated or literate citizen. Some colleges refer to these outcomes as core competencies, while others consider general education a program.

Institutional Learning Outcomes (ILO)

Institutional learning outcomes are the knowledge, skills, and abilities with which a student is expected to leave an institution as a result of a student’s total educational experience. Because GE outcomes represent a common core of outcomes for the majority of students transferring or receiving degrees, some, but not all, institutions equate GE SLOs with ILOs. ILOs may differ from GE SLOs in that institutional outcomes may include outcomes relating to institutional effectiveness—such as degrees, transfers, and productivity—in addition to learning outcomes. Descriptions of ILOs should include dialogue about both instructional and student service outcomes.

Student Learning Outcomes (SLO)

Student learning outcomes, or SLOs, are the specific observable or measurable results that are expected subsequent to a learning experience. These outcomes may involve knowledge (cognitive), skills (behavioral), or attitudes (affective) that provide evidence that learning has occurred as a result of a specified course, program activity, or process. An SLO refers to an overarching outcome for a course, program, degree or certificate, or student services area such as the library. SLOs describe a student’s ability to synthesize many discrete skills using higher-level thinking and to produce something that demonstrates application of what has been learned. SLOs usually gather smaller discrete objectives together, through analysis, evaluation, and synthesis, into more sophisticated skills and abilities.

General Glossary Topics

With the restructuring of this document from previous versions, an emphasis on assessment and outcomes as philosophical areas of consideration became paramount. The following glossary topics are not necessarily associated with either of those two areas.

Alignment

Alignment is the process of analyzing the way explicit criteria line up with or build upon one another within a particular learning pathway. When dealing with outcomes and assessment, one must determine that course outcomes align or match up with program outcomes and that institutional outcomes align with the college mission and vision. In student services, alignment of services includes matters such as aligning financial aid deadlines and instructional calendars.

Bloom’s Taxonomy

Bloom’s Taxonomy is one example of several classification methodologies used to describe increasing complexity or intellectual sophistication. The categories included in the original version of the taxonomy were as follows:

Knowledge: Recalling or remembering information without necessarily understanding it. Related behaviors include describing, listing, identifying, and labeling.

Comprehension: Understanding learned material. Related behaviors include explaining, discussing, and interpreting.

Application: The ability to put ideas and concepts to work in solving problems. Related behaviors include demonstrating, showing, and making use of information.

Analysis: Breaking down information into its component parts to see interrelationships and ideas. Related behaviors include differentiating, comparing, and categorizing.

Synthesis: The ability to put parts together to form something original. Related behaviors include using creativity to compose or design something new.

Evaluation: Judging the value of evidence based on definite criteria. Related behaviors include concluding, criticizing, prioritizing, and recommending.

An updated version of Bloom’s Taxonomy with renamed and slightly altered categories was published in 2002.

Calibration (rubrics)

Calibration is the process of ensuring that multiple evaluators of a single rubric are applying that rubric in the same manner. This process is essential to maintaining reliability and validity.
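
One common check on calibration, sketched below in Python with hypothetical ratings, is percent agreement: the share of artifacts on which two evaluators using the same rubric assigned the same score.

```python
# Hypothetical rubric scores (1-4) assigned by two evaluators
# to the same ten student artifacts.
rater_a = [3, 4, 2, 3, 1, 4, 3, 2, 4, 3]
rater_b = [3, 4, 2, 2, 1, 4, 3, 2, 3, 3]

# Exact percent agreement across the artifacts; low agreement
# signals that the evaluators need to renorm on the rubric.
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(f"{agreement:.0%}")  # 80%
```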

Closing the Loop

“Closing the Loop” refers to the use of assessment results to improve student learning through collegial dialogue informed by the results of student service or instructional learning outcomes assessment. It is part of the continuous cycle of collecting assessment results, evaluating them, using the evaluations to identify actions that will improve student learning, implementing those actions, and then returning to the collection of assessment results.

Continuous Improvement

Continuous improvement involves an ongoing, cyclical process of identifying evidence and implementing incremental changes to improve student learning.

Core Competencies

Core competencies are the integration of knowledge, skills, and attitudes in complex ways that require multiple elements of learning that are acquired during a student’s course of study at an institution. Statements regarding core competencies speak to the intended results of student learning experiences across courses, programs, and degrees. Core competencies describe critical measurable or observable life abilities and provide unifying, overarching purpose for a broad spectrum of individual learning experiences. Descriptions of core competencies should include dialogue about both instructional and student service competencies. See also “General Education Student Learning Outcomes” and “Institutional Learning Outcomes.”

Culture of Evidence

“Culture of Evidence” refers to an institutional atmosphere that supports and integrates research, data analysis, evaluation, and planned change as a result of assessment to inform decision making. A culture of evidence is characterized by the generation, analysis, and valuing of quantitative and qualitative data in making those decisions.

Evidence

Artifacts or objects that demonstrate and support conclusions are considered evidence. Evidence includes quantifiable and supported data, as opposed to intuition, belief, or anecdotes. As described by the Accrediting Commission for Community and Junior Colleges, “Good evidence, then, is obviously related to the questions the college has investigated and it can be replicated, making it reliable. Good evidence is representative of what is, not just an isolated case, and it is information upon which an institution can take action to improve. It is, in short, relevant, verifiable, representative, and actionable.”

Evidence of Program and Institutional Performance

Program or institutional evidence includes quantitative or qualitative, direct or indirect forms of data that provide information concerning the extent to which an institution meets the goals it has established and publicized to its stakeholders.

Indirect Data

Indirect data, sometimes referred to as secondary data, measures student performance in ways that are implied or inferred. For instance, certificate or degree completion data provides indirect evidence of student learning while not directly indicating what a student actually learned in the coursework.

Likert Scale

The Likert scale is often used in the social sciences and in educational research. This scale assigns a numerical value to responses in order to quantify subjective data. Responses are usually placed along a continuum, such as strongly disagree, disagree, agree, or strongly agree, and values are assigned to each, such as 1 for strongly disagree through 4 for strongly agree.
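
The following minimal sketch (Python, with hypothetical survey responses) shows the basic mechanics: each response category is mapped to a value so that subjective responses can be summarized numerically.

```python
# Map each response category to a numerical value on a 1-4 scale.
likert_values = {
    "strongly disagree": 1,
    "disagree": 2,
    "agree": 3,
    "strongly agree": 4,
}

# Hypothetical responses to a single survey item.
responses = ["agree", "strongly agree", "disagree", "agree", "strongly agree"]

scores = [likert_values[r] for r in responses]
print(scores, sum(scores) / len(scores))  # [3, 4, 2, 3, 4] 3.2
```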

Metacognition

Metacognition is the act of thinking about one’s own thinking and regulating one’s own learning. It involves critical analysis of how decisions are made and how vital material is consciously learned and acted upon.

Objectives

Objectives can be small steps that lead toward a goal, such as the discrete course content that faculty cover within a discipline. Objectives are usually more numerous than overarching student learning outcomes and create a framework for those outcomes, which typically address synthesizing, evaluating, and analyzing many of the objectives.

Pedagogy

Pedagogy is often defined as “the method and practice of teaching, especially as an academic subject or theoretical concept.” It is the art and science of how something is taught and how students learn it. Pedagogy includes how teaching occurs, the approach to teaching and learning, how content is delivered, and what students learn as a result of the process. Etymologically, “pedagogy” is applied to children and “andragogy” is applied to adult learners, but in modern English usage pedagogy is commonly used in reference to any aspect of teaching and learning in any classroom.

Primary Trait Analysis (PTA)

Primary trait analysis is the process of identifying major characteristics that are expected in student work. After the primary traits are identified, specific criteria with performance standards are defined for each trait. This process is often used in the development of rubrics. PTA is a way to evaluate and provide reliable feedback on important components of student work, thereby offering more information than a single, holistic grade.

Program

An educational program is defined in Title 5 §55000(m) and in the Chancellor’s Office Program and Course Approval Handbook as “an organized sequence of courses leading to a defined objective, a degree, a certificate, a diploma, a license, or transfer to another institution of higher education.” However, in program review, colleges often define programs as relating to specific disciplines. A program may refer to student service programs and administrative units as well.

Qualitative Data

As opposed to quantitative data, qualitative data offers descriptive information, such as narratives or portfolios. Such data is often collected using open-ended questions, feedback surveys, or summary reports and may be difficult to compare, reproduce, and generalize. Qualitative data, such as opinions, can be displayed as numerical data by using Likert-scaled responses that assign a numerical value to each response (e.g., 4 = strongly agree to 1 = strongly disagree); the resulting data sets are easy to store and manage and can provide a breadth of information. Qualitative data provides depth but can be time and labor intensive to collect and analyze. It is most often heuristic in nature and can pinpoint areas for intervention and potential solutions that are not always evident in quantitative data.

Quantitative Data

As opposed to qualitative data, quantitative data consists of numerical or statistical values. Such data uses actual numbers, such as scores or rates, to express quantities of an identified variable. Quantitative data can be generalized and reproduced but must be carefully constructed, analyzed, and interpreted to be valid.

Reliability

Reliability refers to the reproducibility of results over time, or a measure of consistency when an assessment tool is used multiple times. In other words, if the same person took a test five times, the scores should be similar. This concept refers not only to reproducible results from the same participant but also to repeated scoring by the same or multiple evaluators. While the student learning outcomes process should be reliable, statistical reliability analysis need not be performed for every item and aspect of classroom and program assessment; rather, assessments should be a consistent tool for testing students’ knowledge, skills, or abilities.
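
One standard statistical gauge of reliability is test-retest correlation. The sketch below (Python 3.10+, with hypothetical scores) correlates two administrations of the same assessment; a coefficient near 1.0 suggests consistent results.

```python
from statistics import correlation  # available in Python 3.10+

# Hypothetical scores from the same five students taking
# the same assessment on two occasions.
first_attempt = [72, 85, 60, 90, 78]
second_attempt = [75, 82, 63, 88, 80]

# A Pearson correlation near 1.0 indicates the instrument produced
# consistent results across the two administrations.
print(round(correlation(first_attempt, second_attempt), 2))  # 0.99
```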

Rigor

Rigor refers to the degree to which a given set of standards is adhered to in order to make an educational experience academically or intellectually challenging. California community college faculty use the term “rigor” in relation to courses in the context of Title 5, such as when referring to course standards, grading policies, or intensity. For example, Title 5 §55002(b)(2)(C) states, “In particular, the assignments will be sufficiently rigorous that students successfully completing each such course, or sequence of required courses, will have acquired the skills necessary to successfully complete degree-applicable work.” Researchers often refer to rigor as statistical rigor, or compliance with good statistical practices.

Rubric

A rubric is a set of criteria used to determine scoring for an assignment, performance, or product. Rubrics may be holistic, providing general guidance rather than strict numerical values. Other rubrics are analytical, assigning specific point values for each criterion, often as a matrix with primary traits on one axis and rating scales of performance on the other. A rubric can improve the consistency and accuracy of assessments conducted across multiple settings.
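
An analytic rubric can be represented as a small matrix, with primary traits on one axis and performance levels on the other. The following sketch (Python, with a hypothetical essay rubric) illustrates the structure and totals one student’s score:

```python
# Hypothetical analytic rubric for an essay: each primary trait
# is rated on a 1-4 performance scale.
rubric = {
    "thesis":       {1: "missing", 2: "unclear", 3: "clear", 4: "compelling"},
    "evidence":     {1: "absent", 2: "sparse", 3: "adequate", 4: "thorough"},
    "organization": {1: "disjointed", 2: "uneven", 3: "logical", 4: "seamless"},
}

# One evaluator's scores for a single student essay.
scores = {"thesis": 3, "evidence": 4, "organization": 3}

for trait, level in scores.items():
    print(f"{trait}: {level} ({rubric[trait][level]})")
print("total:", sum(scores.values()), "of", 4 * len(rubric))  # total: 10 of 12
```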

Sampling

Sampling is a research method that selects representative units, such as groups of students, from a specific population being studied. When everyone in the population has an equal chance of being selected, results from examining the sample can be generalized to the population from which it was drawn. Sampling is especially important when dealing with student service data.
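
A minimal sketch of simple random sampling (Python, with a hypothetical student population), in which every student has an equal chance of being selected:

```python
import random

# Hypothetical population: identifiers for 2,000 students.
population = list(range(1, 2001))

# Draw a simple random sample of 100 students; random.sample gives
# every member of the population an equal chance of selection.
sample = random.sample(population, k=100)
print(len(sample), sorted(sample)[:5])
```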

Triangulation

Triangulation is the collection and study of evidence from multiple sources—including both direct and indirect assessments—to determine student learning outcome achievement.

Validity

Validity is an indication that an assessment method accurately measures what it is designed to measure with limited effect from extraneous data or variables. To some extent, this concept must also relate to the integrity of inferences made from the data.

Content Validity

Content validity indicates that an assessment consistently and effectively measures the content it is intended to measure. For instance, when one takes a driver’s license exam, the test does not have questions about how to make sushi.

Variable

A variable is a discrete factor that affects an outcome.