Assessment Planning Resources
As higher education professionals, we understand that assessment is essential: it helps us determine whether programs and services are meeting the needs of our students. Assessment has to start at the beginning with a plan. We cannot leave assessment to the last minute; otherwise we will not collect the best information to guide our decisions. Assessment is intentional and takes time. There are many components involved in developing assessment, which is why it is necessary to develop an assessment plan; it lays out the steps required to conduct assessment. An assessment plan can be developed at the institutional, divisional, departmental, or program/service level and ensures that we take all the steps needed to produce reliable and valid assessment.
Listed below are the steps in assessment planning:
Clarify mission and goals
The mission is the main purpose that identifies the path and target for our endeavors. It describes who our program serves, its main functions or activities, and its primary intention. It sets the foundation for the design of the program's goals and outcomes. A well-written mission statement is essential for conducting assessment: it helps us understand the master plan for a program, and assessment helps us determine whether the mission is actually being accomplished. For an academic program, it should convey the scope and purpose of faculty members' expectations for student learning.
Goals are very broad statements that describe what the program should accomplish. Goals are important because they detail how the mission will be achieved. For that reason, the goals need to align with the mission. For academic programs, these are most often presented as characteristics of the student.
Create measurable program and/or learning outcomes
Outcomes are specifically what you want the end result of your efforts to be, the changes you want to occur. Outcomes are how you measure the goals. There are two different types of outcomes:
- Learning Outcomes are statements of what students will be able to think, know, do, or feel because of a given educational experience (including co-curricular). A learning outcome consists of four components: Audience, Behavior, Condition, and Degree (which will typically be identified in the targets). Learning outcomes fall into three domains:
- Affective – growth in feelings or emotions (attitude)
- Cognitive – mental skills (knowledge)
- Psychomotor – manual or physical skills (skills)
- Program Outcomes are statements focused on what the program should accomplish each year with students in terms of program quality, efficiency, and productivity (e.g., retention and graduation rates), but not in terms of actual student learning. A program outcome consists of five components: Specific, Measurable, Achievable, Relevant, and Time-Sensitive.
Determine how outcomes will be measured
Once the outcomes have been identified, the next step is to determine how the outcomes will be measured. Depending on the wording of your outcomes, you may elect to use qualitative, quantitative, and/or mixed methods data collection.
- Quantitative: Methods that assess objectives by collecting numeric data and analyzing the data using statistical techniques; useful for comparing and measuring across individual students or student populations. Examples: rubrics, checklists, pre/post-tests, survey questions using scales, data from the student information system.
- Qualitative: Methods that rely on and evaluate descriptions rather than numeric data; useful for understanding depth and richness of experience. Examples: written reflections, focus groups, interviews, open response survey questions.
- Mixed Methods: Sometimes referred to as mixed methodology, multiple methodology, or multi-methodology research, mixed methods research offers the best of both worlds: the in-depth, contextualized, and natural (but more time-consuming) insights of qualitative research coupled with the more efficient (but less rich or compelling) predictive power of quantitative research.
Determine whether to collect direct/indirect evidence and formative/summative
It is also important to select whether you will be conducting direct or indirect assessment and formative or summative assessment. Many times you will have a combination to meet your needs.
- Direct Measures include student products or performances that demonstrate that specific learning has taken place (e.g., an exam, or a presentation evaluated with a rubric).
- Indirect Measures ask students to reflect on their learning or experiences rather than to demonstrate it (e.g., student perceptions of their learning).
- Formative Assessment is conducted during the program to provide feedback and is used to shape, modify, or improve the program; it is a means of gathering information about student learning that is built into, and a natural part of, the teaching-learning process.
- Summative Assessment is conducted after the program; it provides the opportunity to make judgments about quality or worth, to compare against standards, and to inform future plans.
Identify what type of assessment to conduct
- Usage Numbers: track participation in programs or services.
- Student Needs: keep you aware of the needs of the student body or of specific populations.
- Program Effectiveness: gauge levels of satisfaction, involvement, effectiveness, helpfulness, etc.
- Cost Effectiveness: how does the value of a program/service compare with its costs?
- Campus Climate or Environment: assess behaviors and attitudes on campus.
- Comparative (Benchmarking): compare a program/service against a comparison group.
- Learning Outcomes: assess how participants will think, feel, or act differently as a result of your program/course/service.
If utilizing indirect assessment, discover what data already exists
Sometimes the information may already be available through institution-wide surveys or data from the Student Information System. Make sure that you look into what already exists before jumping into developing a new assessment tool. Here are some examples:
- Skyfactor (EBI) Student Affairs Assessment Suite
- National Survey of Student Engagement (NSSE)
- Advising survey
- Exit Survey
- Course Evaluations
- Graduate Survey
- Retention Rates
- Faculty/Student Ratios
- Enrollment trends
- Completion rates
- Diversity of student body (demographics)
- Other surveys administered in Student Affairs (Welcome Week, Orientation, Resident Satisfaction)
- Alumni Survey
Determine whom to assess
If you are assessing a program or service, you will naturally want to study those who utilize it. You still need to determine whether to include all of those students or only a sample.
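If a full census is impractical, a simple random sample can be drawn from the participant list. The sketch below uses only Python's standard library; the student IDs and sample size are hypothetical.

```python
import random

def draw_sample(student_ids, n, seed=None):
    """Draw a simple random sample of n students (no repeats).

    Passing a seed makes the draw reproducible, which helps when
    documenting your sampling procedure in the assessment plan.
    """
    rng = random.Random(seed)
    return rng.sample(student_ids, n)

# Hypothetical population of 500 program participants
population = [f"S{i:04d}" for i in range(500)]

# Sample 50 students for the survey
sample = draw_sample(population, 50, seed=42)
```

A fixed seed lets a colleague re-run the draw and get the same sample; omit it for a fresh random draw each time.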
Determine timeline and details for administration
Next you need to determine when and how you will conduct your assessment:
- Who is responsible for administering the assessment? If conducting qualitative assessment, is the facilitator unbiased? For example, the Director of Residence Life should not facilitate focus groups with housing students to gauge their satisfaction.
- When is the best time to administer it, and does it coincide with another assessment that could lower your response rate (survey fatigue)?
- Will you conduct it just once or over the course of time (a longitudinal study)?
- How will you administer it: online or on paper, in the classroom, or on tablets in the Student Center?
- How will you communicate the assessment to students, and will you incentivize participation?
- What is your budget for the assessment?
Set Performance Targets/Minimum Performance Standard
Once you have selected how you are going to assess your outcomes, you need to set your targets. A target defines the specific level of performance that will count as success in achieving the outcome. For example, suppose we want to determine whether the following learning outcome is being achieved in a Biochemistry program: “Students will demonstrate the ability to dissect a problem into its key features and to test hypotheses through interpretation of experiments.” A final exam is chosen to assess this outcome. The targets are that 80% of students will correctly answer 80% of the exam questions designed to assess application and analysis of key principles, and that 80% of students will correctly answer 75% of the questions designed to assess synthesis and evaluation of new ideas based on those principles. If only 50% of students meet the first target, then we need to look at how to improve that number.
Setting a target should be based upon evidence (if available). It involves knowing where you are now (looking at past levels of student performance) and at what level you are trying to achieve. If you do not have historical data, you might consider using information from outside data sources. Targets are about setting a desired level of performance, not about the level you are currently achieving. Remember that it should be challenging, but realistic.
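A target check like the Biochemistry example is simple to automate once exam scores are in hand. The sketch below is a minimal illustration; the per-student scores are hypothetical, and the thresholds mirror the 80%/80% target above.

```python
def target_met(scores, min_score_pct, min_student_pct):
    """Return True if at least min_student_pct of students scored
    at least min_score_pct on the relevant exam questions.

    scores: per-student fraction of questions answered correctly (0-1).
    """
    if not scores:
        return False
    hitting = sum(1 for s in scores if s >= min_score_pct)
    return hitting / len(scores) >= min_student_pct

# Hypothetical percent-correct scores on the application/analysis questions
scores = [0.90, 0.85, 0.70, 0.95, 0.80, 0.60, 0.88, 0.82, 0.75, 0.91]

# Target: 80% of students answer 80% of these questions correctly
met = target_met(scores, min_score_pct=0.80, min_student_pct=0.80)
# → False: only 7 of 10 students cleared the 80% bar
```

The same function can be reused for the second target by swapping in the synthesis/evaluation scores and a 0.75 score threshold.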
Determine how and who will analyze the data
Now that you have collected and have access to the data, you will need to start analyzing the data. How you analyze the information depends on what type of assessment methods you selected (quantitative vs. qualitative).
Quantitative Analysis – If you have selected quantitative, there are many ways that you can analyze your information. You will want to apply statistical methods to your information including descriptive statistics, correlational statistics, and/or inferential statistics, depending on what you are trying to discover.
Descriptive Statistics (Measures that describes or summarizes responses)
- Central Tendency (What is the average response/score?):
- Mean – the numerical average
- Median – middle score when displayed in order from low to high
- Mode – the most frequently occurring score
- Variability/Dispersion (How consistent or spread out were the scores/responses?):
- Range – the span of scores from highest to lowest
- Standard deviation – a measure of the spread of a set of scores around the mean
- Distribution (How many responses were there in each category?):
- Frequencies – the count or proportion (%) of cases/scores in a category
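All of the descriptive measures above can be computed with Python's standard library. The responses below are hypothetical ratings on a 1-5 satisfaction scale.

```python
import statistics
from collections import Counter

# Hypothetical responses to a 1-5 scale survey question
responses = [4, 5, 3, 4, 4, 2, 5, 4, 3, 4]

mean = statistics.mean(responses)          # numerical average: 3.8
median = statistics.median(responses)      # middle score: 4.0
mode = statistics.mode(responses)          # most frequent score: 4
spread = max(responses) - min(responses)   # range: 3
stdev = statistics.stdev(responses)        # sample standard deviation
freqs = Counter(responses)                 # frequency of each rating
```

`Counter` gives the raw counts per category; dividing each count by `len(responses)` converts them to proportions for reporting.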
Correlation and Inferential Statistics (infer relationships between variables)
- Correlation Statistics (Are things related?):
- Correlation coefficient – a numerical representation of the strength and direction of the relationship between two variables – correlation ≠ causation
- Inferential Statistics (Does one group differ from another?):
- T-test – used to examine the difference between measures for two groups (e.g., mean scores)
- ANOVA (analysis of variance) – similar to the t-test, but used when there are more than two groups to compare
- Chi-square test (χ²) – a measure of the relationship between nominal/categorical variables (e.g., retention data)
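The t-test and chi-square test above can be run in a few lines if SciPy is available. The sketch below uses hypothetical satisfaction scores and retention counts purely for illustration.

```python
from scipy import stats

# Hypothetical 1-5 satisfaction scores for two residence halls
hall_a = [4, 5, 3, 4, 5, 4, 4, 5]
hall_b = [3, 2, 4, 3, 3, 2, 4, 3]

# T-test: do the two halls' mean satisfaction scores differ?
t_stat, p_value = stats.ttest_ind(hall_a, hall_b)

# Chi-square: hypothetical [retained, not retained] counts for two groups
observed = [[90, 10], [70, 30]]
chi2, p, dof, expected = stats.chi2_contingency(observed)
```

A small p-value (conventionally below 0.05) suggests the difference between groups is unlikely to be due to chance alone; the choice of threshold and the practical size of the difference still require judgment.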
Also think about how you will disaggregate the information and what groups you will use for comparisons.
Qualitative analysis – This is very different from quantitative analysis because it focuses on descriptions rather than numbers, and the administrator of the assessment is considered the instrument of analysis. This type of analysis is inductive and comparative, and systematic yet flexible. Below are a couple of methods that can be employed for qualitative analysis:
Generic Data Analysis Example:
- Organize the data
- Familiarize yourself with the data
- Generate categories, themes, and patterns
- Code the data
- Search for alternative explanations
- Write the report
Coding Example:
- Open coding
- Axial coding
- Review additional data
- Thematic coding
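As a highly simplified illustration of the coding and theming steps, the sketch below tallies codes assigned to focus-group excerpts to surface candidate themes. The excerpts and code labels are hypothetical; real coding is an iterative, interpretive process, not a mechanical count.

```python
from collections import Counter

# Hypothetical (excerpt, code) pairs produced during open coding
coded_excerpts = [
    ("I never know which office to call", "confusing processes"),
    ("The advisor really listened to me", "supportive staff"),
    ("Forms get lost between departments", "confusing processes"),
    ("My RA checked in on me every week", "supportive staff"),
    ("Nobody told me the deadline changed", "communication gaps"),
]

# Tally how often each code appears to surface candidate themes
theme_counts = Counter(code for _, code in coded_excerpts)
top_themes = theme_counts.most_common(2)
```

A tally like this can help prioritize which themes to develop further, but the analyst still decides how codes group into themes and what they mean.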
Communicate the results effectively
Not only is it important to conduct assessment, but also to share the results. Think of who would benefit from the information gained from the assessment and determine how you will share it. Will it be posted on the website? Will you present to various groups? Will you share it via social media? Will you develop posters to hang around campus to share with students?
Determine what will be done with the results
Once the results have been shared with the various groups on campus, the most important step is to determine how you will utilize those results. Will you use them to defend offering a new program or service to students as it was determined as a need? Will it help you determine how you can change aspects of a program or service? Remember that once you have implemented initiatives based on assessment results, communicate what has been changed to students. It will show them that we are actually interested in what they have to say.
Evaluation of assessment plan
The final step is to evaluate the assessment plan itself, especially if you plan to conduct the assessment more than once: did the methods, timeline, and targets work as intended, and what should change before the next cycle?