Writing a rubric
This article explains how to write rubrics used to grade student assessments at Southern Cross University.
Before you begin
When designing your rubric, it is vital to consider its intended purpose. The rubric should assess the achievement of those Unit Learning Outcomes (ULOs) covered by that assessment (not the assignment genre). To help determine your marking criteria, ask yourself:
- What knowledge and skills does the assessment measure?
- What observable criteria align with the ULOs being measured?
Writing a rubric is often more difficult without a clear understanding of the criteria that need to be assessed. Before you begin creating a rubric, think carefully about the performance objectives of the assessment and the unit learning outcomes (ULOs) that are mapped to the task. Consider how this assessment task fits into your unit and what type of informative feedback you wish to provide to students to prompt them to reflect on their work.
Tailor rubric design for each assessment
- When designing a rubric for an assessment, it is important that the rubric is specifically tailored to the learning outcomes of that assessment. This process is called constructive alignment. For more information on constructive alignment and rubrics, see the knowledge base article Rubrics.
- Avoid using generic rubrics for multiple assessment items. This style of rubric does not address the unique learning outcomes (LOs) of each assessment task, making the rubric an invalid instrument.
- You can view, edit and reuse elements from other rubrics as needed; however, it is critical to double-check that the final rubric criteria you develop are relevant to the learning outcomes for the assessment task.
Backwards rubric design
One design strategy to consider is to start with the rubric design first, map the rubric descriptors to the unit learning outcomes, and then develop a suitable assessment task based on the learning outcomes and rubric descriptors. If your unit is still in the early design stages, this approach can help you avoid being locked into an assessment design that may not be the best fit.
Writing a rubric
When writing a rubric, start with the learning outcomes for the assessment task and a list of the task requirements. List the most relevant objectives of the assessment, ensuring these align with the learning outcomes (LOs) mapped to the assessment task. In this way, students completing the objectives of the task provide evidence of their achievement of the learning outcomes. The video below provides some ideas to help you plan the design of your rubric and develop suitable marking criteria.
1. Choose the criteria that you will assess
Start with a list of the criteria you will assess. Look for similarities and consider how these might be chunked into more general groupings. For example, writing mechanics could cover the areas of spelling, grammar, tone, style and essay structure. It is critical that the criteria are measurable or observable.
When choosing criteria, consider:
What evidence will there be of whether or not students have achieved them?
These criteria then form the basis of the rubric design.
How many criteria should be included in a rubric?
Choose four to six criteria that satisfy the learning outcomes. The exact number will depend on the complexity of the task, the number of objectives and how many learning outcomes are being assessed. Fewer than four compromises effective, targeted feedback. More than six compromises marking efficiency, along with inter-rater reliability (see Tac Recommendations and guidance for writing rubrics).
Considerations when choosing criteria
- Criteria must link to the learning outcomes of the assessment and the unit. For example, if the learning outcome is "apply an understanding of the viral lifecycle to the progression of human disease", a rubric marking criterion could be use of evidence, with a descriptor such as "the student links findings from the literature to the consequences for human disease".
- Avoid the use of unrelated criteria, such as deducting marks for late work, or student attendance and participation. Penalties for late work are already covered in the SCU Assessment Procedures policy. Where student attendance and participation are explicitly linked to a learning outcome (e.g. professional conduct), the natural consequence for students who attend less and/or don’t participate is that they demonstrate a poor standard against that criterion.
Each criterion should:
- Provide an observable link between the ULOs and assessment tasks (constructive alignment).
- Use terminology from ULOs.
- Relate to the assessment task in the assessment brief.
- Begin with a verb to indicate the level of cognition required and provide content and context (Bloom's verbs).
- Be written without reference to performance quality (that is what standards are for).
- Be clear, using plain English where possible, so anyone can understand what is required.
- Address the work produced, not the student personally (i.e., "this work" rather than "you").
- If possible, include only a single concept or element so that each criterion only deals with one observable characteristic.
- Be weighted so:
  - It is clear what each criterion is worth (shown as a percentage weighting), prioritising the achievement of learning;
  - The cognitive complexity and effort for achieving the criterion are taken into account;
  - All criteria total 100% (see the worked example after this list).
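To make the weighting rules concrete, here is a minimal Python sketch of how percentage weightings that total 100% combine into a final mark for a weighted analytic rubric. The criteria names, weights and marks are hypothetical examples, not SCU-prescribed values.

```python
# A minimal sketch (hypothetical criteria, weights and marks) of how
# criterion weightings combine into a final mark for an analytic rubric.

criteria_weights = {                    # percentage weighting per criterion
    "Use of evidence": 30,
    "Analysis and argument": 40,
    "Structure and organisation": 20,
    "Writing conventions": 10,
}

# Weightings must total 100%.
assert sum(criteria_weights.values()) == 100

# Hypothetical marks awarded per criterion (as percentages).
marks_awarded = {
    "Use of evidence": 78,
    "Analysis and argument": 68,
    "Structure and organisation": 72,
    "Writing conventions": 55,
}

# The final mark is the weighted average of the criterion marks.
final_mark = sum(marks_awarded[c] * w / 100 for c, w in criteria_weights.items())
print(f"Final mark: {final_mark:.1f}%")  # 70.5% in this example
```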
Note on criteria for referencing and writing conventions
- Avoid assigning a criterion for referencing if it's not in the ULOs.
- Only if a unit has ULOs dedicated to threshold academic skills should the rubric use explicit criteria to assess academic writing and referencing.
- Poor referencing may involve a breach of academic integrity. Even if there is no explicit criterion for referencing, there is a standard expected based on Academic Integrity Guidelines.
- If there is no related ULO, yet you consider it essential to include a criterion for referencing (or writing conventions), assign only a low weighting (e.g., 5%) to one criterion for referencing or writing conventions.
2. Setting up a rubric
First, assess whether a holistic or an analytic rubric is more appropriate for the type of assessment. Both have advantages and disadvantages, depending on the context in which they are used, and how they are used. Typically, most high-stakes written assessment at Southern Cross University is assessed using analytic rubrics. For more information on the difference between holistic and analytic rubrics, see this article: Types of rubrics.
Once the type of rubric to use has been established, set up a table that aligns with the format of the rubric type (Refer to Table I and Table II below). This can be done either using an Excel spreadsheet or using tables in a Word document.
Using the SCU Grade Descriptors for rubrics
- The University’s Grade Descriptors apply to all assessment tasks. Avoid using other performance levels when using rubrics (such as competent, advanced, poor) as these confuse markers and students alike. This also supports the validity of marking and assessment results.
- See the Grade Description guidelines and Rule 3 Coursework Awards to learn more about the grading standard policies in use at Southern Cross University.
- Turnitin digital rubrics can be used to grade student work, which can automate the calculation of rubric percentage ranges and criteria weightings.
Students must submit written assessments to Blackboard using Turnitin. Following the SCU Assessment, Teaching and Learning Procedures, all assessment pieces should, where appropriate, have an associated rubric for grading. Therefore, you need to upload marking criteria and details of expected student performance, in the form of a rubric where appropriate, before the term starts. The two main tools for building and uploading rubrics are Turnitin itself and the Blackboard rubric tool.
Using digital rubrics
Even when using a digital rubric (in either Blackboard or Turnitin), it is recommended to create a draft of the rubric in Excel or Word first to avoid losing progress.
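If you prefer to script the Excel draft rather than build the table by hand, the following is a minimal Python sketch (assuming the openpyxl package is available; the criterion labels and file name are placeholders) that generates the analytic layout shown in Table I as a spreadsheet.

```python
# A minimal sketch for generating an analytic rubric draft as an Excel
# spreadsheet with openpyxl; criterion labels and file name are placeholders.
from openpyxl import Workbook

grade_bands = [
    "High Distinction (85% - 100%)", "Distinction (75% - 84%)",
    "Credit (65% - 74%)", "Pass (50% - 64%)", "Marginal Fail (35% - 49%)",
    "Fail (1% - 34%)", "Not Addressed (0%)",
]
criteria = ["Criteria 1", "Criteria 2", "Criteria 3", "Criteria 4", "Criteria 5"]

wb = Workbook()
ws = wb.active
ws.title = "Rubric draft"

# Header row: corner cell for the criteria column, then the grade descriptors
# ordered left to right from High Distinction.
ws.append(["Criteria"] + grade_bands)

# One row per criterion, with placeholder descriptor cells to complete later.
for criterion in criteria:
    ws.append([criterion] + ["Criteria descriptor statement"] * len(grade_bands))

wb.save("rubric_draft.xlsx")  # hypothetical file name
```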
Table I: Example Rubric – Analytic
| Criteria | High Distinction (85% - 100%) | Distinction (75% - 84%) | Credit (65% - 74%) | Pass (50% - 64%) | Marginal Fail (35% - 49%) | Fail (1% - 34%) | Not Addressed (0%) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Criteria 1 | Criteria descriptor statement | Criteria descriptor statement | Criteria descriptor statement | Criteria descriptor statement | Criteria descriptor statement | Criteria descriptor statement | Criteria descriptor statement |
| Criteria 2 | Criteria descriptor statement | Criteria descriptor statement | Criteria descriptor statement | Criteria descriptor statement | Criteria descriptor statement | Criteria descriptor statement | Criteria descriptor statement |
| Criteria 3 | Criteria descriptor statement | Criteria descriptor statement | Criteria descriptor statement | Criteria descriptor statement | Criteria descriptor statement | Criteria descriptor statement | Criteria descriptor statement |
| Criteria 4 | Criteria descriptor statement | Criteria descriptor statement | Criteria descriptor statement | Criteria descriptor statement | Criteria descriptor statement | Criteria descriptor statement | Criteria descriptor statement |
| Criteria 5 | Criteria descriptor statement | Criteria descriptor statement | Criteria descriptor statement | Criteria descriptor statement | Criteria descriptor statement | Criteria descriptor statement | Criteria descriptor statement |
Ordering the SCU Grade Descriptors in a rubric
It is recommended to write the Grade Descriptors from left to right, starting with the highest level of performance (High Distinction), as illustrated in the rubric above. This encourages students to aim higher.
More recommendations for allocating grade standards are available here: Tac Recommendations and guidance for writing rubrics.
Table II: Example Rubric – Holistic
| Grade Descriptor | Marking criteria |
| --- | --- |
| High Distinction (85% - 100%) | Collective descriptor statement |
| Distinction (75% - 84%) | Collective descriptor statement |
| Credit (65% - 74%) | Collective descriptor statement |
| Pass (50% - 64%) | Collective descriptor statement |
| Fail (0% - 49%) | Collective descriptor statement |
3. Writing descriptors
Once the rubric table has been created, the next step is to write the rubric descriptors. A descriptor is a clear and concise statement that describes the performance level for a specific criterion within each grade descriptor. In other words, descriptor statements identify the evidence you seek to determine whether a student's work has met the criterion and to what standard. It is important to write descriptors that students can readily understand and apply, so avoid complex terminology. When writing a descriptor, use positive language focusing on what is achieved, rather than what is lacking. Students should be able to distinguish what is lacking by comparing the descriptors between grades within the same criterion. For this reason, it is important to avoid vague notions with variable interpretations, as these undermine the accuracy and validity of marking and provide limited information to students about what is required and expected.
When writing descriptors for each criterion, ask yourself:
When marking this assessment and judging performance for this criterion, what observable characteristics am I looking for, and what does this look like at the different grade levels?
Writing good descriptors
- Focus on observable characteristics.
- Start by noting the observable characteristics of students' work that you expect for each criterion. These should align with the ULOs measured in the assessment task and the verb descriptors used.
- When writing the descriptor, consider the extent to which these observable characteristics need to be demonstrated to meet the standard of each grade band.
- You may want to begin with the "Pass" descriptor and describe performance at the middle of the grade level range. Write the HD band next so you have the low Pass and the perfect HD, which provides the boundaries for the rest to fall within.
- Try to use positive language for all the band descriptors except Fail, where it is crucial to describe the type of performance or work that does not meet the standards of the assessed learning outcomes. A Pass shouldn't be a Pass if the criterion has been met inadequately or unsatisfactorily. Instead of using negatives in a band, where possible include the positive reciprocal in the next higher band descriptor; in this way, the rubric shows what is necessary to improve from one band to the next.
- Describe each observable feature for the criterion across every performance level.
- A good descriptor explains to students what they need to do to reach a standard, rather than focusing on what they haven’t done. (e.g. 'unreferenced sources were used', rather than 'no references')
- Sometimes the progression between standards is clearly understood using quantitative adjectives or adverbs (e.g. '10 references provided').
- Often the difference between a Credit, Distinction, High Distinction etc. is seen in the way something is done (e.g. 'comprehensively addressed', 'succinct and clear' or 'needs further development').
- When writing a good descriptor it is important to remember to explain and/or contextualise subjective adjectives (e.g. poor, good, excellent) by referring to the activities students must perform at the standard in question. For example, instead of 'Excellent data analysis and conclusion', be more specific: 'Avoids causal claims when the data does not permit clear conclusions and explores under what assumptions a causal claim would be more or less valid'. Descriptors should include qualitative, observable differences, as opposed to relying only on quantitative statements, or a numerical addition of how many elements students got right.
- When trying to write good descriptors for a rubric, it might be useful to refer to Bloom's Taxonomy for insight into appropriate wording. For more information on Bloom's Taxonomy, see this article from Vanderbilt University: Bloom's Taxonomy
4. Rubric design evaluation
Once the assessment criteria have been determined, the rubric type has been chosen, the rubric table has been created and the criteria, grade descriptors and descriptors have been added, the last step is to evaluate the rubric.
According to the SCU policy for Assessment, Teaching and Learning Procedures, it is a requirement that: "All assessment tasks must be peer-reviewed by an academic colleague or the Course Design Team within the Faculty, College, Discipline, Course or Educational Collaboration" (12). This includes: "Alignment of the task deliverables and marking criteria with the Unit Learning Outcomes" (12b) and "Appropriateness of the marking criteria and expected student performance standards". It is also required that "The Unit Assessor will develop, implement and articulate a consistent moderation process for each task, at all locations and partner collaborations, in line with the Assessment Moderation Guidelines" (42).
To evaluate the rubric, cross-match the marking criteria and descriptors in the rubric with the task instructions and learning outcomes, checking for any inconsistencies (a simple scripted check is sketched after the questions below).
Consider:
- Is the rubric assessing what students have been instructed to do?
- Does the rubric assess the students’ performance of the unit learning outcomes?
- Does the weighting of each criterion reflect the work required to demonstrate the learning outcomes and task requirements?
- Is there anything being assessed in the rubric which is not related in some way to the learning outcomes? Is there anything important missing?
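These questions can also be checked mechanically against a drafted rubric. Below is a minimal Python sketch, using hypothetical ULO codes, criteria and weightings, of the cross-matching step: it confirms every criterion links to an assessed ULO, every assessed ULO is covered by at least one criterion, and the weightings total 100%.

```python
# A minimal sketch (hypothetical ULOs, criteria and weights) of the
# cross-matching step when evaluating a rubric.

ulos_assessed = {"ULO1", "ULO2", "ULO3"}   # ULOs mapped to this task

rubric = {  # criterion -> (weighting %, ULOs it provides evidence for)
    "Use of evidence": (30, {"ULO1"}),
    "Analysis and argument": (40, {"ULO2"}),
    "Structure and organisation": (20, {"ULO3"}),
    "Writing conventions": (10, {"ULO3"}),
}

# Weightings must total 100%.
assert sum(weight for weight, _ in rubric.values()) == 100

# Every criterion should relate to at least one assessed ULO ...
for criterion, (_, linked) in rubric.items():
    assert linked & ulos_assessed, f"{criterion} is not linked to an assessed ULO"

# ... and every assessed ULO should be covered by at least one criterion.
covered = set().union(*(linked for _, linked in rubric.values()))
missing = ulos_assessed - covered
assert not missing, f"ULOs without a criterion: {missing}"
```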
Evaluate your rubric
A useful resource for evaluating and moderating a rubric is the Rubric design evaluation template.