Recommendations and guidance for writing rubrics

The following article on rubric best-practice guidelines was approved by the Teaching and Learning Assessment Committee (TAC) and ratified in May 2023. Unit Assessors who wish to deviate from the recommendations and guidelines provided in this guide should speak with their course coordinator or their faculty Associate Dean of Teaching and Learning before they proceed. For example, an exception might be needed for a Satisfies Requirements (SR) assessment task that requires competency-based methods and does not count towards a unit grade.

Under the SCU Assessment, Teaching and Learning Procedures, all graded assessment tasks must be assessed using clear, explicit criteria in the form of rubrics, where appropriate, aligned to the Unit Learning Outcomes.

An appropriately designed rubric meets the assessment policy guidelines described in the sections below.

What is an Analytic Rubric?

Analytic assessment rubrics form a matrix to guide markers in making consistent and reliable judgements about the quality of student work. Rubrics show students what is needed to complete the assessment and achieve the associated ULOs. Rubrics also provide feedback to students about the quality of their work and how they might improve their work in the future.

A rubric consists of three essential elements (Figure 1): Criteria, Standards (or performance levels), and Descriptors.

Figure 1: Analytic rubric that meets the Southern Cross University guidelines

Some rubric standards are not used for Final Grades

High Distinction+, Marginal Fail and Not Addressed are divisions used to mark criteria in a particular assessment piece using a digital Turnitin rubric. These standards are not used as a Final Grade for a unit as defined in Rule 3: Coursework Awards. These divisions are designed to support more rigorous marking of what the student has or has not achieved in a particular assessment task, rather than a student's overall graded result for a unit of study.


Guiding principles and recommendations for writing rubric criteria

When designing your rubric, it is vital to consider its intended purpose. The rubric should assess the achievement of those ULOs covered by that assessment (not the assignment genre). To help determine your marking criteria, ask yourself:

  • What knowledge and skills does the assessment measure?
  • What observable criteria align with the ULOs being measured?

Ensuring Constructive Alignment

Constructive alignment is a foundational element of assessment and rubric design embedded in policies and teaching practices at Southern Cross University. In constructive alignment, assessment tasks measure the attainment of the unit learning outcomes using 'aligned' rubric criteria.

To ensure alignment when writing rubric criteria, consider:

  • Actions: the verbs or behaviours used. What will students demonstrate or do?
  • Content: the skills or content being covered. What is this about knowing or doing?
  • Context: the situations, conditions or circumstances. In what context and to what level will students demonstrate this?

Use Bloom's verbs to help identify exactly what students must 'do' or demonstrate to achieve the ULOs. Then clarify this by considering the content and context of the task. Figure 2 below illustrates an alignment between two unit learning outcomes (ULOs), an assessment task, and three rubric criteria.


Figure 2: Developing assessment and criteria constructively aligned with unit learning outcomes (ULOs).

More about Constructive Alignment

The following article provides a more in-depth explanation: Constructive Alignment in course design.

How many criteria should be used?

While there is no ‘ideal’ number, it is advisable to keep the number of criteria between four and six (ITaLI, 2023). Fewer than four compromises effective targeted feedback. More than six compromises marking efficiency, along with inter-rater reliability[i].

Each criterion should:

  1. Provide an observable link between the Unit Learning Outcomes (ULOs) and assessment tasks (constructive alignment).
  2. Use terminology from ULOs.
  3. Relate to the assessment task in the assessment brief.
  4. Begin with a verb to indicate the level of cognition required, and provide content and context (Bloom's verbs).
  5. Be written without reference to performance quality (that is what standards are for).
  6. Be clear, using plain English where possible, so anyone can understand what is required.
  7. Address the work produced, not the student personally (i.e., "this work" rather than "you").
  8. Where possible, include only a single concept or element so that each criterion deals with one observable characteristic.
  9. Be weighted so:
    • it is clear what each criterion is worth (shown in percentage weighting), prioritising the achievement of learning;
    • the cognitive complexity and effort for achieving the criterion are taken into account;
    • all criteria total 100%.

A note on criteria for referencing and writing conventions

  • Avoid assigning a criterion for referencing if it's not in the unit learning outcomes.
  • Only if a unit has ULOs dedicated to threshold academic skills should the rubric use explicit criteria to assess academic writing and referencing.
  • Poor referencing may involve a breach of academic integrity. Even if there is no explicit criterion for referencing, there is a standard expected based on Academic Integrity Guidelines.
  • If there's no related ULO, yet you consider it essential to include a criterion for referencing (or writing conventions), assign only a low weighting (e.g., 5%) to one criterion for referencing or writing conventions.


Guiding principles and recommendations for allocating grade standards

Grade standards represent the levels of performance across the range of achievement for an assessment. The following principles and recommendations are designed to support the use of Turnitin rubrics, which are used for submitting all written work at Southern Cross University as per the Text Matching Software Policy.

Using Rubrics with SCU Grade Standards:

  1. Use High Distinction, Distinction, Credit, Pass, and Fail (see also points 3, 6 and 7 below) to be consistent with SCU grade standards.
  2. The range for each of these standards should also be consistent with the SCU Grade Description Guidelines and Rule 3 - Coursework Awards.
  3. When using Turnitin to create rubrics, you must include an HD+ (100%) band in a Standard Turnitin Rubric to allow marks to be calculated accurately by Turnitin Feedback Studio.
  4. It is not recommended to use Turnitin’s Custom Rubric or Qualitative Rubric types as these do not support clarity and consistency of marking.
  5. Structure the rubric with the highest grade on the left, working towards the lowest on the right. When reading the rubric, students will see the HD first.
  6. Divide the Fail (0-49%) range into three performance standards: Marginal Fail, Fail, and Not Addressed (or Zero). These multiple fail categories are consistent with other university practices (e.g., VU and 26 participating higher education institutions comprising the College of Peers). For more on the rationale for multiple fail bands, see endnote [ii].
  7. The values for Standard Turnitin Rubric bands are: HD+: 100%, HD: 92%, D: 79%, C: 69%, P: 57%, MF: 42%, F: 17%, NA: 0%. Using these band midpoints will ensure that Turnitin Rubric grade calculations align with the Rule 3 - Coursework Awards graded percentages and distribution. A worked sketch of this calculation follows this list.
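To illustrate how these band values combine with criterion weightings, here is a minimal sketch of the calculation in Python. The criteria, weightings and awarded bands are hypothetical, and the sketch assumes (rather than documents) that a Standard Turnitin Rubric applies each criterion's percentage weighting to the percentage value of the band selected for that criterion and sums the results.

```python
# Minimal sketch (not Turnitin's actual implementation): combining the
# recommended band values with hypothetical criterion weightings.

# Band values (midpoints) recommended in point 7 above.
BAND_VALUES = {
    "HD+": 100, "HD": 92, "D": 79, "C": 69,
    "P": 57, "MF": 42, "F": 17, "NA": 0,
}

# Hypothetical criteria: percentage weighting and the band awarded by the marker.
marked_criteria = [
    {"criterion": "Analyse the case using relevant theory", "weight": 40, "band": "D"},
    {"criterion": "Evaluate evidence to justify recommendations", "weight": 40, "band": "C"},
    {"criterion": "Communicate in a professional report format", "weight": 20, "band": "HD"},
]

# The criteria weightings must total 100% (see 'Each criterion should' above).
assert sum(c["weight"] for c in marked_criteria) == 100

# Overall percentage = sum of (weighting x band value) across all criteria.
overall = sum(c["weight"] / 100 * BAND_VALUES[c["band"]] for c in marked_criteria)
print(f"Overall assessment result: {overall:.1f}%")  # 77.6% for these hypothetical marks
```

In this hypothetical example, the three weighted band values combine to an overall result of 77.6%, which the marker can then check against the graded percentages in Rule 3 - Coursework Awards.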


Guiding principles and recommendations for writing rubric descriptors

Descriptors are observable characteristics of a student's work for each criterion. They are designed to differentiate performance across each grade standard. Descriptor statements identify the evidence you seek to determine if a student's work has met the criterion and to what standard. When writing descriptors for each criterion, ask yourself:

When marking this assessment and judging performance for this criterion, what observable characteristics am I looking for, and what does this look like at the different grade levels?  

When writing rubric descriptors:

  1. Focus on observable characteristics.
  2. Start by noting the observable characteristics of students' work that you expect for each criterion. These should align with the ULOs measured in the assessment task and the verb descriptors used.
  3. Avoid terms like "good", "excellent", and "adequate", as it is unclear what students and markers should observe in the work that makes it "good", "excellent", or "adequate".
  4. When writing the descriptor, consider the extent to which these observable characteristics need to be demonstrated to meet the standard of each grade band.
  5. You may want to begin with the "Pass" descriptor and describe performance at the middle of the grade level range. Write the HD band next so you have the low Pass and the perfect HD, which provides the boundaries for the rest to fall within.
  6. Use positive language for all band descriptors except Fail, where it is crucial to describe the type of performance or work that does not meet the standards of the assessed learning outcomes. A Pass shouldn't be a Pass if the criterion has been met inadequately or unsatisfactorily. Where possible, instead of stating a negative in a band, include the positive reciprocal in the next higher band descriptor; in this way, the rubric shows what is needed to improve from one band to the next.
  7. Describe each observable feature for the criterion across every performance level.


Practical rubric considerations

Students must submit written assessments to Blackboard using Turnitin. Following the SCU Assessment, Teaching and Learning Procedures, all assessment pieces should, where appropriate, have an associated rubric for grading. Therefore, you need to upload marking criteria and details of expected student performance, in the form of a rubric where appropriate, before the term starts.

The two main tools to build and upload rubrics are Turnitin itself and the Blackboard rubric tool.

Evaluate your rubric

The Centre for Teaching and Learning has developed a Rubric design evaluation tool that can help assess whether a rubric design aligns with good academic practice and Southern Cross University assessment policies.


Frequently Asked Questions

Please email any further questions about these recommendations to the Centre for Teaching and Learning: ctl@scu.edu.au.

What about Holistic Rubrics?

This guide pertains to two-dimensional analytic rubrics, the most common form of rubric. There are also holistic rubrics, which are one-dimensional, single-criterion rubrics. Holistic rubrics are useful for abstract or creative tasks (e.g., artworks, performances, presentations). Many of the same principles in this guide apply to holistic rubrics.

If you are unsure whether to use a Holistic or Analytic rubric, you can discuss this with your course coordinator or contact the Centre for Teaching and Learning via email: ctl@scu.edu.au. The following knowledge base article Types of Rubrics also offers guidance.

Have the Final Grade categories changed to include Marginal Fail, etc.?

No changes have been made to Rule 3 - Coursework Awards or the Grade Description Guidelines. The divisions High Distinction+, Marginal Fail and Not Addressed only support the grading of rubric criteria in a particular assessment task. These divisions better support students by providing feedback about how well they performed against particular criteria in an assessment task (e.g. where there was a "marginal fail" as opposed to the criterion being "not addressed"). Also see endnote [ii] below.

Can I use generic rubric criteria such as presentation, research or grammar?

Although the use of 'generic' rubrics saves time initially, this is a false economy. Rubrics should be designed to assess the task against the learning outcomes and task requirements, ensuring that they are valid instruments. 

Generic rubrics:

  • fail to establish student progress against learning outcomes
  • are not weighted to reflect the relevance of the criteria to the unit
  • include extraneous criteria not relevant to demonstrating learning outcomes
  • confuse students about the purpose and requirements of the task

Only use criteria that are relevant to demonstrating learning outcomes. If students have to use research, write the criterion in a way that links it to a learning outcome (e.g. Summarise research to explain physics principles). Also refer to 'A note on criteria for referencing and writing conventions' in the writing rubric criteria section above.

What if I don't use a rubric to grade an assessment?

Some low-stakes assessment tasks (e.g. online quizzes) are not well suited to using a rubric; however, rubrics are appropriate for most forms of assessment. Consult with your course coordinator if unsure whether using a rubric is appropriate.

Do I have to show the grading rubric to students?

Under the SCU Assessment, Teaching and Learning Procedures, all graded assessment tasks must be assessed using clear, explicit criteria in the form of rubrics, where appropriate, aligned to the Unit Learning Outcomes, and provided to students as part of the assessment task one week before term starts.

Rubrics provide guidance to students by explicitly communicating how their work will be graded, and what they need to focus on when addressing the marking criteria. Without the information provided in a rubric, students may focus on irrelevant details, or fail to adequately address the unit learning outcomes because it is unclear how they will be assessed. Providing rubric transparency to students reduces student frustration, provides clear justification for grading, and assists markers and tutors in providing valuable feedback to students.


Further information

The following resources provide further guidance on the design and writing of rubrics, and how these are used for assessing student work at Southern Cross University: 


The Centre for Teaching and Learning also runs workshops and supports academics with the implementation of rubrics. You can contact the centre by emailing ctl@scu.edu.au.

References

ITaLI. (2023). Creating and using rubrics. Retrieved from Institute for Teaching and Learning Innovation (ITaLI) - The University of Queensland: https://itali.uq.edu.au/teaching-guidance/assessment/creating-and-using-rubrics

Endnotes

[i] There is no definitive or "ideal" number of criteria in a rubric, as it depends on the purpose of the assessment and the complexity of the task being evaluated. However, the academic literature does explore how many criteria a rubric should have: Stiggins (2001) recommends a small number of criteria (typically three to five); Moskal (2000) recommends using no more than six criteria in a rubric; Reddy & Andrade (2010) advise that the number of criteria in a rubric should be "kept to a minimum"; and Wilson & Sloane (2000) recommend four or five.

[ii] There are justifiable reasons for adopting several (narrower) fail bands instead of one (broad) fail band.

  • Using only one "Fail" level standard across a range of 0-49% can be problematic for achieving marking consistency and for clear feedback on student performance within this single standard.
  • A single band worth 49% skews the grade for the assessment downward, as a single 49% band weighs more heavily than the other bands (worth 15% each).
  • Turnitin's rubric uses midpoints instead of grade ranges, hence the default for any Failed criterion would always be the midpoint (25%), irrespective of performance standards within the Fail range, as illustrated below.
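As a brief illustration of this midpoint effect (hypothetical criterion weighting, and assuming the band value is applied to the criterion weighting as in the earlier sketch in this guide):

```python
# Hypothetical: a criterion weighted at 20% of the assessment, for work that
# only just misses a Pass.
weight = 0.20

# With a single Fail band spanning 0-49%, the midpoint (~25%) is applied
# regardless of how close the work came to passing.
single_fail_band = weight * 25      # 5.0 marks out of 20

# With a Marginal Fail band valued at 42%, near-pass work is credited more fairly.
marginal_fail_band = weight * 42    # 8.4 marks out of 20

print(single_fail_band, marginal_fail_band)
```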

In practice, the use of Marginal Fail, Fail, and Not Addressed (or 'Zero') performance standards, with clear descriptors associated with the observable characteristics of each standard, serves to:

  • help to more accurately reflect the varying degrees of quality in student work;
  • allow for more granularity in the grading process by having separate categories for assignments that barely missed the pass mark and those that were significantly off-target;
  • better differentiate between students who made minor mistakes and those who need significant improvement in their work.

If a student receives a grade in the Marginal Fail range, they may be more likely to be inspired to improve their work in order to pass, rather than disheartened by an undifferentiated Fail in a single broad band.

