Hi Guys

For a while now I have wanted to write this blog on the requirements relating to the recording of assessment evidence. The first thing to note is that there is nothing new about the requirement to adequately record assessment evidence and assessment decisions. Assessors have always been required to record their judgement of competency in order for a unit of competency to be issued. This requirement was reinforced over the years by training package Assessment Guidelines, which required the assessor to:

  • Establish and oversee the evidence-gathering process to ensure its validity, reliability, fairness and flexibility;
  • Collect appropriate evidence and match compatibility to the elements, performance criteria, range statement and evidence guide in the relevant units of competency;
  • Evaluate the evidence in terms of validity, consistency, currency, authenticity and sufficiency;
  • Record details of evidence collected; and
  • Make a judgement about the candidate’s competence based on the evidence and the relevant unit(s) of competency.

In addition to this, most State funding contracts include a requirement for the retention of all assessment evidence for a specific period; in NSW, for example, it is currently two years under the APL contract. Most of the old State regulators also had a requirement to retain evidence of the assessment for review at a regulatory audit. In QLD, the RTO was required to retain 100% of completed assessment items for the duration of the RTO’s appeal period and, after that period, 10% of the RTO’s completions.

I have touched on two very different requirements here. One relates to the recording of assessment evidence and the other relates to the retention of these records for review during a regulatory audit. The requirement that has been reasonably well defined over the years (in most States, not all) is the record retention requirement. Most regulators had a specific policy that defined what records were to be retained and for how long. I have experienced situations, when doing renewal of registration audits for VETAB, where there was absolutely no evidence of assessment other than the RTO’s student management system showing the units and qualifications issued. If the RTO was delivering training under a fee-for-service arrangement in NSW, technically there was no requirement for it to retain any evidence used to make the assessment decision.

There has always been this urban myth in NSW that “the RTO is required to retain a sample for review at an audit”. Well... my view is, and has always been, that if it is not written down as a regulatory requirement then it is not a requirement, and certainly not something you can make an RTO non-compliant for (although many were). I specifically sought out an answer to this issue within VETAB at a senior level and was told that “it is ultimately up to each RTO to determine their own record management arrangements based on their own statutory, contractual and regulatory situation”. Seriously... how wishy-washy. It’s policy statements like this that lead to gross inconsistency in regulation between auditors. Auditors need to know exactly what the standard is and, equally, RTOs just want to know what the requirement is so they can put arrangements in place to comply.

Enter Stage Right – ASQA

Let’s fast forward to 22nd June 2012. We now have a National VET Regulator and, to its credit, it has issued very clear guidance in this space. These are the facts:

  • The National Vocational Education and Training Regulator Act 2011, Part 2, Division 1, Subdivision B—Conditions of registration, Condition 28—compliance with directions given by the National VET Regulator says: (1) An NVR registered training organisation must comply with any general directions given by the National VET Regulator, in writing, to organisations on the way in which the VET Quality Framework or other conditions of this Subdivision are to be complied with. (2) The National VET Regulator must publish a general direction on its website. You can access that requirement on ComLaw: Click Here
  • On the 22nd June 2012, the National VET Regulator issued a General Direction titled Retention requirements for completed student assessment items. You can access that requirement on the ASQA site: Click Here
  • Just incidentally, the National Vocational Education and Training Regulator Act 2011, Part 6, Division 1, Subdivision A, Section 111 also says that if an RTO contravenes a condition of its registration it can receive a maximum civil penalty (fine) of $40,800.00. The maximum penalty is 240 penalty units, and on 23rd Jan 2013 the Commonwealth increased the value of a penalty unit to a whopping $170.00 (240 × $170 = $40,800). Go the Commonwealth!

The culture of tick and flick

Now before I talk in depth about this General Direction, I just want to talk briefly about the culture of “tick and flick” in our VET sector. We have all seen it and, regardless of all the effort to raise standards in the sector, tick and flick is still going strong. Why? I put it down to a range of things: very poorly delivered Certificate IV in Training and Assessment programs (five days, give me a break!), inadequate quality arrangements in RTOs to monitor and improve assessment practices, and an increasing lack of competence in assessment design. These are systemic failings at the sector level, but they are not the primary causative mechanism. I believe the primary cause is commercial pressure to do more with less, imposed by RTO managers and owners: inappropriate assessor-to-candidate ratios, insufficient time allocated to assessment and inadequate resourcing to create valid assessment opportunities. I do not point the finger only at private RTOs; I see tick and flick happening in all RTOs, including private, public, enterprise and government providers.

The classic tick and flick is where you copy and paste performance criteria from the unit of competency into a table with a yes/no column on the right-hand side. Commonly, performance criteria are not written as observable behaviours, nor do they relate to a discretely observable task. I often see these tools unsupported by any suitable assessment instructions that might guide the assessor and the candidate on what the assessment activity actually involves or what the candidate is being observed doing. I see these tick and flick documents in student files without any comments, simply a tick, tick, tick, tick, tick in the right-hand column. They may be signed and dated, but frequently they are not.

Now you may think I am being overly negative, but these observations are based on my conduct of hundreds of regulatory and internal audits of RTOs over a long period of time. On average, Newbery Consulting undertakes about 5-6 RTO audits a month as we prepare clients for their initial registration or renewal of registration. So if you are an existing RTO, you seriously need to reflect on the practice of tick and flick and consider whether it is happening in your RTO. It is not valid assessment and certainly does not comply with the new requirements introduced in the General Direction, as I will highlight next.

The General Direction

This is an interesting document! Its title gives the impression that it’s about the retention of student assessment items, and it is. It very clearly states that:

“An RTO is required to securely retain, and be able to produce in full at audit if requested to do so, all completed student assessment items for each student, as per the definition above, for a period of six months from the date on which the judgement of competence for the student was made.”
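
As an aside for RTOs that track judgement dates in a student management system, here is a minimal sketch of how that six-month retention window might be calculated. It is illustrative only; the function and field names are my own, and nothing here is prescribed by ASQA or the General Direction.

```python
from datetime import date

# Third-party library (pip install python-dateutil); used here because the
# standard library has no month arithmetic.
from dateutil.relativedelta import relativedelta

RETENTION_MONTHS = 6  # per the General Direction of 22 June 2012


def earliest_disposal_date(judgement_date: date) -> date:
    """Return the first date completed assessment items could be disposed of.

    Note: the retention clock runs from the date the judgement of
    competence was made, not from enrolment or completion of training.
    """
    return judgement_date + relativedelta(months=RETENTION_MONTHS)


# Hypothetical example: judgement of competence made on 10 March 2014.
print(earliest_disposal_date(date(2014, 3, 10)))  # 2014-09-10
```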

So, the point I made earlier about the need for the regulator to define a period for assessment record retention has been achieved. Way to go ASQA! But in this document the devil is absolutely in the detail. I believe that the secondary requirement relating to the recording of assessment evidence is still flying under the radar of many RTOs. Certainly, when this requirement was released, it simply reinforced the advice we have been providing to our clients over the last seven years. So what is the devil in the detail? The General Direction defines completed student assessment items as follows:

“The actual piece(s) of work completed by a student or evidence of that work, including evidence collected for an RPL process. An assessor’s completed marking guide, criteria, and observation checklist for each student may be sufficient where it is not possible to retain the student’s actual work. However, the retained evidence must have enough detail to demonstrate the assessor’s judgement of the student’s performance against the standard required.”

In my opinion, this is a very deliberate definition intended to place a higher standard on the type and detail of evidence that is retained by the RTO and available at audit. My view is that this is a deliberate strategy to eradicate tick and flick from the VET sector and to require RTOs to be able to justify their assessment decisions and the units of competency they issue. Ultimately this is about ensuring national consistency and confidence in the National Skills Framework. It is a mechanism to ensure that the assessment undertaken by an RTO can be justified against a common benchmark: the unit of competency. I see the outcomes of regulatory audits come across my desk about twice a week. Some of these are from our clients; some are from RTOs seeking assistance following a negative audit result. I can tell you with confidence that the regulator expects retained assessment evidence to have enough detail to demonstrate the assessor’s judgement of the student’s performance against the standard required. Simply having ticks against criteria on an observation checklist will be found non-compliant.
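
To make that audit risk concrete, here is a minimal sketch of the kind of check an RTO could run over its own records. The record structure is entirely hypothetical, invented for illustration; the point is simply that a record whose criteria are all ticked but whose comments field is empty is precisely the pattern an auditor will find non-compliant.

```python
# Minimal sketch: flag "tick and flick" records, i.e. records where every
# criterion is ticked but no assessor observations were recorded.
# The record structure and unit code are hypothetical, purely for illustration.
records = [
    {"student": "A. Candidate", "unit": "XYZ101",
     "all_criteria_ticked": True, "assessor_comments": ""},
    {"student": "B. Candidate", "unit": "XYZ101",
     "all_criteria_ticked": True,
     "assessor_comments": "Observed the candidate complete pre-start checks "
                          "and shift loads safely within the marked aisle."},
]


def is_tick_and_flick(record: dict) -> bool:
    """True when a record carries ticks but no evidence of the assessor's
    judgement of the student's performance."""
    return record["all_criteria_ticked"] and not record["assessor_comments"].strip()


for record in records:
    if is_tick_and_flick(record):
        print(f"At risk at audit: {record['student']} ({record['unit']}) "
              "- no recorded assessor judgement")
```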

The important statement within the definition is “enough detail to demonstrate the assessor’s judgement of the student’s performance against the standard required”. How do you record evidence of the assessor’s judgement? I have asked this question many times in professional development workshops, at conferences and directly with clients. Ultimately we arrive at an agreed point: a record of an assessor’s judgement of the student’s performance is a record of the assessor’s observations about what they observed the candidate doing and why they thought the performance met the requirement (or not). This is generally achieved by recording written observations about the student’s performance. You could capture these observations by video, voice recording or any other technical method, but let’s face it, most RTOs will use hard-copy documents prepared as an observation checklist to record their observations in writing. This is certainly the practice implemented by the majority of our clients, and with some support and professional development, assessors are able to make the transition to recording appropriate assessment evidence.

A point I would make here is that sometimes observation criteria are written so well that they appear self-evident. They may be written in a very procedural or technical way that makes them completely observable to the assessor. In these situations an assessor may feel it is unnecessary to provide comments which simply repeat or embellish the criteria. This position misses the point of recording observations of the student’s performance. The observation criteria provide the basis for comparing the student’s performance with the requirements of the unit of competency. When the assessor records their comments, the expectation is that they are recording their observations about the student’s performance, taking into consideration their technical knowledge of the task and their ability to interpret evidence using the rules of evidence and the principles of assessment. I am not in any way asserting the need to record micro-level comments against each observation criterion. I think this is unnecessary, time-consuming and contrary to good assessment practice. My advice is that the assessor record their broad observations about what they observed the candidate doing and why they thought the performance met the requirement (or not). This requires them to apply their professional judgement and record it in the assessment record. As I will touch on next, I do not advocate designing observation tools with a little comments box against each observation criterion. I suggest the comments area be merged and opened up so that the assessor can use the entire space to record their professional-judgement observations.

Make observations of actual workplace tasks being performed

Clearly there are many aspects to undertaking assessment in accordance with the requirements of SNR 15.5 or AQTF 1.5. What I am highlighting here relates only to recording evidence. So we will assume the assessment meets all of the requirements of the standards and involves either (a) the observation of a student performing a relevant workplace task, (b) the observation of a product the student has produced based on a workplace task, such as a document, or (c) both of these. The important thing here is the requirement to analyse the unit properly and identify the actual workplace tasks that are relevant to the unit. The way units of competency are written generally results in them involving either demonstrable (practical) tasks or cognitive (thinking) tasks. A demonstrable task example is ‘Operate a forklift truck’. An example of a cognitive task is ‘Undertake a risk analysis’. In the forklift example we would certainly want to observe the student actually operating the forklift, shifting loads, and so on. In the risk analysis example we would observe the student’s work in a risk analysis report; the criteria in this case may involve assessing the student’s consideration of consequence (as part of the process of risk analysis). In both examples we are observing a valid assessment task about which we can make valid observations.

Some principles to apply

We regularly design assessment tools for clients and there are a few principles I apply to the design of any assessment package. I am happy to share these with you:

1. Always analyse the unit of competency properly and identify the actual workplace tasks that are relevant to the unit and form the basis for proper observation assessment.

2. Always ensure that the observation criteria used in a checklist relate directly to the tasks being observed and are expressed in a way that makes them observable to the assessor and provides a suitable point of comparison with the student’s performance.

3. Never copy and paste performance criteria into an observation checklist without properly unpacking the unit and designing the assessment in accordance with points 1 and 2.

4. Always provide lots of space in an observation checklist for the assessor to record their observations about the student’s performance against the standard required.

Note. We do this by allocating a minimum of half of the right side of the sheet as open space. This allows the assessor to record their comments about what they observed the student doing and why they considered the performance satisfactory (or not). The form can still incorporate a yes/no column down the middle if this is required. An interesting idea is to remove the yes/no column altogether, as this requires the assessor to record more detailed observations to justify their decision. A sketch of this layout is included after this list.

5. Ensure any unit of competency assessment is supported by either a demonstrable (practical) task or a cognitive (thinking) task. The focus is on the observation of the student’s performance. All units of competency involve skills, whether demonstrable or cognitive (or both), that must be observed. I consider the pinnacle of assessment evidence to be the capture of assessor observations of a student demonstrating their skills, integrated with their knowledge, during the performance of a valid workplace task.

Note. I am not saying this is the only method of assessment. Clearly it is not, but the VET sector needs to get its focus back on assessing students doing things, not only assessing their understanding of things.
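
To illustrate points 2 and 4, here is a minimal sketch of how such a layout might be generated as a plain-text form. The criteria shown are hypothetical placeholders, not the product of a real unit analysis; in practice they would come out of the unpacking described in points 1 to 3.

```python
# Minimal sketch of the layout described in point 4: observation criteria,
# an optional yes/no column, and one merged open area for the assessor's
# professional-judgement observations. The criteria below are hypothetical.
criteria = [
    "Completes pre-operational safety checks",
    "Manoeuvres the load within the marked work area",
    "Shuts down and secures the equipment on completion",
]


def render_checklist(criteria, include_yes_no=True):
    lines = ["OBSERVATION CHECKLIST", "=" * 60, ""]
    for number, criterion in enumerate(criteria, start=1):
        row = f"{number}. {criterion}"
        if include_yes_no:
            row += "   [ Yes / No ]"
        lines.append(row)
    # One merged comments block rather than a small box per criterion:
    lines += ["", "Assessor observations (what was the candidate observed",
              "doing, and why did the performance meet, or not meet,",
              "the standard required?):", ""]
    lines += ["_" * 60] * 8  # generous open space for written observations
    lines += ["", "Assessor signature: ______________  Date: ____________"]
    return "\n".join(lines)


# Removing the yes/no column pushes the assessor toward fuller observations:
print(render_checklist(criteria, include_yes_no=False))
```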

Tip. Something we are trialling with a number of clients at the moment is using voice recognition software such as Dragon to speed up the process of assessors recording their observations about the student’s performance. This works extremely well and is highly efficient. While it does take some practice for assessors to become effective at it, it is worth considering a small trial with selected trainers to see how it goes. It can be undertaken on a computer or an iPad, and it leads to very genuine observations because the assessor can simply verbalise their observations without any effort. This blog was completely written using voice recognition software.

I think that’s me. I could go on about this stuff forever. I hope this has been helpful. If I can just leave you with one main point to take away:

  • Recording and retaining enough detail in our assessment evidence to demonstrate the assessor’s judgement of the student’s performance against the standard required is now a legal requirement under the National Vocational Education and Training Regulator Act 2011. When you have your next audit, the auditor will make their judgements based on this requirement. Ticks on a sheet of paper do not demonstrate the assessor’s judgement.


Good training!

Joe Newbery

Published: 5th September 2014


