Evaluate

A guide for public sector agencies to develop and implement formal code of conduct training.

Evaluate as part of continuous improvement to identify:

  • whether training is achieving learning objectives
  • if training meets learner needs and expectations
  • knowledge and skills learners gain through training
  • how training is being applied.

Use evaluation to improve training in terms of design, content and delivery.

Feedback questionnaires

Questionnaires are a typical way of evaluating training. Some questionnaires measure learners' initial reaction to the training, for example "Did you enjoy the training?", "Did the training meet your expectations?" and "Were you comfortable in your surroundings?" Some may also attempt to test learning, for example "How much did you know about the topic before versus after the training?"

When developing a questionnaire, be mindful of how much time learners have to respond. Questionnaires handed out at the end of training may be completed quickly and without much consideration, especially if they are overly long. Consider having learners complete the questionnaire during the training or at set intervals, allowing for more thoughtful responses. Alternatively, use a 2-step feedback process: an initial point-in-time questionnaire followed by a more detailed questionnaire in 3 months' time to test how learners have transferred their knowledge to the workplace.

Develop questions

Questions can elicit quantitative data ("how many", "how much" and "how often") and qualitative data ("what type"). Questionnaires usually elicit a combination of both, and both are valuable.

Quantitative questions are generally closed, so responses can be counted and reported, for example "70% of learners said the resources were well presented."

Qualitative questions are usually open and therefore more difficult to analyse and draw conclusions from, as responses need categorising or 'coding'. For example, "What did you like about the resources?" elicits a different response from each learner. These responses need to be analysed before improvements can be made.
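For teams tallying results in a spreadsheet or script, the counting described above is straightforward. A minimal sketch in Python (the responses and category labels are invented for illustration; open responses are assumed to have already been coded by a reviewer):

```python
from collections import Counter

# Hypothetical yes/no responses to "Resources were well presented".
closed_responses = ["yes", "yes", "no", "yes", "no", "yes", "yes", "no", "yes", "yes"]
yes_share = 100 * closed_responses.count("yes") / len(closed_responses)
print(f"{yes_share:.0f}% of learners said the resources were well presented.")

# Open responses must first be categorised ('coded') by a reviewer;
# here that coding is assumed to have been done by hand.
coded_open_responses = ["examples", "pacing", "examples", "facilitator", "examples"]
print(Counter(coded_open_responses).most_common())
```

Counting coded categories in this way highlights the most common themes, which is where improvement effort is best directed.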

Sample questions 

"Yes" or "No" questions (nominal scale; frequency of responses can be counted)

  • Content was what I expected/relevant to my work.
  • Slides were organised logically.
  • Photos, tables and graphics were related to the topic.
  • Resources were well presented and supported the learning.
  • The facilitator:
    • encouraged questions
    • answered questions fully
    • encouraged learners to share experiences.

Likert scale questions (interval scale; frequency of responses can be counted, and provides more nuanced responses than yes/no)

  • How engaged were you with the training?
  1. It didn't keep my attention at all.
  2. It kept my attention some of the time.
  3. It kept my attention most of the time.
  4. It kept my attention the entire time.
  5. Don't know.
  • The purpose of this training was to [add purpose/learning outcome here]. How well do you think it achieved its purpose? 
    Rate from 1 (not at all) to 5 (completely).
  • Rate how satisfied you were with the training, from 1 (very dissatisfied) through 3 (neither satisfied nor dissatisfied) to 5 (very satisfied).
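When collating Likert-scale responses, a useful summary is the frequency of each rating plus an average that excludes non-ratings such as "Don't know". A minimal sketch (the responses are invented for illustration):

```python
from collections import Counter

# Hypothetical responses to "How engaged were you with the training?"
# Options 1-4 are ratings; option 5 means "Don't know" and is
# excluded from the average.
responses = [4, 3, 3, 4, 2, 5, 3, 4, 1, 3]

counts = Counter(responses)
ratings = [r for r in responses if r != 5]  # drop "Don't know"
average = sum(ratings) / len(ratings)

for value in range(1, 6):
    print(f"Option {value}: {counts[value]} response(s)")
print(f"Average rating (excluding 'Don't know'): {average:.1f}")
```

Reporting the full distribution alongside the average avoids hiding a split audience (for example, equal numbers of very engaged and very disengaged learners) behind a middling mean.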

Recall questions (open, qualitative; tests immediate learning transfer)

  • List 3 of the risk areas covered in our code of conduct.
  • Name 2 ways to report a suspected breach of our code of conduct. 

Process questions (open, qualitative; tests higher order thinking, asks for an opinion)
Complete this sentence:

  • I learned…
  • The thing that was most/least helpful was…
  • The one thing I would improve is…
  • I will go back to my team and share…
  • Three things I will implement/change are…

Recall and process questions can be used during the training (as a quiz or reflection activity), immediately after, or emailed to learners in 3 months' time to ask if they have done what they committed to after the training.

Other evaluation criteria

Feedback tools provide some indication of the effectiveness of training. Effectiveness can also be measured by gauging learners' involvement in the training and their participation in group discussions, activities and case studies. These are qualitative assessments and rely on the facilitator's perceptions and judgements rather than direct feedback from learners.

Other indicators of successful training include:

  • learners implementing action plans or other 鈥榟omework鈥 activities set during the training (evidenced by completion rates, feedback on implementation)
  • evidence of increased integrity awareness in staff culture/perception surveys
  • trend analysis showing discipline and misconduct reports, declarations of conflicts of interest, and secondary employment applications increasing after training.

Be careful about attributing any of these indicators directly to the training. Rather, they provide a picture of how useful the training has been and how it contributes to wider efforts to promote understanding of the code.

Reporting on evaluation results

If time has been taken to collect and analyse feedback, report results to relevant parties (e.g. senior leadership team) in a timely way.
