January 2013

New White Paper: “Evaluating and demonstrating the value of training”

CrossKnowledge knows that information technologies are a powerful tool for professional training evaluation. The leading European distance learning provider presents Evaluating and Demonstrating the Value of Training, a new white paper that looks at the current challenges, practices and trends in companies and among training professionals in the field of evaluation. It focuses in particular on the value of new technologies in putting evaluation theory into practice.

Most people agree that “training can never do any harm”, but few adequately measure how effective it is in terms of skills acquisition or how it impacts business performance. Billions of euros are invested in training every year, and budget decision-makers can hardly be expected to make such financial commitments without measuring their return on investment.

CrossKnowledge addresses this subject head-on in a new white paper focusing on evaluation in the new tech age.

Why evaluate?

CrossKnowledge insists that training evaluation is vital for a number of reasons.


“It makes it easier for L&D departments to manage their training offer, and to carry out iterative adjustments to training plans based on the observed effectiveness of training paths and their ability to respond to the needs expressed by learners, managers, and company directors. It also makes it possible for training managers to demand higher performance levels from the people who front their training programmes”, explains Steve Fiehl, Chief Innovation Officer at CrossKnowledge.

Current economic tensions make the question of evaluation all the more critical: at a time when “doing more with less” has become a mantra, L&D departments must not only optimise budget allocations but also demonstrate the usefulness of training and the actual gains generated by their outlay. Failing that, their budgets will be treated as adjustment variables – in other words, they risk being first in line for cuts when savings have to be made.

New technologies to the rescue

Evaluation is becoming essential for any company wishing to roll out effective training programmes, both in terms of individual skills development and in order to respond to staff and business performance requirements.

CrossKnowledge stresses that new technologies, especially e-learning, can bring a raft of rewards. They make it possible to check that courses are being properly followed (essential when it comes to certified training), to interact with learners, to design new training formats combining theory and practice, and to foster sharing between learners when they start applying their newly acquired skills.

Technology also makes it easier to define the benchmarks that serve to measure the effectiveness of training and to build short knowledge testing sequences into the courses themselves – not to mention the savings made thanks to swifter deployment and the use of ’on-demand’ components (SaaS).

Making a stand against so-called “good reasons” not to evaluate

Despite all the above, training evaluation remains a thorny issue in many companies. CrossKnowledge has observed that many teams remain reluctant to carry out evaluation and claim to have “good reasons” not to.


These “good reasons” include lack of demand for evaluation on the part of sponsors (who don’t necessarily know how to express their objectives), increased workload for L&D teams, and implementation costs that are too high to justify acquiring an appropriate system. Some people are afraid of negative evaluation results, or feel that the skills addressed can’t be quantified (management, personal development, changes in team behaviour, etc.). Others mention the risk of “financialising” training to the detriment of its professional goals, or the difficulty of adequately assessing the actual contribution of training to corporate performance.

The fact that face-to-face learning plays such an important part in training programmes only serves to perpetuate this situation. “Because skills are often only applied in practical situations months after training, learners can start to forget what they’ve learned – training professionals agree that 90% of knowledge acquired during face-to-face sessions is lost within a month if it’s not put into practice”, says Steve Fiehl.

CrossKnowledge has identified another obstacle: the highly technical approach to ROI in currently available analytical models (Kirkpatrick, Phillips, Bersin, etc.). An alternative has been offered with the concept of ROE (Return On Expectation), but this, too, has many critics, who argue that the difference is purely semantic, that the concept introduces too much subjectivity into evaluation, or that the very definition of ROE is too imprecise.

What can be done?

CrossKnowledge recommends a three-step approach in the white paper:

  • Clearly identify skills acquisition and business performance objectives, involving the sponsor directly and defining the roles of everyone involved.
  • Organise training as a ‘process’ rather than an ‘event’. To achieve this, you have to divide evaluation into stages and define at which points the “evaluators” intervene, involve managers as and when required, identify what facilitates the learning process, and provide all the support learners need as they acquire new skills.
  • Communicate – it’s communication that makes evaluation meaningful; state what information will be distributed and to whom, highlight the benefits for each stakeholder, etc.