Toby Harris explains how to define the right problem and design the complete experience.
In the last two decades, marketing has steadily abandoned simplistic audience-profiling and the traditional ‘spray and pray’ approaches to advertising spend. The insights of behavioural science show that buying decisions are related to mood-states, points of need and circumstances which are not fixed to a particular demographic. Instead, they relate to experiences which can be common to large groups.
This has drastically changed the shape of campaigns: now marketers focus on capturing and shaping an experience, and then inserting a brand into it. At the same time, online marketing has started to undermine ‘spray and pray’, big-ticket advertising campaigns. The combination of web analytics and e-commerce payment systems now makes it possible for relative amateurs to target marketing spend with laser-guided accuracy, producing a return at very low risk. Rather than relying on individual genius or good luck, we leverage insights based on data.
Define the right problem
If the main themes of Learning Technologies 2014 are anything to go by, it seems likely that a more experiential and data-led approach will also influence the learning landscape in 2014-15. Content is increasingly losing its sovereignty in the design of learning programmes. It is no longer seen as separate from learners’ social context, their emotional fabric and the specific problems they face.
But whilst the learning industry is definitely becoming more focussed on delivery mechanisms which capture the broader experience of learners, there remains a worrying deficit of understanding about what the real goals of training should be.
In an industry focussed on increasingly sophisticated ‘solutions’, it is not surprising that few of us challenge some rather fixed notions of what the ‘problem’ is. We tend to assume that most organisational ‘problems’ can be aligned to one or other of the established ‘genres’ of training. Revenues dropping? You need some sales training. Negative media coverage? Better roll out some D&I training.
Typically a lofty initiative or management vision will sit behind any well-funded programme. Yet, however well-intentioned (and however specific it sounds), such an imposed ‘vision’ is often part of the problem. What modern marketers know very well, but what we forget, is that problems are specific to experiences and circumstances – and to organisations. Whereas it is a truism that best practice looks similar, dysfunction is unpleasantly diverse and varied. Why, then, do we expect the same (often generic) training content to solve a different problem every time?
Take mental resilience, for example, which is a key training theme of 2013/14. The cost of stress, depression and anxiety is a growing issue, and there are many managers who would like a ‘quick-fix’: some off-the-shelf content which they can roll out to solve the ‘problem’ of stress in their organisation. But to assume that the problem of stress is the same everywhere is a fallacy. The causes of stress are rooted in circumstances, which means they are different in every organisation.
Explaining to middle managers at a global firm that working 14-hour days is actually less productive and more harmful to their mental health than the alternative is distinctly doable. But providing a similar kind of resilience training to social carers who work alone, face difficult conditions and are paid the minimum wage is unlikely to be as effective.
The latter, as a group, face a set of very specific pain-points. To boost resilience in a measurable way, what you really need to do is ask them what the problem is and try to fix it. It’s rarely the case that a problem can be solved only by training. We know that, to a surprising degree, the right behaviours can overcome most obstacles. But behaviour must also be facilitated by the environment in which it takes place.
In this context, it is astonishing what systemic issues – even crises – training content is expected to ‘solve’ in isolation. So it’s not surprising that most employees are inherently sceptical of the value of centrally mandated training. In an era of grand strategic initiatives, we have blurred the distinction between identifying a problem and inventing one.
The importance of ‘market research’
Driven by ‘big data’, metrics-based consumer research is now indispensable to many industries. It’s how Hollywood knows how to make a blockbuster, and how supermarkets precisely match circumstances with products. For L&D professionals, conducting detailed internal market research enables us to identify the highest-value needs in an organisation. Once you have identified a concrete need, it becomes possible to build a learning experience around it.
Designing ‘the complete experience’ means combining content both with the socially authentic contexts in which it exists and with our desired outcomes. Technology firms nowadays tend to advertise their products as life experiences, highlighting how a particular device enriches every aspect of the customer’s life. What is emphasised is not so much the features of a certain smartphone or tablet, but the emotional outcomes for the person using it.
In learning and development, designing the ‘complete experience’ means disowning the primacy of learning content, and thinking outside the ‘box’ of a particular learning intervention. Taking into account what happens before and after the course is as important as the design of the course content itself.
Before, after, and beyond
For example, we already realise how crucial it is to understand what learners already know. But it’s equally important to understand what they expect. No small business thinks about stocking a product without being absolutely sure of what the customer expects from it. Correspondingly, large manufacturers use marketing to shape the expectations of the customer in advance. In contrast, audience expectations are not often seriously considered when designing training, and they are almost never managed.
Most learners have already made up their minds on the value of a programme before they start it, and this is reflected in their feedback. Humans selectively confirm prior assumptions, so it’s vital that these expectations are managed. Research demonstrates, for example, that a manager’s involvement prior to the formal course is the single most powerful influence on whether learning transfer will occur (Broad and Newstrom, 1992).
By building up the right expectations and framing learning properly, we concretely impact the chances of it changing behaviours. We often view champions or super-users as part of the follow-up to a learning programme. In fact, we need to seek out these powerful influencers earlier, and involve them in our business communications.
So that was the ‘before’ – how about the ‘after’? This ‘after’ is the most overlooked aspect of learning, often because there is no intention to measure the effect of a programme anyway. Truly behavioural interventions begin only once the formal learning is finished. At the most basic level, we should design learning to be transferable. If learners don’t come away with a toolkit, action plan or some kind of personalised ‘take-away’, you’ve already made transferability more difficult.
Potential ‘after’ methods are numerous. Organise conference calls and meetings to report back on how the learning has been implemented. Use email or social media to break down the learning and communicate a tip of the day in the weeks following the course. If you have the platform to do so, try and create lasting communities of practice. If you think that this sounds like a lot of time and effort, you’re right. We need to recognise the importance of ‘before’ and ‘after’ and adjust our timescales, budgets and deliverables accordingly.
Less sledge-hammer, more scalpel
The authors of The Six Disciplines of Breakthrough Learning said in 2006 that ‘targeting educational efforts is analogous to targeting marketing campaigns.’ In 2014, we’re able to deploy even more sophisticated analytical weaponry when we target learning interventions. The multi-layered, finely-calibrated campaigns which have replaced conventional advertising have taught us that a change campaign should be less like a sledge-hammer and more like a well-aimed scalpel.
Our growing understanding of the way humans actually think and behave – behavioural science – is challenging the outmoded shibboleths of learning design. We often think of ‘gamification’ in learning as an unusual or special ‘optional extra’. Yet everyday innovations in everything from urban street design to energy bills demonstrate that, by integrating ‘gamified’ elements into the offline and online user experience, we can subtly modify environments in order to achieve a certain objective. In various ways, this understanding of experiential decision-making is now entering L&D.
An experiential and data-led approach to learning cannot be sold as a simple cost reduction in the way that using elearning for compliance training once was. As part of a broader business transformation function, L&D budgets need to grow, not shrink. In return, programmes will be expected to deliver measurable ROI by targeting points of need. Just as for marketers, the new breed of learning professionals will need to combine an understanding of behavioural science with an ability to leverage insights from data. This demanding environment will shake up the learning industry, and severely undermine those who rely on the provision of proprietary content libraries. But it can only be positive for learners, and for those of us who believe in replacing ineffective ‘training content’ with change campaigns and measurable results.