Thesis v1: Bespoke elearning will never be produced by robots. Artificial intelligence is simply not going to replace the blood, sweat and tears of instructional designers, graphic designers, developers and project managers.
And when it does, we will find other things for them to do.
Let me explain. An algorithm can put text and pictures together and format them. An algorithm can assemble meaningful questions from raw content. In other words, an algorithm can probably do what a bad instructional designer or a bad elearning developer can do.
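To make that concrete, here is a minimal sketch of the sort of mechanical question generation an algorithm can manage (a toy illustration of my own, not any real authoring tool's method). It blanks out a keyword and calls the result a quiz item:

```python
def make_cloze_question(sentence: str, keywords: list[str]) -> dict:
    """Blank out the first listed keyword found in the sentence and
    call the result a quiz item. Purely mechanical: it has no idea
    whether the blanked word is the one worth testing."""
    for word in keywords:
        if word in sentence:
            return {"prompt": sentence.replace(word, "_____", 1),
                    "answer": word}
    # No keyword present: fall back to blanking the longest word.
    word = max(sentence.rstrip(".").split(), key=len)
    return {"prompt": sentence.replace(word, "_____", 1), "answer": word}

item = make_cloze_question(
    "Photosynthesis converts light energy into chemical energy.",
    ["Photosynthesis", "chlorophyll"])
print(item["prompt"])  # _____ converts light energy into chemical energy.
```

It works, in the sense that it produces a well-formed question. It cannot tell you whether that question is worth asking.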
But the algorithm cannot choose the best picture. The algorithm cannot devise the right question. It cannot do what a good instructional designer can do. And as soon as it can, the good instructional designer will go one better.
I’m being deliberately contradictory. And this blog post is not the place to solve the conundrum of what endows a digital object with value. But I suspect it’s human effort, not software.
Thesis v2: Automation will make a direct, positive impact on bespoke elearning production. But it won’t be the quantum leap that has been suggested.
Instead, artificial intelligence will boost productivity at all phases of the development life cycle (automated testing is one example). That annoyingly expensive premium on creativity and insight will remain, but those insightful creatives will spend more time communicating or creating and less time fiddling with Microsoft Office documents.
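To give a flavour of that productivity boost, here is a hedged sketch of what automated testing might look like for elearning content (the quiz-item structure is my own invention, not a real standard): a script that catches the mechanical faults before a human tester ever opens the course.

```python
def validate_quiz_item(item: dict) -> list[str]:
    """Return a list of problems with a quiz item; an empty list
    means it passes. Catches the mechanical errors a human tester
    would otherwise have to hunt for by hand."""
    problems = []
    if not item.get("prompt", "").strip():
        problems.append("missing prompt")
    options = item.get("options", [])
    if len(options) < 2:
        problems.append("fewer than two answer options")
    if item.get("answer") not in options:
        problems.append("correct answer is not among the options")
    if len(set(options)) != len(options):
        problems.append("duplicate answer options")
    return problems

course = [
    {"prompt": "What year was the web invented?",
     "options": ["1989", "1995"], "answer": "1989"},
    {"prompt": "", "options": ["Yes"], "answer": "No"},  # a broken item
]
for i, item in enumerate(course):
    for problem in validate_quiz_item(item):
        print(f"item {i}: {problem}")
```

Checks like these catch the well-formedness errors, leaving the designer to judge the thing software cannot: whether the question is any good.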
But there is one snag to this thesis: literacy.
Many of us will have noticed a decline in standards of written literacy. This has been partially brought about by spellcheck: why learn the rules when software does it for you? But the software is awful at grammar. It was getting it wrong even as I wrote this document.
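The reason is structural. A spellchecker typically validates each word in isolation, so a sentence can be letter-perfect and still be grammatical nonsense. A toy sketch (a bare word-list checker of my own, not any real product) shows why:

```python
DICTIONARY = {"their", "going", "to", "too", "the", "store", "they're"}

def spellcheck(sentence: str) -> list[str]:
    """Flag only words missing from the dictionary; word order
    and grammar are invisible to this check."""
    words = sentence.lower().rstrip(".").split()
    return [w for w in words if w not in DICTIONARY]

print(spellcheck("Their going too the store."))  # [] : every word passes; the sentence does not
```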
But who cares about prescriptivist notions of grammar, right? Well, companies and governments do. And thanks to the joys of spellcheck, everyone has forgotten to learn the prescriptions.
The same thing is happening with techno-literacy. Once upon a time, to make a computer work you had to assemble it yourself, quite literally. If you wanted some software, you had to install it. To do that you needed to know something about how these things work: their rules. An iPad, by comparison, is automated and pre-packaged. It’s supposed to just work, so the workings are hidden. But software doesn’t just work.
Frequently, it fails. Software updates cause certain features to break and the fix is different for every device. Hardware sputters out, or must be recalled. Systems are developed which are mutually incompatible. Algorithms throw up erroneous results.
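That last failure mode is easy to demonstrate. The classic floating-point example below is standard Python behaviour, not a contrived bug:

```python
import math

# Binary floating point cannot represent 0.1 or 0.2 exactly,
# so the "obvious" equality quietly fails.
print(0.1 + 0.2 == 0.3)              # False
print(0.1 + 0.2)                     # 0.30000000000000004

# Comparing correctly requires knowing the rule: use a tolerance.
print(math.isclose(0.1 + 0.2, 0.3))  # True
```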
The most basic common sense tells us that these failures are the fatal flaw in the proposed utopia of seamless, end-to-end, endlessly amazing technology. It doesn’t ‘just work’. It never will.
The new automated authoring tools will suffer from the same problem. And so back to the snag: if artificial intelligence fixes all your problems for you, who fixes the artificial intelligence?
The answer is obvious: we do. But I wonder whether many people in the industry realise that. The truth is that as we leverage increasingly sophisticated technologies to personalise learning and automate the production of learning experiences, we will need to learn much more about how these technologies work and why they don’t work.
Thesis v3: To actually take advantage of automation, ‘learning technologists’ will need to start taking the second part of their job description seriously.