Thesis v1: Bespoke elearning will never be produced by robots. Artificial intelligence is simply not going to replace the blood, sweat and tears of instructional designers, graphic designers, developers and project managers.
And when it does, we will find other things for them to do.
Let me explain. An algorithm can put text and pictures together and format them. An algorithm can assemble meaningful questions from raw content. In other words, an algorithm can probably do what a bad instructional designer or a bad elearning developer can do.
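To make that concrete, here is a deliberately naive sketch in Python (the function name and the length threshold are mine, purely for illustration) of the kind of automation I mean: it ‘writes’ questions by blanking out the longest word in each sentence. It produces questions; it just has no idea which ones are worth asking.

```python
import re

def cloze_questions(text, min_length=6):
    """Turn raw content into crude fill-in-the-blank questions."""
    questions = []
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        words = re.findall(r"[A-Za-z]+", sentence)
        candidates = [w for w in words if len(w) >= min_length]
        if not candidates:
            continue  # nothing worth blanking out in this sentence
        answer = max(candidates, key=len)  # crude stand-in for picking a keyword
        questions.append((sentence.replace(answer, "_____", 1), answer))
    return questions

for prompt, answer in cloze_questions(
    "Boolean algebra underpins digital circuits. "
    "Turing proposed a universal model of computation."
):
    print(prompt, "->", answer)
```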
But the algorithm cannot choose the best picture. The algorithm cannot devise the right question. It cannot do what a good instructional designer can do. And as soon as it can, the good instructional designer will go one better.
I’m being deliberately contradictory. And this blog post is not the place to solve the conundrum of what endows a digital object with value. But I suspect it’s human effort, not software.
Virtual reality (VR) has been talked about so frequently, both in and out of the learning industry, that it seems to have lost its buzz. For a technology that offers gamers, and now learners, the chance to experience a scenario first-hand, the hype seems to have run itself into the ground.
There’s been a colossal amount of development in artificial intelligence (AI) research. Last week my colleague Jay wrote about the origins of AI and its application to modern-day society. Today I want to talk about its future and highlight some of the challenges that currently prevent AI from becoming mainstream in learning technologies.
In elearning there are undoubted benefits to using artificial intelligences that respond and react to human behaviour. Wherever it is not possible or desirable to involve a real person (for example, a mentor who guides you through the introduction to a programme or LMS), an artificial intelligence can come into play. A system that learns alongside the student, acting as a peer that matches its capabilities to those of the human, creates just the right level of competition.
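What might ‘matching its capabilities to a human’ look like in code? Here is a minimal sketch, assuming a quiz-style activity (the class and its behaviour are hypothetical, not a description of any real product): the simulated peer tracks the learner’s recent success rate and answers correctly with roughly the same probability.

```python
import random

class SimulatedPeer:
    """Hypothetical quiz 'peer' whose skill shadows the learner's."""

    def __init__(self, window=10):
        self.history = []   # learner's recent results, True/False
        self.window = window

    def record(self, learner_correct):
        """Log whether the learner got the last question right."""
        self.history.append(learner_correct)
        self.history = self.history[-self.window:]

    def answer_correctly(self):
        """Answer correctly about as often as the learner has lately."""
        skill = sum(self.history) / len(self.history) if self.history else 0.5
        return random.random() < skill
```

Because the peer’s skill is recomputed from a sliding window, it stays competitive whether the learner improves or struggles, which is exactly the ‘right level of competition’ described above.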
Remember that AI has been involved in computer games for decades. By 1950, Alan Turing had written a chess-playing program called Turochamp. No computer of the time was powerful enough to run it, so Turing played games by simulating the machine himself, taking around half an hour per move. Finally, in 1997, the hardware caught up with the software: IBM’s Deep Blue beat the reigning world chess champion, Garry Kasparov, at what he did best – chess. The involvement of AI in computer games gets us thinking about how it could be used as part of a gamification strategy: a simple AI program could compete with learners in an adaptive way to produce a more challenging and addictive elearning experience.
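One simple way to build such an adaptive opponent, sketched here under assumed names rather than as anyone’s shipped system, is an Elo-style rating loop: treat the opponent’s difficulty setting as a chess-style rating and nudge it after every round, so the learner ends up winning about half the time.

```python
def elo_update(learner, opponent, learner_won, k=32):
    """One Elo-style update after a round of learner vs AI opponent.

    Returns the new (learner, opponent) ratings. The opponent's rating
    doubles as its difficulty setting, so a stronger learner
    automatically faces a stronger opponent next round.
    """
    expected = 1 / (1 + 10 ** ((opponent - learner) / 400))  # learner's expected score
    score = 1.0 if learner_won else 0.0
    delta = k * (score - expected)
    return learner + delta, opponent - delta

# A learner rated 1200 beats an opponent also set to 1200:
print(elo_update(1200, 1200, learner_won=True))  # (1216.0, 1184.0)
```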
Although artificial intelligence as an independent field of study is relatively new, it has roots in the distant past. In fact, we could say that it started 2,400 years ago, when the Greek philosopher Aristotle invented the concept of logical reasoning! The effort to formalise the language of logic continued with Leibniz, and in the nineteenth century George Boole developed Boolean algebra, which went on to underpin the design of digital computer circuits.
However, the main idea of a thinking machine came from Alan Turing, who developed a hypothetical model, the ‘Turing machine’, that could carry out any algorithmic computation, and who proposed the Turing test, still cited today as a yardstick for machine intelligence. The term “artificial intelligence” itself was coined by John McCarthy in 1956.
AI is now understood as the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.