There’s been a colossal amount of development in AI research. Last week my colleague Jay wrote about the origins of artificial intelligence (AI) and its application to modern-day society. Today I want to talk about its future and highlight some of the challenges that currently prevent AI from becoming mainstream in learning technologies.
In elearning there are undoubted benefits to artificial intelligences that respond and react to human behaviour. Wherever it isn’t possible or desirable to involve real people (for example, a mentor who guides you through your introduction to a programme or LMS), an artificial intelligence can step in. A system that learns alongside the student and acts as a peer, matching its capabilities to the learner’s, creates just the right level of competition.
Remember that AI has been involved with computer games for decades. By 1950, Alan Turing had written a chess-playing program called Turochamp. No computer of the time was powerful enough to run it, so Turing played games himself by working through the algorithm by hand, taking about half an hour per move. Finally, in 1997, the hardware caught up with the software: IBM’s chess computer Deep Blue beat the reigning world champion, Garry Kasparov, at what he does best. The involvement of AI in computer games gets us thinking about how it could be used as part of a gamification strategy: a simple AI program could compete with learners in an adaptive way to produce a more challenging and addictive elearning experience.
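The adaptive competition described above doesn’t need anything exotic. As a minimal sketch, assuming a quiz-style game where the class names, thresholds and step sizes are all invented for illustration, an opponent could simply tune its own accuracy to the learner’s recent win rate:

```python
import random

class AdaptiveOpponent:
    """Toy AI rival that keeps quiz games competitive (hypothetical sketch).

    It tracks the learner's recent win rate and nudges its own accuracy
    up or down so that matches stay close.
    """

    def __init__(self, accuracy=0.5):
        self.accuracy = accuracy      # chance the AI answers correctly
        self.results = []             # True = learner won the round

    def record_round(self, learner_won):
        self.results.append(learner_won)
        recent = self.results[-5:]    # look at the last five rounds
        win_rate = sum(recent) / len(recent)
        if win_rate > 0.6:            # learner cruising: toughen up
            self.accuracy = min(0.95, self.accuracy + 0.1)
        elif win_rate < 0.4:          # learner struggling: ease off
            self.accuracy = max(0.05, self.accuracy - 0.1)

    def answers_correctly(self):
        return random.random() < self.accuracy
```

The 0.6/0.4 thresholds and the 0.1 step are arbitrary tuning knobs; the point is only that a few lines of state-tracking are enough to keep a game challenging without being demoralising.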
The ability to hold meaningful dialogue with humans is another useful characteristic of AI for learning. My first two years at university involved a series of rushed exam revision sessions in which each member of the study group took it in turns to teach the rest about the topic at hand. We found this to be a learning process in itself for the student turned teacher. The same can be done in an elearning course with a functional AI that plays the student: by quizzing and teaching it, the learner generates a fresh and better-retained set of ideas about the subject.
These points raise the question: why haven’t these ideas been implemented yet?
Well, they have, sort of. Intelligent Tutoring Systems (ITS) have been built that track students’ work, provide feedback and even give hints to the learner. This sounds great: one-on-one tutoring has been shown to deliver better results, but it is expensive and unsustainable to offer on a mass scale. With an ITS, the intimate experience of one-on-one interaction and coaching can be reproduced at minimal cost.
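The track-and-hint loop at the heart of an ITS can be sketched in a few lines. This is a deliberately tiny illustration, not any real system: the question, its answer and the tiered hints are all invented, and a real ITS would model the learner far more richly.

```python
# Minimal sketch of an ITS feedback loop: count failed attempts per
# question and release progressively stronger hints. All data invented.

HINTS = {
    "q1": ["Re-read the definition of recursion.",
           "What is the base case here?",
           "The base case is n == 0; try tracing the calls."],
}

class TinyTutor:
    def __init__(self):
        self.attempts = {}            # question id -> failed attempts so far

    def check(self, question_id, answer, correct_answer):
        if answer == correct_answer:
            self.attempts.pop(question_id, None)
            return "Correct -- well done!"
        n = self.attempts.get(question_id, 0)
        self.attempts[question_id] = n + 1
        hints = HINTS.get(question_id, [])
        hint = hints[min(n, len(hints) - 1)] if hints else "Try again."
        return f"Not quite. Hint: {hint}"
```

Each failed attempt unlocks a more explicit hint, which is roughly how a patient human tutor scaffolds a struggling student.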
So, what’s the catch?
Artificial intelligences are still underdeveloped. A real-world example of an ITS is eTeacher, an AI that runs in the background while a student works through elearning courses. The system observes and analyses the student’s activity and behaviour in the course, then builds a student profile so that it can provide personalised assistance. This is a great concept and a step in the right direction. Below is an example of eTeacher providing feedback to the learner:
We can see that this idea works by simulating a teacher who follows a learner through a course and provides advice which is contextualised by the learner’s own actions. However, the way a learner can then respond to the teacher is quite limited:
The possibility of emulating one-to-one tutoring instantly vanishes due to the limited options available to the learner to interact with eTeacher.
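eTeacher’s internals aren’t public, but the loop it describes, observe activity, build a profile, personalise the advice, is simple to sketch. Everything below is hypothetical: the event names, the thresholds and the canned tip are invented purely to illustrate the shape of the idea.

```python
from collections import Counter

def build_profile(events):
    """Build a crude learner profile from logged course activity.

    `events` is a list of (event_type, topic) tuples, e.g.
    ("failed_quiz", "loops"). Event names and thresholds are invented;
    a real ITS would observe far richer signals than this.
    """
    fails = Counter(t for e, t in events if e == "failed_quiz")
    skips = Counter(t for e, t in events if e == "skipped_page")
    return {
        "weak_topics": [t for t, n in fails.items() if n >= 2],
        "rushed_topics": [t for t, n in skips.items() if n >= 3],
    }

def personalised_tip(profile):
    if profile["weak_topics"]:
        return "Consider revisiting: " + ", ".join(profile["weak_topics"])
    return "You're on track -- keep going!"
```

Even this crude version shows why the one-way nature of the interaction matters: the system can push tailored advice out, but nothing here lets the learner ask anything back.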
The lack of sophistication in most AIs can be further illustrated by the fact that even large organisations that have adopted artificial intelligence haven’t completely succeeded. For example, BMW and Panasonic developed artificial intelligences to promote their products and improve customer relations – BMW’s iGenius and Panasonic’s Sales Agent, which run on phones via text messages and interact with potential customers in a Q&A format. Below is a screenshot of the iGenius in action:
As we can see from this conversation a potential customer had with iGenius, it isn’t exceptionally clever. Despite its sassy tone of voice, the bot fundamentally fails to engage with an obvious (but deceptively complex) question. The novelty wears off and the user reverts to the human equivalent – a salesperson in this case. This is very similar to some applications of AI today. Siri and Cortana, from Apple and Microsoft respectively, are little more than personalised search engines: dumb terminals that create an impression of knowing you and attempt to emulate human sentiment.
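This brittleness is easy to reproduce. A bot built on keyword matching handles scripted FAQs fine and falls flat the moment a question steps outside its script. The toy below isn’t how iGenius actually works (its implementation isn’t public); the Q&A pairs are invented to show the general failure mode:

```python
# Toy keyword-matching bot: fine for scripted FAQs, useless beyond them.
# All questions and answers are invented for illustration.

FAQ = {
    "price": "The model starts at £30,000.",
    "colour": "It is available in black, white and blue.",
}

def reply(question):
    q = question.lower()
    for keyword, answer in FAQ.items():
        if keyword in q:
            return answer
    # Any question outside the script lands here -- the novelty-killer.
    return "I'm sorry, I didn't understand that."
```

Ask it about price or colour and it looks smart; ask it anything comparative or open-ended and the illusion collapses, which is exactly when a customer gives up and phones a salesperson.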
“Those who know, do. Those that understand, teach.” ― Aristotle
This quote neatly describes where we currently stand with AI. We are still a long way from an AI that can perceive, understand and respond to human interaction as well as a person can. Take traditional classroom-based learning: a machine acting as a peer or a teacher would lack body language and facial expressions, and without these cues, which are so essential to engaging conversation, it would struggle to hold the learner’s attention. Given the current state of electronics and robotics, this is far beyond what we can achieve at the moment.
Given time, artificial intelligence will play a major role in changing the learning technologies industry, as there is an exceptional amount of scope, but it needs work. Where do you think we are headed? How many years will it be before we can hop into a self-driving car to attend a training session hosted by our favourite robot?