How human-like are the most sophisticated chatbots?
2022-06-16 14:36:51
Chatbots have hit the headlines over the past few days after a Google engineer claimed that the firm's most advanced system has developed human-like feelings, or become sentient.
Simply put, a chatbot is a computer program deliberately designed to mimic and respond to human speech.
But just how lifelike are the best on the market? One thing is for sure: these intelligent virtual assistants are now found everywhere.
From Amazon's Alexa and Apple's Siri to retailers' websites, an estimated 80% of us now use chatbots - whether they respond to us verbally or via written text.
In fact, chatbots are now said to be the fastest-growing way in which brands communicate with their customers.
Sabina Goranova, a student at York University in Toronto, Canada, is typical of many people in that she is used to using chatbots on a daily basis.
Firstly, she has Alexa at home. She also consults her university's own Savy system, via her mobile phone, to find the college information she needs.
Savy was made for York and its students by IBM. It can quickly answer questions about everything from specific career advice to daily lunch menus.
"I appreciate the convenience of chatbots," says Ms Goranova. "I already used Alexa to save time, so Savy is another tool in my toolkit."
Guillaume Laporte is chief executive of French chatbot firm Mindsay, which is now part of Chinese artificial intelligence (AI) and intelligent virtual assistant company Laiye. Its customers include everyone from Nike to Walmart, and UK train firm Avanti.
"Chatbots are beginning to mimic true human behaviour, but with robots essentially," he says.
Mr Laporte adds that chatbots are now "10 times better than they were 10 years ago", and that after initial programming, and then using machine learning and AI, they can learn and understand what the user is saying, or typing, and so know how to reply.
Yet he cautions that, industry-wide, chatbots are still not perfect, and that there still needs to be a human backup in place. "So the understanding rate differs between different companies and different industries. It can vary between 30% and 90%."
Jim Smith, professor in interactive artificial intelligence, at the University of the West of England, is an expert in chatbots.
He explains that when it comes to their ability to appear human-like it is important "to make a distinction between task-orientated ones delivering a service, and ones that are expected to have a wider chat about things".
"The former are the ones most used, and they can work really well," he adds. "They are taught using masses and masses of text.
"So, if they are in a call centre, and they know the sort of question they will be asked, they can achieve human-like levels of [customer] service. And it is probably important, for the sake of transparency, that it is made clear to the caller that he or she is not talking to a human.
"For chatbots that are expected to have more of a conversation with you, they can seem convincing to start, but they are doing statistics to work out what they likely should be saying to you next, and errors can keep multiplying.
"And ultimately if the systems get very good, say in 10 years [time], it is difficult to measure what is a human-like performance. I mean, pet parrots appear to be talking to you!
"And I'm not sure that it is meaningful to ever say that a chatbot is sentient. After all, you can turn it off and on again, it is not a living thing."
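The statistical approach Prof Smith describes - predicting a likely next word from patterns seen in training text, with errors compounding over longer replies - can be illustrated with a toy model. This is a minimal sketch, not any real chatbot's method: a bigram (Markov chain) generator where every name and the tiny training corpus are invented for the example.

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Record which words were observed following each word in the text."""
    counts = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev].append(nxt)
    return counts

def generate(counts, start, length=8, seed=0):
    """Build a reply by repeatedly sampling a statistically likely next word."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(length - 1):
        followers = counts.get(word)
        if not followers:  # dead end: this word was never followed by anything
            break
        word = rng.choice(followers)  # more frequent followers are more likely
        out.append(word)
    return " ".join(out)

# Hypothetical training text, standing in for the "masses and masses"
# of real text Prof Smith mentions.
corpus = "the train is late the train is on time the service is good"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Each word is chosen using only the word before it, so the output is locally plausible but drifts with no overall plan - a compressed version of why longer free-form chatbot conversations accumulate errors.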
Prof Sandra Wachter, a senior research fellow in AI at Oxford University, says that chatbots are currently "still far away from appearing lifelike, or humanlike".
"But as we move forward, we also need to think about ethical responsibilities," she adds. "At first glance, chatbots might give the impression that we are chatting with actual humans. And we have an ethical responsibility to avoid this confusion because it can lead to potential harm.
"In the 'best' case, it merely leads to frustration when chatting with the bot - due to their limited functionality. In the worst case, we might trust them and share information that we otherwise would not."