dydu attended the Chatbot Summit on the 25th and 26th of June 2019 in Tel Aviv. On the agenda: two days of conferences, demos and presentations dedicated to automated conversations, whether digital – chatbots – or vocal – voicebots. This fifth edition lived up to all its promises once more, through its attendance, its insights, and the rich exchanges between participants. The dydu team identified 5 main trends for the coming years. Let’s take a look back at this event, which each year positions itself a little more as a reference within the industry.
Full automation of well-defined use cases
Although all the participants agree that not everything can be automated, certain very specific and well-defined use cases do lend themselves to full automation. Take Lemonade, for example, a new-generation insurance company that has fully automated its subscription process: 100% of its policy holders subscribed to their insurance via the chatbot (built on Rasa technology). The process is completed in just a few clicks, and the user is immediately covered! Better still, the Lemonade bot can also handle its policy holders’ claims: in 2018, the chatbot automatically handled 30% of reported claims, without any human intervention, and paid out $1M to those policy holders! This is clearly a level 3 bot: contextualised and transactional.
So, how do you choose the right use cases to automate? The best use cases are those that can be handled without human intervention. Start small, with 2 to 5 simple and well-defined use cases. Build your use cases by starting with simple questions such as: what are the main reasons your users contact you? What is the main irritant for your employees? What do people like to talk about?
Contextualised and transactional bots
Dynamic FAQs are no longer sufficient: users demand personalised answers tailored to their specific situation, and they want to be able to complete transactions via the bot (orders, payments, saving an appointment to their diary, automatic reminders, etc.). As the Lemonade case above demonstrates, this is the kind of interaction users now expect: simple, natural, personalised and automated, both orally and in writing.
The technology has reached a sufficient stage of maturity to offer this kind of experience. Of course, implementing such a bot is more complex, as it often needs to be connected to a third-party information system or additional applications in order to retrieve the user’s specific information and trigger the requested actions. But this is the price to pay for an experience the user will appreciate, and for ensuring that a satisfied user returns to the bot.
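To make this concrete, here is a minimal sketch of what such a connection can look like: a classified intent plus the user’s context is turned into a personalised answer by querying a backend system. All the names here (`OrderApi`, `handle_intent`, the stubbed data) are illustrative assumptions, not dydu’s or Lemonade’s actual API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Order:
    order_id: str
    status: str

class OrderApi:
    """Stand-in for the third-party information system the bot connects to."""
    _orders = {"A42": Order("A42", "shipped")}

    def get_order(self, order_id: str) -> Optional[Order]:
        return self._orders.get(order_id)

def handle_intent(intent: str, user_context: dict, api: OrderApi) -> str:
    """Turn a classified intent plus user context into a personalised answer."""
    if intent == "order_status":
        order = api.get_order(user_context["order_id"])
        if order is None:
            return "I couldn't find that order."
        # The answer is contextualised: it uses this specific user's data.
        return f"Your order {order.order_id} is currently {order.status}."
    return "Sorry, I can't help with that yet."

print(handle_intent("order_status", {"order_id": "A42"}, OrderApi()))
# prints: Your order A42 is currently shipped.
```

In a real deployment, `OrderApi` would be an HTTP client authenticated against the company’s information system, which is exactly where the extra implementation complexity mentioned above comes from.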
Open-source technologies
Open-source technology is on the rise, and bots are no exception: more and more companies are getting involved. This approach has two main objectives: to reduce costs, and to avoid getting “locked in” with one publisher and its proprietary technology. In exchange, the company must have the resources needed to implement, develop and maintain the resulting solution over time. This approach is therefore reserved for large groups that have made dialogue a strategic area of their development.
At dydu, we have another solution to avoid this “lock-in” effect with a single publisher and technology: the Alliance for Open Chatbot, an initiative presented at the Chatbot Summit by our CMO and treasurer of the Alliance, Thomas Dufermont. The Alliance’s objective is to standardise chatbot communication interfaces so that bots can dialogue with each other right from their launch. A company can thus work with several chatbot publishers, provided they are all members of the Alliance, and ensure that all its bots can communicate easily with one another. The Alliance’s second objective is to develop an open-source single point of entry, which is precisely the purpose of trend n°4.
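What “standardised communication interfaces” means in practice is that every compliant bot emits and accepts the same message envelope. The fields below are purely illustrative, not the Alliance’s actual specification:

```python
# Purely illustrative message envelope -- NOT the Alliance's real schema.
REQUIRED_FIELDS = {"bot_id", "session_id", "type", "payload"}

def is_valid_message(message: dict) -> bool:
    """Check that a message carries the fields any compliant bot can parse."""
    return REQUIRED_FIELDS.issubset(message)

msg = {
    "bot_id": "sales-bot",
    "session_id": "abc-123",
    "type": "text",
    "payload": {"text": "Hello!"},
}
print(is_valid_message(msg))
# prints: True
```

Once every publisher agrees on such an envelope, any bot can be swapped out or combined with another without rewriting the integration, which is what prevents the lock-in effect.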
A single point of entry: the meta bot
Over the past few years, companies have implemented bots left, right and centre: one for sales, another for after-sales service, a third to generate leads… This makes it difficult for the user to know which bot to turn to when looking for a specific piece of information. It is therefore essential for the company to offer one point of entry for the user, whatever the subject to be discussed. This was one of the main trends at the Chatbot Summit, and was also covered by the Alliance on stage.
In addition to enabling bots built on different technologies to communicate, the Alliance’s second objective is to develop an open-source meta bot that companies can adopt and use as a single entry point for their users. The meta bot orchestrates all the different bots in order to retrieve the most relevant information for the user, according to their request.
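The orchestration described above can be sketched very simply: the meta bot scores each specialist bot against the user’s request and forwards the request to the best match. The bot names, vocabularies and keyword-overlap scoring below are hypothetical simplifications; a real meta bot would use intent classification rather than word matching.

```python
import re

# Hypothetical specialist bots and the vocabulary each one handles.
SPECIALISTS = {
    "sales-bot": {"price", "buy", "order", "quote"},
    "support-bot": {"broken", "refund", "return", "complaint"},
    "hr-bot": {"holiday", "payslip", "contract"},
}

def route(query: str) -> str:
    """Pick the bot whose vocabulary best matches the user's request."""
    words = set(re.findall(r"[a-z]+", query.lower()))
    scores = {bot: len(words & vocab) for bot, vocab in SPECIALISTS.items()}
    best = max(scores, key=scores.get)
    # Fall back when no specialist understands the request at all.
    return best if scores[best] > 0 else "fallback-bot"

print(route("I want a refund, my order arrived broken"))
# prints: support-bot
```

The user only ever talks to the meta bot; which specialist actually answers is an implementation detail, which is exactly the single-point-of-entry experience described above.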
Microservices
The last major trend of this summit was microservices. AI applications have developed considerably over the past few years: Natural Language Processing, Natural Language Classification, Language Translation, Language Detection, Sentiment Analysis, Entity Extraction, Personality Insight, Tone Analysis, Text to Speech and Speech to Text, Message Resonance, Text Extraction, etc. It is impossible for a single publisher to master all these components and technologies. Publishers are therefore refocusing on their core business and offering their services as microservices, which can be deployed and nested on demand. A little like Lego bricks, client companies assemble these microservices depending on the cases they wish to address and the problems they wish to solve.
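The Lego-style assembly can be sketched as a pipeline that chains whichever services a company has picked. The three services below are crude local stubs standing in for remote APIs (real language detection or sentiment analysis would be far more sophisticated); only the composition pattern matters here.

```python
# Local stubs standing in for remote AI microservices.
def detect_language(text: str) -> dict:
    lang = "fr" if "bonjour" in text.lower() else "en"
    return {"text": text, "lang": lang}

def analyse_sentiment(doc: dict) -> dict:
    negative = {"bad", "broken", "angry"}
    doc["sentiment"] = "negative" if set(doc["text"].lower().split()) & negative else "positive"
    return doc

def extract_entities(doc: dict) -> dict:
    doc["entities"] = [w for w in doc["text"].split() if w.istitle()]
    return doc

def pipeline(text, *services):
    """Assemble the chosen microservices on demand, passing results along."""
    result = text
    for service in services:
        result = service(result)
    return result

doc = pipeline("My Lemonade claim is broken",
               detect_language, analyse_sentiment, extract_entities)
print(doc["lang"], doc["sentiment"])
# prints: en negative
```

Swapping a brick in or out is just adding or removing an argument to `pipeline`, which is the on-demand assembly the trend describes.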
And to finish… a few figures
Finally, some key figures gleaned at random from the conferences:
- 1 billion dollars for the bot market in 2024 (Transparency Market Research)
- 70% of users who have a bad experience with a bot will never go back, hence the importance of making a good impression from the very first exchanges
- 73% drop out within 10 minutes if they do not find their answer
- 100k bots created in one year on Messenger (whereas “only” 50k apps were put online on the AppStore in the first year following its launch)
And for voice technology:
- 70M homes will have a voice assistant by 2020
- 70% of users are uncomfortable with voice search in public
- 82% of interfaces in 2020 will be based on speech recognition
Any questions or comments? Don’t hesitate to get in touch via our contact form, available here.