Will artificial intelligence reconcile brands and users with chatbots?

As generative AI opens up new possibilities for conversational tools, chatbots may be headed for a return to favor.

Interview given by Samir Dilmi, Chief Revenue Officer at Dydu, for Siècle Digital.

Ah… chatbots, those services that promised users they could converse with some form of intelligence, ask it whatever they wanted at any time of any day, and receive a response close to a human’s. What a disappointment they turned out to be for millions of Internet users around the world, who quickly lost interest! Will the emergence of ChatGPT and, more broadly, of large language models (LLMs) bring brands and consumers back to chatbots?

It is 2016, and chatbots are spreading across the internet at full speed. They can be found on many websites, and especially in Facebook’s Messenger application, which has given them a prominent place. After-sales service, recruitment, human resources, fast food, banking, press, universities… At a time when “digital transformation” is hammered home in every meeting, chatbots present themselves as a godsend for showing that a company knows how to innovate.

“Most chatbots have been rather disappointing for end users, and costly for companies,” recalls Ghislain de Pierrefeu, Partner for Artificial Intelligence and Data at Wavestone. Although some already used natural language processing, enabling the agents to understand several intents, they proved limited in their responses. For the brands that persevered, drawn by the promise of relieving their customer relations centers, the price was high: they had to mobilize resources to train their chatbots, supplying them with data of sufficient quality and quantity so the bots could correctly read a user’s request, and understand and use specialized vocabulary… “The players never managed to handle the chain all the way to the end, fetching and integrating personal data. (…)

There was a wave of disaffection. People got fed up and preferred to call an advisor or send an e-mail,” he adds.

It is an observation shared by Samir Dilmi, Chief Revenue Officer at Dydu, a French company that pioneered chatbot development. “This big boom was followed by a big letdown. Many companies jumped on a business challenge without taking care of the product, and without always understanding that it is not a replacement solution but an assistance solution.”

Faced with this waning interest from the general public, companies turned to in-house applications, particularly in IT support, human resources and sales assistance. “This is where the first HR chatbots began to appear,” continues Samir Dilmi. In bank branches, advisors were given access to interfaces providing information on all products, which they could cross-reference with other data such as account status.

The development of these uses drove advances in natural language processing (NLP): “today, we are able to understand a context and a question, and go and find the answer in the right place.” Even though the stakes are lower for in-house use, brands have learned to devote time to training their chatbots. Shortcomings remain, however, such as handling sector-specific vocabulary (banking, energy, healthcare…) or writing with a fluency close to a human’s. These shortcomings could eventually be remedied by integrating LLMs, which would bring additional benefits.
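To picture that retrieval step concretely, here is a minimal sketch, not Dydu’s actual implementation, of matching a user question to the closest entry of a knowledge base with TF-IDF similarity; the knowledge base, threshold and fallback message are invented for illustration.

```python
# Minimal sketch: route a user question to the closest knowledge-base answer.
# Assumes scikit-learn is installed; the knowledge base below is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

knowledge_base = {
    "How do I reset my password?": "Go to Settings > Security and click 'Reset password'.",
    "What are the branch opening hours?": "Branches are open 9am to 5pm, Monday to Friday.",
    "How do I close my account?": "Contact an advisor; closure takes about 10 working days.",
}

questions = list(knowledge_base.keys())
vectorizer = TfidfVectorizer()
question_vectors = vectorizer.fit_transform(questions)

def answer(user_question: str, threshold: float = 0.2) -> str:
    """Return the stored answer whose question is most similar, or escalate."""
    user_vec = vectorizer.transform([user_question])
    scores = cosine_similarity(user_vec, question_vectors)[0]
    best = scores.argmax()
    if scores[best] < threshold:
        return "I'm not sure. Let me put you in touch with an advisor."
    return knowledge_base[questions[best]]

print(answer("I forgot my password, what do I do?"))
```

The threshold is the key design choice here: below it, the bot hands over to a human rather than guessing.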

While the first large language models appeared in 2018, it took another four years for the full extent of their capabilities to become apparent, notably through generative AI. For businesses, OpenAI’s GPT, Meta’s LLaMA and Google’s Bard offer advantages that are both ergonomic, because you can ask them anything you like and get an answer, and linguistic, because it must be admitted that they express themselves in near-perfect French.

An obvious advantage of using an LLM is the time saved in training the chatbot. “The learning curve has been completely overhauled,” says Samir Dilmi, who draws a parallel with the still widely used supervised approach in which a human validates responses. “We can retrieve customer-service archives, study the conversations, and train the bot before it is even launched. We save precious time in setting up the bot and absorbing knowledge.”
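The pre-launch workflow Samir Dilmi describes, drafting answers from archived conversations and having a human validate them before anything goes live, could look roughly like the sketch below; the class names, data and review flow are hypothetical, not Dydu’s product.

```python
# Hypothetical sketch of a human-in-the-loop validation queue: candidate Q/A
# pairs mined from archived conversations are approved or rejected by a
# reviewer before they enter the bot's knowledge base.
from dataclasses import dataclass, field

@dataclass
class Draft:
    question: str
    proposed_answer: str   # e.g. drafted by an LLM from the archive
    approved: bool = False

@dataclass
class ValidationQueue:
    drafts: list[Draft] = field(default_factory=list)
    knowledge_base: dict[str, str] = field(default_factory=dict)

    def submit(self, question: str, proposed_answer: str) -> None:
        self.drafts.append(Draft(question, proposed_answer))

    def review(self, draft: Draft, accept: bool) -> None:
        """A human reviewer accepts or rejects each drafted answer."""
        draft.approved = accept
        if accept:
            self.knowledge_base[draft.question] = draft.proposed_answer

# Usage: mine the archive (not shown), queue the drafts, review before launch.
queue = ValidationQueue()
queue.submit("How do I track my parcel?", "Use the tracking link in your confirmation e-mail.")
queue.review(queue.drafts[0], accept=True)
print(queue.knowledge_base)
```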

But be careful not to leave everything in the hands of an LLM, so as to avoid a well-known but still poorly explained phenomenon: hallucinations. These all-purpose AIs can invent answers, and for brands that represents an obvious risk. We should therefore expect LLMs to be used primarily for low-stakes needs, such as comparing products or suggesting cooking recipes, rather than for confirming international insurance coverage or monitoring a medical treatment. “Before, we could be disappointed by a chatbot’s inability to provide an answer; today, we could be disappointed by a wrong answer,” analyzes Ghislain de Pierrefeu.
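One common way to contain that risk, sketched below under assumptions of my own (a generic `call_llm` placeholder rather than any specific vendor API, and a deliberately naive keyword retriever), is to let the model answer only from retrieved reference documents and to fall back to a refusal when nothing relevant is found.

```python
# Sketch of a grounding guardrail: the model may only answer from the supplied
# documents, and a refusal fallback is returned when nothing relevant is found.
# `call_llm` is a placeholder for whatever model endpoint is actually used.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model endpoint here")

def retrieve(question: str, documents: list[str]) -> list[str]:
    """Naive keyword overlap; a real system would use embeddings or TF-IDF."""
    words = set(question.lower().split())
    return [d for d in documents if words & set(d.lower().split())]

def grounded_answer(question: str, documents: list[str]) -> str:
    context = retrieve(question, documents)
    if not context:
        return "I don't have that information; an advisor can help you."
    prompt = (
        "Answer ONLY from the context below. If the answer is not in the "
        "context, say you do not know.\n\n"
        "Context:\n" + "\n".join(context) + f"\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```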

Castorama lent itself to the exercise with the November launch of a chatbot on its site: Hello Casto. The aim is to “compensate online for what an in-store expert offers our customers,” explains Romain Roulleau, Castorama’s Marketing and Digital Director. It operates in a closed environment, within an orchestrator called Athena designed at group level to develop artificial intelligence projects. Athena handles security, the relevance of responses and the moderation of exchanges. Hello Casto is based on the three main LLMs on the market, Bard, GPT and Claude (Anthropic), which are called upon depending on the question and context. Despite encouraging feedback, the company remains cautious. “We make corrections several times a day. (…) We know that hallucinations do exist, and that is why Athena is here,” adds Romain Roulleau.
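The article does not describe how Athena works internally, but an orchestrator that picks a model according to the question and moderates the exchange could, in outline, resemble the following sketch; the routing rules, moderation list and model labels are invented, not a description of Castorama’s system.

```python
# Invented sketch of an orchestrator that routes a question to one of several
# LLM backends and moderates both the request and the reply. The heuristics
# and blocked-term list are illustrative only.
from typing import Callable

BLOCKED_TERMS = {"medical", "legal"}   # illustrative moderation list

def moderate(text: str) -> bool:
    """Return True if the text passes the (very naive) moderation check."""
    return not (set(text.lower().split()) & BLOCKED_TERMS)

def route(question: str) -> str:
    """Pick a backend label based on simple, invented heuristics."""
    if "how to" in question.lower():
        return "model_a"   # e.g. better at step-by-step DIY answers
    if len(question) > 200:
        return "model_b"   # e.g. better at long, contextual questions
    return "model_c"

def orchestrate(question: str, backends: dict[str, Callable[[str], str]]) -> str:
    if not moderate(question):
        return "This request cannot be handled by the assistant."
    reply = backends[route(question)](question)
    return reply if moderate(reply) else "The generated answer was withheld."
```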

As chatbots mature and technological solutions become available, a return to favor is definitely on the cards.