GPT-5 unveiled: opportunities or challenges for conversational AI


Summary

  • On August 14, 2025, OpenAI unveiled GPT-5, presented as a major advance in generative AI. What does it promise? More sophisticated reasoning, truly native multimodality (text, image, audio), reduced hallucinations, and enhanced energy efficiency. But beyond its technical prowess, GPT-5 is already sparking debate and criticism: its limitations persist, the risks of misinformation remain, and questions of digital sovereignty are intensifying.
  • For conversational AI, this launch marks a turning point: GPT-5 paves the way for more fluid and personalized uses, while raising the question of autonomous agents capable of performing complex tasks or making decisions.

What are the major new features of GPT-5?

With GPT-5, OpenAI is seeking to reach a new level of maturity in language models. Several technical developments are particularly noteworthy.

  • Enhanced reasoning: GPT-5 follows complex instructions more accurately thanks to a hybrid architecture. Several models (fast, deep reasoning, etc.) are connected to a router, which is able to automatically choose the best approach based on the query.
  • Native multimodality: GPT-5 now understands and generates text, images, and audio in a fluid and integrated manner. In previous versions, this capability already existed, but in the form of extensions: the core of the model remained textual, and image or sound modules were added separately. With GPT-5, these modes are merged from the outset, offering a more natural and unified experience: analyzing an image while generating a coherent verbal explanation, for example, or creating an illustrated report from a simple text instruction.

  • Reduced hallucinations: on Vectara’s hallucination benchmark, GPT-5 shows a rate of only 1.4%, a record for an LLM.
  • Configurable personalities: four response styles to choose from, adapted to the context of use.
  • Energy optimization: a notable effort has been made to generate only “useful tokens,” although some users find the resulting tone colder than that of GPT-4.
  • First steps toward agentic capabilities: GPT-5 can already chain tasks, make certain decisions, and execute actions more autonomously, paving the way for future intelligent agents.

For example, the model can:

  • Plan a series of steps to respond to a complex request;
  • Retrieve information from a document or authorized website;
  • Generate a report and send it by email via API integration.

These functions are still limited, but they foreshadow the arrival of truly autonomous agents, capable of acting in a digital environment while complying with rules defined by the company.
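To make the idea concrete, the task chaining described above can be sketched as a minimal agent loop. This is a conceptual illustration only: the `plan`, `retrieve`, and `send_report` functions below are hypothetical stubs, not OpenAI's actual agent API, and a real agent would delegate planning and retrieval to the model and to authorized tools.

```python
# Minimal agent-loop sketch (hypothetical stubs, not OpenAI's actual API).
# It chains the steps from the list above: plan, retrieve, report.

def plan(request: str) -> list[str]:
    # A real agent would ask the LLM to break the request into steps;
    # here the plan is hard-coded for illustration.
    return ["retrieve", "report"]

def retrieve(source: str) -> str:
    # Stub for fetching information from an authorized document or website.
    return f"data from {source}"

def send_report(body: str) -> str:
    # Stub for the email/API integration; returns a status string.
    return "sent"

def run_agent(request: str, source: str) -> str:
    """Execute each planned step in order and return the final status."""
    results: list[str] = []
    for step in plan(request):
        if step == "retrieve":
            results.append(retrieve(source))
        elif step == "report":
            results.append(send_report("\n".join(results)))
    return results[-1]

print(run_agent("Summarize Q3 sales", "sales.pdf"))  # sent
```

In a production setting, the rules the company defines (which sources are authorized, which actions require human approval) would be enforced inside each tool function rather than left to the model.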

What limitations and risks remain?

Today, it is essential to understand the limitations of generative AI, and GPT-5 is no exception. The new version can create text, images, code, and even sound. This power is impressive, but it also carries risks.

For example, GPT-5 can generate misleading content or deepfakes (falsified images or videos that appear real). Disseminated without verification, such content can be a problem for businesses, the media, or even everyday life.

Today, there is no perfect method for detecting all misleading content generated by AI. To regulate its use, several rules have been put in place: in France, the SREN law; in Europe, the Digital Services Act (DSA); and international initiatives such as C2PA, which seek to clearly identify the origin of digital content. These rules help to make platforms and content creators more accountable, but they do not replace the active role of the user.

That is why education and vigilance remain essential. Understanding that content can be manipulated, learning to verify sources, and knowing how to detect the signs of a deepfake are essential reflexes for using GPT-5 safely. In a professional context, these precautions are particularly important for teams that use AI in customer relations or communication.

GPT-5 and the competitiveness of European companies

GPT-5 is a very powerful model, but its use also raises strategic questions for businesses, particularly in Europe.

  • Dependence on American giants: OpenAI, Google, and Anthropic remain the major players in large language models. This means that European companies using GPT-5 may become dependent on these providers for access to the technology and for updates.
  • Local and specialized alternatives: in France, companies such as Mistral AI are developing local models. These models may be more suitable for certain specific uses and for ensuring better control over data. Sometimes, smaller, specialized models are sufficient to meet specific needs, without requiring a giant model such as GPT-5.
  • Sovereignty and regulation: Europe is working to regulate AI with the AI Act, in order to protect users and data. The rules are different in the United States and China, creating a complex environment for European companies. The issue of digital sovereignty is therefore becoming central: it is not just a question of choosing a high-performance model, but also of ensuring control over data and compliance with local standards.

What impact will this have on work and conversational uses?

GPT-5 opens up new possibilities for human-machine interactions, but it does not transform everything overnight. Its role remains that of an augmentation tool rather than a replacement.

On the one hand, it can enrich customer support with more fluid dialogues, automate certain tasks, and further personalize responses. These developments offer real opportunities to improve customer relations and increase efficiency.

But limitations remain. The model continues to produce errors (“hallucinations”) and its logical reasoning is sometimes fragile. This means that human supervision remains essential to avoid inaccurate or inappropriate responses.

In terms of work, GPT-5 does not spell the end of existing jobs. As Mathieu Changeat, co-founder of Dydu, points out:

“LLMs are augmentation tools, not autonomous entities: they require reflection, strategy, critical thinking, etc.”

Finally, this development highlights the importance of training and AI literacy: understanding the model’s limitations, identifying its biases, and learning how to use it safely and effectively.

GPT-5, a breakthrough to be exploited methodically

GPT-5 represents a significant advance for conversational AI. Its enhanced reasoning capabilities, multimodality, and automation potential open up new opportunities to enrich customer relationships and optimize internal processes.

However, it is not a miracle solution. Limitations remain: residual hallucinations, the need for human supervision, and the importance of a clear strategy. Indeed, the AI of tomorrow will not rely solely on giant models such as GPT-5. It will tend toward more specialized, lightweight, and sustainable approaches, driven by small and medium language models capable of combining performance, cost control, and energy efficiency. But today, human expertise remains central to ensuring the reliability of interactions and fully exploiting the potential of AI.

At Dydu, our approach combines technological power with a strategic framework: choosing the right model, personalizing interactions, human supervision, and team training to maximize value. This method ensures that GPT-5 becomes a real lever for the company, without creating excessive dependence on AI.

Discover now how our solutions can transform your customer interactions with GPT-5.

Alexia Mendes Correia
Marketing & Communications Assistant