Opinion piece by Samir Dilmi, Chief Revenue Officer at Dydu, for journaldunet.com

Generative AI models have transformed our relationship with digital technology, but at what cost? As companies seek to exploit their full potential, several major challenges are emerging: growing energy consumption, costly infrastructure, and unreliable responses. The environmental impact of these technologies is becoming a central concern, especially as AI’s energy consumption could increase four- to ninefold by 2050.
Faced with these challenges, a new generation of models is emerging: Small Language Models (SLMs) and Medium Language Models (MLMs). More compact, more specialized, and less energy-intensive, they offer a more efficient and sustainable alternative that is better suited to the specific needs of businesses and individuals.
Smaller models, bigger impact: the efficiency of SLMs and MLMs
Small and Medium Language Models (SLMs and MLMs) stand out for their ability to respond to user needs in a targeted manner while remaining far lighter than large models. Unlike LLMs, which require massive infrastructure for training and operation, they are designed to be compact while maintaining high performance. An SLM typically has fewer than 30 billion parameters; Microsoft’s Phi-3-mini, for example, has 3.8 billion. In contrast, an LLM such as GPT-4 reportedly rests on a far more massive architecture, with an estimated 1.8 trillion parameters. Thanks to their optimized design, SLMs and MLMs can perform complex tasks, such as text generation and semantic analysis, with significantly fewer resources.
This smaller size also allows for greater customization, giving companies the ability to tailor models to their specific needs. Meditron, for example, built on Meta’s Llama models, assists healthcare professionals in clinical decision-making. Florence-2 specializes in image recognition, while SpreadsheetLLM is designed for data processing in spreadsheets. These specialized models are not only better suited to business needs but also more accessible, allowing organizations to leverage AI without massive investments in expensive infrastructure.
In addition, SLMs and MLMs benefit from increased flexibility thanks to multimodal models capable of processing text, images, and sound simultaneously, which paves the way for richer applications. In medical image analysis, for example, these models can not only process visuals but also generate textual descriptions to support a more comprehensive diagnosis. Models such as GPT-4o, which integrate audio capabilities, enable innovative solutions in areas such as security and advanced voice assistance.
The ecological footprint of AI: towards more sustainable models
One of the great promises of Small and Medium Language Models is their ability to address the environmental concerns surrounding AI. Because of their size and constant need for powerful computing, LLMs have a significant ecological footprint: their training requires not only massive server farms but also colossal amounts of energy. Training a large-scale model such as GPT-3, for example, required approximately 1,287 megawatt-hours (MWh) of electricity, equivalent to the annual consumption of 120 US households. According to forecasts, global demand for AI could lead to the withdrawal of 4.2 to 6.6 billion cubic meters of water by 2027, roughly 4 to 6 times Denmark’s annual water withdrawal.
By adopting smaller models, companies can reduce not only their costs but also their environmental impact. For example, choosing more specialized models for specific tasks minimizes server use and, with it, the energy required to run them. An SLM can even run on a local computer without an internet connection, provided the machine has a capable GPU (Graphics Processing Unit) or equivalent accelerator. Microsoft’s Phi-3, for instance, can run on a MacBook M3, while a massive model such as GPT-4 requires hundreds of GPUs to operate. This approach contributes to a more responsible use of computing resources and fits into a broader trend toward sustainability in the technology sector.
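As a rough, illustrative calculation (not from the article), the parameter counts mentioned earlier translate directly into memory requirements, which is why a 3.8-billion-parameter model fits on a laptop while a model in the trillion-parameter range needs a cluster of GPUs. The sketch below assumes 16-bit (2-byte) weights and counts only the weights themselves; real inference needs additional memory for activations and caches:

```python
def weight_memory_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Approximate memory (GB) needed just to hold model weights.

    Assumes 16-bit (2-byte) weights by default; ignores activations,
    KV cache, and framework overhead, so real usage is higher.
    """
    return n_params * bytes_per_param / 1e9

# Phi-3-mini: 3.8 billion parameters -> fits in a laptop's memory
phi3_mini = weight_memory_gb(3.8e9)    # ~7.6 GB

# A model at the reported 1.8-trillion-parameter scale -> many GPUs
gpt4_scale = weight_memory_gb(1.8e12)  # ~3600 GB

print(f"Phi-3-mini (3.8B params): ~{phi3_mini:.1f} GB of weights")
print(f"1.8T-param model:         ~{gpt4_scale:.0f} GB of weights")
```

Even before accounting for runtime overhead, the three-orders-of-magnitude gap in weight storage alone makes clear why only the smaller models are candidates for local, offline deployment.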
Trends to watch in 2025: between promise and challenge
Artificial intelligence is entering a phase of maturation where efficiency and specialization are taking precedence over the race for ever larger models. Thanks to specialized, compact, and multimodal models, AI is becoming more accessible, powerful, and adaptable, addressing the challenges of personalization, sustainability, and optimization for businesses.
However, its adoption remains hampered by human, technical, environmental, and legal constraints. Despite significant advances in understanding and processing context, AI remains limited by the quality of the data on which it is trained: biased, incomplete, or outdated data sets can lead to errors and compromise its reliability. Moreover, the more complex a context is, or the more specialized the knowledge it requires, the harder it becomes for AI to provide accurate answers. It also still struggles to interpret subtleties such as humor, irony, and culture-specific references.
To date, existing models do not yet allow for a transition to general artificial intelligence. It is therefore essential to move forward with caution and discernment. AI should be seen as a tool for assistance and optimization, rather than a substitute for human intelligence. The future of AI lies in a balanced collaboration between humans and technology, where intelligent and user-friendly tools, tailored to users’ needs, will become powerful levers for supporting innovation and business performance.
1. Source: Deloitte study, October 2024.
2. Source: Sciences et Avenir No. 935, January 2025.
3. Source: “Making AI Less ‘Thirsty’: Uncovering and Addressing the Secret Water Footprint of AI Models,” October 29, 2023.