On September 16th, 2025, our teams hosted an internal conference titled “A Journey Through Generative AI”. The goal was to demystify generative artificial intelligence, explain its core concepts, and share real-world applications.
Led by Roxane Jouseau and Hichame Haichour, AI experts from Novencia, the session brought together teams and enthusiasts to explore topics such as Large Language Models (LLMs): their architecture, how they are trained, and how they can be applied to real business problems.
AI systems are commonly grouped into three levels of capability:
- ANI (Artificial Narrow Intelligence): AI focused on a single, narrow task.
- AGI (Artificial General Intelligence): AI with human-like reasoning ability.
- ASI (Artificial Super Intelligence): AI that surpasses human capabilities (still theoretical).
Large Language Models (LLMs) rely on:
- Machine learning (supervised and self-supervised),
- Embeddings that represent language as vectors,
- The Transformer architecture, based on the attention mechanism to capture context and meaning.
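The attention mechanism at the heart of the Transformer can be sketched in a few lines. This is a minimal illustration, not a production implementation: the token embeddings below are random placeholders, whereas real models learn them during training.

```python
# Minimal sketch of scaled dot-product self-attention,
# the core operation of the Transformer architecture.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Each token's query is compared against every key; the resulting
    # weights mix the value vectors, letting tokens "attend" to context.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)
    return weights @ V, weights

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))            # 4 tokens, 8-dim embeddings
out, w = attention(tokens, tokens, tokens)  # self-attention
print(out.shape)                            # (4, 8)
print(np.allclose(w.sum(axis=1), 1.0))      # weights per token sum to 1
```

Each output row is a context-aware blend of all the input embeddings, which is how the model captures meaning that depends on surrounding words.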
Building a model like GPT-4 or Gemini requires:
- Thousands of high-performance GPUs,
- Massive compute time (weeks or even months),
- Huge financial investments, often in the tens of millions.
Example: GPT-5 and Grok 3 highlight the challenges of scalability, cost, and energy consumption.
New research shows the importance of models being able to admit uncertainty when they don’t know an answer.
Some LLM evaluators now outperform humans in consistency and coherence, making them powerful tools for quality assurance.
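The LLM-as-a-judge pattern can be sketched as follows. `call_llm` is a hypothetical stand-in for any chat-completion API; it is stubbed here so the example runs without network access, and the prompt wording is illustrative.

```python
# Sketch of the LLM-as-a-judge evaluation pattern:
# one model rates another model's answer on a fixed scale.
import re

JUDGE_PROMPT = """You are an impartial evaluator.
Question: {question}
Answer: {answer}
Rate the answer's accuracy from 1 to 5 and reply as "Score: N"."""

def call_llm(prompt: str) -> str:
    # Stub: a real implementation would call an LLM provider here.
    return "Score: 4"

def judge(question: str, answer: str) -> int:
    reply = call_llm(JUDGE_PROMPT.format(question=question, answer=answer))
    match = re.search(r"Score:\s*([1-5])", reply)
    if match is None:
        raise ValueError(f"Unparseable judge reply: {reply!r}")
    return int(match.group(1))

print(judge("What is 2+2?", "4"))  # 4 with the stubbed judge
```

Constraining the judge to a fixed output format, then parsing it strictly, is what makes this kind of evaluation consistent enough for quality assurance.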
During the session, we explored a concrete example: automating JSON file generation for a data quality tool.
Two approaches were tested:
Outcome: faster configuration setup and improved data quality.
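A typical guardrail for this kind of automation is to validate the model's output before handing it to the data quality tool. The sketch below assumes the LLM returns a JSON string; the field names are illustrative, not the actual tool's schema.

```python
# Sketch of validating LLM-generated JSON configuration
# before it reaches a downstream data quality tool.
import json

REQUIRED_FIELDS = {"table", "column", "rule"}

def validate_config(llm_output: str) -> dict:
    config = json.loads(llm_output)  # fails fast on malformed JSON
    missing = REQUIRED_FIELDS - config.keys()
    if missing:
        raise ValueError(f"Missing fields: {sorted(missing)}")
    return config

raw = '{"table": "orders", "column": "amount", "rule": "not_null"}'
print(validate_config(raw)["rule"])  # not_null
```

Because generated output can occasionally be malformed, parsing and schema checks like these are what turn an LLM from a demo into a dependable configuration generator.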
Upcoming trends to watch include:
- Multimodal LLMs (text, image, audio, video),
- Autonomous AI agents able to reason and execute tasks,
- Optimization and quantization to reduce inference costs,
- New approaches to AI safety and reliability.
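To make the quantization trend concrete, here is a minimal sketch of 8-bit weight quantization: weights are mapped to int8 and dequantized at inference time, trading a little precision for roughly 4x less memory than float32. Real systems use more sophisticated schemes (per-channel scales, calibration), so treat this as an illustration only.

```python
# Minimal sketch of symmetric 8-bit weight quantization,
# one way to reduce LLM inference cost.
import numpy as np

def quantize_int8(w: np.ndarray):
    scale = float(np.abs(w).max()) / 127.0 or 1.0  # avoid scale of 0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

weights = np.random.default_rng(1).normal(size=1000).astype(np.float32)
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
print(q.nbytes)  # 1000 bytes, versus 4000 for the float32 original
print(float(np.abs(weights - restored).max()) < scale)  # error stays small
```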
This conference helped our teams build a stronger understanding of Generative AI fundamentals and how they translate into real-world use cases. By combining theory with practice, we aim to spread AI culture across teams and spark new collaborations.
Generative AI is a branch of artificial intelligence that can create new content such as text, images, audio, or code by learning patterns from large datasets.
LLMs are advanced machine learning models trained on massive text datasets. They use transformer architecture to understand context and generate human-like language.
LLMs are tested through benchmarking (fixed datasets) and LLM-as-a-judge, where one AI model evaluates another’s output for quality and accuracy.
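Benchmark-style evaluation can be as simple as comparing model answers against a fixed reference set. The sketch below uses exact-match accuracy; real benchmarks use far larger datasets and varied metrics.

```python
# Sketch of benchmark evaluation: score predictions
# against fixed references by exact match.
def exact_match_accuracy(predictions, references):
    assert len(predictions) == len(references)
    hits = sum(p.strip().lower() == r.strip().lower()
               for p, r in zip(predictions, references))
    return hits / len(references)

preds = ["Paris", "4", "blue"]
refs = ["paris", "4", "green"]
print(exact_match_accuracy(preds, refs))  # 2 of 3 correct
```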
The future lies in multimodal models, autonomous AI agents, and energy-efficient optimization techniques that make AI more scalable and reliable.