Building Advanced AI Workflows with LLMs
Course Description
As organisations move beyond experimentation with AI, the ability to build custom workflows and integrate large language models into existing systems becomes a critical technical capability. This course offers an in-depth, hands-on exploration of modern language models and provides participants with the skills required to design, implement, and deploy AI-powered applications.
The module begins with a technical deep dive into the internal architecture of large language models, including transformers, tokenisation, and embeddings. Participants will gain a clear understanding of how proprietary and open-source models differ, and how these differences affect performance, cost, and deployment choices. Building on this foundation, the course focuses on practical integration, showing how to call models through APIs such as those provided by OpenAI, Anthropic, and open-source alternatives.
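For illustration, a minimal sketch of such an API call using the official openai Python package is shown below; the model name and prompt are placeholders rather than part of the course material, and the equivalent call with Anthropic's client or an open-source endpoint follows the same pattern.

```python
# Minimal sketch of calling a hosted LLM from Python.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# the model name and prompt text are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice; any chat model works
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarise this email in one sentence: ..."},
    ],
)
print(response.choices[0].message.content)
```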
Participants will learn how to construct robust data-processing pipelines for real-world inputs, including emails, documents, and OCR-based content. The course also introduces semantic clustering using embeddings, enabling more advanced analysis and organisation of unstructured data. To complete the end-to-end workflow, participants will design a simple yet functional web interface using Gradio, allowing users to interact with AI-generated insights.
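As an indication of the level of the hands-on work, the sketch below parses an .eml file with Python's standard email module and extracts text from a scanned page with Tesseract OCR; the file paths and package choices (pytesseract, Pillow) are illustrative assumptions, not fixed course requirements.

```python
# Minimal ingestion sketch: email parsing + OCR.
# Assumes `pytesseract`, `Pillow`, and a local Tesseract install;
# the file paths below are placeholders.
from email import message_from_bytes, policy
from pathlib import Path

import pytesseract
from PIL import Image


def parse_email(raw_path: str) -> str:
    """Return the plain-text body of an .eml file."""
    msg = message_from_bytes(Path(raw_path).read_bytes(), policy=policy.default)
    body = msg.get_body(preferencelist=("plain",))
    return body.get_content() if body else ""


def ocr_document(image_path: str) -> str:
    """Extract text from a scanned page with Tesseract OCR."""
    return pytesseract.image_to_string(Image.open(image_path))


documents = [parse_email("inbox/request.eml"), ocr_document("scans/invoice.png")]
```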
Throughout the course, emphasis is placed on reusable code, best practices, and practical deployment considerations. By the end of the module, participants will have built a complete AI-driven application, developed a reusable Python codebase, and acquired the technical confidence needed to integrate language models into production-ready workflows.
Course Content
Day | Content |
1 | Deep architecture overview of LLMs: Transformers, tokenisation, embeddings. Proprietary vs. open-source models |
2 | Calling LLMs through APIs (Python). Building data ingestion pipelines, email parsing, document OCR |
3 | Implementing semantic clustering (see the sketch below). Designing a web interface (Gradio) to visualise results |
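The semantic clustering exercise of Day 3 can be sketched as follows, assuming the sentence-transformers and scikit-learn packages; the embedding model, sample texts, and cluster count are illustrative only.

```python
# Minimal semantic clustering sketch: embed texts, then group similar vectors.
# Assumes `sentence-transformers` and `scikit-learn`; model name, sample
# texts, and cluster count are illustrative.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

texts = [
    "Invoice for office supplies",
    "Reminder: project kickoff meeting",
    "Payment overdue on a recent order",
    "Agenda for next week's sprint review",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(texts)                      # one dense vector per text
labels = KMeans(n_clusters=2, random_state=0).fit_predict(embeddings)

for text, label in zip(texts, labels):
    print(label, text)
```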
Learning Outcomes
- Understand the internal mechanics of LLMs and embeddings at a technical level
- Use APIs (OpenAI, Anthropic, open-source LLMs) to integrate AI into workflows
- Leverage the HuggingFace ecosystem for model selection and deployment
- Build data-processing pipelines for emails, documents, OCR
- Create a functional user interface using Gradio
Practical Work
- Calling LLMs through APIs (OpenAI, Anthropic, HuggingFace)
- Building data ingestion pipelines (email parsing, document OCR)
- Implementing semantic clustering using embeddings
- Designing a web interface with Gradio (a minimal sketch follows this list)
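The sketch below shows the shape of the Gradio exercise; the summarise function is a hypothetical placeholder for the LLM-backed pipeline built during the course.

```python
# Minimal Gradio sketch. Assumes the `gradio` package; summarise() is a
# hypothetical placeholder for the course's LLM-backed pipeline.
import gradio as gr


def summarise(text: str) -> str:
    # Placeholder logic: in the workshop this would call the LLM pipeline.
    return f"Received {len(text.split())} words."


demo = gr.Interface(
    fn=summarise,
    inputs=gr.Textbox(lines=8, label="Paste an email or document"),
    outputs=gr.Textbox(label="Insight"),
    title="Document insight demo",
)

if __name__ == "__main__":
    demo.launch()
```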
Deliverables
- A full end-to-end workflow that ingests text/image/sound and presents insights through a Gradio interface
- A reusable Python codebase with API integrations
- Technical guidelines and best practices for deploying AI tools
Target Audience
Participants with basic programming skills: junior developers, data analysts, technical project managers, and IT staff seeking hands-on experience with AI integration.
Prerequisite: basic Python knowledge.
Trainer
Yann-Aël Le Borgne is a machine learning and AI expert working as an independent consultant, instructor, and AI engineer with the AI Coordination Team at Université Libre de Bruxelles (ULB). He brings over 15 years of experience in applied machine learning research, with a strong focus on real-world applications in environmental monitoring, fraud detection, genomics, and mobility, and helps public organisations translate advanced AI methods into practical and responsible solutions.
Price
Thanks to the support of the European Commission and Innoviris in the framework of the EDIH sustAIn.brussels, SMEs and midcaps receive this training free of charge (0€) in the context of de minimis aid. Large companies and participants without a company pay 3,915.09€ per participant.
Practical Information
Language: English (bilingual FR/EN exchanges welcome)
Location: BeCentral, Cantersteen 12, 1000 Brussels.
Format: In person, interactive, hands-on.
Participants: Maximum 18.
Duration: 12 hours (over 3 days)
Questions
Yavuz Sarikaya - Programme Manager