From Data to Prediction

Course Description 

From Data to Prediction is a practical introduction to machine learning and AI designed specifically for non-technical professionals who need to evaluate, approve, purchase, or oversee AI solutions. Rather than teaching programming, the course builds critical thinking skills and decision frameworks that help participants judge whether an AI system is reliable, fair, and genuinely useful.

Over three half-days, participants learn to move beyond marketing claims and “AI hype” by understanding core concepts, interpreting model results, and identifying common pitfalls such as bias, overfitting, and misleading performance metrics. Through guided exercises, case studies, and structured evaluation tools, they will practice asking the right questions to vendors and translating technical outputs into clear recommendations for leadership.

By the end of the course, attendees will be able to assess ML/AI proposals confidently, communicate risks to stakeholders, and make informed adoption decisions grounded in evidence rather than promises.

Course Content 

Day 1 — Thinking Tools for Prediction: critical thinking foundations; review of common ML/AI concepts; recognizing AI hype vs. real capabilities

Day 2 — Evaluating ML Solutions: model performance metrics; data adequacy; overfitting and data leakage; mismatch between target and goal; reproducibility and explainability challenges

Day 3 — Fairness, Ethics & Decision: bias sources and fairness definitions; from evaluation to action


Learning Outcomes 
  • Critically evaluate ML/AI vendor claims and model documentation 
  • Identify bias, overfitting, and other methodological red flags in ML/AI solutions 
  • Apply structured evaluation frameworks (grids, checklists) to assess solution reliability and fairness 
  • Distinguish genuine capability from AI hype 
  • Communicate model risk assessments clearly to non-technical stakeholders 
Practical Work 
  • Day 1: thought experiments; evaluation grid walkthrough; red-flags checklist introduction 
  • Day 2: metrics and result-interpretation workshop; vendor questions playbook 
  • Day 3: audit case study; evaluation memo drafting 
Deliverables 
  • ML Solution Evaluation Grid: structured assessment grid covering accuracy, fairness, transparency, and relevance 
  • Red Flags Checklist: reusable tool for evaluating ML/AI proposals 
  • Model Evaluation Memo: one-page non-technical recommendation for senior leadership 
Target Audience

Non-technical professionals from public or private organisations, including managers, procurement officers, project coordinators, innovation officers, and anyone else involved in evaluating, purchasing, or overseeing ML/AI solutions. 

Prerequisite 

None. 

Trainer

Natalia Garcia Colin is an experienced research scientist and technical leader specializing in AI, ML, and mathematical modelling, with proven success managing international projects and communicating complex concepts to diverse audiences in academia and industry. Throughout her career, she has taught undergraduate and master's courses across engineering, mathematics, economics, data science, and financial engineering. Her work bridges technical innovation with education while fostering global collaborative networks.


Price 

Thanks to the support of the European Commission and Innoviris in the framework of the EDIH sustAIn.brussels, SMEs and midcaps receive this training free of charge (€0), in the context of de minimis aid. Large companies and participants without a company pay €3,067 per participant.

Practical Information

Language: English (bilingual FR/EN exchanges welcome) 

Location: BeCentral, Cantersteen 12, 1000 Brussels. 

Format: In-person, interactive, and hands-on. 

Participants: Max 18 participants. 

Questions: 

Yavuz Sarikaya - Programme Manager 

 yavuz.sarikaya@ulb.be