AIRoV 2026 Workshop • 3 hours

AI Certification, Fairness and Regulations

The CERT AI workshop focuses on the operationalisation of AI certification: translating legal and ethical requirements into measurable, testable, and statistically valid system properties across the AI system lifecycle.

Organizers

Bernhard Nessler*, Michal Lewandowski*, Simon Schmid*, Gregor Aichinger*, Iana Kazeeva*, Rania Wazir#

* Software Competence Center Hagenberg – SCCH
# leiwand.ai

Workshop Description

Artificial Intelligence systems are increasingly deployed in high-impact domains, including healthcare, public administration, finance, mobility, and industrial automation. With the entry into force of the EU AI Act and related regulatory frameworks, the focus has shifted from purely technical performance to trustworthiness, compliance, and certifiability of AI systems.

Since its introduction at AIRoV in 2024, the workshop on AI Certification, Fairness and Regulations has evolved into a recurring forum for interdisciplinary exchange between technical researchers, legal scholars, and practitioners in conformity assessment. The 2026 edition builds upon this foundation and shifts the focus towards the operationalisation of certification: how can legal and ethical requirements be translated into measurable, testable, and statistically valid system properties?

A central challenge lies in the notion of functional trustworthiness: the systematic alignment between an AI system’s intended purpose, its operational context, and verifiable performance and risk criteria. Beyond static evaluation, certification increasingly requires lifecycle-oriented methodologies that account for robustness, reproducibility, and statistical validity over time.
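As a concrete illustration of what "statistically valid over time" can mean in practice, one lifecycle-oriented evaluation tool is Wald's sequential probability ratio test (SPRT), which decides between a failing and a passing accuracy level while controlling error rates and stopping as early as the evidence allows. The sketch below is purely illustrative and not a method prescribed by the workshop; the thresholds p0, p1, alpha, and beta are hypothetical example values.

```python
import math

def sprt_accuracy(outcomes, p0=0.85, p1=0.95, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for a Bernoulli accuracy rate.

    H0: true accuracy is p0 (system misses the certification bar)
    H1: true accuracy is p1 (system meets the bar)
    alpha / beta bound the false-accept / false-reject probabilities.
    Returns (decision, samples_used), where decision is one of
    'accept_h1', 'accept_h0', or 'undecided'.
    """
    upper = math.log((1 - beta) / alpha)  # accept H1 at or above this bound
    lower = math.log(beta / (1 - alpha))  # accept H0 at or below this bound
    llr = 0.0                             # running log-likelihood ratio
    for n, correct in enumerate(outcomes, start=1):
        # Update the log-likelihood ratio with each observed prediction.
        if correct:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "accept_h1", n
        if llr <= lower:
            return "accept_h0", n
    return "undecided", len(outcomes)
```

With these example parameters, a run of consistently correct predictions crosses the acceptance bound after 27 samples, whereas three consecutive failures already suffice to reject; the test thus adapts its sample size to the strength of the evidence, which is the property that makes sequential designs attractive for continuous monitoring and post-market surveillance.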

Submissions are encouraged across a broad range of topics, including but not limited to:

  • Certification methodologies for ML and AI systems
  • Functional trustworthiness and application domain definition
  • Statistical testing, sequential testing, and online hypothesis testing
  • Multiple testing control and lifecycle risk management
  • Robustness, adversarial resilience, and safety evaluation
  • Bias, fairness, and discrimination assessment
  • Explainability and transparency verification
  • Privacy-preserving AI and differential privacy
  • Standardisation and harmonised standards under the EU AI Act
  • Conformity assessment procedures and auditability
  • Benchmarking, reproducibility, and evaluation protocols
  • Continuous monitoring and post-market surveillance

Workshop Schedule

Online Link (Teams meeting)

Time        | Title                                                                                                              | Speaker
14:00–14:10 | Opening Remarks                                                                                                    | Organizers
14:10–14:30 | Invited Talk: GDPR-compliant Machine Learning and the Use of Personal Data in Large Language Models (15 min + 5 min Q&A) | Gregor Aichinger
14:30–14:50 | D02: Conversational Agents in Multi-User Environments (15 min + 5 min Q&A)                                         | Umut Tanriverdi, Tobias Halmdienst
14:50–15:10 | D03: Safety Driven Hardware and Control Architecture for Automated Surface Vessel Systems (15 min + 5 min Q&A)     | Önder Hamamcioglu, Viktor Komyshan
15:10–15:30 | D05: Explainable Selection of Machine Learning Algorithms in Social Sciences (15 min + 5 min Q&A)                  | Dijana Oreski
15:30–16:00 | Coffee Break
16:00–16:30 | Invited Talk: The Current State of the AI Standardisation Process in CEN/CENELEC (20 min + 10 min Q&A)             | Rania Wazir
16:30–16:50 | D01: Stochastic Application Domain Definition for Functional Trustworthiness Certification of AI Systems (15 min + 5 min Q&A) | Simon Schmid
16:50–17:10 | D04: Anthropomorphic Terminology in Artificial Intelligence (15 min + 5 min Q&A)                                   | Iana Kazeeva
17:10–17:20 | General Discussion                                                                                                 | All
17:20–17:30 | Closing Remarks and Outlook                                                                                        | Organizers