AIRoV 2026 Workshop • 3 hours

AI Certification, Fairness and Regulations

CERT AI focuses on the operationalisation of AI certification: translating legal and ethical requirements into measurable, testable, and statistically valid system properties across the AI system lifecycle.

Organizers

Bernhard Nessler*, Michal Lewandowski*, Simon Schmid*, Gregor Aichinger*, Iana Kazeeva*, Rania Wazir#

* Software Competence Center Hagenberg – SCCH
# leiwand.ai

Workshop Description

Artificial intelligence (AI) systems are increasingly deployed in high-impact domains, including healthcare, public administration, finance, mobility, and industrial automation. With the entry into force of the EU AI Act and related regulatory frameworks, the focus has shifted from purely technical performance to the trustworthiness, compliance, and certifiability of AI systems.

Since its introduction at AIRoV in 2024, the workshop on AI Certification, Fairness and Regulations has evolved into a recurring forum for interdisciplinary exchange between technical researchers, legal scholars, and practitioners in conformity assessment. The 2026 edition builds on this foundation and shifts the focus to the operationalisation of certification: how can legal and ethical requirements be translated into measurable, testable, and statistically valid system properties?

A central challenge lies in the notion of functional trustworthiness: the systematic alignment between an AI system’s intended purpose, its operational context, and verifiable performance and risk criteria. Beyond static evaluation, certification increasingly requires lifecycle-oriented methodologies that account for robustness, reproducibility, and statistical validity over time.

Submissions are encouraged across a broad range of topics, including but not limited to:

  • Certification methodologies for ML and AI systems
  • Functional trustworthiness and application domain definition
  • Statistical testing, sequential testing, and online hypothesis testing
  • Multiple testing control and lifecycle risk management
  • Robustness, adversarial resilience, and safety evaluation
  • Bias, fairness, and discrimination assessment
  • Explainability and transparency verification
  • Privacy-preserving AI and differential privacy
  • Standardisation and harmonised standards under the EU AI Act
  • Conformity assessment procedures and auditability
  • Benchmarking, reproducibility, and evaluation protocols
  • Continuous monitoring and post-market surveillance

Program (Tentative)

Time    | Item                                                                           | Speaker
--------|--------------------------------------------------------------------------------|-----------
10 min  | Opening Remarks: overview of certification challenges and workshop objectives   | Organizers
30 min  | Invited Talk (20 min talk + 10 min Q&A)                                        | TBA
50 min  | Block I: Selected oral presentations                                           | TBA
30 min  | Coffee Break (aligned with AIRoV schedule)                                     | —
80 min  | Block II: Selected oral presentations                                          | TBA
10 min  | Closing Remarks and Outlook                                                    | Organizers