AI Certification, Fairness and Regulations

Organizers

Bernhard Nessler*, Rania Wazir+, Alexander Aufreiter*

+ leiwand.ai
* Software Competence Center Hagenberg (SCCH)

Workshop Description

The workshop will serve as a platform to discuss the topics that currently arise in the integration and use of AI in our society. These range from the technical need to measure or guarantee properties of an AI system, to the question of which properties are desirable in the first place, which prompts ethical debates and necessitates the development of suitable regulatory measures.

As AI systems are introduced into broader application areas and affect many aspects of people's lives, potential issues of fairness and discriminatory effects become increasingly important. Understanding and addressing these risks requires a multidisciplinary approach: a technical understanding of the systems, a social perspective that identifies stakeholders and societal risks, and an analysis of the legal issues involved. Such an approach must capture potential challenges at the technical, ethical and legal levels, with particular attention to data and algorithms. Many fairness problems have their root cause in functional misrepresentations. Furthermore, fairness can take on different meanings depending on the context of use and the perspective of the stakeholders, which requires different operationalisations with different mathematical formulations that can then be tested.

This topic is of interest to researchers in various disciplines, including AI, law and the social sciences, and is also relevant to public administration and certification.

Workshop Program

Time Presentation Speaker / Authors
13:30-13:50 Invited Talk - The Concept of ‘AI system’ Under the New AI Act: Arguing for a Three-Factor Approach C. Wendehorst, B. Nessler, A. Aufreiter, G. Aichinger
13:50-14:05 Towards a Framework for Supporting the Ethical and Regulatory Certification of AI Systems F. Kovac, S. Neumaier, T. Pahi, T. Priebe, R. Rodrigues, D. Christodoulou, M. Cordy, S. Kubler, A. Kordia, G. Pitsiladis, J. Soldatos, P. Zervoudakis
14:05-14:20 Safe and Certifiable AI Systems: Concepts, Challenges, and Lessons Learned (Upcoming Whitepaper) A. Gruber, B. Brune, B. Nessler
14:20-14:35 The Missing Modality: Open-Source Compliance Under the GPAI Code of Practice A. Aufreiter, G. Aichinger, B. Nessler
14:35-15:00 Discussion about the AI Act and Standardisation
15:00-15:30 Coffee Break
15:30-15:50 Invited Talk - Bias Requirements in the AI Act R. Wazir
15:50-16:05 DP-KAN: Differentially Private Kolmogorov-Arnold Networks N. Kalinin, S. Bombari, H. Zakerinia, C. Lampert
16:05-16:20 Human-Robot Interaction Through a Guided Speech Dialogue System: Leveraging Semantic Analysis with Large Language Models M. Dalkilic, W. Kurschl, J. Schoenboeck, S. Pimminger, G. Zwettler
16:20-16:35 FourMind: Dissecting Communication Styles of LLM-powered S. Bergsmann, M. Lewandowski, B. Nessler
16:35-16:50 Sequential Hypothesis Testing for Model Updates S. Schmid, M. Lewandowski, B. Nessler
16:50-17:00 Closing Remarks

Paper Submission

We expect contributions in the form of extended abstracts that highlight the current state of work, discussions and open problems. Authors of accepted extended abstracts will be invited to submit a camera-ready version of a full paper by 30 July 2025. The workshop accepts submissions in the following forms:

  1. Extended abstract (2-4 pages)
  2. Full paper (approx. 8 pages)

Every contribution should be presented as a poster in the joint poster session. Submissions should use the AIRoV template.

Deadlines

  • 8 May 2025: Extended abstract submission deadline
  • 18 May 2025: Notification of acceptance
  • 7-9 July 2025: AIRoV Symposium
  • 30 July 2025: Full paper submission deadline

Reviewers

  • Gregor Aichinger, JKU LIT Law Lab/SCCH
  • Thomas Doms, TrustifAI, TÜV Austria
  • Lukas Gruber, JKU Linz
  • Markus Isack, WU Wien
  • Simon Schmid, SCCH
  • Kajetan Schweighofer, JKU Linz