Workshop 5

AI Certification, Fairness and Regulations

The workshop will serve as a platform to discuss topics that currently arise in the integration and use of AI in our society. These range from the technical need to measure or guarantee properties of an AI system, to the question of which properties are desirable in the first place, a question that prompts ethical debates and calls for the development of suitable regulatory measures.

As AI systems are introduced into broader application areas and affect many aspects of people’s lives, potential issues of fairness and discriminatory effects become important. Understanding and addressing these risks requires a multidisciplinary approach: a technical understanding of the data and algorithms involved, a social perspective that identifies stakeholders and social risks, and a legal analysis of the challenges that arise at the technical, ethical, and legal levels. Many fairness problems have their root cause in functional misrepresentations. Furthermore, fairness can mean different things in different contexts of use and from the perspectives of different stakeholders, and therefore requires different operationalisations, each with its own mathematical formulation that can then be tested.

This topic is of interest to researchers in various disciplines, including AI, law, and the social sciences, and is relevant to public administration and certification.

Organizing Team

  • Bernhard Nessler, SCCH / JKU LIT AI Lab

  • Rania Wazir, leiwand.ai

  • Gregor Aichinger, JKU LIT Law Lab / SCCH

  • Robert Ginthör, Know-Center, TU Graz

Schedule (final)

Time          Title
09:00 – 09:10 Welcome and introduction
09:10 – 09:30 Alex Aufreiter, Gregor Aichinger, Bernhard Nessler
Definitions of AI in the AI Act
09:30 – 09:50 Rania Wazir
Fee Fi Fo Fair: Bias detection in AI Systems
09:50 – 10:10 Kajetan Schweighofer
Challenges in the Assessment of Fairness Requirements
10:10 – 10:30 Laura Waltersdorfer, Fajar J. Ekaputra, Tomasz Miksa, Marta Sabou
AuditMAI: Towards AI Auditability Infrastructure for Continuous Auditing
10:30 – 11:00 Coffee Break
11:00 – 11:20 Bernhard Geiger, Roman Kern
Causal Semi-Supervised Learning with Factorized Priors
11:20 – 11:40 Simon Schmid, Bernhard Nessler, Alexander Aufreiter, Barbara Brune,
Lukas Gruber, Kajetan Schweighofer, Xaver-Paul Stadlbauer
Application Domain Definition for Functional Trustworthiness
11:40 – 12:00 Patrick Mederitsch, Michal Lewandowski, Bernhard Nessler
The Turing Game
12:00 – 12:30 Panel Discussion

Paper Submission

We invite contributions in the form of extended abstracts that present the current state of work, discussions, and open problems. Authors of all accepted extended abstracts will be invited to submit a camera-ready version of a full paper by April 30. The workshop accepts submissions in the following forms:

  1. Extended abstract (2–4 pages, AIRoV format)

  2. Full paper (approx. 8 pages, AIRoV format)

Every contribution should be presented as a poster in the joint poster session.

Deadlines

• 15 March 2024: Extended abstract submission deadline

  • 18 March 2024: Notification of acceptance

  • 26 and 27 March 2024: AIRoV Symposium

• 30 April 2024: Full paper submission deadline

Further Reviewers

  • Simon Schmid, SCCH

  • Gregor Aichinger, JKU LIT Law Lab/SCCH

  • Kajetan Schweighofer, JKU Linz

  • Lukas Gruber, JKU Linz

  • Thomas Doms, TrustifAI, TÜV Austria

  • Markus Isack, WU Wien