AIRoV Keynotes
At AIRoV 2025, we are proud to feature three distinguished keynote speakers who bring together cutting-edge technical insights and legal perspectives on AI and robotics. Please find below more detailed information on the respective keynotes and speakers.
Univ.-Prof. Dr. Nikolaus Forgó - Tue, 8 July 2025, 9:30
Europe as Standard Setter? Regulatory Approaches to Data, Cloud, AI and all the other Glitter
Prof. Forgó will give a critical overview of how Europe has been trying to regulate computers since 1970, with a particular focus on AI regulation.
Nikolaus Forgó studied law in Vienna and Paris from 1986 to 1990 and then worked as a university assistant at the Faculty of Law of the University of Vienna. In 1997, he received his doctorate in law with a dissertation on legal theory. Since October 1998, he has headed the university course for information and media law at the University of Vienna, which still exists today. From 2000 to 2017, he was Professor of Legal Informatics and IT Law at the Faculty of Law of Leibniz Universität Hannover, where he headed the Institute for Legal Informatics for ten years and also served as Data Protection Officer and CIO. Since October 2017, he has been Professor of Technology and Intellectual Property Law at the University of Vienna and Director of the Department of Innovation and Digitalisation in Law at the same university. He is also an honorary expert member of the Austrian Data Protection Council and the Austrian AI Advisory Board.
Univ.-Prof.in Dr.in Martina Seidl - Tue, 8 July 2025, 11:00
Reasoning with Quantified Boolean Formulas
SAT, the decision problem of propositional logic, is the prototypical NP-complete problem and is therefore considered hard. Despite this hardness, SAT is applied very successfully in many practical domains because very powerful reasoning techniques are available. There are, however, reasoning problems that cannot be efficiently encoded in SAT. For such problems, formalisms with decision problems beyond NP are necessary. One such formalism is quantified Boolean formulas (QBFs), the extension of propositional logic with existential and universal quantifiers over the Boolean variables. The QBF decision problem is PSPACE-complete, making QBF well suited for encoding and solving many problems from formal verification, synthesis, and artificial intelligence. In this talk, we review the state of the art of QBF technology and show how to perform automated reasoning with QBFs.
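To make the QBF semantics concrete, here is a minimal illustrative sketch (not taken from the talk, and not how practical QBF solvers work): it decides a small prenex QBF by naively expanding the quantifier prefix. The formula, variable names, and the `evaluate_qbf` helper are hypothetical examples chosen purely for illustration.

```python
from typing import Callable, Dict, List, Optional, Tuple

# A prenex QBF: a quantifier prefix over Boolean variables plus a
# propositional matrix, given here as a Python predicate over assignments.
Prefix = List[Tuple[str, str]]          # e.g. [("forall", "x"), ("exists", "y")]
Matrix = Callable[[Dict[str, bool]], bool]

def evaluate_qbf(prefix: Prefix, matrix: Matrix,
                 assignment: Optional[Dict[str, bool]] = None) -> bool:
    """Decide a prenex QBF by expanding the quantifier prefix left to right."""
    assignment = assignment or {}
    if not prefix:                       # no quantifiers left: evaluate the matrix
        return matrix(assignment)
    quantifier, var = prefix[0]
    branches = (
        evaluate_qbf(prefix[1:], matrix, {**assignment, var: value})
        for value in (False, True)
    )
    # "forall" requires both branches to hold, "exists" requires at least one.
    return all(branches) if quantifier == "forall" else any(branches)

# Example: forall x exists y . (x XOR y) is true, since y can always mirror x.
print(evaluate_qbf(
    [("forall", "x"), ("exists", "y")],
    lambda a: a["x"] != a["y"],
))  # -> True
```

This expansion is exponential in the number of quantified variables and only illustrates the semantics; state-of-the-art QBF solvers instead rely on techniques such as clause and cube learning or expansion-based reasoning, which the talk surveys.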
Martina Seidl is head of the Institute for Symbolic Artificial Intelligence at the Johannes Kepler University (JKU) in Linz, Austria. She obtained her PhD from TU Wien (Vienna University of Technology) and her habilitation in computer science from JKU. Her research focuses on symbolic reasoning techniques, with special emphasis on quantified Boolean formulas and applications in formal verification and symbolic artificial intelligence. She is a director of the Cluster of Excellence Bilateral AI.
Univ.-Prof.in Dr.in Yufang Hou - Wed, 9 July 2025, 14:00
Automated Reasoning for Scientific Knowledge Synthesis and Claim Verification
The exponential growth of scholarly literature poses a substantial challenge for researchers seeking to stay current with the latest findings and synthesize knowledge effectively. This challenge is further exacerbated by the proliferation of misinformation and the increasing complexity of scientific data. In this talk, I will begin by presenting an overview of our efforts in processing scientific documents to support knowledge discovery, synthesis, and effective communication. I will then delve into our recent work on identifying and reconstructing fallacies in misrepresented scientific findings, as well as our approaches for supporting experts in generating forest plots for biomedical systematic reviews. Finally, I will highlight several open research challenges in modeling and reasoning over scholarly documents.
Yufang Hou is a university professor at IT:U – Interdisciplinary Transformation University Austria. At IT:U, she leads the NLP group, with a strong focus on large language model (LLM) governance (particularly content veracity), computational argumentation, fact-checking, knowledge and reasoning, and human-centred multimodal NLP applications in education, science, and health.