

3 Questions for Daniel Krupka (GI): explainable AI applications through the AI-Observatory

An interdisciplinary team is studying possible control and test mechanisms for AI systems in industrial production as well as recruiting.

Daniel Krupka represents the Gesellschaft für Informatik (German Informatics Society, GI) as managing director for public, political, economic and association affairs, maintains its external networks, and manages the office in Berlin. He is responsible, among other things, for GI projects on artificial intelligence, algorithm regulation, digital education and sovereignty, data literacy and data science.

1. What is your research project about?

In the project "Feasibility of Testing and Auditing of AI-based Systems", an interdisciplinary team of (socio-)computer scientists, software engineers, and legal and political scientists investigates the question of what meaningful control and testing procedures for AI systems could look like, based on two concrete application areas. The two areas of application are human-machine cooperation in industrial production, and AI systems in personnel and talent management and recruiting.

The study examines and analyses the legal and socio-technical requirements for AI systems, the applicable and currently emerging standards, norms and guidelines, as well as successful testing, control and certification practices. The project's results will be accompanied by concrete recommendations for the legal and technical design of testing and auditing procedures.

The project is implemented by the Gesellschaft für Informatik e.V. together with the TU Kaiserslautern, Saarland University, Fraunhofer IESE and the Stiftung Neue Verantwortung e.V.

2. Which questions does the project want to answer?

Among others:

Which measures are suitable to strengthen trust in AI systems?

Which legal areas need to be considered when regulating AI systems in the decision-making process and production environment?

Which minimum technical standards have to be considered in the planning and implementation of AI systems in order to enable effective and efficient testing and auditing?

Which organizational framework conditions are necessary for reliable testing and auditing of AI systems?

Which recommendations for action can be made within the framework of the AI strategy of the Federal Government to ensure the embedding and control of algorithmic decision-making systems in accordance with the rule of law?

Which criteria should be applied by the public sector when developing, acquiring and using AI systems?

Should AI systems be regulated on a product-related basis, or is a horizontal approach appropriate?

3. Which AI innovation would you personally wish for most?

AI systems are not the only ones affected by built-in discrimination and unequal treatment. Analog, human-made processes and algorithms often contain discriminatory factors as well. An AI that helps analyze such analog processes and uncover hidden discrimination would be a great opportunity.

Published on 03/19/2020 on the topic: AI Observatory
