Anyone using AI-based applications as an employee or a consumer should be able to do so in the knowledge that they are reliable and safe. How to achieve this is the subject of a new collaborative project by the AI Observatory of the Policy Lab Digital, Work & Society under the auspices of the German Informatics Society (GI).
What kinds of testing and auditing procedures are necessary to ensure the non-discriminatory use of artificial intelligence? What legal and technical requirements must be met in this respect? And how can transparency, comprehensibility, fairness, responsibility, reliability and data protection be guaranteed? Starting in May 2020, Fraunhofer IESE (the Fraunhofer Institute for Experimental Software Engineering), the Algorithm Accountability Lab at TU Kaiserslautern (Kaiserslautern’s technical university), the Institute for Legal Informatics at Saarland University and Stiftung Neue Verantwortung (SNV – Foundation for New Responsibility) will research these questions in collaboration with the GI. These affiliated partners are working under the umbrella of a research project entitled "The feasibility of testing and auditing for AI-based systems", which was initiated by the AI Observatory of the Policy Lab. Over a 20-month period, they will evaluate options for developing and implementing control and testing procedures and certifications for AI systems.
Marking the project’s launch, State Secretary Björn Böhning points to a fundamental requirement for the use of AI: "We already know that trust is of the essence – in operations, in companies and in the most diverse work contexts. If we want to make lasting use of the benefits of AI processes and keep expanding this technology, we will also have to consider the issue of acceptance. Employees should always have a clear understanding of which AI solutions to use and when, why and how to use them. They should also be aware that AI systems are subject to high quality standards."
The research questions to be tackled by the affiliated partners will be applied to two concrete scenarios: first, how humans and machines can cooperate in industrial production; second, what role AI systems could play in human resource management, talent management and recruitment. These two examples, selected by the researchers, demonstrate that the basic conditions for the use of AI processes currently raise questions across an extremely wide range of application fields. They also illustrate that a high degree of social responsibility is essential when dealing with AI.
The multidisciplinary team of researchers will analyse the two application scenarios in multiple work steps from different perspectives – legal, technical and political. Once suitable control and testing methods have been identified, recommendations for action will be formulated. These recommendations, addressed to stakeholders and policymakers, will assess how the defined standards might be enshrined in the near future – for example at the institutional level.