AI in Administration

What are the values on which the introduction and use of AI in Labour and Social Administration should be based?

Published on 09 Jun 2022

In the Network on AI in Labour and Social Administration, self-committing guidelines for the introduction and use of AI applications in practice are developed in a bottom-up process. To ensure these guidelines have a solid foundation, the member organisations of the network have agreed on a shared system of values.

The potential of AI applications for administration is enormous. Processing times can be shortened, staff members can be supported and processes made more efficient. If anything, the pandemic has shown how important modern, effective administration is. At the same time, agencies involved in Labour and Social Administration have a special responsibility when it comes to using AI. They process very sensitive data and their services and decisions have a direct effect on members of the public who are often facing especially challenging circumstances in their lives. To fulfil this responsibility, the members of the Network on AI in Labour and Social Administration have jointly agreed on the fundamental rights, values and principles for the use of AI and developed a corresponding system of values for this purpose.

Human-centred approach and benefits for the common good

The use of AI should be human-centred and provide benefits for the common good. All stakeholders and their respective use requirements, needs and values must be incorporated in the development and implementation of AI systems. Any social consequences of the use of AI and any conceivable implications for basic societal values such as democracy and rule of law must be considered early on and possible risks assessed in a risk class analysis.

Fairness and non-discrimination

AI-aided decisions must be fair and non-discriminatory. There is a risk that AI systems will replicate and reinforce existing inequalities from the analogue world. At the same time, in handling the data collected from previous administrative practice, the process of introducing AI can help uncover existing points of discrimination and find solutions for them. Ensuring that AI systems are non-discriminatory is essential, because the results produced by these systems are typically the basis for a number of decisions made by authorities.

Clarity and transparency

AI models should be as transparent as possible and designed so that the suggestions they propose can be traced and explained. This is the only way other values such as a human-centred approach, fairness or non-discrimination can be effectively implemented: it must be possible to identify when data records have been skewed or the AI application has used discriminatory parameters. Transparency and clarity are also the basis for enabling humans to check and correct the AI system. In addition, clarity and transparency mean that members of the public are always made aware when they are dealing with an AI system or when an AI system has been involved in a decision-making process (even if just in the preliminary stages).

Privacy and protection of personal rights

There is always a special risk when AI systems process personal data, because this processing may produce assessments on which decisions are based, sometimes with far-reaching consequences for the data subject. Agencies involved in labour and social administration process the personal data of members of the public, and this data is often especially sensitive (e.g. data concerning a person's health, education and employment history, or information on their personal, family, social and economic situation). In using AI, these agencies must therefore pay special attention to the protection of privacy and to individuals' control over their own personal data. They must also comply with the data protection regulations that guarantee these rights.

Safety, security and robustness

An AI system must be secure and robust. There must therefore be sufficient protection against misuse, attacks and security breaches (e.g. against hacking), as well as adequate contingency plans for security incidents that do occur. The safety of the people interacting with the system must also be ensured. It must always be possible to reproduce the results of an AI system correctly and reliably, and to accurately evaluate the circumstances of a case.

Responsibility and ability to intervene

It must be possible to adapt and switch off AI systems during use. Spheres of responsibility in the planning, development and use of AI must also be clearly defined and assigned, so that those responsible are appointed and aware of their special roles at all times. It is especially important, not least for a human-centred approach to AI, to ensure that final decisions are always made by a human being. From the perspective of members of the public, the ability to intervene means that their ability to participate in a formal complaint process (appeals and claims) must not be hampered by the use of AI. The necessary knowledge must be spread as widely as possible throughout the agencies involved in labour and social administration, so that staff members are able to interact with the AI systems as informed users and can identify and report any errors that occur.

Ecological sustainability and protection of resources

The more often AI is used and the greater the computing workload of individual AI applications, the more important it becomes to consider sustainability, protection of resources and energy efficiency. Public administration agencies bear a special responsibility, particularly in the planning, development and procurement of sustainable AI. They should opt for AI systems that are as sustainable as technically possible. In doing so, government agencies can use their market power in procurement to raise demand for sustainable AI and thereby actively contribute to climate protection.