
AI in Public Administration

Hype, Risk or Game-changer?

Published on 22/03/2024

Artificial intelligence (AI) is a hot topic these days, especially since the launch of the AI chatbot ChatGPT at the end of 2022. The hype around it is fuelling hopes that machines will be able to perform routine tasks and complex research in the future. The topic of AI is nothing new in administration, but a lot of uncertainty still remains. What exactly do we mean when we talk about artificial intelligence? Which applications are suitable for public authorities? How can they change the way we work? And what do public administration bodies need to do to truly harness the potential of AI?

How AI is making its way into administration 

The breakthrough of AI happened seemingly overnight. The release of ChatGPT triggered a real boom that extended far beyond the borders of the IT world. Even though developments in the field of artificial intelligence are progressing rapidly, the concept of AI is nothing new. In fact, scientists have been researching it since the late 1950s. And even the public sector discovered AI several years ago. Strategies regarding AI in administration now exist at both federal and state level. Many AI applications are already being developed and tested as prototypes in public authorities. But before we evaluate the possibilities offered by AI in administration, let's first take a brief look at what this megatrend is all about.

The third wave of artificial intelligence 

The ability to learn, plan and make rational decisions no longer seems to be reserved for humans alone. The current generation of AI applications is already part of the third wave. This wave is characterised by the ability of programs not only to analyse data and recognise patterns, but also to learn and adapt decision-making processes themselves. The systems can automate tasks and imitate human abilities such as speech recognition, image analysis or even creative thinking.

Different types of artificial intelligence 

Terms such as machine learning, neural networks or natural language processing are often used in connection with AI. They are often used interchangeably, but there are different technologies behind them. These different approaches are often combined to create powerful, versatile AI systems.


Machine learning 

This term refers to the learning process of an AI system, which is based on data analysis. AI can use data as a basis to recognise patterns and draw conclusions from them. There are different types of machine learning, such as supervised learning, unsupervised learning and reinforcement learning. In deep learning, AI processes and analyses particularly large amounts of data. Unlike traditional machine learning, deep learning models can extract the relevant features from raw data on their own, without manual feature engineering.
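As a rough illustration of the difference between supervised and unsupervised learning, the following minimal Python sketch uses the open-source scikit-learn library on synthetic data. The dataset and model choices are purely illustrative assumptions, not a recommendation for any specific administrative use case.

```python
# Minimal sketch: supervised vs. unsupervised machine learning (scikit-learn).
# The synthetic data and model choices are purely illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

# Synthetic "case records": 500 samples with 8 numeric features and a known label.
X, y = make_classification(n_samples=500, n_features=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Supervised learning: the model learns from labelled examples and is then
# evaluated on data it has never seen.
clf = RandomForestClassifier(random_state=42).fit(X_train, y_train)
print("Accuracy on unseen data:", clf.score(X_test, y_test))

# Unsupervised learning: the model groups the same data without any labels.
clusters = KMeans(n_clusters=2, n_init=10, random_state=42).fit_predict(X)
print("Cluster sizes:", [int((clusters == c).sum()) for c in (0, 1)])
```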


Neural networks 

Deep learning models use neural networks which, in turn, are a special type of machine learning. The mathematical models imitate the structure of the human brain and consist of interconnected artificial neurons. They are able to process and assess information. Neural networks are particularly good at recognising complex patterns in large amounts of data.
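To make the idea of interconnected artificial neurons concrete, here is a minimal forward pass through a tiny neural network written with NumPy. The layer sizes and random weights are arbitrary assumptions chosen only for illustration; real networks learn their weights from training data.

```python
# Minimal sketch of a neural network forward pass (NumPy only).
# Layer sizes and random weights are arbitrary, purely for illustration.
import numpy as np

rng = np.random.default_rng(seed=0)

def relu(x):
    return np.maximum(0, x)           # simple non-linear activation

x = rng.normal(size=4)                # one input with 4 features
W1 = rng.normal(size=(4, 8))          # weights: input layer -> 8 hidden neurons
W2 = rng.normal(size=(8, 2))          # weights: hidden layer -> 2 output neurons

hidden = relu(x @ W1)                 # each hidden neuron combines all inputs
output = hidden @ W2                  # output neurons combine the hidden signals
print("Network output:", output)
```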


Language model 

A language model is based on natural language processing technology. The purpose of a language model is to understand and react to human language. Language models are used in text recognition and translation technologies. The most prominent example of this is Generative Pre-trained Transformer (GPT), which forms the basis for what is probably the most famous chatbot in the world. Due to its mass of parameters and training data, the GPT model is considered a Large Language Model (LLM). The current version, GPT-4, is also characterised by multimodality. This means that, in addition to text, it can process other types of data such as images, videos and audio material. The German start-up Aleph Alpha, known for its AI applications in public administration, uses various multimodal LLMs.
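The snippet below is a minimal sketch of how a language model can be queried from Python, using the open-source Hugging Face transformers library with the small GPT-2 model as a stand-in. It is not the GPT-4 or Aleph Alpha setup mentioned above; the model choice and prompt are assumptions for illustration only.

```python
# Minimal sketch: generating text with a small open language model.
# Uses the Hugging Face "transformers" library and GPT-2 as a stand-in;
# this is NOT the GPT-4 or Aleph Alpha setup described in the article.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "Artificial intelligence in public administration can"
result = generator(prompt, max_new_tokens=30, num_return_sequences=1)
print(result[0]["generated_text"])
```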


Generative AI 

These are AI models that can independently generate new content such as texts, images, audio material or videos. Generative AI does not stand in contrast to machine learning, neural networks and language models; rather, it builds on these technologies to produce human-like output. Examples of generative AI applications include ChatGPT, Aleph Alpha's chatbot Luminous, and image generators such as Midjourney or DALL-E.

Advantages of AI for administration in Germany 

There will be an increased focus on AI in public administrations in coming years. Given the increasing shortage of skilled workers and the fact that around 1.3 million employees are set to retire from the public sector by 2030, modernising administrative bodies and alleviating their workloads is vital. Automation can help optimise work processes and reduce the burden of routine activities involving repetitive tasks and large quantities of data. Employees would save valuable time, which they can then dedicate to more complex tasks. However, AI can also bring added value to these demanding tasks in administration, for example by organising and analysing huge pools of data. Data-based research results are helpful for things such as discussion documents, status reports and decision papers. If the administration makes use of its considerable data resources, it can act more efficiently, effectively and proactively.

Chatbots and other automated communication systems can provide rapid guidance around the clock for frequently asked questions, such as those concerning vehicle registration. Virtual assistants provide support when making appointments or filling out forms and applications. In future, they may be able to guide citizens through complex administrative processes and help them make informed decisions. Beyond the resource-saving automation of processes, the use of AI opens up numerous opportunities to improve citizen services.
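A full virtual assistant is well beyond a few lines of code, but the following Python sketch shows the basic retrieval idea behind a simple FAQ chatbot: match an incoming question against known questions and return the stored answer. The FAQ entries, similarity method and threshold are invented for illustration and are not taken from any real system.

```python
# Minimal sketch of an FAQ chatbot: match a citizen's question against
# known questions using TF-IDF similarity. The FAQ entries are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

faq = {
    "How do I register my vehicle?": "Book an appointment online and bring your ID and proof of insurance.",
    "What are the office opening hours?": "Monday to Friday, 8 am to 4 pm.",
    "How can I apply for a resident parking permit?": "Use the online form; processing takes about one week.",
}

questions = list(faq.keys())
vectorizer = TfidfVectorizer().fit(questions)
question_vectors = vectorizer.transform(questions)

def answer(user_question: str, threshold: float = 0.3) -> str:
    """Return the stored answer for the most similar FAQ question, if any."""
    scores = cosine_similarity(vectorizer.transform([user_question]), question_vectors)[0]
    best = scores.argmax()
    if scores[best] < threshold:
        return "Sorry, I could not find an answer. A member of staff will help you."
    return faq[questions[best]]

print(answer("Where can I register my car?"))
```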

The public sector too has its share of routine tasks, such as recording data, checking whether applications are complete, documenting and preparing minutes. AI can help lighten the burden on administrative staff by taking on such tasks. The keyword here is “automation”. According to Federal CIO Dr. Markus Richter, it has been shown that automated processes often also reduce the error rate. In 2023, the state government of Baden-Württemberg conducted trials of the F13 support system, designed to significantly reduce the time administrative staff spend on textual tasks. Following evaluation, the prototype could soon become a marketable product. F13 is able to summarise texts at various levels of compression, provide support with research, and answer questions regarding uploaded documents. Using the “Vermerkomat” function, employees can instruct F13 to merge documents or texts and even analyse them in line with a specific topic. As an example, Richter cites this command: “Summarise the state of play of the analysis on transport transition.”
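How F13 is built internally is not public, but its core building block, AI-based text summarisation, can be sketched in a few lines with an open-source model. The library, model and sample text below are assumptions for illustration only, not the actual F13 stack used in Baden-Württemberg.

```python
# Minimal sketch of AI-based text summarisation with an open-source model.
# Illustrative stand-in only, not the actual F13 system.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

report = (
    "The analysis on the transport transition covers current commuting patterns, "
    "the planned expansion of regional rail services, funding requirements for "
    "municipal cycling infrastructure and the expected effect on CO2 emissions "
    "until 2030. Several districts have requested additional support for bus routes."
)

summary = summarizer(report, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```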

AI-based text recognition is another example of how artificial intelligence can help reduce the workload in administrative bodies. It filters information from unstructured data – for example from scanned documents – categorises it and converts it into a format that can be directly processed by technical systems. An integrated keyword search makes it easier for employees to find the information they need. AI text recognition can also automate the labour-intensive task of creating minutes or reports.

The benefits of AI in public services go well beyond increased efficiency, however. It can serve as a supportive tool for decision-making in administration by making predictions or identifying trends and patterns on the basis of comprehensive historical training data. Such AI-based forecasting models or decision support systems can increase the degree to which administrative action is evidence-based and therefore make it more targeted. This would benefit virtually every policy area – at every level of administration. The Report by the Committee on Education, Research and Technology Assessment of the German Bundestag from 26 September 2022 summarises it as follows: “At federal level, AI-based forecasting models can strengthen proactive and preventive government action, e.g. in relation to security, supply, environmental protection, transport planning, consumer protection and migration.” 

PLAIN, a platform that enables AI-based data analysis for federal administration, has been in use since 2023. According to the report, forecasting AI could assist in planning school and daycare places or making future tax estimates in state politics. Local authorities, for example, could benefit in infrastructure planning—focusing on energy, pollutant, and cost efficiency, as explained by the committee.
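As a toy illustration of the kind of forecasting model the committee describes, the short Python sketch below fits a linear trend to historical demand figures for daycare places and extrapolates it. All numbers are invented for illustration and carry no real-world meaning; real forecasting models would use richer data and methods.

```python
# Toy sketch of an AI-based forecasting model: fit a linear trend to
# historical demand figures and extrapolate. All numbers are invented
# for illustration and carry no real-world meaning.
import numpy as np
from sklearn.linear_model import LinearRegression

years = np.array([[2019], [2020], [2021], [2022], [2023]])
daycare_places_needed = np.array([4200, 4350, 4500, 4700, 4850])   # invented figures

model = LinearRegression().fit(years, daycare_places_needed)

future_years = np.array([[2024], [2025], [2026]])
forecast = model.predict(future_years)
for year, value in zip(future_years.ravel(), forecast):
    print(f"{year}: approx. {value:.0f} places needed")
```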

What is needed to integrate AI into administration? 

The potential is huge, and several practical applications already exist. However, there is still a lot to do before AI technologies become established in everyday administration. On the one hand, in technical terms, ministries and offices must provide the requisite infrastructure, develop databases and set up AI systems in accordance with strict regulations. On the other hand, individuals must be willing and able to work with the new technologies.


A question of organisation: How AI is making its way into the public sector 

Technological progress requires people to change, too. Administrative employees not only need to be made aware of AI solutions – they also require specific digital skills to apply and use them. Employees need to become accustomed to using AI as part of their regular workflow. According to a study conducted by the Bertelsmann Stiftung in 2023, this requires far more than just IT expertise. For example, its pamphlet “Orientation in the Skills Jungle” lists seven types of skills that need to be honed in order to successfully implement artificial intelligence in public authorities. These include a technical understanding of AI systems, organisational and communication skills, as well as an awareness of social and ethical aspects.


Step one for AI in public administration: Data collection 

In order to put AI into practice, authorities must first collect data of sufficient quality to train the AI. Although Germany’s administrative bodies have masses of data at their disposal, far too much of it has not been digitised yet, and what has been digitised is rarely prepared for structured digital access, so the benefit it offers is limited at best. The simplest solution? OCR (Optical Character Recognition). OCR capture systems recognise scanned text and automatically convert it into machine-readable text. This process, known as information extraction or data extraction, also categorises the information that is read out. This means that, in addition to the text itself, the OCR software also provides context, so it can distinguish what the information refers to.
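A minimal sketch of the OCR step might look like this, using the open-source pytesseract wrapper around the Tesseract engine. The file name is a placeholder, the German language pack is assumed to be installed, and real capture systems add layout analysis and field classification on top.

```python
# Minimal sketch of OCR-based text capture with the open-source Tesseract engine
# (via the pytesseract wrapper). "scanned_form.png" is a placeholder file name;
# production capture systems add layout analysis and field classification on top.
import re

from PIL import Image
import pytesseract

image = Image.open("scanned_form.png")                    # a scanned paper document
text = pytesseract.image_to_string(image, lang="deu")     # assumes German tessdata is installed
print(text)

# Very simple "information extraction": pick out strings that look like dates,
# as a stand-in for the categorisation step described above.
dates = re.findall(r"\b\d{2}\.\d{2}\.\d{4}\b", text)
print("Dates found in the document:", dates)
```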

Step two: Finding data and making it accessible 

Internal data collection is only the first step towards feeding artificial intelligence with training data. AI needs a veritable treasure trove of training data, which usually has to be sourced from several locations. This applies in particular to federal administration, where data on a policy area or topic is distributed across various sources, and the data inventories are often just as unstructured and confusing. Finding information currently requires time-consuming, manual research. Bundesdruckerei GmbH developed a solution on behalf of the Federal Ministry of Finance: as part of the Data Atlas project, a prototype was created to provide an overview of the financial administration's databases. It uses a metadata catalogue to show employees where data on a specific topic can be found, what type of data it is, the legal basis for its collection and who the contact person is.
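The Data Atlas itself is a prototype whose internals are not public, but the kind of metadata record it relies on can be sketched as a simple data structure. The field names and example values below are assumptions derived from the description above, not the actual schema.

```python
# Sketch of a metadata catalogue entry as described for the Data Atlas prototype.
# Field names and values are assumptions derived from the article, not the actual schema.
from dataclasses import dataclass

@dataclass
class DatasetRecord:
    topic: str            # policy area or subject the data covers
    location: str         # which system or authority holds the data
    data_type: str        # e.g. structured tables, scanned documents, statistics
    legal_basis: str      # legal basis for collecting the data
    contact_person: str   # who to ask about access and content

catalogue = [
    DatasetRecord(
        topic="Vehicle taxation",
        location="Customs administration, system XY (placeholder)",
        data_type="Structured tables",
        legal_basis="Placeholder: relevant tax statute",
        contact_person="Unit responsible for vehicle tax (placeholder)",
    ),
]

# Finding data on a topic then becomes a simple lookup instead of manual research.
hits = [record for record in catalogue if "tax" in record.topic.lower()]
print(hits)
```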


Step three: Preparing data for AI 

A solution such as the Data Atlas may make data findable, but that data is rarely in a format that makes it immediately usable for AI systems. Artificial intelligence can only be as accurate as the data you feed it with. If the data is not of high quality, this will be reflected in the outcome: “garbage in, garbage out”. Data must be prepared to ensure adequate quality. Above all, this means checking it for completeness and consistency and cleaning it up, for example by removing duplicates or incorrect entries. Preparing data also means augmenting it, combining it, grouping it and more. This lengthy to-do list shows that the process takes a lot of time and expertise. And especially during a shortage of skilled workers, administrative bodies often lack the personnel to carry out the processing themselves.
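A minimal pandas sketch of the checks mentioned above (completeness, duplicates, inconsistent values) could look like this; the example table and its column names are invented for illustration.

```python
# Minimal sketch of data preparation: completeness checks, duplicate removal
# and simple consistency cleaning with pandas. The example table is invented.
import pandas as pd

raw = pd.DataFrame(
    {
        "case_id": [101, 102, 102, 103, 104],
        "district": ["Mitte", "Nord", "Nord", None, "Sued"],
        "amount_eur": [120.0, 85.5, 85.5, 300.0, -1.0],   # -1.0: implausible value
    }
)

# 1. Completeness: how many values are missing per column?
print(raw.isna().sum())

# 2. Duplicates: drop exact duplicate rows (case 102 appears twice).
clean = raw.drop_duplicates()

# 3. Consistency: filter out implausible values instead of silently keeping them.
clean = clean[clean["amount_eur"] >= 0]

print(clean)
```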

Step four: Using data – how AI is taking over in federal administration 

The same premise applies to the use of data in AI projects: here too, administrative bodies can only act independently if they have solutions of their own. This is precisely why the Platform Analysis and Information System (PLAIN) project has been in place since 2023. With PLAIN, federal administration has a central platform for AI-supported data analysis. Departments can evaluate and visualise huge data sets in individual applications. This should ultimately benefit political decision-making. The platform, which provides the ministries with software, platform and infrastructure as a service (SaaS, PaaS and IaaS), is backed by the Federal Foreign Office’s service provider for IT abroad – Auslands-IT – and Bundesdruckerei GmbH.


KI-KC and BeKI: Enablers for AI in the federal administration 

Before a ministry uses AI, it needs a clear understanding of the areas where AI can provide added value and the specific forms this should take. And even once it has an idea, there are still hurdles to overcome, because developing use cases and testing solutions requires the administration to invest human resources and build up implementation skills. The AI Competence Centre (KI-KC) set up by Bundesdruckerei GmbH for the Federal Ministry of Finance helps with this. It serves as a networking hub for federal administration, helping to identify potential and build up technological expertise. The KI-KC develops user-centric prototypes in close cooperation with the relevant authorities, a process in which trial and error plays a central role. The Advisory Centre for Artificial Intelligence (BeKI) at the Federal Ministry of the Interior and Community aims to create synergies at federal level and ensure that the development of AI expertise is coordinated, for example with a “marketplace of AI opportunities”.

AI in the strategies of the federal government 

The service and support offerings show that a lot is happening right now in terms of artificial intelligence, especially at federal level. The German government committed to this years ago. As part of its AI Strategy, Germany has been investing in the research, development and application of artificial intelligence since 2018. Under the heading “AI in public administration”, the document lists possible areas of application, for example in defence against cyber threats, in security authorities, in civil protection, or in sustainability policy. The AI and Big Data Application Lab, which is based at the Federal Environment Agency, has already been put into operation.


In 2023, the German government decided to enhance its data strategy with the aim of improving the basis of data required for AI systems. Data should be available to AI applications in better quality and on a larger scale. With regard to AI in administration in particular, the approach taken is described as follows: “We are examining whether and to what extent LLMs should be used sensibly in the public sector while ensuring compliance with data protection regulations.” For example, it should become easier to use unstructured data for LLMs and to break down data silos within administration. Privacy-enhancing technologies (PET) could be used to preserve data privacy.

Regulation, data protection and data ethics 

With its reference to data protection and PET, the German government’s data strategy also sheds light on the much-cited elephant in the room. For all its benefits, artificial intelligence has many people worried. Critical voices warn of an AI that is not subject to any control. Democratic values and transparency must always be guaranteed. It is not without reason that Richter advocates for considered use of AI – wherever there is room for discretion, the use of AI is currently ruled out. Without question, there needs to be enough room for innovation to promote the positive development of AI. However, we also need rules that contain threats and protect civil rights. This is particularly important for AI within administration. 

GDPR and artificial intelligence 

Such rules are provided not least by the GDPR – namely when personal data is used. Personal data is any data that could be used to draw conclusions about natural persons. This could be an issue in administration, especially when training AI. Article 6 (1) GDPR regulates when exactly an organisation may process personal data. It seems utopian that the data subject could explicitly consent (point (a) of the paragraph) to providing personal information for AI training purposes. Points (c) and (f) of Article 6 (1) would be more likely to apply. Accordingly, the AI would have to be necessary for the authority to fulfil its legal obligation (c) or to act in line with a legitimate interest (f) that outweighs the interests, fundamental freedoms or fundamental rights of the data subjects. It sounds complicated, and it is – so it is clearly a case for the data protection officers, and possibly for data trustees. As neutral intermediaries between data providers and data users, data trustees can pseudonymise or anonymise anything that points towards specific identities. However, if citizens use artificial intelligence offered by the administration that processes personal data, the classic consent rule applies.

“For example, [when using personal data] it would be important to clearly pseudonymise or anonymise such data before it is used as training data to eliminate the risk that knowledge about a person can be extracted from an AI system via certain channels.” (source: dpa) 

Dr Ulrich Kelber, Federal Commissioner for Data Protection and Freedom of Information
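As a rough illustration of the pseudonymisation step Kelber describes, the Python sketch below replaces names with salted hashes before data would be handed over as training data. The record and salt are invented, and real deployments would rely on a vetted pseudonymisation service rather than a few lines of standard-library code.

```python
# Rough sketch of pseudonymisation before data is used as AI training data:
# replace direct identifiers with salted hashes. Record and salt are invented;
# real deployments would use a vetted pseudonymisation service, not this snippet.
import hashlib

SALT = b"replace-with-a-secret-salt"   # kept separately, e.g. by a data trustee

def pseudonymise(value: str) -> str:
    """Return a stable pseudonym for a direct identifier such as a name."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:16]

record = {"name": "Erika Mustermann", "district": "Mitte", "benefit_type": "housing"}
training_record = {**record, "name": pseudonymise(record["name"])}
print(training_record)
```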

Article 22 GDPR is also relevant with regard to the use of AI. According to this, a natural person has “the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.” What’s special in Article 22 is that data protection almost merges into the related field of data ethics.


The EU AI Act – highly relevant for AI in administrative contexts 

Setting limits on the use of AI is also an important aim of the European Union’s AI Act. With the most comprehensive law of its kind to date, the EU is attempting to regulate artificial intelligence, steer it in a safe direction and promote innovation. Essentially, the regulation assesses AI applications according to their risk of jeopardising the safety and rights of EU citizens. To this end, it establishes four risk levels. The higher the risk, the stricter the obligations that AI applications must fulfil. Systems with “unacceptable risk” are generally prohibited. This includes social scoring, which ties rights to desired behaviour. Real-time biometric facial recognition in public spaces is also prohibited, with a few exceptions. 

AI systems with “high risk” do not pose a threat per se. However, because they are used in sensitive areas such as law enforcement and critical infrastructures, they have to meet strict requirements. AI applications that decide on the approval of state benefits also fall into this category. However, chatbots tend to pose a “limited risk”, while spam filters and AI-supported computer games are considered AI systems with “minimal risk”. In addition, the regulation stipulates that AI-generated or AI-manipulated content such as images, videos, or texts must generally be labelled as such.

AI in public administration: also a question of ethical principles 

The AI Act attempts to curb the dangers posed by AI while still seeking to promote its use. After all, the added value for citizens can be immense – especially if the AI allows administrations to act more efficiently. However, real social benefits will only arise if AI systems take certain values into account. They must not discriminate against anyone, for example because certain groups are underrepresented in the training data. They must be explainable so that it remains clear how results are achieved. Security against cyber attacks and the ability for people to intervene at any time are equally important. 

The Bundesdruckerei Group follows four specific principles in its generative AI projects: 

  • Factual accuracy: AI systems are not allowed to invent or interpret answers.
  • Explainability: There needs to be full transparency around data sources and the legal basis for responses.
  • Data sovereignty: Administrative bodies must retain control over all data used throughout the entire data cycle.
  • Independence: The AI systems must not lead to dependencies, especially not on non-European companies.

AI in public authorities: Moderation to ensure success 

Provided that appropriate values are established, the necessary database is accessible, and employees are adequately trained, the deployment of AI applications should result in a genuine win-win scenario within the administration. If artificial intelligence takes over routine tasks, government employees will have more time for what is most important: citizens. And AI-based data analysis and decision support systems benefit society by promoting targeted policies. However, targeted policies are also required when it comes to planning AI projects in public authorities. This calls for coordinated, overarching cooperation between the ministries and the creation of synergies. With BeKI, KI-KC and PLAIN, Germany’s highest administrative level demonstrates that it has understood this requirement.