Project details
Description
European society increasingly relies on complex digital systems. In this context, we address the problem of working efficiently and reliably with a large and complex set of interrelated technical documents for digital systems, where even human experts can hardly keep an overview or know all the details. This is particularly important in safety-critical domains such as electric vehicles, where such document sets are needed for specifying hardware-software interfaces correctly. Traditional methods of Information Retrieval do not allow for convenient access to the information in such documents, since they require skills in using specific query languages, which, e.g., safety experts may not have.
Recent advances in Generative Artificial Intelligence (GenAI) and, more specifically, Large Language Models (LLMs) suggest that, for the problem at hand, natural-language dialogue will become a viable way of accessing the content of such a document set. At the current state of the art, however, there is the major problem of “hallucinations” by LLMs, i.e., replies containing pieces of information that are not based on factual knowledge but rather stem from more “creative” generations by the language model.
This is a major impediment to trust in such replies and, more generally, in such an approach, especially if it is to be applied in a safety-critical domain such as electric vehicles. Avoiding such hallucinations is key to reducing safety risks with such vehicles in the course of their development.
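To illustrate what avoiding hallucinations can mean technically, the following minimal Python sketch shows retrieval-grounded answering, in which the model is instructed to answer only from passages retrieved from the documentation and to cite them. The toy lexical retrieval and the `llm` callable are hypothetical placeholders for illustration, not the project's actual tooling.

```python
# Minimal sketch of retrieval-grounded answering (illustrative, not the
# project's actual tooling): replies are constrained to retrieved passages
# from the technical documentation, which narrows the room for hallucination.
from dataclasses import dataclass


@dataclass
class Passage:
    doc_id: str  # identifier of the source document
    text: str    # verbatim passage from the documentation


def retrieve(query: str, passages: list[Passage], k: int = 3) -> list[Passage]:
    """Toy lexical retrieval: rank passages by query-term overlap."""
    terms = set(query.lower().split())
    ranked = sorted(
        passages,
        key=lambda p: len(terms & set(p.text.lower().split())),
        reverse=True,
    )
    return ranked[:k]


def answer(query: str, passages: list[Passage], llm) -> str:
    """Ask the LLM to answer ONLY from the retrieved context, with citations."""
    context = "\n".join(f"[{p.doc_id}] {p.text}" for p in retrieve(query, passages))
    prompt = (
        "Answer the question using only the passages below, citing the "
        "document IDs you used. If the passages do not contain the answer, "
        "say so instead of guessing.\n\n"
        f"Passages:\n{context}\n\nQuestion: {query}"
    )
    return llm(prompt)  # `llm`: any hypothetical text-generation callable
```

Grounding every reply in cited passages makes it traceable to its source documents, which is one concrete way to connect the system property of transparency with the data property of correctness discussed below.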
Trust is a complex psychological and sociological concept, and the trustworthiness of a trustee is a major aspect of establishing trust. The significance of trust is not limited to the interpersonal domain; trust also shapes the way people interact with technology. The notion of trust in automation has been transferred from trust relationships between humans: the truster is a human, the trustee an automated system.
Hence, we address ensuring trust by creating an LLM-based approach that is trustworthy. This is in line with the EU Regulation known as the Artificial Intelligence Act (AI Act), which has just been approved by the European Parliament. The AI Act states:
“The purpose of this Regulation is to promote the uptake of human centric and trustworthy artificial intelligence and to ensure a high level of protection of health, safety, fundamental rights, democracy and rule of law and the environment from harmful effects of artificial intelligence systems in the Union while supporting innovation and improving the functioning of the internal market. …”
Since the AI Act is a legal framework, our consortium includes legal experts, who will ensure that our research and development are in line with the AI Act and support its application. To make our approach human-centric, our consortium includes an expert on human-centered design that is diversity- and gender-sensitive.
Our proposed project includes research on a new approach to Requirements Engineering (RE) for specifying the needs in this regard. Our proposed research on trust-related RE will be highly innovative in integrating quality attributes (in the sense of properties) of both systems (in our case, LLM-assisted systems for querying technical documentation) and data (in our case, primarily technical documentation), since properties of both the system (such as transparency) and the data (such as correctness) are relevant. At the current state of the art, RE takes only quality attributes of systems into account, but the importance of data and data quality is ever increasing. Hence, for any non-trivial system based on data, quality attributes of both the system and its data are essential, and our new RE approach will take this into account in the development of trustworthy systems, in particular systems based on LLMs.
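To make this integration more tangible, the following small Python sketch shows one conceivable way a trust-related requirement could reference quality attributes of both the system and its data. The attribute names and the structure are illustrative assumptions, not the project's finished RE model.

```python
# Illustrative sketch (assumed structure): a trust-related requirement that
# links quality attributes of the system with quality attributes of its data.
from dataclasses import dataclass, field
from enum import Enum


class SystemQuality(Enum):
    TRANSPARENCY = "transparency"      # e.g., answers cite their sources
    EXPLAINABILITY = "explainability"


class DataQuality(Enum):
    CORRECTNESS = "correctness"        # e.g., documentation is factually right
    COMPLETENESS = "completeness"


@dataclass
class TrustRequirement:
    """A requirement that spans both system and data quality attributes."""
    req_id: str
    statement: str  # natural-language requirement text
    system_qualities: list[SystemQuality] = field(default_factory=list)
    data_qualities: list[DataQuality] = field(default_factory=list)


# Example instance touching both the system (transparency) and its
# data (correctness), as motivated in the paragraph above.
req = TrustRequirement(
    req_id="TR-01",
    statement=(
        "Every answer must cite the documentation passages it is based on, "
        "and those passages must be correct."
    ),
    system_qualities=[SystemQuality.TRANSPARENCY],
    data_qualities=[DataQuality.CORRECTNESS],
)
```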
Based on concrete requirements for a specific LLM-assisted system, defined according to this new approach, a tool owned by a scientific partner in this consortium (AERIALL, which supports free-to-use open-source models such as Mistral) will be further developed to satisfy these requirements and thus become more trustworthy. Particular emphasis will be on avoiding hallucinations while taking properties of both the system and its data into account.
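As a brief illustration of what running a free-to-use open-source model such as Mistral can look like, the following sketch uses the Hugging Face transformers library. The checkpoint name, prompt, and settings are assumptions for illustration and do not describe AERIALL's actual integration.

```python
# Illustrative sketch: querying an open-weights Mistral model locally via
# the Hugging Face transformers library (assumed setup, not AERIALL's).
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "mistralai/Mistral-7B-Instruct-v0.2"  # assumed checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")  # needs `accelerate`

prompt = "Summarize the boot-time constraints in the HW/SW interface spec."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```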
This approach will be evaluated on real-world documents owned by the industrial partner in this consortium. In due course, it will be compared with the well-known ChatGPT approach from OpenAI, which may be less performant than our proposed approach on custom information (in our case, large and complex hardware-software interface documentation in a safety-critical domain) and, in particular, less trustworthy.
With these innovations, the consortium will make a significant contribution to applying state-of-the-art AI technology to an important real-world problem, including safety-critical applications. Our overall approach is new and, therefore, not yet available in Austria, in the rest of Europe, or abroad. It will seamlessly link the AI Act with a new approach to Requirements Engineering for trustworthy systems and with the use of that approach for LLM-assisted work with a large and complex set of technical documentation. This is particularly important for safety-critical domains.
| Acronym | TrustInLLM |
| --- | --- |
| Status | Ongoing |
| Actual start/end | 1/10/24 → … |
Project partners
- WU Wirtschaftsuniversität Wien (Lead)
- Interdisziplinäres Forschungszentrum für Technik, Arbeit und Kultur (IFZ)
- Pro2Future GmbH
- Robert Bosch AG