On the Security and Privacy Implications of Large Language Models: In-Depth Threat Analysis

Luis Ruhländer, Emilian Popp, Maria Stylidou, Sajjad Khan*, Davor Svetinovic

*Corresponding author for this work

Publication: Contribution to book/conference proceedings › Contribution to conference proceedings

Abstract

Large Language Models (LLMs) have gained popularity since the release of ChatGPT in 2022. These systems utilize Artificial Intelligence (AI) algorithms to analyze natural language, enabling users to have sophisticated real-time conversations with them. The existing literature on LLMs is mostly focused on system design and lacks dedicated research investigating privacy and security issues. To safeguard the interests of various stakeholders, it is crucial to understand the security and privacy risks associated with these models. Our study utilized the STRIDE and LINDDUN methodologies to investigate the security and privacy threats of LLMs. We presented a detailed system model of LLMs and analyzed the potential threats, vulnerabilities, security considerations, and mitigation tactics intrinsic to the design and deployment of various system components. Our comprehensive threat assessment showcases potential threats …
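The abstract describes applying STRIDE (alongside LINDDUN) threat modeling to a system model of LLM components. As a minimal illustrative sketch only, not taken from the paper, the snippet below shows one way STRIDE categories could be enumerated against a few hypothetical LLM components (prompt interface, inference engine, training pipeline); the component names and threat mappings are assumptions chosen purely for illustration.

# Illustrative sketch (not from the paper): mapping STRIDE categories to
# hypothetical LLM system components. The mappings are assumptions; a real
# analysis, as in the paper, would justify and mitigate every entry.
from dataclasses import dataclass

STRIDE = [
    "Spoofing",
    "Tampering",
    "Repudiation",
    "Information Disclosure",
    "Denial of Service",
    "Elevation of Privilege",
]

@dataclass
class Component:
    name: str
    applicable_threats: list[str]

# Hypothetical LLM system components with example STRIDE categories.
components = [
    Component("User prompt interface", ["Spoofing", "Tampering", "Denial of Service"]),
    Component("Model weights / inference engine", ["Tampering", "Information Disclosure"]),
    Component("Training data pipeline", ["Tampering", "Information Disclosure", "Repudiation"]),
]

if __name__ == "__main__":
    for c in components:
        # Sanity-check that every listed threat is a valid STRIDE category.
        assert all(t in STRIDE for t in c.applicable_threats)
        print(f"{c.name}:")
        for threat in c.applicable_threats:
            print(f"  - {threat}")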
Original language: English
Title of host publication: 2024 IEEE International Conferences on Internet of Things (iThings) and IEEE Green Computing & Communications (GreenCom) and IEEE Cyber, Physical & Social Computing (CPSCom) and IEEE Smart Data (SmartData) and IEEE Congress on Cybermatics
Subtitle of host publication: Copenhagen, Denmark, 2024
Publisher: IEEE
Pages: 543-550
ISBN (electronic): 979-8-3503-5163-7
DOIs
Publication status: Published - 2024

Austrian Classification of Fields of Science and Technology (ÖFOS)

  • 202022 Information Technology
