On the Security and Privacy Implications of Large Language Models: In-Depth Threat Analysis

Luis Ruhländer, Emilian Popp, Maria Stylidou, Sajjad Khan*, Davor Svetinovic

*Corresponding author for this work

Publication: Chapter in book / conference proceedings › Contribution to conference proceedings

Abstract

Large Language Models (LLMs) have gained popularity since the release of ChatGPT in 2022. These systems utilize Artificial Intelligence (AI) algorithms to analyze natural language, enabling users to have sophisticated real-time conversations with them. The existing literature on LLMs focuses mostly on system design and lacks dedicated research investigating privacy and security issues. To safeguard the interests of various stakeholders, it is crucial to understand the security and privacy risks associated with these models. Our study utilized the STRIDE and LINDDUN methodologies to investigate the security and privacy threats of LLMs. We presented a detailed system model of LLMs and analyzed the potential threats, vulnerabilities, security considerations, and mitigation tactics intrinsic to the design and deployment of the various system components. Our comprehensive threat assessment showcases potential threats …
Original language: English
Title of host publication: 2024 IEEE International Conferences on Internet of Things (iThings) and IEEE Green Computing & Communications (GreenCom) and IEEE Cyber, Physical & Social Computing (CPSCom) and IEEE Smart Data (SmartData) and IEEE Congress on Cybermatics
Subtitle of host publication: Copenhagen, Denmark, 2024
Publisher: IEEE
Pages: 543-550
ISBN (Electronic): 979-8-3503-5163-7
Publication status: Published - 2024

Austrian Classification of Fields of Science and Technology (ÖFOS)

  • 202022 Information technology

Keywords

  • LLMs
  • STRIDE
  • LINDDUN
  • Threat Modeling
  • cybersecurity
  • privacy
