
Generating Privacy Stories From Software Documentation

Source: arXiv
Abstract

Research shows that analysts and developers consider privacy as a security concept or as an afterthought, which may lead to non-compliance and violations of users' privacy. Most current approaches, however, focus on extracting legal requirements from regulations and evaluating the compliance of software and processes with them. In this paper, we develop a novel approach based on chain-of-thought (CoT) prompting, in-context learning (ICL), and Large Language Models (LLMs) to extract privacy behaviors from various software documents prior to and during software development, and then generate privacy requirements in the format of user stories. Our results show that most commonly used LLMs, such as GPT-4o and Llama 3, can identify privacy behaviors and generate privacy user stories with F1 scores exceeding 0.8. We also show that the performance of these models can be improved through parameter tuning. Our findings provide insight into using and optimizing LLMs for generating privacy requirements given software documents created prior to or throughout the software development lifecycle.
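
The approach described in the abstract is prompt-based. As a rough illustration only, the sketch below shows one way a chain-of-thought, few-shot prompt for extracting privacy behaviors and generating a privacy user story could be assembled; the documentation snippets, behavior labels, and the single few-shot example are hypothetical and do not reproduce the authors' actual prompts or data.

```python
# Minimal sketch (not the paper's implementation): build a CoT + in-context-learning
# prompt that asks an LLM to reason about privacy behaviors in a documentation
# snippet and then emit a privacy user story. All example text below is illustrative.

FEW_SHOT_EXAMPLES = [
    {
        "doc": "The app uploads the user's contact list to our servers to suggest friends.",
        "reasoning": "The documentation describes collecting contact data and "
                     "transmitting it off-device, so the privacy behaviors are "
                     "'collect' and 'share'.",
        "story": "As a user, I want to be informed before my contact list is uploaded "
                 "and shared, so that I can control who receives my personal data.",
    },
]

def build_prompt(doc_snippet: str) -> str:
    """Assemble a chain-of-thought, few-shot prompt for privacy story generation."""
    parts = [
        "You are a privacy analyst. For the given software documentation, first "
        "reason step by step about which privacy behaviors it describes "
        "(e.g., collect, use, share, retain), then write a privacy user story.",
    ]
    # In-context examples: each pairs documentation with reasoning and a story.
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(f"Documentation: {ex['doc']}")
        parts.append(f"Reasoning: {ex['reasoning']}")
        parts.append(f"Privacy story: {ex['story']}")
    # The new input; the model is expected to continue with its own reasoning.
    parts.append(f"Documentation: {doc_snippet}")
    parts.append("Reasoning:")
    return "\n\n".join(parts)

if __name__ == "__main__":
    # The resulting prompt would be sent to a model such as GPT-4o or Llama 3
    # through whatever inference API is available.
    print(build_prompt("The service logs users' precise location every 5 minutes."))
```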

Wilder Baldwin, Shashank Chintakuntla, Shreyah Parajuli, Ali Pourghasemi, Ryan Shanz, Sepideh Ghanavati

Computing Technology; Computer Technology

Wilder Baldwin, Shashank Chintakuntla, Shreyah Parajuli, Ali Pourghasemi, Ryan Shanz, Sepideh Ghanavati. Generating Privacy Stories From Software Documentation [EB/OL]. (2025-06-28) [2025-07-22]. https://arxiv.org/abs/2506.23014.