Assessing Privacy Policies with AI: Ethical, Legal, and Technical Challenges

Publication: Preprint/Documentation/Report › Preprint

Contributors

Abstract

The growing use of Machine Learning and Artificial Intelligence (AI), particularly Large Language Models (LLMs) like OpenAI's GPT series, is driving disruptive changes across organizations. At the same time, there is growing concern about how organizations handle personal data. Privacy policies are therefore essential for transparency in data processing practices, enabling users to assess privacy risks. However, these policies are often long and complex. This can lead to user confusion and consent fatigue, where users accept data practices against their interests, and abusive or unfair practices may go unnoticed. LLMs can be used to assess privacy policies for users automatically. In this interdisciplinary work, we explore the challenges of this approach along three pillars: the technical feasibility, ethical implications, and legal compatibility of using LLMs to assess privacy policies. Our findings aim to identify potential for future research and to foster a discussion on the use of LLM technologies for enabling users to fulfil their important role as decision-makers in a constantly developing AI-driven digital economy.

Details

Original language: English
Publication status: Published - 10 Oct 2024

External IDs

ORCID /0000-0003-1340-4330/work/170582671
ORCID /0000-0002-6505-3563/work/170583450

Keywords

  • cs.CY