Abstract
This dissertation explores how machine learning-based artificial intelligence (ML-based AI) affects information privacy, with a particular focus on how inference, as a process associated with ML, impacts information privacy. It also highlights the limitations of the General Data Protection Regulation (GDPR) in addressing issues concerning inference, and suggests design requirements for embedding the value of privacy into systems.
In its philosophical investigation, this dissertation distinguishes between various components and activities related to inference, including inferred information, AI models’ performance, and access to anonymous information uncovered by ML models. Two aspects of privacy are considered: the descriptive, which pertains to its definition, and the normative, which relates to its value and the right to privacy. The investigation explores how inferred information affects the definition of privacy, how AI models’ performance influences the social value of privacy, and what the implications of accessing information uncovered by ML models are for group privacy, more precisely, the group right to privacy.
In its legal investigation, this dissertation examines the GDPR’s effectiveness in addressing privacy issues related to information inferred about or ascribed to a person as a member of a group, as well as information derived from inference about a group as a whole.
In its technical investigation, this research proposes design requirements to embed the social value of privacy into systems. It develops a value hierarchy for privacy in which the highest layer examines the relationships between privacy and social autonomy, the middle layer identifies norms regarding promoting or protecting social autonomy, and the lowest layer translates those norms into design requirements.
Original language | English
---|---
Qualification | Doctor of Philosophy
Awarding Institution |
Supervisors/Advisors |
Award date | 16 May 2024
Place of Publication | Enschede
Publisher |
Print ISBNs | 978-90-365-6011-5
Electronic ISBNs | 978-90-365-6012-2
DOIs |
Publication status | Published - May 2024