Abstract
Deep learning-based face recognition systems produce templates that encode sensitive information alongside identity, such as gender and ethnicity. This poses legal and ethical problems, as the collection of biometric data should be minimized and specific to a designated task. We propose two privacy constraints, which can be added to a recognition loss, to hide the gender attribute. The first constraint relies on the minimization of the angle between the gender-centroid embeddings. The second constraint relies on the minimization of the angle between gender-specific embeddings and their opposing gender-centroid weight vectors. Both constraints enforce the overlap of the gender-specific embedding distributions. Furthermore, they have a direct interpretation in the embedding space and do not require a large number of trainable parameters, as two fully connected layers are sufficient to achieve satisfactory results. We also provide extensive evaluation results across several datasets and face recognition networks, and we compare our method to three state-of-the-art methods. Our method is capable of maintaining high verification performance while significantly improving privacy in a cross-database setting, without increasing the computational load for template comparison. We also show that different training data can result in varying levels of effectiveness of privacy-enhancing methods that implement data minimization.
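To make the two angle-based constraints concrete, the sketch below shows one plausible reading in PyTorch. The function name `gender_privacy_losses`, the batch-level centroid computation, and the 1 − cos(θ) penalty form are illustrative assumptions, not the authors' implementation: in the paper, the gender centroids are learned weight vectors of fully connected layers rather than batch statistics.

```python
import torch
import torch.nn.functional as F

def gender_privacy_losses(embeddings: torch.Tensor, genders: torch.Tensor):
    """Illustrative sketch of the two privacy constraints (assumed form).

    embeddings: (N, d) float tensor of face templates
    genders:    (N,) long tensor with binary gender labels 0 / 1
    """
    # Normalize embeddings so that cosine similarity reduces to a dot product.
    emb = F.normalize(embeddings, dim=1)

    # Batch-level gender centroids (assumes both genders appear in the batch;
    # the paper instead learns these as fully connected layer weights).
    c0 = F.normalize(emb[genders == 0].mean(dim=0), dim=0)
    c1 = F.normalize(emb[genders == 1].mean(dim=0), dim=0)

    # Constraint 1: minimize the angle between the two gender centroids
    # (i.e. maximize their cosine similarity) so the distributions overlap.
    loss_centroids = 1.0 - torch.dot(c0, c1)

    # Constraint 2: minimize the angle between each embedding and the
    # centroid of the *opposite* gender.
    opposite = torch.where((genders == 0).unsqueeze(1), c1, c0)  # (N, d)
    loss_cross = (1.0 - (emb * opposite).sum(dim=1)).mean()

    return loss_centroids, loss_cross
```

In training, each term would be weighted and added to a standard recognition loss (e.g. a margin-based softmax), so that identity discrimination and gender-distribution overlap are optimized jointly.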
| Original language | English |
| --- | --- |
| Pages (from-to) | 352-363 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Biometrics, Behavior, and Identity Science |
| Volume | 6 |
| Issue number | 3 |
| Early online date | 17 Apr 2024 |
| DOIs | |
| Publication status | Published - Jul 2024 |
Keywords
- Face recognition
- Training
- Privacy
- Minimization
- Task analysis
- Biological system modeling
- Training data