Generating Synthetic Training Data for Supervised De-Identification of Electronic Health Records

Claudia Alessandra Libbi, Jan Trienes*, Dolf Trieschnigg, Christin Seifert

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

13 Citations (Scopus)
378 Downloads (Pure)


A major hurdle in the development of natural language processing (NLP) methods for Electronic Health Records (EHRs) is the lack of large, annotated datasets. Privacy concerns prevent the distribution of EHRs, and the annotation of data is known to be costly and cumbersome. Synthetic data presents a promising solution to the privacy concern, provided it has comparable utility to real data and preserves patient privacy. However, the generation of synthetic text alone is not useful for NLP because of the lack of annotations. In this work, we propose the use of neural language models (LSTM and GPT-2) to generate artificial EHR text jointly with annotations for named-entity recognition. Our experiments show that artificial documents can be used to train a supervised named-entity recognition model for de-identification which outperforms a state-of-the-art rule-based baseline. Moreover, we show that combining real data with synthetic data improves the recall of the method without additional manual annotation effort. We conduct a user study to gain insights into the privacy of artificial text. We highlight privacy risks associated with language models to inform future research on privacy-preserving automated text generation and on metrics for evaluating privacy preservation during text generation.
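The core idea of generating text "jointly with annotations" can be realized by embedding entity markers inline in the training text, so that the language model learns to emit tags as part of its output and every generated document comes pre-annotated. The sketch below illustrates this encoding under illustrative assumptions: the tag format (`<NAME>…</NAME>`) and label names are hypothetical and not necessarily the paper's exact scheme.

```python
import re

# Hypothetical sketch: encode NER annotations as inline tags so a language
# model (e.g. an LSTM or GPT-2) can generate text and annotations jointly.
# Tag syntax and labels here are illustrative assumptions.

def to_inline(text, entities):
    """Insert <LABEL>...</LABEL> markers around annotated spans.

    entities: list of (start, end, label) tuples, non-overlapping,
    sorted by start offset.
    """
    out, pos = [], 0
    for start, end, label in entities:
        out.append(text[pos:start])                       # untagged prefix
        out.append(f"<{label}>{text[start:end]}</{label}>")  # tagged span
        pos = end
    out.append(text[pos:])                                # untagged suffix
    return "".join(out)

def from_inline(tagged):
    """Recover plain text and (start, end, label) spans from tagged text."""
    plain, entities = [], []
    pos, offset = 0, 0
    for m in re.finditer(r"<(\w+)>(.*?)</\1>", tagged):
        plain.append(tagged[pos:m.start()])   # text before the tag
        offset += m.start() - pos
        start = offset
        plain.append(m.group(2))              # the entity surface form
        offset += len(m.group(2))
        entities.append((start, offset, m.group(1)))
        pos = m.end()
    plain.append(tagged[pos:])
    return "".join(plain), entities
```

A model trained on such inline-tagged sequences produces synthetic documents from which `from_inline` recovers both the text and the named-entity labels, yielding training data for a supervised de-identification model without manual annotation.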
Original language: English
Article number: 136
Journal: Future Internet
Issue number: 5
Publication status: Published - 20 May 2021
