TY - JOUR
T1 - LLP-GAN: A GAN-Based Algorithm for Learning From Label Proportions
AU - Liu, Jiabin
AU - Wang, Bo
AU - Hang, Hanyuan
AU - Wang, Huadong
AU - Qi, Zhiquan
AU - Tian, Yingjie
AU - Shi, Yong
N1 - Funding Information:
This work was supported in part by the National Natural Science Foundation of China under Grant 61702099, Grant 71731009, Grant 61472390, Grant 71932008, Grant 91546201, and Grant 71331005; in part by the Science and Technology Service Network Program of the Chinese Academy of Sciences through the STS Program under Grant KFJ-STS-ZDTP-060; and in part by the Fundamental Research Funds for the Central Universities in the University of International Business and Economics (UIBE) under Grant CXTD10-05.
Publisher Copyright:
© 2012 IEEE.
PY - 2023/11/1
Y1 - 2023/11/1
N2 - Learning from label proportions (LLP) is a widespread and important learning paradigm: only the bag-level proportional information of the grouped training instances is available for the classification task, instead of the instance-level labels of the fully supervised scenario. As a result, LLP is a typical weakly supervised learning protocol and commonly arises in privacy-protection settings, owing to the sensitivity of label information in real-world applications. In general, it is less laborious and more efficient to collect label proportions as bag-level supervised information than to collect instance-level labels. However, the supervision available for learning discriminative feature representations is also limited, since the signal directly associated with the labels is less informative, which deteriorates the performance of the final instance-level classifier. In this article, delving into the label proportions, we address this weak supervision by leveraging generative adversarial networks (GANs) to derive an effective algorithm, LLP-GAN. Endowed with an end-to-end structure, LLP-GAN performs approximation via an adversarial learning mechanism without imposing restrictive distributional assumptions. Accordingly, the final instance-level classifier can be directly induced from the discriminator with minor modification. Under mild assumptions, we give the explicit generative representation and prove the global optimality of LLP-GAN. In addition, compared with existing methods, our work endows LLP solvers with desirable scalability inherited from deep models. Extensive experiments on benchmark datasets and a real-world application demonstrate the clear advantages of the proposed approach.
DO - 10.1109/TNNLS.2022.3149926
M3 - Article
SN - 2162-237X
VL - 34
SP - 8377
EP - 8388
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 11
ER -