Learning fair representations by separating the relevance of potential information
Institution:1. School of Management, Wenzhou Business College, Wenzhou, China;2. School of Finance and Trade, Wenzhou Business College, Wenzhou, China
Abstract: Representation learning has recently been used to remove sensitive information from data and improve the fairness of machine learning algorithms in social applications. However, previous works that used neural networks are opaque and poorly interpretable, as it is difficult to intuitively determine the independence between representations and sensitive information. The internal correlation among data features has not been fully explored, and it may be the key to improving the interpretability of neural networks. From this conjecture, a novel fair representation algorithm, referred to as FRC, is proposed. It shows how representations independent of multiple sensitive attributes can be learned by applying specific correlation constraints on representation dimensions. Specifically, the dimensions of the representation and the sensitive attributes are treated as statistical variables. The representation variables are divided into a part related to the sensitive variables and a part unrelated to them by adjusting their absolute correlation coefficients with the sensitive variables. The potential impact of sensitive information on the representation is concentrated in the related part, so the unrelated part can be used in downstream tasks to yield fair results. FRC takes the correlation between dimensions as the key to the fair representation problem. Empirical results show that our representations improve the fairness of neural networks and achieve better fairness-accuracy tradeoffs than state-of-the-art works.
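The core idea of the split described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name, the correlation threshold, and the post-hoc splitting are all assumptions for illustration (FRC itself enforces correlation constraints on the dimensions during training).

```python
import numpy as np

def split_by_sensitive_correlation(Z, s, threshold=0.1):
    """Split representation dimensions by their absolute Pearson
    correlation with a sensitive variable.

    Z: (n, d) representation matrix; s: (n,) sensitive attribute.
    Dimensions whose |correlation| falls below `threshold` form the
    'unrelated' part, usable in downstream tasks for fair results;
    the rest form the 'related' part that absorbs sensitive information.
    The threshold value is hypothetical, chosen here for illustration.
    """
    n = len(s)
    s_centered = s - s.mean()
    Z_centered = Z - Z.mean(axis=0)
    # Pearson correlation of each representation dimension with s.
    cov = Z_centered.T @ s_centered / n
    corr = cov / (Z.std(axis=0) * s.std() + 1e-12)
    unrelated_mask = np.abs(corr) < threshold
    return Z[:, unrelated_mask], Z[:, ~unrelated_mask], corr
```

With multiple sensitive attributes, the same test would be applied per attribute and a dimension kept in the unrelated part only if it clears the threshold for all of them.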
Keywords:Fair representation  Fair classification  Interpretability of representations  Fair machine learning
This article is indexed in ScienceDirect and other databases.