Achieving privacy-preserving cross-silo anomaly detection using federated XGBoost
Institution:1. University of California, Irvine, United States; 2. City University of Hong Kong, City University of Hong Kong Shenzhen Research Institute, China; 3. University of Miami, United States; 4. Dalian University of Technology, China; 5. Sun Yat-sen University, Guangxi Key Lab of Multi-source Information Mining & Security, China
Abstract:Privacy has raised considerable concerns recently, especially with the advent of the information explosion and the numerous data mining techniques used to explore the information inside large volumes of data. These data are often collected and stored across different institutions (banks, hospitals, etc.), a setting termed cross-silo. In this context, cross-silo federated learning has become prominent for tackling privacy issues: only model updates are transmitted from institutions to servers, without revealing institutions’ private information. In this paper, we propose a cross-silo federated XGBoost approach to the federated anomaly detection problem, which aims to identify abnormalities in extremely unbalanced datasets (e.g., credit card fraud detection) and can be considered a special classification problem. We design two privacy-preserving mechanisms tailored to federated XGBoost: anonymity-based data aggregation and local differential privacy. In the anonymity-based data aggregation scenario, we cluster data into different groups and use cluster-level data features to train the model. In the local differential privacy scenario, we design a federated XGBoost framework by incorporating differential privacy into parameter transmission. Our experimental results on two datasets show the effectiveness of our proposed schemes compared with existing methods.
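The two mechanisms described in the abstract can be illustrated with short sketches. First, a minimal sketch of the anonymity-based data aggregation idea, assuming each silo clusters its local records and trains on cluster-level aggregates rather than raw rows; the function names, the k-means choice, and the "flag a cluster as anomalous if it contains any anomalous record" rule are illustrative assumptions, not the paper's exact procedure.

```python
# Hypothetical sketch of anonymity-based data aggregation (not the paper's
# exact procedure): a silo clusters its local records and trains XGBoost on
# cluster-level aggregates instead of raw rows. Names and rules are illustrative.
import numpy as np
from sklearn.cluster import KMeans
import xgboost as xgb

def aggregate_by_cluster(X, y, n_clusters=50):
    """Replace raw records with per-cluster mean features and a cluster label."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(X)
    X_agg, y_agg = [], []
    for c in range(n_clusters):
        mask = km.labels_ == c
        if not mask.any():
            continue
        X_agg.append(X[mask].mean(axis=0))   # cluster-level feature vector
        y_agg.append(int(y[mask].max()))     # assumed rule: anomalous if any member is
    return np.vstack(X_agg), np.array(y_agg)

# Toy stand-in for one silo's private, highly unbalanced data.
rng = np.random.default_rng(0)
X_local = rng.random((1000, 10))
y_local = (rng.random(1000) > 0.95).astype(int)

X_agg, y_agg = aggregate_by_cluster(X_local, y_local)
model = xgb.XGBClassifier(n_estimators=50, max_depth=3).fit(X_agg, y_agg)
```

Second, a minimal sketch of the local differential privacy idea, assuming each silo perturbs the gradient and hessian aggregates it transmits for split finding with Laplace noise calibrated to a sensitivity bound; the function name, the sensitivity value, and where the noise is applied are assumptions for illustration, not the paper's exact construction.

```python
# Hypothetical sketch of local differential privacy in parameter transmission:
# before uploading per-bin gradient/hessian sums used for XGBoost split finding,
# a silo adds Laplace noise with scale = sensitivity / epsilon.
import numpy as np

def perturb_statistics(grad_sums, hess_sums, epsilon, sensitivity=1.0):
    """Add Laplace noise to local gradient/hessian aggregates before upload."""
    scale = sensitivity / epsilon
    noisy_grad = grad_sums + np.random.laplace(0.0, scale, size=grad_sums.shape)
    noisy_hess = hess_sums + np.random.laplace(0.0, scale, size=hess_sums.shape)
    return noisy_grad, noisy_hess

# Per-bin statistics a silo might report for one candidate split feature.
grad_sums = np.array([0.8, -1.2, 0.3, 2.1])
hess_sums = np.array([3.0, 2.5, 1.0, 4.2])
noisy_grad, noisy_hess = perturb_statistics(grad_sums, hess_sums, epsilon=1.0)
# The coordinating server would aggregate the noisy statistics across silos and
# evaluate split gains on the perturbed sums.
```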
Keywords:
This article is indexed in ScienceDirect and other databases.