Seong Oun Hwang
Bio:
Seong Oun Hwang is a Professor in the Department of Computer Engineering at Gachon University, Korea. He received his PhD in Computer Science from the Korea Advanced Institute of Science and Technology (KAIST) in 2004. In 2021, he founded the IEEE Seoul Section Sensors Council Chapter, which he has chaired since. He has published six books and more than 200 technical papers, including in top journals such as IEEE IoTJ, TC, TETC, TCE, TMC, TKDE, and TSC. In 2022, he was named a Fellow of the IET (Institution of Engineering and Technology) in recognition of his work and achievements.
Abstracts:
Data-Centric Artificial Intelligence: A New Engineering Discipline
The advent of AI has transformed the IT industry, with a significant impact on nations and societies across the globe. In this talk, I will introduce an emerging concept in AI named Data-Centric AI. Specifically, I will present the fundamentals, use cases, and industry experts' views on Data-Centric AI, a promising research area with more potential than conventional Model-Centric AI. I will discuss the advantages of this new engineering discipline, which can help rectify unsustainable research trajectories in the AI domain. Lastly, I will discuss some enabling technologies for Data-Centric AI, sharing and analyzing our own research results.
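As a minimal illustration of the data-centric idea (my own sketch, not material from the talk): with the model held fixed, detecting and removing mislabeled training examples can recover more accuracy than further model tuning. The dataset, noise level, and cleaning heuristic below are all hypothetical.

```python
# Sketch: data-centric vs. model-centric iteration (illustrative only).
# Assumes scikit-learn and numpy; dataset and noise level are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Inject 20% label noise into the training set to simulate poor data quality.
noisy = y_tr.copy()
flip = rng.random(len(noisy)) < 0.20
noisy[flip] ^= 1

model = LogisticRegression(max_iter=1000)

# Model-centric baseline: train on the noisy labels as-is.
acc_noisy = model.fit(X_tr, noisy).score(X_te, y_te)

# Data-centric step: drop examples whose cross-validated prediction
# disagrees with their label (a crude proxy for label-error detection).
pred = cross_val_predict(model, X_tr, noisy, cv=5)
keep = pred == noisy
acc_clean = model.fit(X_tr[keep], noisy[keep]).score(X_te, y_te)

print(f"noisy labels: {acc_noisy:.3f}, after data cleaning: {acc_clean:.3f}")
```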
Defense Against Poisoning Attacks in Federated Learning: An Optimal Approach
Federated learning (FL) has gained widespread adoption for training artificial intelligence (AI) models while preserving the confidentiality of client data. However, the privacy-preserving nature of FL also makes it vulnerable to poisoning attacks. To counter these attacks, several defense methods have been developed to identify and filter out poisoned local models/data before aggregation. Nevertheless, these defenses perform suboptimally at retaining benign local models while discarding poisoned ones, primarily due to inadequate filtering strategies. Consequently, they filter out large proportions of benign local models/data, resulting in high false rejection rates or low detection accuracy, which in turn degrades the test accuracy of the global model. In this talk, I will explain our approach, which outperforms state-of-the-art defense methods on benchmark datasets and is generally applicable to any dataset.
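To make the filter-then-aggregate pattern concrete, here is a minimal sketch of one common distance-based defense: score each client update by its distance to the coordinate-wise median and aggregate only the closest ones. This is a generic illustration, not the optimal approach presented in the talk; the keep fraction and toy data are assumptions.

```python
# Sketch of a distance-based filtering defense for FL aggregation.
# Illustrates the general filter-then-aggregate pattern the abstract
# describes; it is NOT the speaker's method.
import numpy as np

def filtered_aggregate(updates: np.ndarray, keep_frac: float = 0.7) -> np.ndarray:
    """updates: (n_clients, n_params) array of local model updates."""
    median = np.median(updates, axis=0)               # robust reference point
    dists = np.linalg.norm(updates - median, axis=1)  # distance per client
    n_keep = max(1, int(keep_frac * len(updates)))
    kept = np.argsort(dists)[:n_keep]                 # discard the outliers
    return updates[kept].mean(axis=0)                 # average the survivors

# Toy example: 8 benign clients near the true update, 2 poisoned outliers.
rng = np.random.default_rng(1)
benign = rng.normal(loc=1.0, scale=0.1, size=(8, 5))
poison = rng.normal(loc=-5.0, scale=0.1, size=(2, 5))
agg = filtered_aggregate(np.vstack([benign, poison]))
print(agg)  # close to the benign mean (~1.0) despite the poisoned updates
```

An overly aggressive keep_frac illustrates the abstract's point: rejecting too many benign updates (high false rejection) also hurts the global model.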
How to Enhance Privacy and Utility in an AI Era
The advent of AI has transformed the IT industry, with a significant impact on nations and societies across the globe. The two core components vital to solving any real-world problem with AI technology are data and models. Good-quality data is paramount in training AI models, but it often contains sensitive information about individuals, leading to privacy issues of various kinds (e.g., identity disclosure and the derivation/prediction of sensitive attributes). Privacy techniques such as anonymization, differential privacy, and encryption are generally used to address these issues in AI environments. However, overly strict (and often unnecessary) parameter settings in privacy models can lead to poor data utility. In some cases, privacy models do not take into account the composition of the data or global information about it, making the privacy-utility trade-off even more challenging to resolve. Recently, many privacy-preserving solutions have been developed to address privacy and utility in the AI era. In this talk, I will share our research results on striking a balance between privacy and utility and explore promising research directions.
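The privacy-utility trade-off the abstract mentions can be seen in the standard Laplace mechanism from differential privacy: a smaller privacy parameter epsilon gives stronger privacy but noisier, less useful answers. The following is a minimal sketch with a hypothetical dataset and counting query, not code from the talk.

```python
# Sketch: the Laplace mechanism for a counting query, showing how the
# privacy parameter epsilon trades privacy for utility. Illustrative only.
import numpy as np

def laplace_count(data: np.ndarray, epsilon: float, rng) -> float:
    """Differentially private count; the sensitivity of a count query is 1."""
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)  # scale = sensitivity/eps
    return float(np.sum(data)) + noise

rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=1000)  # 1000 individuals, binary attribute

for eps in (0.01, 0.1, 1.0):
    answers = [laplace_count(data, eps, rng) for _ in range(1000)]
    err = np.mean(np.abs(np.array(answers) - data.sum()))
    print(f"epsilon={eps:<5} mean absolute error={err:7.1f}")
# Smaller epsilon -> stronger privacy but noisier (less useful) answers.
```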
Links:
Website