Malware Detection

This project is supported by Samsung SDS (2020~2021)


  • Intrusion detection system

  • Explainable AI (XAI)

This project is supported by LIGNex1 (2020~2021)

Intelligent Vehicle Control

  • Vehicle Speed Prediction

  • Model Compression

This project is supported by the Hyundai Motor Group (2019~2021)

Deep Model Compression

Model compression is an application of sparse coding: we "compress" models by excluding the many zero values in sparse parameter vectors from storage and computation. In this project, we use the L1-norm and its variants as regularizers to induce various forms of zero-value patterns in the parameter tensors of DNNs, especially CNNs.
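As a hypothetical illustration of an L1-norm variant that produces structured zero patterns, the group-lasso (L2,1-norm) penalty sums Euclidean norms over parameter groups, so entire groups (e.g. whole CNN filters) are driven to zero together; the function name and grouping below are assumptions for the sketch, not our actual training code.

```python
import math

# Sketch of a group-lasso (L2,1) regularizer: the sum of L2 norms over
# parameter groups. Penalizing this quantity tends to zero out whole
# groups (e.g. entire filters) rather than individual weights.
def group_l21(groups):
    """Return sum_g ||w_g||_2 for a list of parameter groups."""
    return sum(math.sqrt(sum(w * w for w in g)) for g in groups)
```

A group that is entirely zero contributes nothing to the penalty, which is what makes the zeroed structure cheap to skip at inference time.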

We study sparse coding, a technique that uses regularizers to induce certain structure in trained model parameters. L1-norms are the most popular regularizers, appearing in machine learning and statistics (e.g. LASSO) and in signal recovery (e.g. compressed sensing), where elementwise sparsity of parameter vectors leads to the discovery of important variables or signals.
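The mechanism by which the L1-norm yields exact zeros can be seen in its proximal operator, the soft-thresholding function used in LASSO solvers such as ISTA; the sketch below (function name assumed for illustration) zeroes every coefficient whose magnitude falls below the regularization level.

```python
# Soft-thresholding: the proximal operator of lam * ||w||_1, applied
# elementwise. Coefficients with |x| <= lam are set exactly to zero,
# which is how L1 regularization induces elementwise sparsity.
def soft_threshold(w, lam):
    out = []
    for x in w:
        if x > lam:
            out.append(x - lam)      # shrink positive coefficients
        elif x < -lam:
            out.append(x + lam)      # shrink negative coefficients
        else:
            out.append(0.0)          # kill small coefficients exactly
    return out
```

Applied to `[0.9, -0.05, 0.2, -1.3]` with `lam = 0.1`, the second entry is zeroed while the others shrink toward zero by `0.1`.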

We're also interested in an actual implementation of the idea, so that training and testing of DNNs can be performed on embedded systems with model parameters kept in compressed form. We study parallel implementations using CUDA and OpenCL backends, on embedded platforms such as the NVIDIA Jetson and Samsung Exynos 8890.
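One standard compressed form for such sparse parameters is compressed sparse row (CSR) storage, which keeps only the nonzero values plus index arrays; the pure-Python sketch below (function names are illustrative, not our CUDA/OpenCL code) shows the layout and the matrix-vector product that skips zeros entirely.

```python
# Convert a dense matrix (list of rows) to CSR form: nonzero values,
# their column indices, and row pointers delimiting each row's slice.
def to_csr(dense):
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0.0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))
    return values, col_idx, row_ptr

# Matrix-vector product in CSR form: only nonzero entries are touched,
# so storage and computation both scale with the number of nonzeros.
def csr_matvec(values, col_idx, row_ptr, x):
    y = []
    for i in range(len(row_ptr) - 1):
        s = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            s += values[k] * x[col_idx[k]]
        y.append(s)
    return y
```

The same values/indices/pointers layout is what GPU sparse kernels typically consume, which is why it maps naturally onto CUDA and OpenCL backends.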

This project is supported by the Electronics and Telecommunications Research Institute of Korea (ETRI, 2018~2020) and the National Research Foundation of Korea (grant NRF-2018R1D1A1B07051383, 2018~2020)

Air Quality Prediction

Prediction of PM10 and PM2.5 concentrations over South Korea.

This project is supported by the National Institute of Environmental Research (2019~2021)

Smart Factory

Computer vision system for automated product inspection.

This project is supported by Myunghwa Industry (2019~2020)

Adversarial Attack & Defense

Recently it has been shown that ML models can be fooled by so-called adversarial examples: data points modified to maximize the model's loss function. Adversarial examples have been studied actively in computer vision and computer security.
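A canonical way to construct such loss-maximizing modifications is the fast gradient sign method (FGSM); the sketch below applies it to a fixed logistic-regression model (weights, inputs, and function name are illustrative assumptions, not a model from our projects).

```python
import math

# FGSM sketch against a fixed logistic-regression model: perturb the
# input in the direction that increases the loss, x' = x + eps * sign(dL/dx).
def fgsm(w, b, x, y, eps):
    """One FGSM step for the logistic loss; w, x are lists, y in {0, 1}."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = 1.0 / (1.0 + math.exp(-z))           # model's predicted P(y = 1)
    g = p - y                                 # dL/dz for the logistic loss
    # dL/dx_i = g * w_i; move each feature by eps in the sign of its gradient
    return [xi + eps * (1 if g * wi > 0 else -1 if g * wi < 0 else 0)
            for xi, wi in zip(x, w)]
```

Even this one-step attack often flips predictions when `eps` exceeds the model's margin, which is what makes adversarial examples a practical threat.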

We have investigated recently proposed attack mechanisms against ML models, studying why such attacks are possible in terms of learning models and theory. We have also investigated available defenses against those attacks, analyzing their potential weaknesses.

We now investigate malware detection problems, where ML-based detectors are attracting growing interest due to their capability to prevent zero-day attacks. In our research, we try to build adversarial examples under binary-code constraints, to check whether it is possible to evade ML-based malware detectors by modifying malware binary code in a systematic fashion.

This project is supported by the National Security Research Institute of Korea (grants 2017-125, 2018-150)