Publication List

Topic tags: Lifelong Learning; Robustness; Data-Model Efficiency; Distributed Learning

Project Highlights

NeurIPS 2023 [Spotlight]
Model Sparsity Can Simplify Machine Unlearning
Jinghan Jia, Jiancheng Liu, Parikshit Ram, Yuguang Yao, Gaowen Liu, Yang Liu, Pranay Sharma, Sijia Liu

The MSU team investigates the concept of "forgetting" (or "machine unlearning") in ML training, specifically "targeted forgetting" in response to data deletion requests from users. This work demonstrates, both theoretically and empirically, that model sparsity can effectively facilitate unlearning and improve its outcomes. The research on machine unlearning also links to "continual learning".
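One ingredient highlighted above is model sparsity. A minimal sketch of one-shot magnitude pruning, the simplest way to sparsify a weight vector before unlearning (the helper name and thresholding rule are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the given fraction of smallest-magnitude weights (one-shot pruning)."""
    w = weights.copy()
    k = int(sparsity * w.size)  # number of weights to remove
    if k > 0:
        # threshold = k-th smallest absolute value
        thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
        w[np.abs(w) <= thresh] = 0.0
    return w

# e.g., pruning half the weights keeps only the two largest-magnitude entries
sparse_w = magnitude_prune(np.array([1.0, -2.0, 3.0, 4.0]), 0.5)
```

In the "prune, then unlearn" view, approximate unlearning (e.g., fine-tuning on the retained data) is then applied to the sparsified model rather than the dense one.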

NeurIPS 2023 [Spotlight]
Minimum-Risk Recalibration of Classifiers
Zeyu Sun, Dogyoon Song, Alfred Hero

The UM team investigates transfer learning, particularly recalibrating a pre-trained ML model to new local data. The work demonstrates the advantages of recalibrating a pre-trained model over training from scratch on all data. It is especially relevant to settings such as federated learning, where models are shared among agents, one of whom possesses a large pre-trained model.
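For intuition on what "recalibrating" a pre-trained classifier means, here is a sketch of temperature scaling, a standard recalibration baseline (this is not the minimum-risk procedure the paper develops; the grid and function name are assumptions):

```python
import numpy as np

def temperature_scale(logits, labels, temps=np.linspace(0.5, 5.0, 91)):
    """Grid-search a single temperature T minimizing negative log-likelihood
    of softmax(logits / T) on held-out (local) data."""
    best_t, best_nll = 1.0, np.inf
    n = len(labels)
    for t in temps:
        z = logits / t
        z = z - z.max(axis=1, keepdims=True)        # numerical stability
        p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
        nll = -np.mean(np.log(p[np.arange(n), labels] + 1e-12))
        if nll < best_nll:
            best_t, best_nll = t, nll
    return best_t
```

Only the scalar T is fit on the new local data, while the pre-trained model's weights stay fixed, which is why recalibration needs far fewer samples than retraining from scratch.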

NeurIPS 2023
Selectivity Drives Productivity: Efficient Dataset Pruning for Enhanced Transfer Learning
Yihua Zhang, Yimeng Zhang, Aochuan Chen, Jinghan Jia, Jiancheng Liu, Gaowen Liu, Mingyi Hong, Shiyu Chang, Sijia Liu

The MSU team explores pruning of large-scale source datasets to make source (foundation) model training more efficient without compromising transfer learning performance on downstream tasks.
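To illustrate the mechanism of score-based dataset pruning, here is a generic sketch that keeps the lowest-loss examples; the scoring rule is a placeholder, not the selectivity criterion the paper proposes:

```python
import numpy as np

def prune_by_loss(losses, keep_frac):
    """Return indices of the keep_frac fraction of examples with the lowest
    per-example loss (a simple 'easiness'-based pruning heuristic)."""
    k = max(1, int(keep_frac * len(losses)))
    return np.argsort(losses)[:k]

# e.g., keep the easiest half of a 4-example dataset
kept = prune_by_loss(np.array([0.1, 0.9, 0.5, 0.2]), 0.5)
```

The pruned subset then replaces the full source dataset during (pre-)training, trading a one-time scoring pass for cheaper training on every subsequent run.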

ICASSP 2023
Robustness-Preserving Lifelong Learning Via Dataset Condensation
Jinghan Jia, Yihua Zhang, Dogyoon Song, Sijia Liu, Alfred Hero

The MSU-UM team proposes a new memory-replay lifelong learning (LL) strategy that leverages modern bi-level optimization techniques to select a "coreset" of the current data (i.e., a small subset to be memorized) for ease of preserving adversarial robustness over time.
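The paper selects its coreset via bi-level optimization; as a lightweight stand-in, a herding-style greedy selection shows what a memory-replay coreset looks like in practice (the function and scoring rule here are illustrative, not the paper's method):

```python
import numpy as np

def herding_coreset(X, m):
    """Greedily pick m row indices of X whose running sum tracks the
    dataset mean (herding-style coreset selection)."""
    mu = X.mean(axis=0)
    w = mu.copy()
    chosen, remaining = [], list(range(len(X)))
    for _ in range(m):
        i = remaining[int(np.argmax(X[remaining] @ w))]
        chosen.append(i)
        remaining.remove(i)
        w = w + mu - X[i]  # steer the next pick toward under-covered directions
    return chosen
```

In a memory-replay LL loop, the selected rows would be stored in the replay buffer and mixed into training on each new task to curb forgetting.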