Publications


Thesis

Striking the Balance: Optimizing Privacy, Utility, and Complexity in Private Machine Learning

Slides Group Photos


2024

Lei Gao, Amir Ziashahabi, Yue Niu, Salman Avestimehr, Murali Annavaram,
Enabling Resource-Efficient On-Device Fine-Tuning of LLMs Using Only Inference Engines,
NeurIPS Workshop on Efficient Natural Language and Speech Processing (ENLSP), 2024

Paper

Sunwoo Lee, Tuo Zhang, Saurav Prakash, Yue Niu, Salman Avestimehr,
Embracing Federated Learning: Enabling Weak Client Participation via Partial Model Training,
IEEE Transactions on Mobile Computing, 2024

Paper

Lei Gao*, Yue Niu*, Tingting Tang, Salman Avestimehr, Murali Annavaram,
Ethos: Rectifying Language Models in Orthogonal Parameter Space,
North American Chapter of the Association for Computational Linguistics (NAACL), 2024

Paper Code Project Page

Yue Niu, Saurav Prakash, Salman Avestimehr,
ATP: Enabling Fast LLM Serving via Attention on Top Principal Keys,
Preprint on arXiv, 2024

Paper

Yue Niu, Ramy E. Ali, Saurav Prakash, Salman Avestimehr,
All Rivers Run to the Sea: Private Learning with Asymmetric Flows,
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024 (acceptance rate: 23.6%).

Paper Project Page

Tingting Tang*, Yue Niu*, Salman Avestimehr, Murali Annavaram,
Edge Private Graph Neural Networks with Singular Value Perturbation,
Privacy Enhancing Technologies Symposium (PETs), 2024 (acceptance rate: 22%).

Paper Code

Lei Gao*, Yue Niu*, Tingting Tang, Murali Annavaram, Salman Avestimehr,
Ethos: Rectifying Language Models in Orthogonal Parameter Space,
AAAI Workshop on Responsible Language Models (ReLM), 2024 (Spotlight)

Paper Code Project Page Workshop


2023

Yue Niu, Saurav Prakash, Souvik Kundu, Sunwoo Lee, Salman Avestimehr,
Overcoming Resource Constraints in Federated Learning: Large Models Can Be Trained with only Weak Clients,
Transactions on Machine Learning Research (TMLR), 2023

Paper Code OpenReview Project Page

Sara Babakniya, Souvik Kundu, Saurav Prakash, Yue Niu, Salman Avestimehr,
Revisiting Sparsity Hunting in Federated Learning: Why the Sparsity Consensus Matters?,
Transactions on Machine Learning Research (TMLR), 2023

Paper Code OpenReview

Yue Niu, Zalan Fabian, Sunwoo Lee, Mahdi Soltanolkotabi, Salman Avestimehr,
mL-BFGS: A Momentum-based L-BFGS for Distributed Large-scale Neural Network Optimization,
Transactions on Machine Learning Research (TMLR), 2023

Paper Code OpenReview Project Page

Xiruo Liu, Furqan Khan, Yue Niu, Pradeep Natarajan, Rinat Khaziev, Salman Avestimehr, Prateek Singhal,
Performance and Failure Cause Estimation for Machine Learning Systems in the Wild,
International Conference on Computer Vision Systems (ICVS), 2023

Paper Amazon Science Report

(work done during internship at Amazon Alexa in 2022)


2022

Yue Niu, Ramy E. Ali, Salman Avestimehr,
3LegRace: Privacy-Preserving DNN Training over TEEs and GPUs,
Privacy Enhancing Technologies Symposium (PETs), 2022

Paper Code Video

Yue Niu, Saurav Prakash, Souvik Kundu, Sunwoo Lee, Salman Avestimehr,
Federated Learning of Large Models at the Edge via Principal Sub-Model Training,
International Workshop on Federated Learning in Conjunction with NeurIPS, 2022

Paper Code Workshop

Sara Babakniya, Souvik Kundu, Saurav Prakash, Yue Niu, Salman Avestimehr,
Federated Sparse Training: Lottery Aware Model Compression for Resource-Constrained Edge,
International Workshop on Federated Learning in Conjunction with NeurIPS, 2022

Paper Code Workshop


2021

Yue Niu, Salman Avestimehr,
AsymmetricML: An Asymmetric Decomposition Framework for Privacy-Preserving DNN Training and Inference,
ICLR Workshop on Distributed and Private Machine Learning, 2021.

Paper Code Workshop

Yue Niu, Zalan Fabian, Sunwoo Lee, Mahdi Soltanolkotabi, Salman Avestimehr,
SLIM-QN: A Stochastic, Light, Momentumized Quasi-Newton Optimizer for Deep Networks,
ICML Workshop on Optimization, 2021

Paper Code Workshop


2020

Yue Niu, Rajgopal Kannan, Ajitesh Srivastava, Viktor Prasanna,
Reuse Kernels or Activations? A Flexible Dataflow for Low-latency Spectral CNN Acceleration,
ACM/SIGDA International Conference on Field-Programmable Gate Arrays (FPGA) (Oral), 2020

Paper Code

Yue Niu, Hanqing Zeng, Ajitesh Srivastava, Kartik Lakhotia, Rajgopal Kannan, Yanzhi Wang, Viktor Prasanna,
SPEC2: SPECtral SParsE CNN Accelerator on FPGAs,
IEEE International Conference on High Performance Computing (HiPC) (Oral), 2020.

Paper Code


Before 2020

Wei Zhou, Yue Niu, Guanwen Zhang,
Sensitivity-Oriented Layer-Wise Acceleration and Compression for Convolutional Neural Network,
IEEE Access, 2019.

Chunsheng Mei, Zhenyu Liu, Yue Niu, Xiangyang Ji, Wei Zhou, Dongsheng Wang,
A 200MHz 202.4GFLOPS@10.8W VGG16 Accelerator in Xilinx VX690T,
IEEE Global Conference on Signal and Information Processing (GlobalSIP) (Oral), 2017.

Yue Niu, Chunsheng Mei, Zhenyu Liu, Xiangyang Ji, Wei Zhou, Dongsheng Wang,
Sensitivity-Based Acceleration and Compression Algorithm for Convolutional Neural Network,
IEEE Global Conference on Signal and Information Processing (GlobalSIP) (Oral), 2017.

Yue Niu, Wei Zhou, Xiaocong Lian, Xin Zhou, Jiamin Yang,
A Stepped-RAM Reading and Multiplierless VLSI Architecture for Intra Prediction in HEVC,
The Pacific-Rim Conference on Multimedia (PCM), 2016

