Special Anniversary Edition: Scaling Down to Scale Up: A Guide to Parameter-Efficient Fine-Tuning

About this Event

This paper presents a systematic overview and comparison of parameter-efficient fine-tuning methods covering over 40 papers published between February 2019 and February 2023. These methods aim to resolve the infeasibility and impracticality of fine-tuning large language models by only training a small set of parameters. We provide a taxonomy that covers a broad range of methods and present a detailed method comparison with a specific focus on real-life efficiency and fine-tuning multibillion-scale language models.
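To give a feel for the core idea of training only a small set of parameters, below is a minimal, illustrative sketch of one popular parameter-efficient approach (a LoRA-style low-rank update) in PyTorch. The class name, rank, and scaling choices here are assumptions for illustration, not the specific methods compared in the paper.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a small trainable low-rank update.

    Illustrative sketch only: names and the rank/scaling defaults are
    assumptions, not drawn from the paper under discussion.
    """

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen

        # Only these two small matrices are trained.
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen output plus the low-rank correction: (W + scaling * B A) x
        return self.base(x) + self.scaling * (x @ self.lora_a.T @ self.lora_b.T)


if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(768, 768), rank=8)
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(f"trainable: {trainable} / {total} parameters")
```

In this toy example only about 2% of the layer's parameters receive gradients, which is the kind of reduction that makes fine-tuning multibillion-parameter models tractable on modest hardware.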

Speaker

Vladislav Lialin

Vladislav Lialin is a computer science PhD student at the University of Massachusetts Lowell, advised by Anna Rumshisky. His research areas include continual learning for large language models, multimodal learning, and model analysis. In particular, he is hyping large-scale models and thinks that every task is a language modeling task if you try hard enough. He is currently interning at Amazon Alexa AI.