Reality Check: Natural Language Processing in the era of Large Language Models

About this Event

Large language models (LLMs) have driven a major breakthrough in NLP, both in understanding natural language queries, commands, and questions, and in generating relevant, coherent, grammatical, human-like text. LLMs like ChatGPT have become products used by many for getting advice, writing essays, troubleshooting and writing code, creative writing, and more. This calls for a reality check: which NLP tasks have LLMs solved? What challenges remain, and which new opportunities have LLMs created?

In this talk, I will discuss several areas of NLP that can benefit from LLMs but remain challenging for them: grounding, i.e., interpreting language based on non-linguistic context; reasoning; and real-world applications. Finally, I will argue that the standard benchmarking and evaluation techniques used in NLP need to change drastically in order to provide a more realistic picture of current capabilities.

Speaker

Vered Shwartz

Vered Shwartz is an Assistant Professor of Computer Science at the University of British Columbia, and a CIFAR AI Chair at the Vector Institute. Her research interests include commonsense reasoning, computational semantics and pragmatics, and multiword expressions. Previously, Vered was a postdoctoral researcher at the Allen Institute for AI (AI2) and the University of Washington, and received her PhD in Computer Science from Bar-Ilan University. Vered’s work has been recognized with several awards, including The Eric and Wendy Schmidt Postdoctoral Award for Women in Mathematical and Computing Sciences, the Clore Foundation Scholarship, and an ACL 2016 outstanding paper award.