Fine-Tuning Transformers for Your NLP Problem - A Journey Through Past Kaggle Competitions

About this Event

We will talk about leveraging the power of pretrained transformer models by adapting and fine-tuning them to fit custom problems. We illustrate this flexibility by discussing diverse use cases from recent Kaggle competitions, such as question answering, sentiment segmentation, multilingual toxicity detection, and ordering the code cells of a Jupyter notebook.
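As a taste of the kind of adaptation the talk covers, here is a minimal sketch (not the speaker's code) of fine-tuning a pretrained transformer for one of the listed use cases, toxicity detection, using the Hugging Face transformers and datasets libraries. The backbone model, the tiny in-memory dataset, and the hyperparameters are illustrative assumptions only.

```python
# Minimal sketch: fine-tune a pretrained transformer for binary
# toxicity classification. Model name, data, and hyperparameters
# are assumptions for illustration, not the speaker's setup.
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Tiny in-memory dataset standing in for real competition data.
data = Dataset.from_dict({
    "text": ["you are wonderful", "you are an idiot"],
    "label": [0, 1],
})

model_name = "distilbert-base-uncased"  # assumed backbone
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=2
)

def tokenize(batch):
    # Convert raw text into token ids the transformer expects.
    return tokenizer(
        batch["text"], truncation=True, padding="max_length", max_length=64
    )

data = data.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="toxicity-model",
    num_train_epochs=1,
    per_device_train_batch_size=2,
)

# Trainer handles the training loop; the "label" column is used
# as the classification target automatically.
Trainer(model=model, args=args, train_dataset=data).train()
```

The same pattern, swapping the task head and the data preparation, extends to the other use cases mentioned above, which is exactly the flexibility the talk explores.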

Speaker

Dr. Christof Henkel

Dr. Christof Henkel works as an applied deep learning researcher at NVIDIA. His main interests are novel deep learning architectures related to graphs, computer vision, text, and audio. He holds a PhD from the Ludwig-Maximilians-University in Munich, where he studied mathematics and specialized in stochastic processes. He is a triple Kaggle Grandmaster and, after participating in more than 50 Kaggle competitions, reached rank #1 in the worldwide competition ranking in 2022.