AI Summit Munich 2023

About this event

How do model architecture, pre-training objective, dataset size, and parameter count affect a model's linguistic abilities? They don't 🤯. Or at least not as directly as we thought. Evaluation of generated text remains a significant issue. Recently introduced model-based metrics have shown promising results compared to n-gram-based metrics like BLEU, yet they still suffer severe drawbacks (http://arxiv.org/abs/2205.10696).
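To make the contrast concrete: n-gram metrics like BLEU score surface overlap between a candidate and a reference, so a faithful paraphrase can score near zero. Below is a minimal, illustrative sketch of sentence-level BLEU (clipped n-gram precisions with a brevity penalty) in pure Python; real BLEU implementations such as sacrebleu aggregate over a corpus and support multiple references, so this is a simplification for intuition only.

```python
from collections import Counter
from math import exp, log

def ngrams(tokens, n):
    # All contiguous n-grams of a token list.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    # Simplified sentence-level BLEU: geometric mean of clipped
    # n-gram precisions times a brevity penalty. Zero precisions
    # are floored to a tiny value to keep the log defined.
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clip each candidate n-gram count by its count in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)
    bp = 1.0 if len(cand) > len(ref) else exp(1 - len(ref) / max(len(cand), 1))
    return bp * exp(sum(log(p) for p in precisions) / max_n)

# Exact match scores perfectly; a meaning-preserving paraphrase
# with little word overlap scores near zero.
print(round(bleu("the cat sat on the mat", "the cat sat on the mat"), 2))  # 1.0
print(round(bleu("a feline rested on the rug", "the cat sat on the mat"), 2))  # 0.0
```

The second example is exactly the failure mode that motivates model-based metrics, which compare candidate and reference in an embedding space rather than by string overlap.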

Slides

Speaker

Muhtasham Oblokulov

Muhtasham Oblokulov is a Machine Learning Engineer at Munich Re. He is passionate about applied AI, specifically transfer learning in NLP. Beyond that, his experience spans Brain-Computer Interfaces, Anomaly and Out-of-Distribution Detection, Synthetic Data Augmentation, and NLP for Low-Resource Languages. In May 2022, he co-founded Munich NLP with colleagues from TUM and LMU.