CHDH Seminar Series 2020: What Do Neural Models "Know" About Natural Language? — Ekaterina Vylomova (Part 1)

Presenter: Ekaterina Vylomova

Date: Monday 20 April

Time: 12 - 1 PM

Location: Via Zoom. Please use this link at the time of the event to join: https://unimelb.zoom.us/j/820655212

Seminar Description: The history of neural approaches goes back to the 1950s, and the evolution of connectionist models has seen periods of ups and downs. Recent advances in computational resources and their availability have enabled neural models to overcome certain principal limitations and triggered their revival. Still, it is important to understand the factors behind the current success: is it due to more advanced architectures, a larger parameter space, richer training data, or "black box magic"? Do these models generalize as well as humans? Can they be biased?

In the first week of this seminar presentation, Ekaterina will address these questions, focusing on recent progress in Natural Language Processing (NLP) tasks such as machine translation, language modeling, part-of-speech tagging, and morphological inflection.

The second week of this seminar presentation will be devoted to Ekaterina's current project on the diachronic analysis of language change. Most models in NLP are trained on a single (static) corpus. But language constantly changes: words change their meanings, senses become broader or narrower, or switch their connotation from positive to negative. How can contemporary neural models help to automatically detect such changes? Here, Ekaterina will discuss and contrast multiple approaches to training epoch-specific models and outline existing metrics for evaluating semantic change, as in the sketch below. Finally, Ekaterina will enumerate major challenges and possible ways to overcome them.
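As a rough illustration of the kind of approach the talk contrasts, the sketch below compares a word's vectors from two epoch-specific embedding models after an orthogonal-Procrustes alignment, using cosine distance as a simple change score. This is a minimal sketch and not Ekaterina's own method; it assumes two gensim Word2Vec models trained separately on corpora from two periods, and the file names and function names are hypothetical.

    # Illustrative sketch: measuring semantic change between two epoch-specific
    # embedding models (training code not shown). Alignment follows the common
    # orthogonal-Procrustes approach before comparing a word's vectors.
    import numpy as np
    from gensim.models import Word2Vec  # assumed dependency

    def align_rotation(wv_old, wv_new, shared_vocab):
        """Rotation that maps the old embedding space onto the new one."""
        A = np.vstack([wv_old[w] for w in shared_vocab])
        B = np.vstack([wv_new[w] for w in shared_vocab])
        # SVD of the cross-covariance matrix gives the optimal orthogonal rotation.
        u, _, vt = np.linalg.svd(A.T @ B)
        return u @ vt

    def semantic_change(word, model_old, model_new):
        """Cosine distance between a word's aligned old vector and its new vector."""
        shared = [w for w in model_old.wv.key_to_index if w in model_new.wv.key_to_index]
        R = align_rotation(model_old.wv, model_new.wv, shared)
        v_old = model_old.wv[word] @ R
        v_new = model_new.wv[word]
        cos = np.dot(v_old, v_new) / (np.linalg.norm(v_old) * np.linalg.norm(v_new))
        return 1.0 - cos  # larger value = more change across epochs

    # Hypothetical usage:
    # model_1900 = Word2Vec.load("epoch_1900.model")
    # model_2000 = Word2Vec.load("epoch_2000.model")
    # print(semantic_change("gay", model_1900, model_2000))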

Presenter Bio: Ekaterina Vylomova is a Postdoctoral Fellow at the Melbourne School of Psychological Sciences, University of Melbourne. As a member of Nick Haslam's group, she works on the automatic detection of lexical semantic change over time. Prior to that, she received a doctorate in Computer Science, focusing on Natural Language Processing; in her thesis, she developed and evaluated neural models of linguistic morphology. Her research interests include diachronic language modeling, machine translation, computational morphology and typology.

See Ekaterina's Google Scholar page here: https://scholar.google.com.au/citations?user=JlVHhVUAAAAJ&hl=en&oi=ao