RepEval2019

The Third Workshop on Evaluating Vector Space Representations for NLP

June 6, 2019, Minneapolis (USA)
(co-located with NAACL)

Workshop program

The workshop will be held in Nicollet A 1-2 at the Hyatt Regency in Minneapolis.

Poster session location: Hyatt Exhibit Hall on the Main Level.

The floor plan is available here.

09:00 - 09:30 Opening remarks. Evaluation of meaning representations for NLP: directions and milestones (SLIDES).

09:30 - 10:30 How well do neural NLP systems generalize? (SLIDES)
Invited talk by Tal Linzen (Johns Hopkins University)

Neural networks have rapidly become central to NLP systems. While such systems often perform well on typical test set examples, their generalization abilities are poorly understood. In this talk, I will demonstrate how experimental paradigms from psycholinguistics can help us characterize the gaps between the abilities of neural systems and those of humans, by focusing on interpretable axes of generalization from the training set rather than on average test set performance. I will show that recurrent neural network (RNN) language models are able to process syntactic dependencies in typical sentences with considerable success, but when evaluated on more complex, syntactically controlled materials, their error rate increases sharply. Likewise, neural systems trained to perform natural language inference generalize much more poorly than their test set performance would suggest.

Speaker bio: Tal Linzen is an Assistant Professor of Cognitive Science and Computer Science at Johns Hopkins University. He directs the Computation and Psycholinguistics Lab, which develops computational models of human language comprehension and acquisition, as well as methods for interpreting, evaluating, and extending neural network models for natural language processing. Dr. Linzen is one of the co-organizers of the BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (EMNLP 2018, ACL 2019).

10:30 - 11:00 Coffee break

11:00 - 12:00 Learning and evaluating generalizable vector space representations of texts
Invited talk by Kristina Toutanova (Google AI)

I will talk about our recent and forthcoming work on pre-training vector space representations of texts of multiple granularities and in different contexts. I will present evaluation on end-user tasks and an analysis of the component representations on probing tasks. Finally, I will motivate the need for new kinds of textual representations and ways to measure their ability to generalize across tasks.

Speaker bio: Kristina Toutanova is a research scientist on the Language team at Google Research in Seattle and affiliate faculty at the University of Washington. She obtained her Ph.D. from Stanford University with Christopher Manning. Prior to joining Google in 2017, she was a researcher at Microsoft Research, Redmond. Kristina focuses on modeling the structure of natural language using machine learning, most recently in the areas of representation learning, question answering, information retrieval, semantic parsing, and knowledge base completion. She is a past co-editor-in-chief of TACL and was a program co-chair for ACL 2014.

12:00 - 13:30 Lunch

13:30 - 14:45 Oral session

14:45 - 15:00 1-minute poster madness

15:00 - 15:45 Poster session

15:45 - 16:00 Coffee break

16:00 - 17:15 A linguist, an NLP engineer, and a psycholinguist walk into a bar… Panel discussion with Sam Bowman, Ryan Cotterell, Barry Devereux, Allyson Ettinger, and Tal Linzen.

17:15 - 17:30 Closing remarks