About

I am interested in neural network models and learning algorithms that can discover, capture, represent, generalize, and transfer (meta-)understanding for, and across, tasks of human cognition that involve language.

My thesis focused on structured prediction methods for models including recurrent neural networks, with the unifying theme of integrating structured learning with inexact search. As one contribution, I developed a reinforcement learning-style training algorithm, with applicability beyond parsing (as shown in Edunov et al., 2018), along with the first neural network parsing model optimized for the final evaluation metric with a structure-level loss. This was also the first work to use beam search to enable the learning of non-locally-normalized RNN models that condition on the full input, and the first to do so with an expected loss. It demonstrates that label bias is present even in models that exploit unbounded lookahead, and that global normalization is one strategy for mitigating its negative effects. As beneficial side effects, the model also mitigates exposure bias and the loss-evaluation mismatch.
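
To make the expected-loss idea concrete, here is a minimal, self-contained sketch (not the thesis implementation): it normalizes whole-derivation scores over a beam of candidates and minimizes the expected loss 1 − F1 under that distribution. The names `expected_risk_and_grad`, `beam_scores`, and `beam_f1` are hypothetical; in the actual model the scores come from an RNN that conditions on the full input, and the gradient is backpropagated into the network.

```python
# Illustrative sketch only: an expected-F-style training signal computed over
# a beam of candidate parses, with normalization over whole derivations
# (global, beam-level) rather than over individual actions (local).
import numpy as np

def expected_risk_and_grad(beam_scores, beam_f1):
    """beam_scores: model scores of the candidate derivations kept in the beam.
    beam_f1: F-measure of each candidate against the gold parse.
    Returns the expected loss E_p[1 - F1] under the beam-normalized
    distribution p, and its gradient w.r.t. the derivation scores."""
    scores = np.asarray(beam_scores, dtype=float)
    f1 = np.asarray(beam_f1, dtype=float)

    # Softmax over whole-derivation scores (global normalization over the beam).
    p = np.exp(scores - scores.max())
    p /= p.sum()

    loss = np.dot(p, 1.0 - f1)             # expected loss = E_p[1 - F1]
    grad = p * ((1.0 - f1) - loss)         # standard softmax-risk gradient
    return loss, grad

# Toy usage: three beam candidates with their scores and sentence-level F1.
loss, grad = expected_risk_and_grad([2.0, 1.5, 0.3], [0.9, 0.7, 0.4])
print(loss, grad)
```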

As another contribution, I solved a long-standing problem in CCG parsing by developing the first dependency model for a shift-reduce parser. Its key components are a dependency oracle and a learning algorithm that integrates this oracle with the violation-fixing structured perceptron and beam search. The dependency oracle is itself a general hypergraph search algorithm with other potential applications.
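
The learning recipe can be sketched generically as below; this is an illustrative max-violation variant of the violation-fixing perceptron with beam search, not the parser's actual code. Every callable and parameter (`legal_actions`, `apply_action`, `features`, `oracle_allows`, `n_steps`) is a hypothetical stand-in, with the dependency oracle abstracted as `oracle_allows`, which accepts any action that keeps the gold dependencies reachable.

```python
# Illustrative sketch only: beam search plus a max-violation perceptron update,
# with the dependency oracle abstracted behind `oracle_allows`.
from collections import defaultdict

def dot(weights, feats):
    # Sparse dot product between a weight vector and a feature dict.
    return sum(weights[f] * v for f, v in feats.items())

def train_one(weights, init_state, gold_deps, n_steps, beam_size,
              legal_actions, apply_action, features, oracle_allows):
    """One max-violation perceptron update on a single sentence.
    `weights` is a defaultdict(float); the callables are hypothetical
    stand-ins for the transition system, feature extractor, and oracle."""
    beam = [(0.0, init_state, defaultdict(float))]   # (score, state, feature counts)
    gold = (0.0, init_state, defaultdict(float))     # gold item kept alive by the oracle
    worst = (0.0, None, None)                        # (violation, predicted feats, gold feats)

    for _ in range(n_steps):
        expanded = []
        for sc, st, fs in beam:
            for a in legal_actions(st):
                f = features(st, a)
                nf = defaultdict(float, fs)
                for k, v in f.items():
                    nf[k] += v
                expanded.append((sc + dot(weights, f), apply_action(st, a), nf))
        beam = sorted(expanded, key=lambda x: -x[0])[:beam_size]

        # Advance the gold item with any action the dependency oracle accepts.
        gsc, gst, gfs = gold
        a = next(a for a in legal_actions(gst) if oracle_allows(gst, a, gold_deps))
        f = features(gst, a)
        ngf = defaultdict(float, gfs)
        for k, v in f.items():
            ngf[k] += v
        gold = (gsc + dot(weights, f), apply_action(gst, a), ngf)

        # Track the step at which the best beam item exceeds the gold item the most.
        violation = beam[0][0] - gold[0]
        if violation > worst[0]:
            worst = (violation, beam[0][2], gold[2])

    if worst[1] is not None:
        # Perceptron update at the maximum violation: reward gold, penalize predicted.
        for k, v in worst[2].items():
            weights[k] += v
        for k, v in worst[1].items():
            weights[k] -= v
```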

I approach parsing as a typical and interesting structured prediction task, but I tend to believe it is likely to be of limited use in end-to-end language processing, and in the endeavor to achieve automated human-level language understanding, which remains elusive with currently available language technologies (more on this on the last page here).

As the shift from natural language text processing to natural language understanding swiftly takes place, quite a few contentious questions have been raised. Is structured prediction still important for neural network models? Is structure/linguistics necessary, or a necessary evil? Are insights from neuroscience more relevant? Is language just a side effect of general intelligence?

These are all unsettled questions, but I want to test unconventional ideas: not to seek controversy, but to confront it and take small steps towards automated language understanding.

Here is my research statement.

Papers

Structured Learning with Inexact Search: Advances in Shift-Reduce CCG Parsing
Thesis pdf slides

LSTM Shift-Reduce CCG Parsing
Wenduan Xu
In EMNLP 2016 pdf code

Expected F-measure Training for Shift-Reduce Parsing with Recurrent Neural Networks
Wenduan Xu, Michael Auli, and Stephen Clark
In NAACL 2016 pdf slides code

Don’t Interrupt Me While I Type: Inferring Text Entered Through Gesture Typing on Android Keyboards
Laurent Simon, Wenduan Xu, and Ross Anderson
In PETS 2016 pdf blog
Andreas Pfitzmann Best Student Paper Award, runner-up


CCG Supertagging with a Recurrent Neural Network
Wenduan Xu, Michael Auli, and Stephen Clark
In ACL 2015 (short paper) pdf slides code

Shift-Reduce CCG Parsing with a Dependency Model
Wenduan Xu, Stephen Clark, and Yue Zhang
In ACL 2014 pdf errata slides code

Learning to Prune: Context-Sensitive Pruning for Syntactic MT
Wenduan Xu, Yue Zhang, Philip Williams, and Philipp Koehn
In ACL 2013 (short paper) pdf poster code

Extending Hiero Decoding in Moses with Cube Growing
Wenduan Xu and Philipp Koehn
In PBML pdf
(Presented at the 7th MT Marathon 2012.)