I am interested in neural network models and learning algorithms which are able to discover, capture, represent, generalize, and transfer (meta)understanding for and across tasks of natural language understanding.
As the shift from natural language text processing to natural language understanding swiftly takes place, quite a few controversial issues have been raised. Is structured prediction still important for neural network models? Is structure/linguistics necessary, or a necessary evil? Are insights from neuroscience more relevant? Is language just a side effect of general intelligence?
These are all unsettled issues, but I'm open to testing any idea.
Here is an outdated rant which contains a small sample of my interests.
Structured Learning with Inexact Search: Advances in Shift-Reduce CCG Parsing
Thesis
don't-click-here pdf slides
LSTM Shift-Reduce CCG Parsing
Wenduan Xu
In EMNLP 2016
pdf code
Expected F-measure Training for Shift-Reduce Parsing with Recurrent Neural Networks
Wenduan Xu, Michael Auli, and Stephen Clark
In NAACL 2016
pdf slides code
Don’t Interrupt Me While I Type: Inferring Text Entered Through Gesture Typing on Android Keyboards
Laurent Simon, Wenduan Xu, and Ross Anderson
In PETS 2016
pdf blog
Andreas Pfitzmann Best Student Paper Award
CCG Supertagging with a Recurrent Neural Network
Wenduan Xu, Michael Auli, and Stephen Clark
In ACL 2015 (short paper)
pdf slides code
Shift-Reduce CCG Parsing with a Dependency Model
Wenduan Xu, Stephen Clark, and Yue Zhang
In ACL 2014
pdf errata slides code talk
Learning to Prune: Context-Sensitive Pruning for Syntactic MT
Wenduan Xu, Yue Zhang, Philip Williams, and Philipp Koehn
In ACL 2013 (short paper)
pdf poster code
Extending Hiero Decoding in Moses with Cube Growing
Wenduan Xu and Philipp Koehn
In PBML
pdf (Presented at the 7th MT Marathon 2012.)