Computer understanding of human language has been a central goal of Artificial Intelligence since the field's beginnings, with massive potential to improve communication, provide better access to information and automate basic human tasks. My research focuses on technologies for the automatic processing of human language, with applications including automatic translation (akin to Google's and Bing's translation tools). My core focus is on probabilistic machine learning models of language, particularly for handling uncertain or partially observed data and for structured prediction problems.

News

  • We will shortly be hiring a postdoc for a 2-3 year position in machine translation and deep learning. Please watch this space for more details (expected around March/April), or email me if you're interested and I'll keep you posted.

Current Projects

  • Efficient storage and access to text count data: An application to unlimited order language modelling. 2016 – 2017. Google Research Award, US$85k.
  • Learning Deep Semantics for Automatic Translation between Human Languages. 2016 – 2019. ARC Discovery with Reza Haffari, $450k.
  • Ariel: Analysis of Rare Incident-Event Languages. 2015 – 2018. DARPA LORELEI (sub-contract), $300k.
  • Adaptive Context-Dependent Machine Translation for Heterogeneous Text. 2014 – 2018. ARC Future Fellowship, $730k.
  • Pheme: Computing Veracity Across Media, Languages, and Social Networks. 2014 – 2017. EU FP7 with Kalina Bontcheva and others, £494k.

Selected Papers

Take and Took, Gaggle and Goose, Book and Read: Evaluating the Utility of Vector Differences for Lexical Relation Learning
Ekaterina Vylomova, Laura Rimell, Trevor Cohn and Timothy Baldwin. In Proceedings of ACL-16, 2016.
PDF
pigeo: A Python Geotagging Tool
Afshin Rahimi, Trevor Cohn and Timothy Baldwin. In Proceedings of ACL-16 (Demonstrations), 2016.
Hawkes Processes for Continuous Time Sequence Classification: An Application to Rumour Stance Classification in Twitter
Michal Lukasik, P.K. Srijith, Duy Vu, Kalina Bontcheva, Arkaitz Zubiaga and Trevor Cohn. In Proceedings of ACL-16 (Short papers), 2016.
Incorporating Structural Alignment Biases into an Attentional Neural Translation Model
Trevor Cohn, Cong Duy Vu Hoang, Ekaterina Vylomova, Kaisheng Yao, Chris Dyer and Gholamreza Haffari. In Proceedings of NAACL-16, 2016.
Abstract PDF Code
An Attentional Model for Speech Translation Without Transcription
Long Duong, Antonios Anastasopoulos, Steven Bird, David Chiang and Trevor Cohn. In Proceedings of NAACL-16, 2016.
Abstract PDF Code
Incorporating Context into Recurrent Neural Network Language Models
Cong Duy Vu Hoang, Gholamreza Haffari and Trevor Cohn. In Proceedings of NAACL-16 (short), 2016.
Abstract PDF
Document Context Language Models
Yangfeng Ji, Trevor Cohn, Lingpeng Kong, Chris Dyer and Jacob Eisenstein. In Proceedings of ICLR-16 Workshop, 2016.
Abstract PDF
Convolution Kernels for Discriminative Learning from Streaming Text
Michal Lukasik and Trevor Cohn. In Proceedings of AAAI-16, 2016.
Abstract PDF