Enabling computers to understand human language has been a central goal of Artificial Intelligence since the field's beginnings, with massive potential to improve communication, provide better access to information and automate basic human tasks. My research focuses on technologies for the automatic processing of human language, with applications including automatic translation (akin to Google's and Bing's translation tools). My core focus is on probabilistic machine learning models of language, particularly for handling uncertain or partially observed data and for structured prediction problems.

News

  • Melbourne will host ACL in July 2018 at the Melbourne Convention Centre, with Tim Baldwin, Karin Verspoor and me serving as the local chairs.
  • Co-organising the ALTA 2016 workshop, to be held at Monash University's Caulfield campus in Melbourne on 5–7 December 2016.
  • I'll be giving a tutorial on succinct data structures for NLP with Matthias Petri at COLING 2016 in Osaka, Japan, on 12 December 2016.
  • My student Oliver Adams won the Best Short Paper Award at EMNLP 2016! My group also has several other papers appearing at the conference and a TACL presentation (listed below).

Current Projects

  • Efficient Storage and Access to Text Count Data: An Application to Unlimited Order Language Modelling. 2016 – 2017. Google Research Award, US$85k.
  • Learning Deep Semantics for Automatic Translation between Human Languages. 2016 – 2019. ARC Discovery with Reza Haffari, $450k.
  • Ariel: Analysis of Rare Incident-Event Languages. 2015 – 2018. DARPA LORELEI (sub-contract), $300k.
  • Adaptive Context-Dependent Machine Translation for Heterogeneous Text. 2014 – 2018. ARC Future Fellowship, $730k.
  • Pheme: Computing Veracity Across Media, Languages, and Social Networks. 2014 – 2017. EU FP7 with Kalina Bontcheva and others, £494k.

Selected Papers

Decoding as Continuous Optimization in Neural Machine Translation
Cong Duy Vu Hoang, Gholamreza Haffari and Trevor Cohn. arXiv preprint, 2017.
Abstract PDF
DyNet: The Dynamic Neural Network Toolkit
Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, Kevin Duh, Manaal Faruqui, Cynthia Gan, Dan Garrette, Yangfeng Ji, Lingpeng Kong, Adhiguna Kuncoro, Gaurav Kumar, Chaitanya Malaviya, Paul Michel, Yusuke Oda, Matthew Richardson, Naomi Saphra, Swabha Swayamdipta and Pengcheng Yin. arXiv preprint, 2017.
Abstract PDF Code
Context-Aware Prediction of Derivational Word-forms
Ekaterina Vylomova, Ryan Cotterell, Trevor Cohn and Timothy Baldwin. In Proceedings of EACL (short), 2017.
Robust Training under Linguistic Adversity
Yitong Li, Trevor Cohn and Timothy Baldwin. In Proceedings of EACL (short), 2017.
Pairwise Webpage Coreference Classification using Distant Supervision
Shivashankar Subramanian, Timothy Baldwin, Julian Brooke and Trevor Cohn. In Proceedings of WWW (posters), 2017.
Cross-Lingual Word Embeddings for Low-Resource Language Modeling
Oliver Adams, Adam Makarucha, Graham Neubig, Steven Bird and Trevor Cohn. In Proceedings of EACL, 2017.
Abstract PDF
Multilingual Training of Crosslingual Word Embeddings
Long Duong, Hiroshi Kanayama, Tengfei Ma, Steven Bird and Trevor Cohn. In Proceedings of EACL, 2017.
Abstract PDF
Learning a Lexicon and Translation Model from Phoneme Lattices
Oliver Adams, Graham Neubig, Trevor Cohn, Steven Bird, Quoc Truong Do and Satoshi Nakamura. In Proceedings of EMNLP (short), 2016.
Winner of Best Short Paper Award
Abstract PDF