This project attempts to maintain the SOTA performance on various sub-tasks in machine translation. We also give a detailed review of recent progress and potential research trends for NMT, available at. Any comments and suggestions are welcome.

Machine translation has entered the era of neural methods, which attracts more and more researchers. Currently, hundreds of MT papers are published each year, and it is difficult for researchers to keep track of the SOTA models in each research direction. Accordingly, we try to record the SOTA performance in this project. There are several research directions in neural machine translation, including architecture design, multimodal translation, speech and simultaneous translation, document translation, multilingual translation, semi-supervised translation, unsupervised translation, domain adaptation, non-autoregressive translation, etc.

It is a pity that there are no widely used benchmark datasets for many research tasks such as document translation, multilingual translation and domain adaptation. Thus, we try our best to record the SOTA performance for the tasks in which a dataset is employed by several papers. Note that we will inevitably miss some new SOTA models; please remind us if you know of one. Furthermore, the best practice is to employ SacreBLEU (Post, 2018) to report BLEU scores for fair comparison on widely used datasets.

We report architecture exploration starting from the Transformer, with nearly the same scale of network parameters. We use the widely used WMT14 En-De dataset and detokenized case-sensitive BLEU for comparison (a usage sketch follows the references below). * means that the value is not mentioned in the paper and we inferred it from the released code.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser and Illia Polosukhin. Attention Is All You Need. of NIPS 2017.
Peter Shaw, Jakob Uszkoreit and Ashish Vaswani. Self-Attention with Relative Position Representations. of NAACL 2018.
Matt Post. A Call for Clarity in Reporting BLEU Scores. of WMT 2018.
Felix Wu, Angela Fan, Alexei Baevski, Yann N. Dauphin and Michael Auli. Pay Less Attention with Lightweight and Dynamic Convolutions. of ICLR 2019.
Yiping Lu, Zhuohan Li, Di He, Zhiqing Sun, Bin Dong, Tao Qin, Liwei Wang and Tie-Yan Liu. Understanding and Improving Transformer From a Multi-Particle Dynamic System Point of View. arXiv 2019.
Guangxiang Zhao, Xu Sun, Jingjing Xu, Zhiyuan Zhang and Liangchen Luo. MUSE: Parallel Multi-Scale Attention for Sequence to Sequence Learning. arXiv 2019.
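As a concrete companion to the evaluation setup above, here is a minimal sketch (not code from this project) of computing detokenized, case-sensitive BLEU with the sacrebleu Python package; the file names are placeholders. Reporting the SacreBLEU signature along with the score makes the tokenization and reference set explicit.

```python
# Minimal sketch, assuming one detokenized sentence per line;
# file names are placeholders, not paths from this project.
import sacrebleu

with open("hyp.detok.txt", encoding="utf-8") as f:
    hypotheses = [line.rstrip("\n") for line in f]
with open("ref.detok.txt", encoding="utf-8") as f:
    references = [line.rstrip("\n") for line in f]

# corpus_bleu takes the hypothesis list and a list of reference streams.
# sacrebleu applies its own internal tokenization, so the inputs stay
# detokenized, and scoring is case-sensitive by default.
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU = {bleu.score:.2f}")

# Command-line equivalent: sacrebleu ref.detok.txt -i hyp.detok.txt
```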
Document-level Neural Machine Translation

In general, document-level machine translation aims at exploiting useful document-level information (multiple sentences around the current sentence, or the whole document) to improve the translation quality of the current sentence as well as the coherence and cohesion of the translated document. Unfortunately, there is no widely used dataset in document-level translation. Maruf et al. evaluated their method on three diverse English-German datasets: TED, NEWS and Europarl. Below we show the training/development/test corpora statistics of the datasets (Document Length denotes the average number of sentences per document). We list several recent BLEU results on these datasets as follows. * means that the results are not reported in the original paper but are reimplemented by Maruf et al. Voita et al. use the En-Ru OpenSubtitles2018 corpus and create hand-crafted test sets to evaluate discourse phenomena. The performances of different methods on the various discourse phenomena are listed in the following table (a sketch of the contrastive scoring protocol follows the references below).

Sameen Maruf, André F.T. Martins and Gholamreza Haffari. Selective Attention for Context-aware Neural Machine Translation. of NAACL 2019.
Lesly Miculicich, Dhananjay Ram, Nikolaos Pappas and James Henderson. Document-level Neural Machine Translation with Hierarchical Attention Networks. of EMNLP 2018.
Elena Voita, Rico Sennrich and Ivan Titov. When a Good Translation is Wrong in Context: Context-Aware Machine Translation Improves on Deixis, Ellipsis, and Lexical Cohesion. of ACL 2019.
Elena Voita, Rico Sennrich and Ivan Titov. Context-Aware Monolingual Repair for Neural Machine Translation. of EMNLP 2019.
Zhengxin Yang, Jinchao Zhang, Fandong Meng, Shuhao Gu, Yang Feng and Jie Zhou. Enhancing Context Modeling with a Query-Guided Capsule Network for Document-level Translation. of EMNLP 2019.
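The discourse test sets above are scored contrastively: each example pairs a correct translation with variants that are wrong only in the targeted phenomenon (e.g., deixis or lexical cohesion), and a model is credited when it assigns the correct variant the highest score. Below is a minimal sketch of that protocol, where `score` is a hypothetical stand-in for a model's log-probability of a target given its source and context; it is not an API from any of the papers listed.

```python
# Hedged sketch of contrastive evaluation on discourse test sets.
# `score` is a hypothetical model-scoring function, not a real library call.
from typing import Callable, List, Tuple

# Each example: (source with context, correct translation,
#                contrastive translations that violate the phenomenon).
Example = Tuple[str, str, List[str]]

def contrastive_accuracy(examples: List[Example],
                         score: Callable[[str, str], float]) -> float:
    """Fraction of examples where the correct translation outscores
    every contrastive variant under the model's scoring function."""
    hits = 0
    for source, correct, variants in examples:
        correct_score = score(source, correct)
        if all(correct_score > score(source, wrong) for wrong in variants):
            hits += 1
    return hits / len(examples)
```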