Projects using DyNet

DyNet works for your complex neural networks

DyNet was designed from the ground up to be fast for neural networks with complex structure or control flow, such as those needed to handle tree or graph structures, perform reinforcement learning, or train with exploration. Below are some examples of full systems that use DyNet to handle their dynamic neural network needs.
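As a concrete illustration of this flexibility, the sketch below (the dimensions, vocabulary size, and `encode` helper are all hypothetical) builds a fresh computation graph for every example, so ordinary Python recursion over a tree determines the graph's shape:

```python
import dynet as dy

pc = dy.ParameterCollection()
EMB = pc.add_lookup_parameters((1000, 64))   # hypothetical vocab size and dims
W = pc.add_parameters((64, 128))             # composes two 64-dim children
b = pc.add_parameters((64,))

def encode(tree):
    """Encode a binary tree of word ids given as nested tuples."""
    if isinstance(tree, int):                # leaf: embedding lookup
        return EMB[tree]
    left, right = tree                       # internal node: compose children
    return dy.tanh(dy.parameter(W) *
                   dy.concatenate([encode(left), encode(right)]) +
                   dy.parameter(b))

dy.renew_cg()                                # a fresh graph for each example
vec = encode((3, ((5, 1), 2)))               # graph shape follows the input tree
print(vec.npvalue().shape)                   # (64,)
```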

Syntactic Parsing

Parsing is currently the most prominent scenario in which DyNet has been used, and DyNet was behind the development of a number of methods such as stack LSTMs, bi-directional LSTM feature extractors for dependency parsing, recurrent neural network grammars, and hierarchical tree LSTMs. A DyNet-based submission to the CoNLL shared task on dependency parsing placed second while running nearly an order of magnitude faster than the other submissions.
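The stack LSTM in particular maps naturally onto DyNet's persistent RNN states: extending a state returns a new state and leaves the old one intact, so a pop is just a return to an earlier handle. A toy sketch of the push/pop mechanics (the dimensions and helper functions are made up for illustration):

```python
import dynet as dy

pc = dy.ParameterCollection()
lstm = dy.LSTMBuilder(1, 32, 32, pc)         # hypothetical toy dimensions

dy.renew_cg()
stack = [lstm.initial_state()]               # bottom of the stack

def push(x):                                 # x: a 32-dim dy.Expression
    stack.append(stack[-1].add_input(x))

def pop():                                   # earlier states stay valid
    stack.pop()

push(dy.inputVector([0.1] * 32))
push(dy.inputVector([0.2] * 32))
pop()
summary = stack[-1].output()                 # encodes current stack contents
```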

Machine Translation

DyNet is the backend chosen by a number of machine translation systems such as Mantis, Lamtram, nmtkit, and xnmt. It has powered the development of models that use complicated structures, such as lattice-to-sequence models.

Speech Recognition

DyNet powers the “Listen, Attend and Spell”-style models in xnmt. It has also been used to implement acoustic models trained with connectionist temporal classification (CTC).

Graph Parsing

DyNet powers the transition-based UCCA parser, which predicts graph structures from text.

Language Modeling

DyNet has been used in the development of hybrid neural/n-gram language models, and generative syntactic language models.
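The neural component of such a hybrid is typically an ordinary recurrent language model, which takes only a few lines in DyNet. A minimal sketch, where the vocabulary size, dimensions, and `lm_loss` helper are all illustrative rather than taken from the cited work:

```python
import dynet as dy

pc = dy.ParameterCollection()
V, D = 10000, 64                             # hypothetical vocab and hidden size
EMB = pc.add_lookup_parameters((V, D))
rnn = dy.LSTMBuilder(1, D, D, pc)
W = pc.add_parameters((V, D))                # projects hidden state to vocab

def lm_loss(sent):                           # sent: a list of word ids
    dy.renew_cg()
    s = rnn.initial_state()
    losses = []
    for cur, nxt in zip(sent, sent[1:]):     # score each next word
        s = s.add_input(EMB[cur])
        losses.append(dy.pickneglogsoftmax(dy.parameter(W) * s.output(), nxt))
    return dy.esum(losses)

trainer = dy.SimpleSGDTrainer(pc)
loss = lm_loss([1, 42, 7, 2])                # made-up word ids
loss.value()                                 # run forward
loss.backward()                              # then backward
trainer.update()
```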

Tagging

DyNet has been applied to tagging tasks such as named entity recognition, semantic role labeling, and punctuation prediction, and has been used in the creation of new architectures such as segmental recurrent neural networks.
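A bare-bones bidirectional-LSTM tagger of the kind these systems build on is similarly compact. The sketch below is illustrative only (`VOCAB`, `TAGS`, `DIM`, and `tag_loss` are invented names and sizes, not taken from any of the systems above):

```python
import dynet as dy

pc = dy.ParameterCollection()
VOCAB, TAGS, DIM = 10000, 17, 128            # hypothetical sizes
EMB = pc.add_lookup_parameters((VOCAB, DIM))
fwd = dy.LSTMBuilder(1, DIM, DIM, pc)        # left-to-right LSTM
bwd = dy.LSTMBuilder(1, DIM, DIM, pc)        # right-to-left LSTM
W = pc.add_parameters((TAGS, 2 * DIM))

def tag_loss(words, tags):                   # parallel lists of ids
    dy.renew_cg()
    embs = [EMB[w] for w in words]
    f = fwd.initial_state().transduce(embs)
    b = list(reversed(bwd.initial_state().transduce(list(reversed(embs)))))
    losses = [dy.pickneglogsoftmax(dy.parameter(W) * dy.concatenate([fi, bi]), t)
              for fi, bi, t in zip(f, b, tags)]
    return dy.esum(losses)
```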

Morphology

DyNet has been used in seminal work on morphological inflection generation and on inflection generation with hard attention.