# Russian Language Model

This Russian language model was trained with the fast.ai implementation of the AWD-LSTM language model (essentially an LSTM regularized with several kinds of dropout) on data from the news site Lenta.ru. The achieved perplexity is 36.23 (with a 90/10 train/validation split), compared to the English state of the art of 40.68 as of November 17, 2017 (note that perplexities measured on different corpora and languages are not directly comparable). To the best of my knowledge, there was no comparable published work for Russian at the time of writing (May 8, 2018). Details can be found in the notebook RLM.ipynb. The trained model is available on Google Drive.
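For reference, perplexity is the exponential of the mean cross-entropy loss, so any language-model learner that reports cross-entropy can be evaluated the same way. This repository predates the current fastai API, but a minimal sketch of training a comparable model with fastai v2 might look like the following. The CSV file name and the `text` column are assumptions for illustration, not part of this repo:

```python
from fastai.text.all import *
import pandas as pd

# Assumed input: a DataFrame with one news article per row in a "text" column.
df = pd.read_csv("lenta_articles.csv")  # hypothetical file name

# Build language-model DataLoaders with a 90/10 train/validation split,
# mirroring the split reported above.
dls = TextDataLoaders.from_df(df, text_col="text", is_lm=True, valid_pct=0.1)

# AWD-LSTM: a 3-layer LSTM with embedding, input, hidden and weight dropout.
# Train from scratch (pretrained=False), since the stock weights are English.
learn = language_model_learner(dls, AWD_LSTM, pretrained=False,
                               metrics=[Perplexity()])

# Perplexity = exp(validation cross-entropy loss).
learn.fit_one_cycle(10, 2e-3)
```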

## Embeddings vs Language Model

There are several word embeddings trained on Russian corpora: fastText, word2vec, RusVectōrēs. Why would you choose a language model instead? The reason is simple: a language model is a three-layer neural net with non-linear activations and dropout, whereas embeddings are just 300-dimensional vectors in a linear space. This richer word representation captures more sophisticated peculiarities of the language. It also makes transfer learning straightforward: adding a simple custom head works well for classification as well as regression tasks, as sketched below. Check out a similar model trained on the OpenSubtitles dataset, which earned me second place in the Yandex Algorithm 2018 ML track.
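As a sketch of that transfer-learning step, continuing from the `learn` object in the previous sketch: in current fastai, the language model's encoder can be saved and reused under a classification head. The encoder name, `df_clas`, and its columns are hypothetical:

```python
from fastai.text.all import *

# Save the trained language model's encoder (the LSTM stack without
# its decoder head); "lm_encoder" is a hypothetical name.
learn.save_encoder("lm_encoder")

# Build classification DataLoaders that reuse the LM vocabulary,
# e.g. from a labeled DataFrame df_clas with "text" and "label" columns.
dls_clas = TextDataLoaders.from_df(
    df_clas, text_col="text", label_col="label", text_vocab=dls.vocab
)

# A classifier = the pretrained encoder + a simple custom head.
clas = text_classifier_learner(dls_clas, AWD_LSTM, metrics=accuracy)
clas.load_encoder("lm_encoder")
clas.fit_one_cycle(4, 1e-2)
```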

## To-do
