LASER vs MUSE

| | LASER | MUSE |
|---|---|---|
| Mentions | 5 | 4 |
| Stars | 3,539 | 3,128 |
| Growth | 0.8% | - |
| Activity | 5.7 | 0.0 |
| Last commit | 21 days ago | over 1 year ago |
| Language | Jupyter Notebook | Python |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
LASER
-
SentenceTransformers: Python framework for sentence, text and image embeddings
I'm curious how people are handling multi-lingual embeddings.
I've found LASER[1] which originally had the idea to embed all languages in the same vector space, though it's a bit harder to use than models available through SentenceTransformers. LASER2 stuck with this approach, but LASER3 switched to language-specific models. However, I haven't found benchmarks for these models, and they were released about 2 years ago.
Another alternative would be to translate everything before embedding, which would introduce some amount of error, though maybe it wouldn't be significant.
1. https://github.com/facebookresearch/LASER
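For comparison, here is roughly what the SentenceTransformers route looks like. This is a minimal sketch, assuming one of the library's published multilingual checkpoints; the model name is illustrative and may not be the best current choice:

```python
# Minimal multilingual-embedding sketch with SentenceTransformers.
# Assumption: "paraphrase-multilingual-MiniLM-L12-v2" is one of the
# published multilingual checkpoints; any model from sbert.net with
# cross-lingual alignment works the same way.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

sentences = [
    "The cat sits on the mat.",          # English
    "Le chat est assis sur le tapis.",   # French
    "Die Katze sitzt auf der Matte.",    # German
]

# All languages are encoded into the same vector space, so cosine
# similarity across languages is meaningful.
embeddings = model.encode(sentences, normalize_embeddings=True)
print(embeddings.shape)               # (3, 384) for this checkpoint
print(embeddings[0] @ embeddings[1])  # cross-lingual cosine similarity
```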
-
[D] Hey Reddit! We're a bunch of research scientists and software engineers and we just open sourced a new state-of-the-art AI model that can translate between 200 different languages. We're excited to hear your thoughts so we're hosting an AMA on 07/21/2022 @ 9:00AM PT. Ask Us Anything!
You can check out some of our materials and open sourced artifacts here:

- Our latest blog post: https://ai.facebook.com/blog/nllb-200-high-quality-machine-translation
- Project Overview: https://ai.facebook.com/research/no-language-left-behind/
- Product demo: https://nllb.metademolab.com/
- Research paper: https://research.facebook.com/publications/no-language-left-behind
- NLLB-200: https://github.com/facebookresearch/fairseq/tree/nllb
- FLORES-200: https://github.com/facebookresearch/flores
- LASER3: https://github.com/facebookresearch/LASER

Joining us today for the AMA are:

- Angela Fan (AF), Research Scientist
- Jean Maillard (JM), Research Scientist
- Maha Elbayad (ME), Research Scientist
- Philipp Koehn (PK), Research Scientist
- Shruti Bhosale (SB), Software Engineer

We'll be here from 07/21/2022 @ 09:00AM PT - 10:00AM PT. Thanks, and we're looking forward to answering your questions!
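For anyone who wants to try the model without setting up fairseq, a rough sketch using the Hugging Face transformers port of the distilled 600M checkpoint. The checkpoint name and FLORES-200 language codes come from the public model card, not from the AMA materials above:

```python
# Sketch: translating with NLLB-200 (distilled 600M) via Hugging Face
# transformers. Assumes the model card's FLORES-200 language codes,
# e.g. "eng_Latn" for English, "fra_Latn" for French.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

name = "facebook/nllb-200-distilled-600M"
tokenizer = AutoTokenizer.from_pretrained(name, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(name)

inputs = tokenizer("No language left behind.", return_tensors="pt")
out = model.generate(
    **inputs,
    # Force the decoder to start generating in the target language.
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("fra_Latn"),
    max_length=64,
)
print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
```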
-
School project: sentiment analysis of my country's Arabic dialect
This may be helpful: https://github.com/facebookresearch/LASER
-
[P] Bilingual text alignment tools for NMT - help needed
Check FB's LASER: https://github.com/facebookresearch/LASER/tree/master/tasks/CCMatrix Also, Sentence-Transformers has a pretty neat model for cross-lingual sentence similarity: https://huggingface.co/sentence-transformers/stsb-xlm-r-multilingual
-
Help with aligned word embeddings
You want LASER. It's a very large model trained on tons of languages; you can use it with sentence_transformers in Python to compute embeddings, then use faiss or datasketch to find the top-K matches.
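Roughly, that pipeline looks like this. A sketch only: a multilingual SentenceTransformers model stands in for LASER, and the flat inner-product index is an illustrative choice:

```python
# Sketch of "compute embeddings, then find top-K matches with faiss".
# The multilingual SentenceTransformers checkpoint is a stand-in for
# LASER; the faiss indexing part is identical either way.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

corpus = ["Hello world", "Bonjour le monde", "Hallo Welt"]
queries = ["Hi there, world"]

corpus_emb = model.encode(corpus, normalize_embeddings=True).astype("float32")
query_emb = model.encode(queries, normalize_embeddings=True).astype("float32")

# Inner product on L2-normalized vectors equals cosine similarity.
index = faiss.IndexFlatIP(corpus_emb.shape[1])
index.add(corpus_emb)

k = 2
scores, ids = index.search(query_emb, k)  # top-K matches per query
for q, row, s in zip(queries, ids, scores):
    print(q, "->", [(corpus[i], float(v)) for i, v in zip(row, s)])
```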
MUSE
-
The Illustrated Word2Vec
This is a great guide.
Also, even though language model embeddings [1] are currently all the rage, good old embedding models are more than good enough for most tasks.
With just a bit of tuning, they're generally as good at many sentence embedding tasks [2], and with good libraries [3] you get something like 400k sentences/sec on a laptop CPU versus ~4k-15k sentences/sec on a V100 for LM embeddings.
When you should use language model embeddings:
- Multilingual tasks. While some classic embedding models are aligned across languages (e.g. MUSE [4]), you still need to route each sentence to the correct embedding model file, so you need something like langdetect (see the routing sketch after the references below). It's also cumbersome, with one ~400 MB file per language.
Many LM embedding models, by contrast, are multilingually aligned out of the box.
- Tasks that are very context specific or require fine-tuning. For instance, if you're building a RAG system for medical documents, the embedding space works best when it pushes seemingly related medical terms further apart.
That calls for models with more embedding dimensions, which heavily favors LM models over classic embedding models.
1. sbert.net
2. https://collaborate.princeton.edu/en/publications/a-simple-b...
3. https://github.com/oborchers/Fast_Sentence_Embeddings
4. https://github.com/facebookresearch/MUSE
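To make the routing point concrete, a minimal sketch. The file names follow the MUSE release convention (wiki.multi.LANG.vec), but the paths, the two-language table, and the averaging scheme are all assumptions:

```python
# Sketch of the per-language routing overhead described above:
# detect the language, load the matching MUSE-aligned fastText
# vectors, and average word vectors into a sentence embedding.
import numpy as np
from gensim.models import KeyedVectors
from langdetect import detect

# Assumes only these two MUSE files have been downloaded; other
# detected languages would raise a KeyError.
VEC_FILES = {
    "en": "wiki.multi.en.vec",
    "fr": "wiki.multi.fr.vec",
}
_cache = {}

def vectors_for(lang):
    # Each file is ~400 MB of plain-text word2vec format, hence the cache.
    if lang not in _cache:
        _cache[lang] = KeyedVectors.load_word2vec_format(VEC_FILES[lang])
    return _cache[lang]

def embed(sentence):
    lang = detect(sentence)  # route to the right model file
    kv = vectors_for(lang)
    words = [w for w in sentence.lower().split() if w in kv]
    return np.mean([kv[w] for w in words], axis=0)

# Because the MUSE spaces are aligned, these should land near each other:
e_en = embed("the cat sleeps")
e_fr = embed("le chat dort")
```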
-
Best AI-generated bilingual dictionaries
I am looking for the best way to get an AI-generated bilingual dictionary, so that I can get a list of words with their translations for each language pair I want. It is possible to get a list (with sometimes alright, sometimes bad results) using this project. Additionally, there exists this, but it does not have a whole lot of words unfortunately. I also read about the huge CCMatrix dataset which has millions of parallel sentences for many language pairs, but how would I extract direct word translations from it? (A naive python algorithm would probably take forever.)
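One possible approach, since MUSE publishes embeddings already aligned across languages: read a dictionary off by nearest-neighbour search instead of mining CCMatrix sentence pairs. A sketch under assumptions: the vector file names follow the MUSE release convention, and plain top-1 cosine retrieval is a crude stand-in for the CSLS retrieval used in the MUSE paper:

```python
# Sketch: extracting a word-translation list from MUSE-aligned vectors.
import numpy as np
from gensim.models import KeyedVectors

# limit= caps the vocabulary so the similarity matrix stays manageable.
src = KeyedVectors.load_word2vec_format("wiki.multi.en.vec", limit=50000)
tgt = KeyedVectors.load_word2vec_format("wiki.multi.fr.vec", limit=50000)

# Normalize once so a single matrix product gives all cosine similarities.
S = src.vectors / np.linalg.norm(src.vectors, axis=1, keepdims=True)
T = tgt.vectors / np.linalg.norm(tgt.vectors, axis=1, keepdims=True)

for i, word in enumerate(src.index_to_key[:20]):
    j = int(np.argmax(T @ S[i]))  # nearest target word by cosine
    print(word, "->", tgt.index_to_key[j])
```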
-
Help with aligned word embeddings
We currently train our own vocabularies on Wikipedia and other sources, and we align the vocabularies using MUSE with default settings (0-5000 dictionary for training, 5000-6500 dictionary for evaluation and 5 refinements).
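For reference, the core of what MUSE does with that 0-5000 training dictionary (and in each of the 5 refinement iterations, per the MUSE paper) is an orthogonal Procrustes solve. A minimal numpy sketch with synthetic stand-in vectors:

```python
# Orthogonal Procrustes: find the rotation W minimizing ||W X - Y||_F
# over the seed dictionary, as MUSE does for supervised alignment.
import numpy as np

rng = np.random.default_rng(0)
d, n_pairs = 300, 5000                   # e.g. the 0-5000 training dictionary
X = rng.standard_normal((d, n_pairs))    # source-word vectors (columns)
Y = rng.standard_normal((d, n_pairs))    # their dictionary translations

# Closed-form solution: W = U V^T from the SVD of Y X^T.
U, _, Vt = np.linalg.svd(Y @ X.T)
W = U @ Vt

aligned_X = W @ X                        # source vectors mapped into target space
assert np.allclose(W @ W.T, np.eye(d), atol=1e-6)  # W is orthogonal
```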
-
[D] How Advanced Is The Current Practice Of
MUSE has an unsupervised approach based on adversarial training: https://github.com/facebookresearch/MUSE#the-unsupervised-way-adversarial-training-and-refinement-cpugpu
What are some alternatives?
electra - ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators
Arraymancer - A fast, ergonomic and portable tensor library in Nim with a deep learning focus for CPU, GPU and embedded devices via OpenMP, Cuda and OpenCL backends
word2word - Easy-to-use word-to-word translations for 3,564 language pairs.
fairseq - Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
flores - Facebook Low Resource (FLoRes) MT Benchmark