
T-Systems-onsite/cross-en-de-roberta-sentence-transformer


Cross English & German RoBERTa for Sentence Embeddings

This model is intended to compute sentence (text) embeddings for English and German text. These embeddings can then be compared with cosine similarity to find sentences with a similar semantic meaning. This can be useful, for example, for semantic textual similarity, semantic search, or paraphrase mining. To do this you have to use the Sentence Transformers Python framework.

The speciality of this model is that it also works cross-lingually. Regardless of the language, sentences are mapped to very similar vectors according to their semantics. This means that you can, for example, enter a search query in German and find results that match its meaning in both German and English. Using an XLM model and multilingual finetuning with language-crossing, we reach a performance that even exceeds the best current dedicated English large model (see the Evaluation section below).

Sentence-BERT (SBERT) is a modification of the pretrained BERT network that uses siamese and triplet network structures to derive semantically meaningful sentence embeddings that can be compared using cosine similarity. This reduces the effort for finding the most similar pair from 65 hours with BERT / RoBERTa to about 5 seconds with SBERT, while maintaining the accuracy of BERT.

Source: Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks

This model was fine-tuned by Philip May and open-sourced by T-Systems-onsite. Special thanks to Nils Reimers for his awesome open-source work, the Sentence Transformers, the models and his help on GitHub.


How to use

To use this model, install the sentence-transformers package (see https://github.com/UKPLab/sentence-transformers).

from sentence_transformers import SentenceTransformer
model = SentenceTransformer('T-Systems-onsite/cross-en-de-roberta-sentence-transformer')
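
As a minimal sketch of how the embeddings can then be compared (not taken from the original documentation; the example sentences are made up, and util.cos_sim is the cosine-similarity helper from the sentence-transformers package, exposed as util.pytorch_cos_sim in older versions):

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('T-Systems-onsite/cross-en-de-roberta-sentence-transformer')

# One English and one German sentence with the same meaning (illustrative only).
sentences = [
    'The weather is nice today.',
    'Das Wetter ist heute schön.',
]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity between the English and the German embedding.
similarity = util.cos_sim(embeddings[0], embeddings[1])
print(float(similarity))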

For details of usage and examples see here:

  • Computing Sentence Embeddings
  • Semantic Textual Similarity
  • Paraphrase Mining
  • Semantic Search
  • Cross-Encoders
  • Examples on GitHub


Training

The base model is xlm-roberta-base. This model has been further trained by Nils Reimers on a large scale paraphrase dataset for 50+ languages. Nils Reimers writes about this on GitHub:

A paper is upcoming for the paraphrase models.

These models were trained on various datasets with millions of examples for paraphrases, mainly derived from Wikipedia edit logs, paraphrases mined from Wikipedia and SimpleWiki, paraphrases from news reports, AllNLI-entailment pairs with in-batch-negative loss etc.

In internal tests, they perform much better than the NLI+STSb models as they have seen more and broader types of training data. NLI+STSb has the issue of being rather narrow in its domain and not containing any domain-specific words / sentences (like from chemistry, computer science, math etc.). The paraphrase models have seen plenty of sentences from various domains.

More details with the setup, all the datasets, and a wider evaluation will follow soon.

The resulting model, called xlm-r-distilroberta-base-paraphrase-v1, has been released here: https://github.com/UKPLab/sentence-transformers/releases/tag/v0.3.8

Building on this cross-language model, we fine-tuned it for English and German on the STSbenchmark dataset. For German we used our German STSbenchmark dataset, which has been translated with deepl.com. In addition to the German and English training samples, we generated crossed English-German samples. We call this multilingual finetuning with language-crossing. It doubled the size of the training data, and tests show that it further improves performance; a hypothetical sketch of the idea follows below.
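
The following sketch illustrates the language-crossing idea only; it is not the code actually used for this model. It assumes aligned English and German STSbenchmark pairs that share the same similarity score, and builds crossed samples by mixing the first sentence of one language with the second sentence of the other:

from sentence_transformers import InputExample

def cross_language_samples(en_pairs, de_pairs):
    # en_pairs / de_pairs: aligned lists of (sentence1, sentence2, score) tuples,
    # where the German rows are translations of the English rows.
    samples = []
    for (en1, en2, score), (de1, de2, _) in zip(en_pairs, de_pairs):
        samples.append(InputExample(texts=[en1, en2], label=score))  # English pair
        samples.append(InputExample(texts=[de1, de2], label=score))  # German pair
        samples.append(InputExample(texts=[en1, de2], label=score))  # EN-DE crossed
        samples.append(InputExample(texts=[de1, en2], label=score))  # DE-EN crossed
    return samples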

We did an automatic hyperparameter search (33 trials) with Optuna. Using 10-fold cross-validation on the deepl.com test and dev dataset we found the following best hyperparameters:

  • batch_size = 8
  • num_epochs = 2
  • lr = 1.026343323298136e-05
  • eps = 4.462251033010287e-06
  • weight_decay = 0.04794438776350409
  • warmup_steps_proportion = 0.1609010732760181

The final model was trained with these hyperparameters on the combination of the train and dev datasets from English, German and the crossings of them. The test set was left for testing.
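
A sketch of what such a training run could look like with the sentence-transformers fit API is shown below. This is an assumption-laden reconstruction, not the authors' training script: the loss function (CosineSimilarityLoss), the normalization of STSbenchmark scores to [0, 1], and the interpretation of warmup_steps_proportion as a fraction of the total training steps are all assumptions.

from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Start from the multilingual paraphrase model mentioned above
# (newer sentence-transformers versions may need the 'sentence-transformers/' prefix).
model = SentenceTransformer('xlm-r-distilroberta-base-paraphrase-v1')

# train_samples: InputExample pairs (EN, DE and crossed) with STSb scores scaled to [0, 1];
# a single made-up example stands in for the real data here.
train_samples = [InputExample(texts=['A man plays guitar.', 'Ein Mann spielt Gitarre.'], label=0.9)]
train_dataloader = DataLoader(train_samples, shuffle=True, batch_size=8)
train_loss = losses.CosineSimilarityLoss(model)

num_epochs = 2
# Assumed reading of warmup_steps_proportion: fraction of all training steps.
warmup_steps = int(0.1609010732760181 * len(train_dataloader) * num_epochs)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=num_epochs,
    warmup_steps=warmup_steps,
    optimizer_params={'lr': 1.026343323298136e-05, 'eps': 4.462251033010287e-06},
    weight_decay=0.04794438776350409,
    output_path='cross-en-de-roberta-sentence-transformer',
)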


Evaluation

The evaluation has been done on English, German and both languages crossed with the STSbenchmark test data. The evaluation code is available on Colab. As the metric for evaluation we use the Spearman rank correlation between the cosine similarity of the sentence embeddings and the STSbenchmark labels.
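A small sketch of this metric (an assumed reconstruction, not the actual Colab notebook): encode both sides of the STSbenchmark test pairs, take the pairwise cosine similarities and correlate them with the gold labels using Spearman's rank correlation.

from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('T-Systems-onsite/cross-en-de-roberta-sentence-transformer')

# Placeholders standing in for the full STSbenchmark test split (EN, DE or crossed pairs).
sentences1 = ['A man is playing a guitar.', 'A woman is cooking.']
sentences2 = ['Ein Mann spielt Gitarre.', 'Ein Hund läuft im Park.']
gold_scores = [4.8, 0.5]

emb1 = model.encode(sentences1, convert_to_tensor=True)
emb2 = model.encode(sentences2, convert_to_tensor=True)

# Pairwise cosine similarity of each sentence pair (diagonal of the similarity matrix).
cosine_scores = util.cos_sim(emb1, emb2).diagonal().cpu().tolist()

spearman, _ = spearmanr(cosine_scores, gold_scores)
print(spearman)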

| Model Name | Spearman German | Spearman English | Spearman EN-DE & DE-EN (cross) |
| --- | --- | --- | --- |
| xlm-r-distilroberta-base-paraphrase-v1 | 0.8079 | 0.8350 | 0.7983 |
| xlm-r-100langs-bert-base-nli-stsb-mean-tokens | 0.7877 | 0.8465 | 0.7908 |
| xlm-r-bert-base-nli-stsb-mean-tokens | 0.7877 | 0.8465 | 0.7908 |
| roberta-large-nli-stsb-mean-tokens | 0.6371 | 0.8639 | 0.4109 |
| T-Systems-onsite/german-roberta-sentence-transformer-v2 | 0.8529 | 0.8634 | 0.8415 |
| paraphrase-multilingual-mpnet-base-v2 | 0.8355 | 0.8682 | 0.8309 |
| T-Systems-onsite/cross-en-de-roberta-sentence-transformer | 0.8550 | 0.8660 | 0.8525 |


License

Copyright (c) 2020 Philip May, T-Systems on site services GmbH

Licensed under the MIT License (the “License”); you may not use this work except in compliance with the License. You may obtain a copy of the License by reviewing the file LICENSE in the repository.
