🤗 AutoTrain is a no-code tool for training state-of-the-art models for Natural Language Processing (NLP) tasks, Computer Vision (CV) tasks, Speech tasks, and even Tabular tasks. It is built on the amazing tools developed by the Hugging Face team and is designed to be easy to use.
Model description Vision Transformer...
DiffCSE: Difference-based C...
Vision Transformer (base-si...
Model card for CLAP Model...
dpr-question_encoder-single...
Releasing Hindi ELECTRA mod...
DistilBert for Dense Passag...
Motivation This model is ...
E5-small Text Embeddings ...
Erlangshen-SimCSE-110M-Chin...
This is a Japanese sentence...
KoBART-base-v1 from trans...
rubert-base-cased-conversat...
SciNCL SciNCL is a pre-tr...
IndoBERT Base Model (phase2...
https://github.com/BM-K/Sen...
Model Card for sup-simcse-r...
Please refer here: https://...
DRAGON+ is a BERT-base size...
CODER: Knowledge infused cr...
SimLM: Pre-training with Re...
This is a copy of the origi...
BART (large-sized model) ...
Overview Language model: ...
dpr-ctx_encoder-bert-base-m...
SPECTER 2.0 SPECTER 2.0 i...
bert-base-cased-conversatio...
WavLM-Base-Plus Microsoft...
SEW-tiny SEW by ASAPP Res...
LiLT + XLM-RoBERTa-base T...
X-CLIP (base-sized model) ...
(BART) is used in the paper...
The Dense Prediction Transformer (DPT) model was trained on 1.4 million images for monocular depth estimation. It was introduced by Ranftl et al. in the 2021 paper "Vision Transformers for Dense Prediction" and first released in this repository. DPT uses a Vision Transformer (ViT) as its backbone and adds a neck + head on top for monocular depth estimation.
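As a hedged sketch of how such a DPT checkpoint is typically queried through 🤗 Transformers (the Intel/dpt-large checkpoint and the COCO example image are assumptions for illustration, not part of this card):

```python
import torch
import requests
from PIL import Image
from transformers import DPTImageProcessor, DPTForDepthEstimation

# Checkpoint and image URL are assumptions; substitute the card's own checkpoint.
processor = DPTImageProcessor.from_pretrained("Intel/dpt-large")
model = DPTForDepthEstimation.from_pretrained("Intel/dpt-large")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# The ViT backbone plus neck/head emit one relative-depth value per pixel.
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    predicted_depth = model(**inputs).predicted_depth  # shape: (batch, height, width)
```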
glpn-nyu-finetuned-diode-23...
GLPN fine-tuned on KITTI ...
glpn-nyu-finetuned-diode-22...
MiniLM: 6 Layer Version T...
GLPN fine-tuned on NYUv2 ...
⚠️ This model is deprecated...
all-MiniLM-L6-v2 This is ...
Model Details: DPT-Hybrid ...
Model Details: DPT-Large ...
Test model To test this m...
glpn-kitti-finetuned-diode ...
glpn-nyu-finetuned-diode ...
This is the General_TinyBER...
glpn-kitti-finetuned-diode-...
Built on the Vicuna13B base...
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process generating inputs and labels from those texts. More precisely, it was pretrained with two objectives:
- Masked language modeling (MLM): the model randomly masks 15% of the words in the input, then runs the masked sentence through the model and has to predict the masked words.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining and has to predict whether the two sentences followed each other in the original text or not.
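As a hedged illustration of the MLM objective in practice, such a checkpoint can be queried directly through the fill-mask pipeline (the bert-base-uncased checkpoint is an assumption for illustration):

```python
from transformers import pipeline

# bert-base-uncased is assumed here; any BERT MLM checkpoint works the same way.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

# The model returns the most likely tokens for the [MASK] position.
print(unmasker("Hello, I'm a [MASK] model."))
```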
bert-base-multilingual-unca...
DeBERTa: Decoding-enhanced ...
xlm-roberta-base-language-d...
Distilbert-base-uncased-emo...
Non Factoid Question Catego...
Cross-Encoder for MS Marco ...
FinBERT is a BERT model pre...
Twitter-roBERTa-base for Em...
roberta-large-mnli Tab...
Twitter-roBERTa-base for Se...
distilbert-imdb This mode...
German Sentiment Classifica...
FinBERT is a pre-trained NL...
Emotion English DistilRoBER...
Model Trained Using AutoNLP...
CodeBERT fine-tuned for Ins...
distilbert-base-uncased-go-...
DistilBERT base uncased fin...
Model description This mo...
RoBERTa Base OpenAI Detecto...
Parrot THIS IS AN ANCILLARY...
BERT codemixed base model f...
Sentiment Analysis in Spani...
Fine-tuned DistilRoBERTa-ba...
BERT base model (uncased) ...
twitter-XLM-roBERTa-base fo...
SiEBERT - English-Language ...
tts_transformer-zh-cv7_css1...
This model was trained by kan-bayashi using the ljspeech/tts1 recipe in espnet.
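A minimal inference sketch using the ESPnet2 Python API; the model tag below is an assumption for illustration and should be replaced with this checkpoint's own tag:

```python
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

# Model tag is an assumption; substitute the tag of this checkpoint.
tts = Text2Speech.from_pretrained("espnet/kan-bayashi_ljspeech_vits")

# Synthesize speech and write a WAV file at the model's sampling rate.
speech = tts("Hello world")["wav"]
sf.write("out.wav", speech.numpy(), tts.fs)
```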
tts_transformer-ar-cv7 Tr...
ESPnet2 TTS pretrained mode...
Text-to-Speech (TTS) with T...
Example ESPnet2 TTS model ...
SpeechT5 (TTS task) Speec...
ESPnet2 TTS model laka...
fastspeech2-en-ljspeech F...
ESPnet2 TTS model mio/...
unit_hifigan_HK_layer12.km2...
fastspeech2-en-200_speaker-...
Link to the original project: mio/amad...
Vocoder with HiFIGAN traine...
tts_transformer-ru-cv7_css1...
tts_transformer-fr-cv7_css1...
ESPnet JETS Text-to-Speech ...
unit_hifigan_mhubert_vp_en_...
tts_transformer-es-css10 ...
Wine Quality classification...
Model Trained Using AutoTra...
TensorFlow's Gradient Boost...
Model description This re...
Load the data from datase...
Model Description Kera...
How to use import joblib...
Flowformer Automatic dete...
Model description [More I...
Keras Implementation of Str...
Titanic (Survived/Not Survi...
Decision Transformer model ...
poca Agent playing SoccerTw...
PPO Agent playing CartPole-...
DQN Agent playing CartPole-...
PPO Agent playing Pendulum-...
PPO Agent playing PongNoFra...
PPO Agent playing LunarLand...
DQN Agent playing LunarLand...
RL Zoo is a training framework for Stable Baselines3...
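A minimal sketch of pulling one of these Stable Baselines3 agents from the Hub and rolling it out; the repo id, filename, and environment are assumptions for illustration and should match the card's algo/env pair:

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Repo id and filename are assumptions; use the ones listed on the model card.
checkpoint = load_from_hub(repo_id="sb3/ppo-CartPole-v1", filename="ppo-CartPole-v1.zip")
model = PPO.load(checkpoint)

# Roll out one episode with the loaded policy.
env = gym.make("CartPole-v1")
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```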
PPO Agent playing BreakoutN...
PPO Agent playing seals/Mou...