<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

# Community

This page gathers resources around 🤗 Transformers developed by the community.

## Community resources

| Resource | Description | Author |
|:---------|:------------|:-------|
| Hugging Face Transformers Glossary Flashcards | A set of flashcards based on the Transformers Docs Glossary, put into a form that can be easily learned and revised using Anki, an open-source, cross-platform app designed for long-term knowledge retention. See this introductory video on how to use the flashcards. | Darigov Research |

## Community notebooks

| Notebook | Description | Author |
|:---------|:------------|:-------|
| Fine-tune a pre-trained Transformer to generate lyrics | How to generate lyrics in the style of your favorite artist by fine-tuning a GPT-2 model | Aleksey Korshuk |
| Train T5 on TPU | How to train T5 on SQUAD with Transformers and Nlp | Suraj Patil |
| Fine-tune T5 for Classification and Multiple Choice | How to fine-tune T5 for classification and multiple choice tasks using a text-to-text format with PyTorch Lightning | Suraj Patil |
| Fine-tune DialoGPT on New Datasets and Languages | How to fine-tune the DialoGPT model on a new dataset for open-dialog conversational chatbots | Nathan Cooper |
| Long Sequence Modeling with Reformer | How to train on sequences as long as 500,000 tokens with Reformer | Patrick von Platen |
| Fine-tune BART for Summarization | How to fine-tune BART for summarization with fastai using blurr | Wayde Gilliam |
| Fine-tune a pre-trained Transformer on anyone's tweets | How to generate tweets in the style of your favorite Twitter account by fine-tuning a GPT-2 model | Boris Dayma |
| Optimize 🤗 Hugging Face models with Weights & Biases | A complete tutorial showcasing W&B integration with Hugging Face | Boris Dayma |
| Pretrain Longformer | How to build a "long" version of existing pretrained models | Iz Beltagy |
| Fine-tune Longformer for QA | How to fine-tune a Longformer model for the QA task | Suraj Patil |
| Evaluate Model with 🤗nlp | How to evaluate Longformer on TriviaQA with `nlp` | Patrick von Platen |
| Fine-tune T5 for Sentiment Span Extraction | How to fine-tune T5 for sentiment span extraction using a text-to-text format with PyTorch Lightning | Lorenzo Ampil |
| Fine-tune DistilBert for Multiclass Classification | How to fine-tune DistilBert for multiclass classification with PyTorch | Abhishek Kumar Mishra |
| Fine-tune BERT for Multi-label Classification | How to fine-tune BERT for multi-label classification using PyTorch | Abhishek Kumar Mishra |
| Fine-tune T5 for Summarization | How to fine-tune T5 for summarization in PyTorch and track experiments with WandB | Abhishek Kumar Mishra |
| Speed up Fine-Tuning in Transformers with Dynamic Padding / Bucketing | How to speed up fine-tuning by a factor of 2 using dynamic padding / bucketing | Michael Benesty |
| Pretrain Reformer for Masked Language Modeling | How to train a Reformer model with bi-directional self-attention layers | Patrick von Platen |
| Expand and Fine Tune Sci-BERT | How to increase the vocabulary of a pretrained SciBERT model from AllenAI on the CORD dataset and pipeline it | Tanmay Thakur |
| Fine Tune BlenderBotSmall for Summarization using the Trainer API | How to fine-tune BlenderBotSmall for summarization on a custom dataset, using the Trainer API | Tanmay Thakur |
| Fine-tune Electra and interpret with Integrated Gradients | How to fine-tune Electra for sentiment analysis and interpret predictions with Captum Integrated Gradients | Eliza Szczechla |
| Fine-tune a non-English GPT-2 Model with Trainer class | How to fine-tune a non-English GPT-2 model with the Trainer class | Philipp Schmid |
| Fine-tune a DistilBERT Model for Multi Label Classification task | How to fine-tune a DistilBERT model for the multi-label classification task | Dhaval Taunk |
| Fine-tune ALBERT for sentence-pair classification | How to fine-tune an ALBERT model or another BERT-based model for the sentence-pair classification task | Nadir El Manouzi |
| Fine-tune Roberta for sentiment analysis | How to fine-tune a Roberta model for sentiment analysis | Dhaval Taunk |
| Evaluating Question Generation Models | How accurate are the answers to questions generated by your seq2seq transformer model? | Pascal Zoleko |
| Leverage BERT for Encoder-Decoder Summarization on CNN/Dailymail | How to warm-start an EncoderDecoderModel with a google-bert/bert-base-uncased checkpoint for summarization on CNN/Dailymail | Patrick von Platen |
| Leverage RoBERTa for Encoder-Decoder Summarization on BBC XSum | How to warm-start a shared EncoderDecoderModel with a FacebookAI/roberta-base checkpoint for summarization on BBC/XSum | Patrick von Platen |
| Fine-tune TAPAS on Sequential Question Answering (SQA) | How to fine-tune TapasForQuestionAnswering with a tapas-base checkpoint on the Sequential Question Answering (SQA) dataset | Niels Rogge |
| Evaluate TAPAS on Table Fact Checking (TabFact) | How to evaluate a fine-tuned TapasForSequenceClassification with a tapas-base-finetuned-tabfact checkpoint using a combination of the 🤗 datasets and 🤗 transformers libraries | Niels Rogge |
| Fine-tuning mBART for translation | How to fine-tune mBART using Seq2SeqTrainer for Hindi-to-English translation | Vasudev Gupta |
| Fine-tune LayoutLM on FUNSD (a form understanding dataset) | How to fine-tune LayoutLMForTokenClassification on the FUNSD dataset for information extraction from scanned documents | Niels Rogge |
| Fine-Tune DistilGPT2 and Generate Text | How to fine-tune DistilGPT2 and generate text | Aakash Tripathi |
| Fine-Tune LED on up to 8K tokens | How to fine-tune LED on PubMed for long-range summarization | Patrick von Platen |
| Evaluate LED on arXiv | How to effectively evaluate LED on long-range summarization | Patrick von Platen |
| Fine-tune LayoutLM on RVL-CDIP (a document image classification dataset) | How to fine-tune LayoutLMForSequenceClassification on the RVL-CDIP dataset for scanned document classification | Niels Rogge |
| Wav2Vec2 CTC decoding with GPT2 adjustment | How to decode a CTC sequence with language model adjustment | Eric Lam |
| Fine-tune BART for summarization in two languages with Trainer class | How to fine-tune BART for summarization in two languages with the Trainer class | Eliza Szczechla |
| Evaluate Big Bird on Trivia QA | How to evaluate BigBird on long document question answering on Trivia QA | Patrick von Platen |
| Create video captions using Wav2Vec2 | How to create YouTube captions from any video by transcribing the audio with Wav2Vec2 | Niklas Muennighoff |
| Fine-tune the Vision Transformer on CIFAR-10 using PyTorch Lightning | How to fine-tune the Vision Transformer (ViT) on CIFAR-10 using HuggingFace Transformers, Datasets and PyTorch Lightning | Niels Rogge |
| Fine-tune the Vision Transformer on CIFAR-10 using the 🤗 Trainer | How to fine-tune the Vision Transformer (ViT) on CIFAR-10 using HuggingFace Transformers, Datasets and the 🤗 Trainer | Niels Rogge |
| Evaluate LUKE on Open Entity, an entity typing dataset | How to evaluate LukeForEntityClassification on the Open Entity dataset | Ikuya Yamada |
| Evaluate LUKE on TACRED, a relation extraction dataset | How to evaluate LukeForEntityPairClassification on the TACRED dataset | Ikuya Yamada |
| Evaluate LUKE on CoNLL-2003, an important NER benchmark | How to evaluate LukeForEntitySpanClassification on the CoNLL-2003 dataset | Ikuya Yamada |
| Evaluate BigBird-Pegasus on PubMed dataset | How to evaluate BigBirdPegasusForConditionalGeneration on the PubMed dataset | Vasudev Gupta |
| Speech Emotion Classification with Wav2Vec2 | How to leverage a pretrained Wav2Vec2 model for emotion classification on the MEGA dataset | Mehrdad Farahani |
| Detect objects in an image with DETR | How to use a trained DetrForObjectDetection model to detect objects in an image and visualize attention | Niels Rogge |
| Fine-tune DETR on a custom object detection dataset | How to fine-tune DetrForObjectDetection on a custom object detection dataset | Niels Rogge |
| Finetune T5 for Named Entity Recognition | How to fine-tune T5 on a named entity recognition task | Ogundepo Odunayo |
| Fine-Tuning Open-Source LLM using QLoRA with MLflow and PEFT | How to use QLoRA and PEFT to fine-tune an LLM in a memory-efficient way, while using MLflow to manage experiment tracking | Yuki Watanabe |
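Several notebooks above (e.g. "Speed up Fine-Tuning in Transformers with Dynamic Padding / Bucketing") build on the idea that padding every sequence to one global maximum wastes compute. The library-free sketch below illustrates only the core idea, not the notebook's actual code; all function names are made up for illustration, and a real training setup would do this inside a data collator.

```python
# Illustrative sketch of dynamic padding / bucketing (not the notebook's code):
# sort examples by length, group neighbours into batches, and pad each batch
# only to its own longest member instead of a global maximum.

def make_buckets(sequences, batch_size):
    """Group sequences of similar length so per-batch padding is minimal."""
    ordered = sorted(sequences, key=len)
    return [ordered[i:i + batch_size] for i in range(0, len(ordered), batch_size)]

def pad_batch(batch, pad_id=0):
    """Pad every sequence in the batch to the batch's own max length."""
    max_len = max(len(seq) for seq in batch)
    return [seq + [pad_id] * (max_len - len(seq)) for seq in batch]

def padding_tokens(batches):
    """Count the pad tokens materialised, to compare the two strategies."""
    return sum(
        len(batch) * max(len(s) for s in batch) - sum(len(s) for s in batch)
        for batch in batches
    )

# Toy "token id" sequences of mixed lengths.
seqs = [[1] * n for n in (3, 17, 4, 16, 5, 15, 6, 14)]

# Naive strategy: one batch containing everything, padded to the global max.
naive = padding_tokens([seqs])
# Bucketed strategy: batches of similar-length sequences.
bucketed = padding_tokens(make_buckets(seqs, batch_size=4))
print(naive, bucketed)  # -> 56 12: bucketing wastes far fewer pad tokens
```

Fewer materialised pad tokens means smaller tensors per step, which is where the speedup comes from; shuffling bucket order between epochs keeps training stochastic.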
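The "Wav2Vec2 CTC decoding with GPT2 adjustment" notebook starts from plain CTC decoding before layering a language model on top. As a minimal sketch of that first step, greedy (best-path) CTC decoding collapses repeated frame predictions and drops the blank token; the vocabulary and blank id below are invented for illustration, and the notebook's language-model rescoring is not shown.

```python
# Hedged sketch of greedy CTC decoding (assumed vocabulary, blank id = 0):
# collapse consecutive repeats first, then remove blanks.

BLANK = 0
VOCAB = {1: "c", 2: "a", 3: "t"}  # toy character vocabulary

def ctc_greedy_decode(frame_ids):
    """Best-path CTC rule: skip a frame if it repeats the previous one or is blank."""
    out = []
    prev = None
    for i in frame_ids:
        if i != prev and i != BLANK:
            out.append(VOCAB[i])
        prev = i
    return "".join(out)

# Nine frames of per-frame argmax ids; repeats and blanks collapse away.
print(ctc_greedy_decode([1, 1, 0, 2, 2, 2, 0, 0, 3]))  # -> "cat"
```

Note that a blank between two identical ids keeps them as two separate characters ([3, 0, 3] decodes to "tt"), which is exactly why CTC needs the blank symbol; LM adjustment then rescores candidate paths instead of taking the single best one.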