Summarization

Summarization is the task of producing a shorter version of one or several documents that preserves most of the input's meaning.

Warning: Evaluation Metrics

For summarization, automatic metrics such as ROUGE and METEOR have serious limitations:

  1. They only assess content selection and do not account for other quality aspects, such as fluency, grammaticality, coherence, etc.
  2. To assess content selection, they rely mostly on lexical overlap, although an abstractive summary could express the same content as a reference without any lexical overlap.
  3. Given the subjectiveness of summarization and the correspondingly low agreement between annotators, the metrics were designed to be used with multiple reference summaries per input. However, recent datasets such as CNN/DailyMail and Gigaword provide only a single reference.

Therefore, tracking progress and claiming state-of-the-art based only on these metrics is questionable. Most papers carry out additional manual comparisons of alternative summaries. Unfortunately, such experiments are difficult to compare across papers. If you have an idea on how to do that, feel free to contribute.
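Limitation (2) above is easy to demonstrate with a toy implementation. The sketch below computes plain unigram-overlap ROUGE-1 F1 (without the stemming and bootstrap resampling of the official ROUGE toolkit): an abstractive paraphrase that preserves the meaning but shares no tokens with the reference scores zero.

```python
from collections import Counter

def rouge1_f1(candidate, reference):
    """Unigram-overlap F1, the core of ROUGE-1 (no stemming or stopword removal)."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

reference = "the cat sat on the mat"
extractive = "the cat sat"              # high lexical overlap with the reference
abstractive = "a feline rested there"   # same meaning, zero lexical overlap

print(round(rouge1_f1(extractive, reference), 3))   # 0.667
print(round(rouge1_f1(abstractive, reference), 3))  # 0.0
```

The second result illustrates why lexical-overlap metrics systematically penalize valid abstractive summaries.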

CNN / Daily Mail

The CNN / Daily Mail dataset as processed by Nallapati et al. (2016) has been used for evaluating summarization. The dataset contains online news articles (781 tokens on average) paired with multi-sentence summaries (3.75 sentences or 56 tokens on average). The processed version contains 287,226 training pairs, 13,368 validation pairs and 11,490 test pairs. Models are evaluated with full-length F1-scores of ROUGE-1, ROUGE-2, ROUGE-L, and optionally METEOR. A multilingual version of the CNN / Daily Mail dataset is also available in five languages (French, German, Spanish, Russian, Turkish).

Anonymized version

The following models have been evaluated on the entity-anonymized version of the dataset introduced by Nallapati et al. (2016).

| Model | ROUGE-1 | ROUGE-2 | ROUGE-L | METEOR | Paper / Source | Code |
| --- | --- | --- | --- | --- | --- | --- |
| RNES w/o coherence (Wu and Hu, 2018) | 41.25 | 18.87 | 37.75 | - | Learning to Extract Coherent Summary via Deep Reinforcement Learning | |
| SWAP-NET (Jadhav and Rajan, 2018) | 41.6 | 18.3 | 37.7 | - | Extractive Summarization with SWAP-NET: Sentences and Words from Alternating Pointer Networks | |
| HSSAS (Al-Sabahi et al., 2018) | 42.3 | 17.8 | 37.6 | - | A Hierarchical Structured Self-Attentive Model for Extractive Document Summarization (HSSAS) | |
| GAN (Liu et al., 2018) | 39.92 | 17.65 | 36.71 | - | Generative Adversarial Network for Abstractive Text Summarization | |
| KIGN+Prediction-guide (Li et al., 2018) | 38.95 | 17.12 | 35.68 | - | Guiding Generation for Abstractive Text Summarization based on Key Information Guide Network | |
| SummaRuNNer (Nallapati et al., 2017) | 39.6 | 16.2 | 35.3 | - | SummaRuNNer: A Recurrent Neural Network based Sequence Model for Extractive Summarization of Documents | |
| rnn-ext + abs + RL + rerank (Chen and Bansal, 2018) | 39.66 | 15.85 | 37.34 | - | Fast Abstractive Summarization with Reinforce-Selected Sentence Rewriting | Official |
| ML+RL, with intra-attention (Paulus et al., 2018) | 39.87 | 15.82 | 36.90 | - | A Deep Reinforced Model for Abstractive Summarization | |
| Lead-3 baseline (Nallapati et al., 2017) | 39.2 | 15.7 | 35.5 | - | SummaRuNNer: A Recurrent Neural Network based Sequence Model for Extractive Summarization of Documents | |
| ML+RL ROUGE+Novel, with LM (Kryscinski et al., 2018) | 40.02 | 15.53 | 37.44 | - | Improving Abstraction in Text Summarization | |
| (Tan et al., 2017) | 38.1 | 13.9 | 34.0 | - | Abstractive Document Summarization with a Graph-Based Attentional Neural Model | |
| words-lvt2k-temp-att (Nallapati et al., 2016) | 35.46 | 13.30 | 32.65 | - | Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond | |

Non-Anonymized Version: Extractive Models

The following models have been evaluated on the non-anonymized version of the dataset introduced by See et al. (2017).

The first table covers extractive models, while the second covers abstractive and mixed approaches.

| Model | ROUGE-1 | ROUGE-2 | ROUGE-L | METEOR | Paper / Source | Code |
| --- | --- | --- | --- | --- | --- | --- |
| MatchSum (Zhong et al., 2020) | 44.41 | 20.86 | 40.55 | - | Extractive Summarization as Text Matching | Official |
| DiscoBERT w. G_R & G_C (Xu et al., 2019) | 43.77 | 20.85 | 40.67 | - | A Discourse-Aware Neural Extractive Model for Text Summarization | Official |
| BertSumExt (Liu and Lapata, 2019) | 43.85 | 20.34 | 39.90 | - | Text Summarization with Pretrained Encoders | Official |
| BERT-ext + RL (Bae et al., 2019) | 42.76 | 19.87 | 39.11 | - | Summary Level Training of Sentence Rewriting for Abstractive Summarization | |
| PNBERT (Zhong et al., 2019) | 42.69 | 19.60 | 38.85 | - | Searching for Effective Neural Extractive Summarization: What Works and What's Next | Official |
| HIBERT (Zhang et al., 2019) | 42.37 | 19.95 | 38.83 | - | HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization | |
| NeuSUM (Zhou et al., 2018) | 41.59 | 19.01 | 37.98 | - | Neural Document Summarization by Jointly Learning to Score and Select Sentences | Official |
| Latent (Zhang et al., 2018) | 41.05 | 18.77 | 37.54 | - | Neural Latent Extractive Document Summarization | |
| BanditSum (Dong et al., 2018) | 41.5 | 18.7 | 37.6 | - | BANDITSUM: Extractive Summarization as a Contextual Bandit | Official |
| REFRESH (Narayan et al., 2018) | 40.0 | 18.2 | 36.6 | - | Ranking Sentences for Extractive Summarization with Reinforcement Learning | Official |
| Lead-3 baseline (See et al., 2017) | 40.34 | 17.70 | 36.57 | 22.21 | Get To The Point: Summarization with Pointer-Generator Networks | Official |

Non-Anonymized: Abstractive Models & Mixed Models

| Model | ROUGE-1 | ROUGE-2 | ROUGE-L | METEOR | Paper / Source | Code |
| --- | --- | --- | --- | --- | --- | --- |
| BRIO (Liu et al., 2022) | 47.78 | 23.55 | 44.57 | - | BRIO: Bringing Order to Abstractive Summarization | Official |
| SimCLS (Liu et al., 2021) | 46.67 | 22.15 | 43.54 | - | SimCLS: A Simple Framework for Contrastive Learning of Abstractive Summarization | Official |
| GSum (Dou et al., 2020) | 45.94 | 22.32 | 42.48 | - | GSum: A General Framework for Guided Neural Abstractive Summarization | Official |
| ProphetNet (Yan, Qi, Gong, Liu et al., 2020) | 44.20 | 21.17 | 41.30 | - | ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training | Official |
| PEGASUS (Zhang et al., 2019) | 44.17 | 21.47 | 41.11 | - | PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization | Official |
| BART (Lewis et al., 2019) | 44.16 | 21.28 | 40.90 | - | BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension | Official |
| T5 (Raffel et al., 2019) | 43.52 | 21.55 | 40.69 | - | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | Official |
| UniLM (Dong et al., 2019) | 43.33 | 20.21 | 40.51 | - | Unified Language Model Pre-training for Natural Language Understanding and Generation | Official |
| CNN-2sent-hieco-RBM (Zhang et al., 2019) | 42.04 | 19.77 | 39.42 | - | Abstract Text Summarization with a Convolutional Seq2Seq Model | |
| BertSumExtAbs (Liu and Lapata, 2019) | 42.13 | 19.60 | 39.18 | - | Text Summarization with Pretrained Encoders | Official |
| BERT-ext + abs + RL + rerank (Bae et al., 2019) | 41.90 | 19.08 | 39.64 | - | Summary Level Training of Sentence Rewriting for Abstractive Summarization | |
| Two-Stage + RL (Zhang et al., 2019) | 41.71 | 19.49 | 38.79 | - | Pretraining-Based Natural Language Generation for Text Summarization | |
| DCA (Celikyilmaz et al., 2018) | 41.69 | 19.47 | 37.92 | - | Deep Communicating Agents for Abstractive Summarization | |
| EditNet (Moroshko et al., 2018) | 41.42 | 19.03 | 38.36 | - | An Editorial Network for Enhanced Document Summarization | |
| rnn-ext + RL (Chen and Bansal, 2018) | 41.47 | 18.72 | 37.76 | 22.35 | Fast Abstractive Summarization with Reinforce-Selected Sentence Rewriting | Official |
| Bottom-Up Summarization (Gehrmann et al., 2018) | 41.22 | 18.68 | 38.34 | - | Bottom-Up Abstractive Summarization | Official |
| (Li et al., 2018a) | 41.54 | 18.18 | 36.47 | - | Improving Neural Abstractive Document Summarization with Explicit Information Selection Modeling | |
| (Li et al., 2018b) | 40.30 | 18.02 | 37.36 | - | Improving Neural Abstractive Document Summarization with Structural Regularization | |
| ROUGESal+Ent RL (Pasunuru and Bansal, 2018) | 40.43 | 18.00 | 37.10 | 20.02 | Multi-Reward Reinforced Summarization with Saliency and Entailment | |
| end2end w/ inconsistency loss (Hsu et al., 2018) | 40.68 | 17.97 | 37.13 | - | A Unified Model for Extractive and Abstractive Summarization using Inconsistency Loss | |
| RL + pg + cbdec (Jiang and Bansal, 2018) | 40.66 | 17.87 | 37.06 | 20.51 | Closed-Book Training to Improve Summarization Encoder Memory | |
| rnn-ext + abs + RL + rerank (Chen and Bansal, 2018) | 40.88 | 17.80 | 38.54 | 20.38 | Fast Abstractive Summarization with Reinforce-Selected Sentence Rewriting | Official |
| Pointer + Coverage + EntailmentGen + QuestionGen (Guo et al., 2018) | 39.81 | 17.64 | 36.54 | 18.54 | Soft Layer-Specific Multi-Task Summarization with Entailment and Question Generation | |
| ML+RL ROUGE+Novel, with LM (Kryscinski et al., 2018) | 40.19 | 17.38 | 37.52 | - | Improving Abstraction in Text Summarization | |
| Pointer-generator + coverage (See et al., 2017) | 39.53 | 17.28 | 36.38 | 18.72 | Get To The Point: Summarization with Pointer-Generator Networks | Official |

Gigaword

The Gigaword summarization dataset was first used by Rush et al., 2015 and represents a sentence summarization / headline generation task with very short input documents (31.4 tokens on average) and summaries (8.3 tokens on average). It contains 3.8M training, 189k development and 1,951 test instances. Models are evaluated with ROUGE-1, ROUGE-2 and ROUGE-L using full-length F1-scores.

Results below are ranked by ROUGE-2 score.

| Model | ROUGE-1 | ROUGE-2 | ROUGE-L | Paper / Source | Code |
| --- | --- | --- | --- | --- | --- |
| ControlCopying (Song et al., 2020) | 39.08 | 20.47 | 36.69 | Controlling the Amount of Verbatim Copying in Abstractive Summarization | Official |
| ProphetNet (Yan, Qi, Gong, Liu et al., 2020) | 39.51 | 20.42 | 36.69 | ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training | Official |
| UniLM (Dong et al., 2019) | 38.90 | 20.05 | 36.00 | Unified Language Model Pre-training for Natural Language Understanding and Generation | Official |
| PEGASUS (Zhang et al., 2019) | 39.12 | 19.86 | 36.24 | PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization | Official |
| BiSET (Wang et al., 2019) | 39.11 | 19.78 | 36.87 | BiSET: Bi-directional Selective Encoding with Template for Abstractive Summarization | Official |
| MASS (Song et al., 2019) | 38.73 | 19.71 | 35.96 | MASS: Masked Sequence to Sequence Pre-training for Language Generation | Official |
| Re^3 Sum (Cao et al., 2018) | 37.04 | 19.03 | 34.46 | Retrieve, Rerank and Rewrite: Soft Template Based Neural Summarization | |
| JointParsing (Song et al., 2020) | 36.61 | 18.85 | 34.33 | Joint Parsing and Generation for Abstractive Summarization | Official |
| CNN-2sent-hieco-RBM (Zhang et al., 2019) | 37.95 | 18.64 | 35.11 | Abstract Text Summarization with a Convolutional Seq2Seq Model | |
| Reinforced-Topic-ConvS2S (Wang et al., 2018) | 36.92 | 18.29 | 34.58 | A Reinforced Topic-Aware Convolutional Sequence-to-Sequence Model for Abstractive Text Summarization | |
| CGU (Lin et al., 2018) | 36.3 | 18.0 | 33.8 | Global Encoding for Abstractive Summarization | Official |
| Pointer + Coverage + EntailmentGen + QuestionGen (Guo et al., 2018) | 35.98 | 17.76 | 33.63 | Soft Layer-Specific Multi-Task Summarization with Entailment and Question Generation | |
| Struct+2Way+Word (Song et al., 2018) | 35.47 | 17.66 | 33.52 | Structure-Infused Copy Mechanisms for Abstractive Summarization | Official |
| FTSum_g (Cao et al., 2018) | 37.27 | 17.65 | 34.24 | Faithful to the Original: Fact Aware Neural Abstractive Summarization | |
| DRGD (Li et al., 2017) | 36.27 | 17.57 | 33.62 | Deep Recurrent Generative Decoder for Abstractive Text Summarization | |
| SEASS (Zhou et al., 2017) | 36.15 | 17.54 | 33.63 | Selective Encoding for Abstractive Sentence Summarization | Official |
| EndDec+WFE (Suzuki and Nagata, 2017) | 36.30 | 17.31 | 33.88 | Cutting-off Redundant Repeating Generations for Neural Abstractive Summarization | |
| Seq2seq + selective + MTL + ERAM (Li et al., 2018) | 35.33 | 17.27 | 33.19 | Ensure the Correctness of the Summary: Incorporate Entailment Knowledge into Abstractive Sentence Summarization | |
| Seq2seq + E2T_cnn (Amplayo et al., 2018) | 37.04 | 16.66 | 34.93 | Entity Commonsense Representation for Neural Abstractive Summarization | |
| RAS-Elman (Chopra et al., 2016) | 33.78 | 15.97 | 31.15 | Abstractive Sentence Summarization with Attentive Recurrent Neural Networks | |
| words-lvt5k-1sent (Nallapati et al., 2016) | 32.67 | 15.59 | 30.64 | Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond | |
| ABS+ (Rush et al., 2015) | 29.76 | 11.88 | 26.96 | A Neural Attention Model for Sentence Summarization * | |
| ABS (Rush et al., 2015) | 29.55 | 11.32 | 26.42 | A Neural Attention Model for Sentence Summarization * | |

(*) Rush et al., 2015 report ROUGE recall; the table here contains ROUGE F1-scores for Rush's models as reported by Chopra et al., 2016.

X-Sum

X-Sum (standing for Extreme Summarization), introduced by Narayan et al., 2018, is a summarization dataset that does not favor extractive strategies and calls for an abstractive modeling approach.
The goal of this dataset is to create a short, one-sentence news summary.
Data is collected by harvesting online articles from the BBC.
The dataset contains 204,045 training samples, 11,332 validation samples, and 11,334 test samples. On average, an article is 431 words long (~20 sentences) and its summary 23 words. It can be downloaded here.
Evaluation metrics are ROUGE-1, ROUGE-2 and ROUGE-L.

| Model | ROUGE-1 | ROUGE-2 | ROUGE-L | Paper / Source | Code |
| --- | --- | --- | --- | --- | --- |
| BRIO (Liu et al., 2022) | 49.07 | 25.59 | 40.40 | BRIO: Bringing Order to Abstractive Summarization | Official |
| PEGASUS (Zhang et al., 2019) | 47.21 | 24.56 | 39.25 | PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization | Official |
| BART (Lewis et al., 2019) | 45.14 | 22.27 | 37.25 | BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension | Official |
| BertSumExtAbs (Liu et al., 2019) | 38.81 | 16.50 | 31.27 | Text Summarization with Pretrained Encoders | Official |
| T-ConvS2S | 31.89 | 11.54 | 25.75 | Don’t Give Me the Details, Just the Summary! | Official |
| PtGen | 29.70 | 9.21 | 23.24 | Don’t Give Me the Details, Just the Summary! | Official |
| Seq2Seq | 28.42 | 8.77 | 22.48 | Don’t Give Me the Details, Just the Summary! | Official |
| PtGen-Covg | 28.10 | 8.02 | 21.72 | Don’t Give Me the Details, Just the Summary! | Official |
| Baseline: Extractive Oracle | 29.79 | 8.81 | 22.66 | Don’t Give Me the Details, Just the Summary! | Official |
| Baseline: Lead-3 | 16.30 | 1.60 | 11.95 | Don’t Give Me the Details, Just the Summary! | Official |
| Baseline: Random | 15.16 | 1.78 | 11.27 | Don’t Give Me the Details, Just the Summary! | Official |

DUC 2004 Task 1

Similar to Gigaword, task 1 of DUC 2004 is a sentence summarization task. The dataset contains 500 documents averaging 35.6 tokens, with summaries averaging 10.4 tokens. Due to its small size, neural models are typically trained on other datasets and only tested on DUC 2004. Evaluation metrics are ROUGE-1, ROUGE-2 and ROUGE-L recall @ 75 bytes.
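The "recall @ 75 bytes" protocol can be sketched as follows (a simplified reading that assumes whitespace tokenization; the official setup uses the ROUGE toolkit's byte-limit option, and `rouge1_recall_75b` is a hypothetical helper name):

```python
from collections import Counter

def rouge1_recall_75b(candidate, reference):
    """ROUGE-1 recall with the candidate truncated to its first 75 bytes (DUC 2004 style)."""
    truncated = candidate.encode("utf-8")[:75].decode("utf-8", errors="ignore")
    cand = Counter(truncated.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    return overlap / sum(ref.values())

candidate = "american stocks fall sharply after a weak jobs report"
reference = "us stocks fall on weak jobs report"
print(round(rouge1_recall_75b(candidate, reference), 3))  # 0.714
```

Because only the first 75 bytes of the system output count, the metric does not reward longer outputs, which is why recall rather than F1 is reported here.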

| Model | ROUGE-1 | ROUGE-2 | ROUGE-L | Paper / Source | Code |
| --- | --- | --- | --- | --- | --- |
| Transformer + LRPE + PE + ALONE + Re-ranking (Takase and Kobayashi, 2020) | 32.57 | 11.63 | 28.24 | All Word Embeddings from One Embedding | Official |
| Transformer + LRPE + PE + Re-ranking (Takase and Okazaki, 2019) | 32.29 | 11.49 | 28.03 | Positional Encoding to Control Output Sequence Length | Official |
| DRGD (Li et al., 2017) | 31.79 | 10.75 | 27.48 | Deep Recurrent Generative Decoder for Abstractive Text Summarization | |
| EndDec+WFE (Suzuki and Nagata, 2017) | 32.28 | 10.54 | 27.8 | Cutting-off Redundant Repeating Generations for Neural Abstractive Summarization | |
| Reinforced-Topic-ConvS2S (Wang et al., 2018) | 31.15 | 10.85 | 27.68 | A Reinforced Topic-Aware Convolutional Sequence-to-Sequence Model for Abstractive Text Summarization | |
| CNN-2sent-hieco-RBM (Zhang et al., 2019) | 29.74 | 9.85 | 25.81 | Abstract Text Summarization with a Convolutional Seq2Seq Model | |
| Seq2seq + selective + MTL + ERAM (Li et al., 2018) | 29.33 | 10.24 | 25.24 | Ensure the Correctness of the Summary: Incorporate Entailment Knowledge into Abstractive Sentence Summarization | |
| SEASS (Zhou et al., 2017) | 29.21 | 9.56 | 25.51 | Selective Encoding for Abstractive Sentence Summarization | |
| words-lvt5k-1sent (Nallapati et al., 2016) | 28.61 | 9.42 | 25.24 | Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond | |
| ABS+ (Rush et al., 2015) | 28.18 | 8.49 | 23.81 | A Neural Attention Model for Sentence Summarization | |
| RAS-Elman (Chopra et al., 2016) | 28.97 | 8.26 | 24.06 | Abstractive Sentence Summarization with Attentive Recurrent Neural Networks | |
| ABS (Rush et al., 2015) | 26.55 | 7.06 | 22.05 | A Neural Attention Model for Sentence Summarization | |

Webis-TLDR-17 Corpus

This dataset contains 3 million pairs of content and self-written summaries mined from Reddit. It is one of the first large-scale summarization datasets from the social media domain. For more details, refer to TL;DR: Mining Reddit to Learn Automatic Summarization.

| Model | ROUGE-1 | ROUGE-2 | ROUGE-L | Paper / Source | Code |
| --- | --- | --- | --- | --- | --- |
| Transformer + Copy (Gehrmann et al., 2019) | 22 | 6 | 17 | Generating Summaries with Finetuned Language Models | |
| Unified VAE + PGN (Choi et al., 2019) | 19 | 4 | 15 | VAE-PGN based Abstractive Model in Multi-stage Architecture for Text Summarization | |

Webis-Snippet-20 Corpus

This dataset contains approximately 10 million (webpage content, abstractive snippet) pairs and 3.5 million (query term, webpage content, abstractive snippet) triples for the novel task of (query-biased) abstractive snippet generation of web pages. The corpus is compiled from ClueWeb09, ClueWeb12 and the DMOZ Open Directory Project. For more details, refer to Abstractive Snippet Generation.

| Model | ROUGE-1 | ROUGE-2 | ROUGE-L | Usefulness | Paper / Source | Code |
| --- | --- | --- | --- | --- | --- | --- |
| Anchor-context + Query biased (Chen et al., 2020) | 25.7 | 5.2 | 20.1 | 66.18 | Abstractive Snippet Generation | |

Sentence Compression

Sentence compression produces a shorter sentence by removing redundant information, preserving the grammaticality and the important content of the original sentence.

Google Dataset

The Google Dataset was built by Filippova et al., 2013 (Overcoming the Lack of Parallel Data in Sentence Compression). The first release contained only 10,000 sentence-compression pairs; an additional 200,000 pairs were released later.

Example of a sentence-compression pair:

Sentence: Floyd Mayweather is open to fighting Amir Khan in the future, despite snubbing the Bolton-born boxer in favour of a May bout with Argentine Marcos Maidana, according to promoters Golden Boy

Compression: Floyd Mayweather is open to fighting Amir Khan in the future.

In short, this is a deletion-based task where the compression is a subsequence of the original sentence. Of the 10,000 pairs in the eval portion (repository), the first 1,000 sentences are used for automatic evaluation, while the 200,000 pairs are used for training.

Models are evaluated using the following metrics:

  • F1 - the precision and recall computed over the tokens kept in the gold and the generated compressions.
  • Compression rate (CR) - the length of the compression in characters divided by the length of the original sentence.
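Both metrics can be sketched in a few lines (a simplified reading of the protocol; `token_f1` and `compression_rate` are hypothetical helper names, and real evaluations score per-token keep/delete decisions rather than bags of surface tokens):

```python
def token_f1(gold_kept, pred_kept):
    """F1 over the tokens kept in the gold vs. the generated compression."""
    gold, pred = set(gold_kept), set(pred_kept)
    overlap = len(gold & pred)
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

def compression_rate(sentence, compression):
    """Length of the compression in characters divided by the sentence length."""
    return len(compression) / len(sentence)

# Abridged version of the example pair above
sentence = "Floyd Mayweather is open to fighting Amir Khan in the future, despite snubbing the Bolton-born boxer"
gold = "Floyd Mayweather is open to fighting Amir Khan in the future"
pred = "Floyd Mayweather is open to fighting Amir Khan"

print(round(token_f1(gold.split(), pred.split()), 3))  # 0.842
print(round(compression_rate(sentence, pred), 3))      # 0.46
```

Note that a shorter compression can trade F1 recall for a lower (more aggressive) compression rate, which is why the two numbers are reported together.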

| Model | F1 | CR | Paper / Source | Code |
| --- | --- | --- | --- | --- |
| SLAHAN with syntactic information (Kamigaito et al., 2020) | 0.855 | 0.407 | Syntactically Look-Ahead Attention Network for Sentence Compression | https://github.com/kamigaito/SLAHAN |
| BiRNN + LM Evaluator (Zhao et al., 2018) | 0.851 | 0.39 | A Language Model based Evaluator for Sentence Compression | https://github.com/code4conference/code4sc |
| LSTM (Filippova et al., 2015) | 0.82 | 0.38 | Sentence Compression by Deletion with LSTMs | |
| BiLSTM (Wang et al., 2017) | 0.8 | 0.43 | Can Syntax Help? Improving an LSTM-based Sentence Compression Model for New Domains | |

Go back to the README