Abstract: Traditional extractive and abstractive methods lack readability and accuracy in automatic summary generation, so a hybrid summarization method based on the HRAGS (Hybrid Redundancy-Aware Guided Summarization) model was proposed. First, the method used the BERT pre-trained language model to obtain contextual representations and combined them with a redundancy-aware mechanism to construct an extractive model. Then, two trained BERT encoders were combined with a randomly initialized Transformer decoder containing two encoder-decoder attention modules to construct an abstractive model; the abstractive model adopted a two-stage fine-tuning approach to resolve the training imbalance between the encoders and the decoder. Finally, a greedy oracle algorithm selected key sentences as external guidance, and the source document together with this guidance was fed into the abstractive model to produce a summary. The method was verified on the LCSTS dataset. Experimental results show that the HRAGS model generates more readable and accurate summaries with higher ROUGE scores than other benchmark models.
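
To illustrate the guidance-selection step, the following is a minimal sketch of a greedy oracle of the kind the abstract describes, assuming a simple unigram/bigram ROUGE-recall proxy as the selection objective; all names here (greedy_oracle, _rouge_recall, max_sents) are illustrative assumptions, not identifiers from the paper.

```python
# Sketch of greedy oracle sentence selection: iteratively add the document
# sentence that most improves ROUGE-1 + ROUGE-2 recall against the reference
# summary, stopping when no remaining sentence helps. Assumed scoring proxy,
# not the paper's exact objective.
from typing import List, Set, Tuple

def _ngrams(tokens: List[str], n: int) -> Set[Tuple[str, ...]]:
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def _rouge_recall(candidate: List[str], reference: List[str], n: int) -> float:
    ref = _ngrams(reference, n)
    if not ref:
        return 0.0
    return len(_ngrams(candidate, n) & ref) / len(ref)

def greedy_oracle(doc_sents: List[List[str]],
                  ref_summary: List[str],
                  max_sents: int = 3) -> List[int]:
    """Return indices of document sentences greedily chosen to maximize
    ROUGE-1 + ROUGE-2 recall against the reference summary."""
    selected: List[int] = []
    best_score = 0.0
    for _ in range(max_sents):
        best_idx, best_gain = None, best_score
        for i in range(len(doc_sents)):
            if i in selected:
                continue
            cand = [tok for j in selected + [i] for tok in doc_sents[j]]
            score = (_rouge_recall(cand, ref_summary, 1)
                     + _rouge_recall(cand, ref_summary, 2))
            if score > best_gain:
                best_idx, best_gain = i, score
        if best_idx is None:  # no remaining sentence improves the score
            break
        selected.append(best_idx)
        best_score = best_gain
    return sorted(selected)
```

At inference time, the sentences picked this way (or by the trained extractive model) would serve as the external guidance that is fed to the abstractive model alongside the source document.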