Abstract:
Previous question generation studies mainly use sequence-to-sequence frameworks based on recurrent neural networks, which ignore the answer information and the syntactic information hidden in the context. To address these problems, this paper proposes a question generation model based on context-answer fusion and syntactic dependency parsing. In the encoding stage, the syntactic dependency relations of the context are captured by a gated graph convolutional network, while a co-attention mechanism aligns the input context with the answer. By attending to the answer information and the syntactic dependency relations of the context, the model generates high-quality questions that are closely tied to the answer. Moreover, reinforcement learning is used to further improve model performance. Experimental results on the public SQuAD dataset show that the proposed method outperforms baseline models on the BLEU-4 and ROUGE-L evaluation metrics.
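To make the two encoder components named above concrete, the following is a minimal illustrative sketch (not the authors' implementation) of a gated graph convolution over dependency-parse edges and a bidirectional co-attention between context and answer; the tensor shapes, gating form, and module names are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedGCNLayer(nn.Module):
    """One gated graph-convolution layer over a dependency-parse adjacency matrix (illustrative)."""
    def __init__(self, hidden_size):
        super().__init__()
        self.linear = nn.Linear(hidden_size, hidden_size)
        self.gate = nn.Linear(2 * hidden_size, hidden_size)

    def forward(self, h, adj):
        # h:   (batch, seq_len, hidden)  contextual word representations
        # adj: (batch, seq_len, seq_len) dependency adjacency (1 where a syntactic edge exists)
        neigh = torch.bmm(adj, self.linear(h))                        # aggregate syntactic neighbours
        g = torch.sigmoid(self.gate(torch.cat([h, neigh], dim=-1)))   # update gate
        return g * torch.tanh(neigh) + (1 - g) * h                    # gated residual update


def co_attention(context, answer):
    """Align context and answer representations with bidirectional (co-)attention (illustrative)."""
    # context: (batch, c_len, hidden), answer: (batch, a_len, hidden)
    scores = torch.bmm(context, answer.transpose(1, 2))                      # (batch, c_len, a_len)
    c2a = torch.bmm(F.softmax(scores, dim=-1), answer)                       # answer-aware context
    a2c = torch.bmm(F.softmax(scores.transpose(1, 2), dim=-1), context)      # context-aware answer
    return c2a, a2c
```

In such a design, the gated graph convolution injects syntactic structure into the word representations, while the co-attention fuses answer information into the context encoding before decoding.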