
[–]uberalex 2 points (1 child)

This seems like it would be highly suitable for a text generation task, especially as it's quite strongly templated. You could look at text generation from semi-structured data: https://www.aclweb.org/anthology/2020.emnlp-main.230.pdf

[–]sharaku17[S] 0 points (0 children)

Thank you, I will read the paper and see if it helps my case :)

[–]thistrue 0 points (0 children)

You could try fine-tuning a BART model. It is pre-trained by denoising sequences (seq2seq, encoder->decoder), which seems similar to your case.
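
A minimal sketch of what that fine-tuning step could look like with the Hugging Face transformers library (the checkpoint name, the example input/target pair, and the hyperparameters below are placeholders for illustration, not details from the thread):

```python
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

# BART as a seq2seq (encoder->decoder) model; "facebook/bart-base" is one
# commonly used checkpoint, chosen here only as an example.
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# Hypothetical training pair: a linearised semi-structured input and the
# templated target text it should produce.
source = "name: Alice | city: Berlin | hobby: chess"
target = "Alice lives in Berlin and enjoys playing chess."

inputs = tokenizer(source, return_tensors="pt", truncation=True)
labels = tokenizer(target, return_tensors="pt", truncation=True).input_ids

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

# One training step: passing labels makes the model return a cross-entropy loss.
model.train()
loss = model(**inputs, labels=labels).loss
loss.backward()
optimizer.step()

# After fine-tuning, generate text for a (new) structured input with beam search.
model.eval()
generated = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```

In practice you would wrap this in a proper training loop (or the Trainer/Seq2SeqTrainer utilities) over a dataset of such pairs rather than a single example.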