Copy mechanism in seq2seq learning
The copy mechanism seems like an important and interesting topic in NLP and seq2seq models, but I can barely find anything about it, specifically about how to actually use it.
The task I have been trying to solve for weeks is a seq2seq task: the input is a few lines of text, and the output is a 3-token-long sequence in which every token is a copy of one of the tokens from the input text.
I have found a project called CopyNet on GitHub. I think that could be the solution, but I can't get it working.
Any help would be appreciated, but the most useful thing would be some example code showing how to use something like that. I'm using Keras.
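To make concrete what I'm after, here is a minimal sketch of the kind of model I imagine: a pointer-style network (a simplified relative of CopyNet) where the decoder, instead of predicting from the full vocabulary, attends over the encoder states and "points" at the input position to copy. All sizes below are placeholders, and I'm not sure this is the right architecture:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

# Placeholder sizes -- these are just assumptions for illustration.
vocab_size, seq_len, out_len, emb_dim, units = 50, 12, 3, 32, 64

# Encoder: embed input tokens and run an LSTM over them.
enc_in = layers.Input(shape=(seq_len,), dtype="int32")
enc_emb = layers.Embedding(vocab_size, emb_dim)(enc_in)
enc_out, state_h, state_c = layers.LSTM(
    units, return_sequences=True, return_state=True)(enc_emb)

# Decoder: runs for out_len steps (teacher forcing on previous outputs).
dec_in = layers.Input(shape=(out_len,), dtype="int32")
dec_emb = layers.Embedding(vocab_size, emb_dim)(dec_in)
dec_out = layers.LSTM(units, return_sequences=True)(
    dec_emb, initial_state=[state_h, state_c])

# Pointer: dot-product attention scores of each decoder step against
# every encoder position; the softmax is a distribution over INPUT
# POSITIONS, so the model can only copy, never generate new tokens.
scores = layers.Dot(axes=(2, 2))([dec_out, enc_out])  # (batch, out_len, seq_len)
pointer = layers.Softmax(axis=-1)(scores)

model = Model([enc_in, dec_in], pointer)
# Targets would be the input *positions* to copy, not token ids.
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

At inference time the predicted position would be looked up back in the input sequence to recover the copied token. Is this roughly what CopyNet does, or am I missing the part that mixes copying with ordinary generation?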