From the Sequence Models Course - Word level attention model for Seq2Seq - getting very poor results  


New Member
Joined: 2 years ago
Posts: 1
08/10/2018 11:35 am  

Hi All,

I converted the character-level model provided as the week 3 programming exercise of the "Sequence Models" course into a word-level model by changing the way data is fed to the model, and I have successfully run the word-level model to do sequence-to-sequence mapping on a transcript of movie dialogs. However, I keep running into two problems that I am not able to figure out and solve:

1) The accuracy I am able to get is very poor, with many repetitions of words where there should be none, and 2) if I use a large corpus of movie dialogs, I get an out-of-memory error at the one-hot encoding stage. Any help on figuring out ways around these would be very welcome. A sample of input and output is provided below:

source: can we make this quick roxanne korrine and andrew barrett are having an incredibly horrendous public break up on the quad again

output: i you you you you you

source: the thing is cameron i am at the mercy of a particularly hideous breed of loser my sister i cannot date until she does

output: i you you

source: not the hacking and gagging and spitting part please

output: i you you you you you
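Not from the original post, but one common way around the out-of-memory error at the one-hot encoding stage is to one-hot encode lazily, one batch at a time, instead of materialising the full (num_sequences, seq_len, vocab_size) tensor. A minimal NumPy sketch, with purely hypothetical corpus sizes:

```python
import numpy as np

def one_hot_batches(token_ids, vocab_size, batch_size):
    """Yield one-hot encoded batches lazily, so only one batch's worth of
    the (batch, seq_len, vocab_size) tensor lives in memory at a time."""
    for start in range(0, len(token_ids), batch_size):
        batch = token_ids[start:start + batch_size]
        one_hot = np.zeros((len(batch), batch.shape[1], vocab_size),
                           dtype=np.float32)
        rows = np.arange(len(batch))[:, None]       # (batch, 1)
        cols = np.arange(batch.shape[1])[None, :]   # (1, seq_len)
        one_hot[rows, cols, batch] = 1.0            # scatter the 1s
        yield one_hot

# Toy corpus: 1000 sequences of length 20 over an assumed 8000-word vocab.
rng = np.random.default_rng(0)
ids = rng.integers(0, 8000, size=(1000, 20))

first = next(one_hot_batches(ids, vocab_size=8000, batch_size=64))
print(first.shape)  # (64, 20, 8000)
```

Such a generator can be fed to Keras training via `fit_generator`/`fit`; alternatively, keeping the targets as integer indices and using a sparse categorical cross-entropy loss avoids one-hot encoding the targets entirely.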


Active Member
Joined: 2 years ago
Posts: 5
22/10/2018 2:43 am  

Can you try sentence encoding? Also, reduce the dimension of your embedding vector. If you have a large corpus, you can train your embedding vectors as you go.
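To illustrate the suggestion above (a sketch, not code from the course): word indices are looked up in a trainable embedding table whose dimension is kept deliberately small relative to the vocabulary, and the simplest form of sentence encoding is a mean over the word vectors. All sizes here are hypothetical.

```python
import numpy as np

# Assumed sizes: an 8000-word vocabulary with a small 64-dim embedding,
# so inputs are (seq_len, 64) instead of one-hot (seq_len, 8000).
vocab_size, embed_dim = 8000, 64

rng = np.random.default_rng(1)
embedding = rng.normal(0.0, 0.01, size=(vocab_size, embed_dim)).astype(np.float32)

token_ids = np.array([[12, 407, 3]])   # one toy sentence as word indices
vectors = embedding[token_ids]         # embedding lookup -> (1, 3, 64)

# Crude sentence encoding: average the word vectors into one fixed vector.
sentence_vec = vectors.mean(axis=1)    # -> (1, 64)
print(vectors.shape, sentence_vec.shape)
```

In Keras this corresponds to an `Embedding(vocab_size, embed_dim)` layer whose weights are updated during training, i.e. the embeddings are learned "as you go" rather than one-hot encoded up front.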
