recurrent-models
seq2seq
- Encoding RNN processes each timestep in , and the last timestep corresponds to
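The encoding step described above can be sketched in plain numpy: a vanilla tanh RNN walks the sequence, and the final hidden state serves as the context vector handed to the decoder. All names and sizes here (`encode`, `W`, `U`, the dimensions) are illustrative, not taken from the original notes:

```python
import numpy as np

def encode(inputs, W, U, h0=None):
    """Run a vanilla tanh RNN over a sequence.

    The hidden state after the last timestep summarizes the whole
    sequence; in a seq2seq model it becomes the decoder's initial
    state (the "context vector")."""
    h = np.zeros(W.shape[0]) if h0 is None else h0
    for x in inputs:
        h = np.tanh(W @ h + U @ x)
    return h  # last timestep's hidden state

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.2, (8, 8))   # recurrent weights (hidden -> hidden)
U = rng.normal(0.0, 0.2, (8, 3))   # input weights (input -> hidden)
seq = rng.normal(size=(20, 3))     # 20 timesteps of 3-dim inputs
context = encode(seq, W, U)        # shape (8,)
```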
Backlinks
g2-spectra
- For very long [[sequential-data]], [[recurrent-models]] lose sight of early inputs very quickly, because gradients have to be backpropagated all the way through time and vanish over long horizons: this is very apparent when trying to train a `seq2seq`-type model on the $g_2$ spectra, which ends up simply predicting the mean of the data regardless of the inputs (taken from `sleek-monkey-36`).
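The vanishing-gradient failure mentioned in this excerpt can be demonstrated directly: backpropagate through a vanilla tanh RNN and compare the gradient norm at the last timestep's input with the first's. This is a minimal numpy sketch, not the model from `sleek-monkey-36`; all weights and sizes are illustrative:

```python
import numpy as np

def rnn_input_grad_norms(T=200, hidden=16, seed=0):
    """Forward a vanilla tanh RNN for T steps, then backpropagate
    through time from a loss on the final hidden state. Returns the
    gradient norm of the loss w.r.t. each timestep's input."""
    rng = np.random.default_rng(seed)
    # small recurrent weights keep the spectral norm below 1,
    # so gradients shrink at every step of backprop through time
    W = rng.normal(0.0, 0.1, (hidden, hidden))
    U = rng.normal(0.0, 0.1, (hidden, hidden))
    x = rng.normal(size=(T, hidden))
    hs = [np.zeros(hidden)]
    for t in range(T):
        hs.append(np.tanh(W @ hs[-1] + U @ x[t]))
    grad_h = np.ones(hidden)          # dL/dh_T for loss = sum(h_T)
    norms = np.zeros(T)
    for t in reversed(range(T)):
        d_pre = grad_h * (1.0 - hs[t + 1] ** 2)  # tanh derivative
        norms[t] = np.linalg.norm(U.T @ d_pre)   # dL/dx_t
        grad_h = W.T @ d_pre                     # dL/dh_{t-1}
    return norms

norms = rnn_input_grad_norms()
# the gradient w.r.t. the earliest inputs is many orders of
# magnitude smaller than at the final timestep
print(norms[0], norms[-1])
```

With gradients this small at early timesteps, the model cannot learn long-range structure and falls back to predicting the mean of the targets.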
machine-learning-notes
- [[recurrent-models]]