85. Bi-directional RNN and Multi-layer RNN
Encode the input text using both forward and backward RNNs and train the model.
$$
\overleftarrow{h}_{T+1} = 0, \\
\overleftarrow{h}_t = {\rm \overleftarrow{RNN}}(\mathrm{emb}(x_t), \overleftarrow{h}_{t+1}), \\
y = {\rm softmax}(W^{(yh)} [\overrightarrow{h}_T; \overleftarrow{h}_1] + b^{(y)})
$$
Here, $\overrightarrow{h}_t \in \mathbb{R}^{d_h}$ and $\overleftarrow{h}_t \in \mathbb{R}^{d_h}$ are the hidden state vectors at time $t$ obtained by the forward and backward RNNs, ${\rm \overleftarrow{RNN}}(x,h)$ is the RNN unit that computes the previous state from the input $x$ and the hidden state $h$ at the next time step, $W^{(yh)} \in \mathbb{R}^{L \times 2d_h}$ is the matrix for predicting categories from the hidden state vectors, and $b^{(y)} \in \mathbb{R}^{L}$ is the bias term. $[a; b]$ denotes the concatenation of the two vectors $a$ and $b$.
In addition, experiment with multi-layered bidirectional RNNs.
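The forward pass defined by the equations above can be sketched as follows. This is a minimal NumPy illustration, not a trained model: all dimensions ($V$, $d_w$, $d_h$, $L$) and the random parameter initialisation are assumptions for the example, and a simple tanh (Elman) cell stands in for the RNN unit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: vocabulary V, embedding d_w, hidden d_h, L categories.
V, d_w, d_h, L = 100, 16, 8, 4

# Parameters (randomly initialised for illustration; training is not shown).
emb = rng.normal(0, 0.1, (V, d_w))          # embedding matrix
W_f = rng.normal(0, 0.1, (d_h, d_w + d_h))  # forward RNN weights
W_b = rng.normal(0, 0.1, (d_h, d_w + d_h))  # backward RNN weights
W_yh = rng.normal(0, 0.1, (L, 2 * d_h))     # category-prediction matrix W^{(yh)}
b_y = np.zeros(L)                           # bias term b^{(y)}

def rnn_step(W, x, h):
    """One simple (Elman) RNN step: tanh(W [x; h])."""
    return np.tanh(W @ np.concatenate([x, h]))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(tokens):
    T = len(tokens)
    # Forward RNN: h_0 = 0, then h_t from emb(x_t) and h_{t-1}; ends as h_T.
    h_f = np.zeros(d_h)
    for t in range(T):
        h_f = rnn_step(W_f, emb[tokens[t]], h_f)
    # Backward RNN: h_{T+1} = 0, then h_t from emb(x_t) and h_{t+1}; ends as h_1.
    h_b = np.zeros(d_h)
    for t in reversed(range(T)):
        h_b = rnn_step(W_b, emb[tokens[t]], h_b)
    # y = softmax(W^{(yh)} [h_T (forward); h_1 (backward)] + b^{(y)})
    return softmax(W_yh @ np.concatenate([h_f, h_b]) + b_y)

y = predict([3, 14, 15, 92])  # a toy token-ID sequence
print(y)                      # probability distribution over the L categories
```

In a framework such as PyTorch, the same architecture is obtained by passing `bidirectional=True` to `torch.nn.RNN` (or `nn.LSTM`/`nn.GRU`), and the multi-layer variant by additionally setting `num_layers` to a value greater than 1.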