Hi, @KimUyen
I am working on a local map prediction problem, and I want to use a Bi-ConvLSTM to solve it.
In your ConvLSTM implementation, at line 238:

```python
## LSTM forward direction
input_fw = input_tensor
for layer_num in range(self.layer_num):
    h, c = hidden_states[layer_num]
    output_inner = []
    for t in range(seq_len):
        h, c = self.cells_fw[layer_num](input_tensor=input_fw[:, t, :, :, :],
                                        cur_state=[h, c])
        output_inner.append(h)
    layer_output = torch.stack(output_inner, dim=1)
    layer_outputs_fw.append(layer_output)
layer_outputs = torch.stack(layer_outputs_fw, dim=1)
last_state_fw = [h, c]
```
Why is the input the same for every layer? I thought each layer's input should be the hidden-state sequence produced by the previous layer. Am I right?
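For reference, in a standard stacked (Conv)LSTM, layer 0 consumes the raw input sequence and each deeper layer consumes the sequence of hidden states emitted by the layer below. Below is a minimal, framework-free sketch of that wiring, using a hypothetical `dummy_cell` stand-in (not the repository's ConvLSTM cell) that merely tags its input so the data flow between layers is visible:

```python
# Hypothetical stand-in for a ConvLSTM cell: it tags the input with the layer
# index so we can trace which tensor each layer actually consumed.
def dummy_cell(layer_num, x, state):
    h, c = state
    return f"L{layer_num}({x})", c  # new hidden state, unchanged cell state


def forward_stacked(input_seq, num_layers):
    """Stacked recurrence: each layer consumes the previous layer's outputs."""
    layer_input = input_seq                # layer 0 sees the raw input sequence
    for layer_num in range(num_layers):
        h, c = "h0", "c0"                  # fresh initial state for this layer
        outputs = []
        for x in layer_input:              # unroll over time steps
            h, c = dummy_cell(layer_num, x, (h, c))
            outputs.append(h)
        layer_input = outputs              # key line: next layer reads these
    return layer_input                     # output sequence of the top layer


print(forward_stacked(["x0", "x1"], num_layers=2))
# -> ['L1(L0(x0))', 'L1(L0(x1))']
```

The key line is `layer_input = outputs` at the end of the layer loop; in the snippet quoted above, `input_fw` is never reassigned, so every layer reads the original `input_tensor` instead.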