tf.contrib.rnn.core_rnn_cell.BasicLSTMCell should be replaced by tf.contrib.rnn.BasicLSTMCell #46

Open
MartinThoma opened this issue Jun 30, 2017 · 4 comments

Comments


MartinThoma commented Jun 30, 2017

For Tensorflow 1.2 and Keras 2.0, the line tf.contrib.rnn.core_rnn_cell.BasicLSTMCell should be replaced by tf.contrib.rnn.BasicLSTMCell.

$ ./train_demo.sh
2017-06-30 16:09:13,025 root  INFO     ues GRU in the decoder.
input_tensor dim: (?, 1, 32, ?)
CNN outdim before squeeze: (?, 1, ?, 512)
CNN outdim: (?, ?, 512)
Traceback (most recent call last):
  File "src/launcher.py", line 146, in <module>
    main(sys.argv[1:], exp_config.ExpConfig)
  File "src/launcher.py", line 142, in main
    session = sess)
  File "/home/math/Github/Attention-OCR/src/model/model.py", line 151, in __init__
    use_gru = use_gru)
  File "/home/math/Github/Attention-OCR/src/model/seq2seq_model.py", line 87, in __init__
    single_cell = tf.contrib.rnn.core_rnn_cell.BasicLSTMCell(attn_num_hidden, forget_bias=0.0, state_is_tuple=False)
AttributeError: 'module' object has no attribute 'core_rnn_cell'

and

$ sh test_demo.sh 
2017-06-30 16:10:13.765890: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-30 16:10:13.765918: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-30 16:10:13.765927: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-06-30 16:10:13.765933: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-30 16:10:13.765938: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2017-06-30 16:10:13,766 root  INFO     loading data
2017-06-30 16:10:13,767 root  INFO     phase: test
2017-06-30 16:10:13,767 root  INFO     model_dir: model_01_16
2017-06-30 16:10:13,767 root  INFO     load_model: True
2017-06-30 16:10:13,767 root  INFO     output_dir: model_01_16/synth90
2017-06-30 16:10:13,767 root  INFO     steps_per_checkpoint: 500
2017-06-30 16:10:13,767 root  INFO     batch_size: 1
2017-06-30 16:10:13,767 root  INFO     num_epoch: 3
2017-06-30 16:10:13,767 root  INFO     learning_rate: 1
2017-06-30 16:10:13,768 root  INFO     reg_val: 0
2017-06-30 16:10:13,768 root  INFO     max_gradient_norm: 5.000000
2017-06-30 16:10:13,768 root  INFO     clip_gradients: True
2017-06-30 16:10:13,768 root  INFO     valid_target_length inf
2017-06-30 16:10:13,768 root  INFO     target_vocab_size: 39
2017-06-30 16:10:13,768 root  INFO     target_embedding_size: 10.000000
2017-06-30 16:10:13,768 root  INFO     attn_num_hidden: 256
2017-06-30 16:10:13,768 root  INFO     attn_num_layers: 2
2017-06-30 16:10:13,768 root  INFO     visualize: True
2017-06-30 16:10:13,768 root  INFO     buckets
2017-06-30 16:10:13,768 root  INFO     [(16, 32), (27, 32), (35, 32), (64, 32), (80, 32)]
2017-06-30 16:10:13,768 root  INFO     ues GRU in the decoder.
input_tensor dim: (?, 1, 32, ?)
CNN outdim before squeeze: (?, 1, ?, 512)
CNN outdim: (?, ?, 512)
Traceback (most recent call last):
  File "src/launcher.py", line 146, in <module>
    main(sys.argv[1:], exp_config.ExpConfig)
  File "src/launcher.py", line 142, in main
    session = sess)
  File "/home/math/Github/Attention-OCR/src/model/model.py", line 151, in __init__
    use_gru = use_gru)
  File "/home/math/Github/Attention-OCR/src/model/seq2seq_model.py", line 87, in __init__
    single_cell = tf.contrib.rnn.core_rnn_cell.BasicLSTMCell(attn_num_hidden, forget_bias=0.0, state_is_tuple=False)
AttributeError: 'module' object has no attribute 'core_rnn_cell'
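
For reference, the offending call is on seq2seq_model.py line 87; a minimal sketch of the suggested rename (attn_num_hidden is set to 256 here only to make the snippet self-contained, matching the value in the log above):

import tensorflow as tf

attn_num_hidden = 256  # illustrative; the model passes its own value

# old call, raises AttributeError on TensorFlow 1.2:
# single_cell = tf.contrib.rnn.core_rnn_cell.BasicLSTMCell(attn_num_hidden, forget_bias=0.0, state_is_tuple=False)

# suggested replacement:
single_cell = tf.contrib.rnn.BasicLSTMCell(attn_num_hidden, forget_bias=0.0, state_is_tuple=False)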
MartinThoma changed the title from "Doesn't work with Tensorflow 1.2 and Keras 2.0" to "tf.contrib.rnn.core_rnn_cell.BasicLSTMCell should be replaced by tf.contrib.rnn.BasicLSTMCell" on Jun 30, 2017
@MartinThoma (Author)

see #47

@bandarikanth

Replace the previous code with this:

basic_cell = tf.contrib.rnn.DropoutWrapper(
    tf.contrib.rnn.BasicLSTMCell(emb_dim, state_is_tuple=True),
    output_keep_prob=self.keep_prob)
# stack cells together: an n-layered model
stacked_lstm = tf.contrib.rnn.MultiRNNCell([basic_cell] * num_layers, state_is_tuple=True)
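
Note that on TensorFlow 1.1/1.2, passing [basic_cell] * num_layers to MultiRNNCell reuses a single cell object across layers and can trigger a variable-reuse error; a sketch that builds one cell per layer instead (emb_dim, keep_prob, and num_layers are placeholder values, not taken from this repository):

import tensorflow as tf

emb_dim = 256      # illustrative values; use the model's own settings
keep_prob = 0.5
num_layers = 2

def make_cell():
    # a fresh LSTM cell wrapped in dropout for each layer
    return tf.contrib.rnn.DropoutWrapper(
        tf.contrib.rnn.BasicLSTMCell(emb_dim, state_is_tuple=True),
        output_keep_prob=keep_prob)

# build a distinct cell object per layer instead of reusing one instance
stacked_lstm = tf.contrib.rnn.MultiRNNCell(
    [make_cell() for _ in range(num_layers)], state_is_tuple=True)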

@thewhiteflower110

Try replacing line 87 with
single_cell = tf.contrib.rnn.rnn_cell.BasicLSTMCell(attn_num_hidden, forget_bias=0.0, state_is_tuple=False)

Replacing "core_rnn_cell" with "rnn_cell" solves the issue for TensorFlow 0.12.1 and Python 3.

@MeerAjaz

In my case, I replaced tf.contrib.rnn.core_rnn_cell.BasicLSTMCell with tf.contrib.rnn.BasicLSTMCell and replaced every other occurrence of rnn.core_rnn_cell with just rnn, and it worked.
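
In other words, the fix is just a module-path rename; a small sketch of the pattern (GRUCell and MultiRNNCell are used here only as illustrations, not quoted from the project's source):

import tensorflow as tf

num_hidden, num_layers = 256, 2  # illustrative sizes

# old (TensorFlow 1.0-era) paths such as
#   tf.contrib.rnn.core_rnn_cell.GRUCell / .MultiRNNCell
# become, on TensorFlow 1.2:
cells = [tf.contrib.rnn.GRUCell(num_hidden) for _ in range(num_layers)]
stacked = tf.contrib.rnn.MultiRNNCell(cells)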
