diff --git a/README.md b/README.md
index ea6e321..f90169f 100644
--- a/README.md
+++ b/README.md
@@ -25,7 +25,7 @@
-The training script **rnn_train.py** is set up to save training and validation data as "Tensorboard sumaries" in the "log" folder. They can be visualised with Tensorboard.
-In the screenshot below, you can see the RNN being trained on 6 epochs of Shakespeare. The training and valisation curves stay close together which means that overfitting is not a major issue here.
-You can try to add some dropout but it will not improve the situation much becasue it is already quite good.
+The training script **rnn_train.py** is set up to save training and validation data as "Tensorboard summaries" in the "log" folder. They can be visualised with Tensorboard.
+In the screenshot below, you can see the RNN being trained on 6 epochs of Shakespeare. The training and validation curves stay close together, which means that overfitting is not a major issue here.
+You can try to add some dropout but it will not improve the situation much because it is already quite good.
 
 ![Image](https://martin-gorner.github.io/tensorflow-rnn-shakespeare/tensorboard_screenshot.png)
 
 ```
@@ -75,7 +75,7 @@
 distances of 100 characters or more. But you will have to teach it this trick
 using examples of 30 or less characters.
 ### 4) So, now that I have unrolled the RNN cell, state passing is taken care of. I just have to call my train_step in a loop right ?
-Not quite, you sill need to save the last state of the unrolled sequence of
-cells, and feed it as the input state for the next minibatch in the traing loop.
+Not quite, you still need to save the last state of the unrolled sequence of
+cells, and feed it as the input state for the next minibatch in the training loop.
 
 ### 5) What is the proper way of batching training sequences ?
@@ -159,4 +159,4 @@
 DOMITIUS ENOY
 That you may be a soldier's father for the field.
 [Exit]
-```
\ No newline at end of file
+```
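The state-passing pattern described in FAQ item 4 of the patched README — save the last state of the unrolled sequence and feed it back as the initial state of the next minibatch — can be sketched as follows. This is a minimal numpy illustration of the idea, not the repository's actual `rnn_train.py` code; the names (`rnn_step`, `run_unrolled`, `SEQLEN`, `BATCH`, `SIZE`) and the vanilla-tanh cell are assumptions made for the sketch.

```python
import numpy as np

def rnn_step(x, h, Wx, Wh):
    """One vanilla RNN cell step: h' = tanh(x @ Wx + h @ Wh)."""
    return np.tanh(x @ Wx + h @ Wh)

def run_unrolled(xs, h0, Wx, Wh):
    """Run the unrolled sequence, returning outputs and the *last* state."""
    h = h0
    outs = []
    for x in xs:                       # xs: [SEQLEN, BATCH, SIZE]
        h = rnn_step(x, h, Wx, Wh)
        outs.append(h)
    return np.stack(outs), h           # the caller must keep this last state

rng = np.random.default_rng(0)
SEQLEN, BATCH, SIZE = 5, 2, 4          # illustrative sizes, not the repo's
Wx = 0.1 * rng.normal(size=(SIZE, SIZE))
Wh = 0.1 * rng.normal(size=(SIZE, SIZE))

state = np.zeros((BATCH, SIZE))        # zero state only for the very first batch
for _ in range(3):                     # training loop over successive minibatches
    xs = rng.normal(size=(SEQLEN, BATCH, SIZE))
    outs, state = run_unrolled(xs, state, Wx, Wh)  # feed the last state back in
```

Forgetting the feedback (resetting `state` to zeros on every minibatch) is exactly the mistake the FAQ answer warns about: the network would then never learn dependencies that cross the unroll boundary.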