This repository has been archived by the owner on Sep 30, 2024. It is now read-only.

Merge pull request #9 from mathieu-chauvet/master
typo correction
martin-gorner authored Apr 28, 2017
2 parents cb7de85 + d47ad7e commit 9a7fd70
Showing 1 changed file (README.md) with 3 additions and 3 deletions.
@@ -25,7 +25,7 @@ The training script **rnn_train.py** is set up to save training and validation
data as "Tensorboard summaries" in the "log" folder. They can be visualised with Tensorboard.
In the screenshot below, you can see the RNN being trained on 6 epochs of Shakespeare.
The training and validation curves stay close together, which means that overfitting is not a major issue here.
-You can try to add some dropout but it will not improve the situation much becasue it is already quite good.
+You can try to add some dropout but it will not improve the situation much because it is already quite good.

![Image](https://martin-gorner.github.io/tensorflow-rnn-shakespeare/tensorboard_screenshot.png)
```
@@ -75,7 +75,7 @@ distances of 100 characters or more. But you will have to teach it this trick
using examples of 30 or fewer characters.

### 4) So, now that I have unrolled the RNN cell, state passing is taken care of. I just have to call my train_step in a loop, right?
-Not quite, you sill need to save the last state of the unrolled sequence of
+Not quite, you still need to save the last state of the unrolled sequence of
cells, and feed it as the input state for the next minibatch in the training loop.
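In code, that state passing can be sketched like this. This is a minimal, hypothetical stand-in, not the repo's actual rnn_train.py: `run_step` plays the role of one train_step over an unrolled sequence, taking the incoming state and returning the outgoing state of the last unrolled cell, and a scalar running sum stands in for the real cell state.

```python
import numpy as np

# Hypothetical stand-in for one train_step over an unrolled sequence:
# consumes a minibatch plus the incoming state, returns the outgoing state.
def run_step(batch, in_state):
    # Toy "RNN": the state is just a running sum of everything seen so far.
    return in_state + batch.sum()

def train(batches, zero_state=0.0):
    state = zero_state                   # zero the state only once, at the start
    for batch in batches:
        state = run_step(batch, state)   # feed the saved state back in
    return state
```

If the state were re-zeroed on every minibatch instead, the network could never learn dependencies longer than the unroll length.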

### 5) What is the proper way of batching training sequences?
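The diff elides the answer here, but a common scheme for char-RNN training (a sketch of the general technique, not necessarily this repo's exact my_txtutils code; the function name is illustrative) is to cut the text into `batch_size` parallel streams, so that row i of batch n+1 is the direct continuation of row i of batch n. That continuity is what makes carrying the state over from the previous minibatch (question 4) meaningful.

```python
import numpy as np

# Illustrative sketch of continuity-preserving batching (general technique,
# not the repo's my_txtutils): the text becomes `batch_size` parallel streams,
# row i of each batch continues row i of the previous batch, and targets are
# the inputs shifted by one character.
def batch_sequencer(data, batch_size, seq_len):
    data = np.asarray(data)
    usable = (len(data) // (batch_size * seq_len)) * batch_size * seq_len
    streams = data[:usable].reshape(batch_size, -1)  # one long stream per row
    for start in range(0, streams.shape[1] - seq_len, seq_len):
        x = streams[:, start:start + seq_len]
        y = streams[:, start + 1:start + 1 + seq_len]  # next-char targets
        yield x, y
```

With this layout, the final state computed on row i of batch n is the correct initial state for row i of batch n+1.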
@@ -159,4 +159,4 @@ DOMITIUS ENOY
That you may be a soldier's father for the field.
[Exit]
```
