| Repo stat | Value |
|---|---|
| Watchers on GitHub | 2608 |
| Open issues | 48 |
| Average time to close an issue | 17 days |
| Average time to merge a PR | 19 days |
| Open pull requests | 31+ |
| Closed pull requests | 11+ |
| Last commit | almost 2 years ago |
| Repo created | almost 5 years ago |
| Repo last updated | 3 months ago |
| Organization / author | sherjilozair |
Multi-layer recurrent neural networks (LSTM, RNN) for character-level language models in Python using TensorFlow.
Inspired by Andrej Karpathy's char-rnn.
To train with default parameters on the tinyshakespeare corpus, run `python train.py`. To see all available parameters, use `python train.py --help`.
To sample from a checkpointed model, run `python sample.py`.
Sampling while training is still in progress (to check the latest checkpoint) works only on the CPU or on another GPU. To force CPU mode, use `export CUDA_VISIBLE_DEVICES=""` before sampling and `unset CUDA_VISIBLE_DEVICES` afterward (on Windows, `set CUDA_VISIBLE_DEVICES=""` and `set CUDA_VISIBLE_DEVICES=` respectively).
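Under the hood, sampling repeatedly draws the next character from the probability distribution the model outputs, typically with a temperature knob. A minimal sketch of temperature-weighted drawing (a hypothetical `sample_char` helper, not the repo's `sample.py`):

```python
import math
import random

def sample_char(probs, temperature=1.0):
    """Draw one character index from a model's output distribution.

    `probs` holds one probability per vocabulary symbol. A temperature
    above 1 flattens the distribution (more surprising text); below 1
    it sharpens it (more conservative text). Hypothetical helper, not
    the repo's actual sampling code.
    """
    # Re-weight the probabilities in log space by the temperature,
    # clamping zeros so math.log stays defined.
    weights = [math.exp(math.log(max(p, 1e-12)) / temperature) for p in probs]
    total = sum(weights)
    weights = [w / total for w in weights]
    # random.choices performs the weighted draw.
    return random.choices(range(len(probs)), weights=weights, k=1)[0]
```

As the temperature approaches zero, the draw approaches a plain argmax over the distribution.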
To continue training after an interruption, or to train for more epochs, run `python train.py --init_from=save`.
You can use any plain text file as input. For example, you could download The Complete Sherlock Holmes like so:

```bash
cd data
mkdir sherlock
cd sherlock
wget https://sherlock-holm.es/stories/plain-text/cnus.txt
mv cnus.txt input.txt
```
Then start training from the top-level directory using `python train.py --data_dir=./data/sherlock/`.
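Before training, the text has to be turned into integers: each distinct character gets an id, and the corpus becomes a sequence of those ids. A minimal sketch of that preprocessing step (hypothetical `build_vocab`/`encode` helpers, not the repo's actual loader):

```python
from collections import Counter

def build_vocab(text):
    """Map each distinct character to an integer id.

    Characters are ordered most-frequent first (ties broken
    alphabetically), a common convention in char-rnn-style loaders.
    Sketch only; not the repo's actual preprocessing.
    """
    counts = Counter(text)
    chars = sorted(counts, key=lambda c: (-counts[c], c))
    vocab = {c: i for i, c in enumerate(chars)}
    return chars, vocab

def encode(text, vocab):
    """Turn a string into a list of integer ids."""
    return [vocab[c] for c in text]
```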
A quick tip to concatenate many small disparate `.txt` files into one large training file: `ls *.txt | xargs -L 1 cat >> input.txt`.
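The `xargs` one-liner assumes a Unix shell. As a portable alternative, the same concatenation can be sketched in Python (hypothetical `concatenate_txt` helper):

```python
from pathlib import Path

def concatenate_txt(src_dir, out_name="input.txt"):
    """Concatenate every .txt file in src_dir into one training file."""
    src = Path(src_dir)
    out = src / out_name
    with out.open("w", encoding="utf-8") as f:
        # Sort for a deterministic concatenation order.
        for path in sorted(src.glob("*.txt")):
            if path.name == out_name:  # don't read the output into itself
                continue
            f.write(path.read_text(encoding="utf-8"))
    return out
```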
Tuning your models is still something of a dark art. In general, an LSTM cell will "remember" for durations longer than the training sequence length, but the effect falls off for longer character distances.
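To make the role of the sequence length concrete: training data for a character-level model is typically cut into fixed-length input windows, each paired with the same window shifted one character ahead, which is what the model learns to predict. A sketch of that slicing (hypothetical `make_sequences` helper, assuming a sequence-length parameter like the repo's `--seq_length` controls the window size):

```python
def make_sequences(ids, seq_length):
    """Split an encoded corpus into (input, target) training pairs.

    Each target is the input shifted one character ahead; longer
    seq_length gives the LSTM more context to carry across each window.
    """
    pairs = []
    # Stop early enough that the final target has its extra id available.
    for start in range(0, len(ids) - seq_length, seq_length):
        x = ids[start:start + seq_length]
        y = ids[start + 1:start + seq_length + 1]
        pairs.append((x, y))
    return pairs
```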
To visualize training progress, model graphs, and internal state histograms, fire up TensorBoard and point it at your log directory:

```bash
$ tensorboard --logdir=./logs/
```

Then open a browser to http://localhost:6006 (or whatever host and port you specified).
Please feel free to contribute.