Version update - 0.5.0
eliorc authored Nov 29, 2019
2 parents 4ab2576 + 0bc2e00 commit 8320f4b
Showing 12 changed files with 99 additions and 278 deletions.
4 changes: 4 additions & 0 deletions .circleci/config.yml
@@ -56,6 +56,10 @@ jobs:
          source venv/bin/activate
          pytest -s --cov=tavolo tests/
          codecov
+  test-3.8:
+    <<: *test-template
+    docker:
+      - image: circleci/python:3.8
  test-3.6:
    <<: *test-template
    docker:
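For readers unfamiliar with the `<<: *test-template` lines in the hunk above: CircleCI configs commonly deduplicate per-Python-version jobs with standard YAML anchors and merge keys. A minimal illustration of the pattern (hypothetical job body, not this repository's full config):

```yaml
test-template: &test-template   # anchor: the shared job body
  steps:
    - checkout
    - run: pytest -s --cov=tavolo tests/

test-3.8:
  <<: *test-template               # merge key: copy the anchored mapping...
  docker:
    - image: circleci/python:3.8   # ...then set job-specific keys
```

Adding a new tested Python version is then a four-line change, which is exactly what this commit does.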
17 changes: 8 additions & 9 deletions README.rst
@@ -6,10 +6,10 @@

------------

-.. image:: https://img.shields.io/badge/python-3.5%20%7C%203.6%20%7C%203.7-blue.svg
+.. image:: https://img.shields.io/badge/python-3.5%20%7C%203.6%20%7C%203.7%20%7C%203.8-blue.svg
:alt: Supported Python versions

-.. image:: https://img.shields.io/badge/tensorflow-2.0.0--rc0-orange.svg
+.. image:: https://img.shields.io/badge/tensorflow-2.0-orange.svg
:alt: Supported TensorFlow versions

.. image:: https://codecov.io/gh/eliorc/tavolo/branch/master/graph/badge.svg
@@ -27,8 +27,7 @@ Tavolo
| You see, the deep learning world is moving fast, and new ideas keep on coming.
| tavolo gathers implementations of these useful ideas from the community (by contribution, from `Kaggle`_ etc.)
and makes them accessible in a single PyPI hosted package that complements the `tf.keras`_ module.
-|
-| *Notice: tavolo is developed for TensorFlow 2.0 (right now on pre-release), most modules will work with earlier versions but some won't (like LayerNormalization)*

Documentation
-------------
@@ -41,8 +40,8 @@ Showcase
--------

| tavolo's API is straightforward and adopting its modules is as easy as it gets.
-| In tavolo, you'll find implementations for basic layers like `LayerNormalization`_ to complex modules like the Transformer's
-`MultiHeadedSelfAttention`_. You'll also find non-layer implementations that can ease development, like the `LearningRateFinder`_.
+| In tavolo, you'll find implementations for basic layers like `PositionalEncoding`_ to complex modules like the Transformer's
+`MultiHeadedAttention`_. You'll also find non-layer implementations that can ease development, like the `LearningRateFinder`_.
| For example, if we wanted to add a multi-headed attention mechanism to our model and look for the optimal learning rate, it would look something like:
.. code-block:: python3
@@ -52,7 +51,7 @@ Showcase
model = tf.keras.Sequential([
tf.keras.layers.Embedding(input_dim=vocab_size, output_dim=embedding_size, input_length=max_len),
-    tvl.seq2seq.MultiHeadedSelfAttention(n_heads=8), # <--- Add self attention
+    tvl.seq2seq.MultiHeadedAttention(n_heads=8), # <--- Add self attention
tf.keras.layers.LSTM(n_lstm_units, return_sequences=True),
tf.keras.layers.Dense(n_hidden_units, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')])
@@ -70,8 +69,8 @@ Showcase
.. _`TensorFlow`: https://www.tensorflow.org/
.. _`Kaggle`: https://www.kaggle.com
.. _`tf.keras`: https://www.tensorflow.org/guide/keras
-.. _`LayerNormalization`: https://tavolo.readthedocs.io/en/latest/normalization.html#layer-normalization
-.. _`MultiHeadedSelfAttention`: https://tavolo.readthedocs.io/en/latest/seq2seq.html#multi-headed-self-attention
+.. _`PositionalEncoding`: https://tavolo.readthedocs.io/en/latest/embeddings.html#module-embeddings.PositionalEncoding
+.. _`MultiHeadedAttention`: https://tavolo.readthedocs.io/en/latest/seq2seq.html#multi-headed-attention
.. _`LearningRateFinder`: https://tavolo.readthedocs.io/en/latest/learning.html#learning-rate-finder
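The README snippet in the hunk above builds the model but stops just short of the learning-rate search it mentions. `LearningRateFinder` is based on the LR range test: sweep the learning rate exponentially from small to large, take one optimization step per value, and record the loss. A dependency-free sketch of that idea on a toy quadratic problem (illustrative only — `lr_range_test` and the toy loss are hypothetical names, not tavolo's actual implementation):

```python
def lr_schedule(min_lr, max_lr, n_steps):
    """Exponentially spaced learning rates from min_lr to max_lr."""
    ratio = max_lr / min_lr
    return [min_lr * ratio ** (i / (n_steps - 1)) for i in range(n_steps)]

def lr_range_test(grad_fn, loss_fn, w0, min_lr=1e-4, max_lr=1.0, n_steps=50):
    """Take one SGD step per candidate learning rate and record the loss.

    A good learning rate is usually picked where the recorded loss is
    still falling steeply, just before it flattens out or blows up.
    """
    lrs = lr_schedule(min_lr, max_lr, n_steps)
    w, losses = w0, []
    for lr in lrs:
        w = w - lr * grad_fn(w)   # single SGD step at this learning rate
        losses.append(loss_fn(w))
    return lrs, losses

# Toy problem: minimize (w - 3)^2 starting from w = 0
loss_fn = lambda w: (w - 3.0) ** 2
grad_fn = lambda w: 2.0 * (w - 3.0)
lrs, losses = lr_range_test(grad_fn, loss_fn, w0=0.0)
```

On a real Keras model the same sweep is driven through training batches rather than closed-form gradients, but the loss-versus-learning-rate curve is read the same way.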


10 changes: 3 additions & 7 deletions docs/source/index.rst
@@ -9,16 +9,13 @@ Welcome to tavolo's documentation!
| tavolo gathers implementations of these useful ideas from the community (by contribution, from `Kaggle`_ etc.)
and makes them accessible in a single PyPI hosted package that complements the `tf.keras`_ module.
-.. warning::
-
-    tavolo is developed for TensorFlow 2.0 (right now on pre-release), most modules will work with earlier versions but some won't (like LayerNormalization)

Showcase
--------

| tavolo's API is straightforward and adopting its modules is as easy as it gets.
-| In tavolo, you'll find implementations for basic layers like :ref:`layer_normalization` to complex modules like the Transformer's
-:ref:`multi_headed_self_attention`. You'll also find non-layer implementations that can ease development, like the :ref:`learning_rate_finder`.
+| In tavolo, you'll find implementations for basic layers like :ref:`positional_encoding` to complex modules like the Transformer's
+:ref:`multi_headed_attention`. You'll also find non-layer implementations that can ease development, like the :ref:`learning_rate_finder`.
| For example, if we wanted to add a multi-headed attention mechanism to our model and look for the optimal learning rate, it would look something like:
.. code-block:: python3
@@ -28,7 +25,7 @@ Showcase
model = tf.keras.Sequential([
tf.keras.layers.Embedding(input_dim=vocab_size, output_dim=embedding_size, input_length=max_len),
-    tvl.seq2seq.MultiHeadedSelfAttention(n_heads=8), # <--- Add self attention
+    tvl.seq2seq.MultiHeadedAttention(n_heads=8), # <--- Add self attention
tf.keras.layers.LSTM(n_lstm_units, return_sequences=True),
tf.keras.layers.Dense(n_hidden_units, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')])
@@ -59,7 +56,6 @@ Showcase

embeddings
learning
-   normalization
seq2seq
seq2vec

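With `normalization` dropped from the docs, `PositionalEncoding` becomes the index page's "basic layer" example. For context, sinusoidal positional encoding (from "Attention Is All You Need") adds fixed sin/cos features per sequence position; a dependency-free sketch of the formula (illustrative only, not tavolo's TensorFlow implementation):

```python
import math

def positional_encoding(max_len, d_model):
    """Fixed sinusoidal positional encodings (Vaswani et al., 2017).

    Returns a max_len x d_model nested list where even columns hold
    sin(pos / 10000**(i / d_model)) and odd columns the matching cos.
    """
    pe = [[0.0] * d_model for _ in range(max_len)]
    for pos in range(max_len):
        for i in range(0, d_model, 2):
            angle = pos / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(angle)
            if i + 1 < d_model:
                pe[pos][i + 1] = math.cos(angle)
    return pe

# One encoding row per position, added elementwise to token embeddings
pe = positional_encoding(max_len=50, d_model=16)
```

Because the encodings are deterministic functions of the position, they add no trainable parameters to the model.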
18 changes: 0 additions & 18 deletions docs/source/normalization.rst

This file was deleted.

6 changes: 3 additions & 3 deletions docs/source/seq2seq.rst
@@ -10,9 +10,9 @@ Layers mapping sequences to sequences

-------

-.. _`multi_headed_self_attention`:
+.. _`multi_headed_attention`:

-``MultiHeadedSelfAttention``
+``MultiHeadedAttention``
++++++++++++++++++++++++++++

-.. automodule:: seq2seq.MultiHeadedSelfAttention
+.. automodule:: seq2seq.MultiHeadedAttention
2 changes: 1 addition & 1 deletion setup.py
@@ -1,6 +1,6 @@
from setuptools import setup

-VERSION = '0.4.1'
+VERSION = '0.5.0'

setup(name='tavolo',
version=VERSION,
3 changes: 1 addition & 2 deletions tavolo/__init__.py
@@ -1,8 +1,7 @@
__name__ = 'tavolo'
-__version__ = '0.4.1'
+__version__ = '0.5.0'

from . import embeddings
-from . import normalization
from . import seq2vec
from . import seq2seq
from . import learning
98 changes: 0 additions & 98 deletions tavolo/normalization.py

This file was deleted.

