Merge pull request #62 from fractalego/development
Development
fractalego authored Aug 26, 2023
2 parents 683626d + c0e5e8c commit 26846b1
Showing 231 changed files with 5,228 additions and 2,271 deletions.
13 changes: 5 additions & 8 deletions README.md
@@ -17,7 +17,7 @@ While it is ready to play with, it might not be ready for production depending on y
Installation
============

In this version, WAFL is built to run as a two-part system.
In this version, WAFL is a two-part system.
Both can be installed on the same machine.

![The two parts of WAFL](images/two-parts.png)
@@ -65,29 +65,26 @@ Running WAFL
This document contains a few examples of how to use the `wafl` CLI.
There are four modes in which to run the system

![The WAFL CLI commands](images/wafl-commands.png)


## wafl run-audio
## $ wafl run-audio

This is the main mode of operation. It will run the system in a loop, waiting for the user to speak a command.
The activation word is the name defined in config.json.
The default name is "computer", but you can change it to whatever you want.
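
As a sketch, you can check and change the activation name in `config.json`; the key name shown below (`waking_up_word`) is an assumption and may differ in your version, so check your generated file.

```bash
# Hypothetical key name -- inspect your generated config.json for the actual field.
$ grep -i waking config.json
  "waking_up_word": "computer",
# After editing the value, restart the loop so the new name is picked up.
$ wafl run-audio
```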


## wafl run-server
## $ wafl run-server

It runs a local web server that listens for HTTP requests on port 8889.
The server will act as a chatbot, executing commands and returning the result as defined in the rules.


## wafl run-cli
## $ wafl run-cli

This command works like the run-server command, but it listens for commands on the command line.
It does not run a webserver and is useful for testing purposes.


## wafl run-tests
## $ wafl run-tests

This command will run all the tests defined in the file testcases.txt.

5 changes: 5 additions & 0 deletions changelog.txt
@@ -0,0 +1,5 @@
### v0.0.70

* Added the keyword RETRIEVE to get a list of relevant items from the knowledge base
* On-the-spot rule generation
* Mapping lists onto queries
Binary file removed documentation/build/doctrees/chitchat.doctree
Binary file modified documentation/build/doctrees/directory_structure.doctree
Binary file modified documentation/build/doctrees/environment.pickle
Binary file modified documentation/build/doctrees/examples.doctree
Binary file modified documentation/build/doctrees/index.doctree
Binary file modified documentation/build/doctrees/installation.doctree
Binary file modified documentation/build/doctrees/introduction.doctree
Binary file modified documentation/build/doctrees/license.doctree
Binary file modified documentation/build/doctrees/rules.doctree
Binary file modified documentation/build/doctrees/rules_and_backtracking.doctree
Binary file modified documentation/build/doctrees/running_WAFL.doctree
Binary file modified documentation/build/doctrees/wafl_init.doctree
Binary file removed documentation/build/html/_images/wafl-commands.png
8 changes: 0 additions & 8 deletions documentation/build/html/_sources/chitchat.rst.txt

This file was deleted.

@@ -39,5 +39,5 @@ Only the rules and facts that are included will be part of the inference.
For example, the keyword `#using facts` within greetings/ (2) will not include the folder above it.
Inference in a subfolder is limited to the rules and facts that are part of that folder or below it.

For more information, you can have a look at the (still early) project in
For a more complete example, you can have a look at the (still early) project in
`wafl_home <https://github.com/fractalego/wafl_home>`_.
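
As an illustrative sketch only (the file and folder names below are assumptions based on the examples above, not a required layout), a project with a `greetings/` subfolder might look like this:

.. code-block:: bash

    $ tree .
    .
    ├── rules.wafl        # top-level rules; inference here can use everything below
    └── greetings
        ├── rules.wafl    # "#using facts" here only sees greetings/ and its subfolders
        └── facts
            └── rules.wafl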
3 changes: 0 additions & 3 deletions documentation/build/html/_sources/examples.rst.txt
@@ -5,7 +5,4 @@ Examples
:maxdepth: 2

wafl_init
chitchat
rules
rules_and_backtracking
directory_structure
6 changes: 4 additions & 2 deletions documentation/build/html/_sources/index.rst.txt
@@ -3,8 +3,8 @@
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to WAFL's documentation!
================================
Welcome to WAFL's 0.0.45 documentation!
=======================================

.. toctree::
:maxdepth: 3
@@ -13,6 +13,8 @@ Welcome to WAFL's documentation!
introduction
installation
running_WAFL
query_processing_pipeline
rules
examples
license

19 changes: 11 additions & 8 deletions documentation/build/html/_sources/installation.rst.txt
@@ -9,21 +9,21 @@ Both can be installed on the same machine.
:align: center

Interface side
---------
--------------

The first part is local to your machine and needs to have access to a microphone and speaker.
To install it, run the following commands:

.. code-block:: bash

    sudo apt-get install portaudio19-dev ffmpeg
    pip install wafl
    $ sudo apt-get install portaudio19-dev ffmpeg
    $ pip install wafl

After installing the requirements, you can initialize the interface by running the following command:

.. code-block:: bash

    wafl init
    $ wafl init

which creates a `config.json` file that you can edit to change the default settings.
A standard rule file is also created as `wafl.rules`.
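
Putting the interface-side steps together, a minimal session might look like the sketch below (this assumes `wafl init` is run in an empty directory and that only the files named in this document are created; the actual set of generated files may vary between versions):

.. code-block:: bash

    $ sudo apt-get install portaudio19-dev ffmpeg
    $ pip install wafl
    $ wafl init
    $ ls
    config.json  wafl.rules    # generated defaults; edit config.json to change settings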
@@ -33,13 +33,16 @@ Please see the examples in the following chapters.
LLM side (needs a GPU)
----------------------

The second part is a server that runs on a machine with a public IP address.
This last machine will need to have a GPU to run the Large Language Model at a convenient speed.
This part can be run using a docker image by running the script
The second part is a server that runs on a machine accessible from the interface side.
The initial configuration is for a local deployment of language models.
No action is needed to run WAFL if you want to run it as a local instance.

However, a multi-user setup will benefit from a dedicated server.
In this case, a docker image can be used:

.. code-block:: bash

    docker run -p8080:8080 --env NVIDIA_DISABLE_REQUIRE=1 --gpus all fractalego/wafl-llm:latest
    $ docker run -p8080:8080 --env NVIDIA_DISABLE_REQUIRE=1 --gpus all fractalego/wafl-llm:latest

The interface side has a `config.json` file that needs to be filled with the IP address of the LLM side.
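
As a rough sketch of the client-side configuration (the key names `model_host` and `model_port` are hypothetical placeholders, not the actual schema; check the generated `config.json` for the real field names):

.. code-block:: bash

    # On the interface machine, point the configuration at the GPU server.
    # 192.0.2.10 is a placeholder documentation address.
    $ grep -i host config.json
      "model_host": "192.0.2.10",
      "model_port": 8080,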
6 changes: 4 additions & 2 deletions documentation/build/html/_sources/introduction.rst.txt
@@ -1,8 +1,10 @@
Introduction
============

WAFL is a framework for home assistants.
It is designed to combine Large Language Models and rules to create a predictable behavior.
WAFL is a framework for personal agents.
It integrates large language models, speech recognition, and text-to-speech.

This framework combines Large Language Models and rules to create a predictable behavior.
Specifically, instead of organising the work of an LLM into a chain of thoughts,
WAFL intends to organise its behavior into inference trees.

182 changes: 6 additions & 176 deletions documentation/build/html/_sources/rules.rst.txt
@@ -1,181 +1,11 @@
Rules
=====

The file rules.wafl contains the rules used by the system.
Each rule is in the following format
Examples
========

.. code-block:: text

    Trigger condition
        action 1
        action 2
        action 3
        ...

Notice that the trigger condition has no indentation, while the actions are indented by any number of spaces.
Each action returns a true or false value.
If that value is false, the rule stops executing and the next rule is triggered.
A demo of the rules can be found in the repository `wafl_home <https://github.com/fractalego/wafl_home>`_.

A rule ends when the next rule is encountered (a new trigger condition is found).
The trigger condition can be a single fact. For example:

.. code-block:: text

    This bot's name is "computer"

There are two actors in the system: "the user" and "the bot".
One simple rule example can be:

.. code-block:: text

    The user asks what is this bot's name
        SAY Hello, my name is Computer

The rule above will be triggered when the user asks what is this bot's name.
There are 7 types of actions:
**SAY**,
**REMEMBER**,
**asking a question**,
**generate a text**,
**triggering of another rule**,
**code execution**,
**entailment**.


SAY command
-----------

This command will make the bot say something.
For example, the rule above will make the bot say "Hello, my name is Computer".

REMEMBER command
----------------

This command will make the bot remember something.
For example, the rule below will make the bot remember the user's name.

.. code-block:: text

    The user says their name is John
        REMEMBER The user's name is John

Asking a question
-----------------

Typing a question (with or without a question mark) will return a variable.
This variable can be used later in the rule.
For example, the rule below will make the bot ask the user's name.

.. code-block:: text

    The user says their name
        name = what is the user's name?
        REMEMBER The user's name is {name}

Yes/No questions return a truth condition.
For example, by using the rule below, the bot will ask the user if they want to remember their name.

.. code-block:: text

    The user says their name
        name = what is the user's name?
        Do you want to remember the user's name?
        REMEMBER The user's name is {name}

If the user says "no", the rule will stop executing and the REMEMBER command will never be used.


Generate a text
----------------

A text can be generated in a similar fashion as when asking questions:

.. code-block:: text

    The user says their name
        name = what is the user's name?
        italian_name = the italian version of {name} is
        SAY The italian version of {name} is {italian_name}

The text will be generated by the line "the italian version of {name}" according to the LLM model.
The only difference from asking a question is that the text on the right-hand side of `=` is a statement
and not a question.

Triggering of another rule
--------------------------

A rule can trigger another rule as follows:

.. code-block:: text

    The user says their name
        name = what is the user's name?
        the name of the user is {name}

    The name of the user is John
        SAY Hello John!

In this case, the second rule is triggered if the user says their name is John.

Code execution
--------------

The code execution is done by using Python syntax.
A function defined in the file `functions.py` can be called from the rule.


For example, the file `rules.wafl` contains the following rule:

.. code-block:: text

    The user says their name
        name = what is the user's name?
        greet({name})

and the file `functions.py` contains the following function:

.. code-block:: python

    def greet(name):
        print("Hello", name)

When the user says their name, the bot will greet the user by calling the function greet with the user's name as an argument.
However, `print()` does not activate the SAY command.
From the `functions.py` file, a rule can be triggered by using the syntax `"% ... %"`:

.. code-block:: python

    def greet(name):
        "% SAY Hello %"
        f"% SAY your name is {name} %"

The first line will make the bot say "Hello". The second line will make the bot say "your name is John" if the user's name is John.

The syntax `"% ... %"`, can be used to trigger a rule, to generate a text, to ask a question, to remember something, or any other action available in the rules file.
For example the prior function can be written as follows

.. code-block:: python
def greet(name):
"% SAY Hello %"
"% SAY your name is {name} %"
date = "% what is the date today? %"
"% SAY today is {date} %"
while "% Do you want to continue? %":
"% SAY I am happy to continue %"
Entailment
----------

The entailment is done by using the `:-` operator. If RHS entails LHS, then LHS :- RHS is true; otherwise it is false.
For example, the rule below will stop at the second line if the user's name is not John.

.. code-block:: text

    The user says their name
        name = what is the user's name?
        The user's name is John :- The user's name is {name}
        SAY Your name is John!

.. toctree::
    :maxdepth: 2

    writing_the_rules
    rules_and_backtracking
24 changes: 9 additions & 15 deletions documentation/build/html/_sources/running_WAFL.rst.txt
@@ -3,36 +3,30 @@ Running WAFL
This document contains a few examples of how to use the `wafl` CLI.
There are four modes in which to run the system

$ wafl run-audio
----------------

.. image:: _static/wafl-commands.png
    :alt: Basic CLI commands
    :align: center


wafl run-audio
--------------

This is the main mode of operation. It will run the system in a loop, waiting for the user to speak a command.
It will run the system in a loop, waiting for the user to speak a command.
The activation word is the name defined in config.json.
The default name is "computer", but you can change it to whatever you want.


wafl run-server
---------------
$ wafl run-server
-----------------

It runs a local web server that listens for HTTP requests on port 8889.
The server will act as a chatbot, executing commands and returning the result as defined in the rules.
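
For a quick smoke test, assuming the chatbot web page is served from the root path (this document does not specify the exact routes):

.. code-block:: bash

    $ wafl run-server &                        # start the server in the background
    $ curl -s http://localhost:8889/ | head    # should print the start of the chatbot web page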


wafl run-cli
------------
$ wafl run-cli
--------------

This command works like the run-server command, but it listens for commands on the command line.
It does not run a webserver and is useful for testing purposes.


wafl run-tests
--------------
$ wafl run-tests
----------------

This command will run all the tests defined in the file testcases.txt.
