Project Architecture Overview
In this system, the architecture revolves around a rule-based pattern matching engine that leverages MeTTa's meta-programming capabilities. The system is designed to evaluate expressions dynamically by matching patterns against defined rules. The rules dictate how specific operations, like arithmetic, are handled within the system.
Here’s a breakdown of the architecture and flow of how rule-based arithmetic, such as `(add $a $b)`, is processed and evaluated in the system:
- Atomspace (Blackboard):
  - Role: Atomspace serves as the knowledge base and storage mechanism for all code modules (rules, operations, and logic). In this context, it holds the rules for pattern matching and transformations, such as `(= (add $a $b) (+ $a $b))`.
  - Storage: Rules and expressions are stored as atoms in Atomspace. These atoms represent the symbolic expressions and patterns used in the system. For example, the rule `(= (add $a $b) (+ $a $b))` would be represented as atoms that link the pattern to its transformation.
- Rule-Based Pattern Matching Engine:
  - Role: This engine is the core processing unit that evaluates MeTTa expressions. It works by matching input expressions (e.g., `(add 5 10)`) against predefined rules stored in Atomspace.
  - Functionality: When an expression is submitted for evaluation, the engine:
    - Searches Atomspace for matching patterns.
    - Transforms the expression according to the matched rule (e.g., `(add $a $b)` becomes `(+ $a $b)`).
    - Evaluates the transformed expression to produce a result.
  - Dynamic Behavior: This component can create and modify rules at runtime, allowing for flexible and adaptive programming.
- Dynamic Evaluation (MeTTa Interpreter):
  - Role: The interpreter executes the transformed expressions after the rule-based engine has processed them, recursively evaluating them (e.g., `(+ 5 10)`) to calculate their values.
  - Interaction with Atomspace: The interpreter may retrieve additional rules or definitions from Atomspace during evaluation if needed (e.g., for recursive function calls or additional pattern matches).
- Module Manager:
  - Role: The module manager oversees the organization and management of code modules within Atomspace. These modules may contain arithmetic rules, logical rules, or even higher-level control structures. The manager is responsible for loading, linking, and organizing these modules in Atomspace.
  - Interaction with Subprocesses: In more complex scenarios, the module manager may also delegate certain tasks to subprocesses (e.g., Python or Scheme) when MeTTa modules define external dependencies or computations that need to be performed in other languages.
- Subprocess Manager (Optional):
  - Role: While not necessary for basic rule-based arithmetic, in more complex systems the subprocess manager handles interaction with external subprocesses (e.g., calling a Python subprocess for specialized computations). In the context of rule-based arithmetic, it would be activated when a MeTTa rule specifies that external code needs to be executed.
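To make the delegation idea concrete, here is a minimal Python sketch (an illustration only, not the actual mettalog mechanism) in which a computation is handed to an external Python subprocess and its printed result is captured:

```python
import subprocess
import sys

def delegate_to_python(expression: str) -> str:
    """Run a computation in an external Python subprocess and return its
    printed result. A toy sketch of subprocess delegation; the real system's
    protocol and entry points differ. `expression` is assumed trusted, since
    it is executed verbatim in the child process."""
    completed = subprocess.run(
        [sys.executable, "-c", f"print({expression})"],
        capture_output=True, text=True, check=True,
    )
    return completed.stdout.strip()

# After translation, a rule like (add 5 10) could be delegated as:
print(delegate_to_python("5 + 10"))  # -> 15
```

In a real system the child process would typically stay alive and exchange many requests over a pipe or socket, rather than being spawned per expression.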
- Data Conversion Layer:
  - Role: If subprocesses are used, the data conversion layer translates internal MeTTa expressions into forms that other subprocesses (like Python) can understand, and vice versa. For example, a rule might specify that a computation should be performed by an external Python subprocess, in which case the system converts the MeTTa representation into Python code, executes it, and converts the results back into MeTTa atoms.
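As a rough sketch of what such a conversion layer does (a simplified illustration, not the project's real converter), the following Python code translates a MeTTa-style s-expression string into nested Python lists and back:

```python
def parse_sexpr(text: str):
    """Parse an s-expression string into nested Python lists.
    Simplified sketch: real MeTTa syntax has more token types."""
    tokens = text.replace("(", " ( ").replace(")", " ) ").split()

    def read(pos):
        token = tokens[pos]
        if token == "(":
            items, pos = [], pos + 1
            while tokens[pos] != ")":
                item, pos = read(pos)
                items.append(item)
            return items, pos + 1  # skip the closing ")"
        # Leaf: numbers become ints, everything else stays a symbol string.
        return (int(token) if token.lstrip("-").isdigit() else token), pos + 1

    expr, _ = read(0)
    return expr

def to_sexpr(expr) -> str:
    """Convert the Python representation back to s-expression text."""
    if isinstance(expr, list):
        return "(" + " ".join(to_sexpr(e) for e in expr) + ")"
    return str(expr)

print(parse_sexpr("(add 5 10)"))  # -> ['add', 5, 10]
print(to_sexpr(['+', 5, 10]))     # -> (+ 5 10)
```

The same round trip (symbolic text on one side, native data structures on the other) is what lets a Python subprocess compute on data that originated as MeTTa atoms.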
Let’s go through the system flow when an expression like `(add 5 10)` is evaluated:
- Input Expression:
  - The user inputs the expression `(add 5 10)` for evaluation. This expression is represented internally as atoms in Atomspace.
- Pattern Matching:
  - The pattern matching engine searches Atomspace for a rule that matches the structure of the expression. It finds the rule `(= (add $a $b) (+ $a $b))`.
- Transformation:
  - The engine transforms the expression `(add 5 10)` by substituting `5` for `$a` and `10` for `$b`, resulting in the new expression `(+ 5 10)`.
- Evaluation:
  - The transformed expression `(+ 5 10)` is passed to the dynamic evaluation component (the MeTTa interpreter), which evaluates it and calculates the result: `15`.
- Return Result:
  - The result `15` is returned as the output of the evaluation. This result may also be stored back into Atomspace for further use by other components or subsequent computations.
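The steps above can be sketched in a few lines of Python. This toy matcher is only an illustration of the match-transform-evaluate mechanism, not mettalog's implementation; the rule table mirrors `(= (add $a $b) (+ $a $b))`:

```python
import operator

# One rewrite rule: pattern -> template, mirroring (= (add $a $b) (+ $a $b)).
RULES = [(['add', '$a', '$b'], ['+', '$a', '$b'])]
PRIMITIVES = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def match(pattern, expr, bindings):
    """Bind $-variables in `pattern` to the corresponding parts of `expr`.
    Returns the bindings dict on success, None on failure."""
    if isinstance(pattern, str) and pattern.startswith('$'):
        bindings[pattern] = expr
        return bindings
    if isinstance(pattern, list) and isinstance(expr, list) and len(pattern) == len(expr):
        for p, e in zip(pattern, expr):
            if match(p, e, bindings) is None:
                return None
        return bindings
    return bindings if pattern == expr else None

def substitute(template, bindings):
    """Replace bound variables in the rule's right-hand side."""
    if isinstance(template, list):
        return [substitute(t, bindings) for t in template]
    return bindings.get(template, template)

def evaluate(expr):
    """Rewrite via the first matching rule, then reduce primitives."""
    if not isinstance(expr, list):
        return expr
    for pattern, template in RULES:
        bindings = match(pattern, expr, {})
        if bindings is not None:
            return evaluate(substitute(template, bindings))
    head, *args = expr
    return PRIMITIVES[head](*[evaluate(a) for a in args])

print(evaluate(['add', 5, 10]))  # -> 15
```

Because `evaluate` recurses on its arguments, nested inputs such as `['add', ['add', 1, 2], 4]` reduce the same way, echoing the interpreter's recursive evaluation described above.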
+-----------------+
| Atomspace | <- Stores rules, expressions, and evaluation results
| (Blackboard for |
| Code and Rules) |
+-----------------+
|
v
+-------------------------+
| Rule-Based Pattern | <- Matches and transforms input expressions
| Matching Engine |
+-------------------------+
|
v
+-------------------------+
| Dynamic Evaluation | <- Evaluates transformed expressions (e.g., (+ 5 10))
| (MeTTa Interpreter) |
+-------------------------+
|
v
+-------------------------+
| Execution Results | <- Returns the evaluated result (e.g., 15)
| |
+-------------------------+
- Atomspace as Central Knowledge Base:
  - Atomspace holds not only code (e.g., arithmetic rules) but also the logic for how expressions are transformed and evaluated. This makes the system highly flexible, as the rules and logic can be updated at runtime without changing the underlying system architecture.
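That runtime flexibility can be illustrated with a toy rule registry in Python (a hypothetical sketch, not mettalog's API), where new rules are defined while the program is running:

```python
# Hypothetical rule registry: operator name -> rewrite function.
rules = {}

def define(name, fn):
    """Register (or replace) a rule at runtime; no restart required."""
    rules[name] = fn

def evaluate(expr):
    """Reduce a tuple expression by looking its head up in the registry."""
    if isinstance(expr, tuple):
        head, *args = expr
        return rules[head](*[evaluate(a) for a in args])
    return expr

define('add', lambda a, b: a + b)
print(evaluate(('add', 5, 10)))   # -> 15

# Later, while running, the knowledge base is extended with a new rule
# that is itself defined in terms of an existing one:
define('double', lambda x: evaluate(('add', x, x)))
print(evaluate(('double', 7)))    # -> 14
```

In the real system the registry is Atomspace itself, so any component that reads it sees new or modified rules immediately.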
- Rule-Based Pattern Matching Engine:
  - The engine is responsible for transforming expressions based on the rules defined in Atomspace. For arithmetic, it matches expressions like `(add $a $b)` and replaces them with the corresponding mathematical operations.
- Dynamic Evaluation:
  - Once the expression has been transformed, the MeTTa interpreter evaluates it recursively. The interpreter’s role is to carry out the final computation, such as resolving `(+ 5 10)` to `15`.
- Module and Subprocess Management:
  - Although not needed for simple arithmetic, in a more complex system the module manager oversees the organization of different rule sets and subprocesses. When rules or code modules specify that external systems (like Python) need to be invoked, the subprocess manager handles the orchestration.
- Data Flow:
  - The data flow follows a clear path: from the initial expression, through rule transformation and evaluation, to the output result. Atomspace plays a crucial role by allowing different components of the system to access and modify shared information, ensuring consistency across the system.
In this architecture:
- MeTTa enables meta-programming through dynamic pattern matching and rule-based transformations.
- Atomspace serves as the central blackboard where rules and expressions are stored and accessed by various components.
- The Rule-Based Pattern Matching Engine handles the transformation of input expressions based on predefined rules.
- The Dynamic Evaluation Component (MeTTa Interpreter) evaluates the transformed expressions and computes the results.
- The architecture is extensible to handle more complex interactions, such as integrating external subprocesses (like Python or Scheme) for specialized computations, managed by a module manager and subprocess manager.
To better understand how these components interact within the MeTTa system, here's a high-level flow of operations:
- Startup and Initialization
  - The system begins by executing `init.default.metta`, which sets up the initial environment.
  - Necessary modules and core libraries (`corelib.metta`, `metta_corelib.pl`) are loaded using `metta_loader.pl`.
- User Interaction via REPL or Script Execution
  - If the user engages with MeTTa interactively, `metta_repl.pl` and `repl.default.metta` handle inputs and outputs.
  - For script execution, the interpreter processes MeTTa files directly.
- Parsing Source Code
  - Source code is fed into `metta_parser.pl`, which parses it into internal representations.
  - `metta_convert.pl` may further process these representations for compatibility with the evaluator or compiler.
- Compilation or Interpretation
  - Compilation Path: `metta_compiler.pl` compiles the code, using templates from `metta_comp_templates.pl`.
  - Interpretation Path: `metta_interp.pl` uses `metta_eval.pl` to evaluate the parsed code directly.
- Runtime Execution
  - The runtime environment (`metta_space.pl`, `metta_subst.pl`, `metta_types.pl`) manages variable scopes, type checking, and pattern matching during execution.
  - Concurrency support is provided by `metta_threads.pl` if needed.
- Utilizing Core Libraries and Extensions
  - Core functions from `corelib.metta` and `metta_corelib.pl` are available throughout execution.
  - Logic programming features are accessed via `metta_ontology.pfc.pl`, `metta_pfc_base.pl`, and `metta_pfc_support.pl`.
  - Python interoperability is handled by the `metta_python` suite of files.
- Output and Feedback
  - Results and outputs are formatted by `metta_printer.pl` and displayed to the user.
  - Debugging information (if enabled) is provided by `metta_debug.pl`.
- Persistence and State Management
  - `metta_persists.pl` allows program states to be saved and loaded as needed.
- Server Operations
  - If running in server mode, `metta_server.pl` manages client connections and request handling.
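The interactive path through these steps (read input, parse, evaluate, print) amounts to a read-eval-print loop. The Python below is only a shape sketch of that loop, not how `metta_repl.pl` works internally; the environment of built-in operators is hypothetical:

```python
def parse(line):
    """Tokenize one parenthesized expression into a nested list (parsing step)."""
    tokens = line.replace("(", " ( ").replace(")", " ) ").split()
    stack, current = [], []
    for token in tokens:
        if token == "(":
            stack.append(current)
            current = []
        elif token == ")":
            finished, current = current, stack.pop()
            current.append(finished)
        else:
            current.append(int(token) if token.lstrip("-").isdigit() else token)
    return current[0]

def evaluate(expr, env):
    """Evaluate a parsed expression against the environment (interpretation step)."""
    if not isinstance(expr, list):
        return expr
    head, *args = expr
    return env[head](*[evaluate(a, env) for a in args])

def repl_once(line, env):
    """One read-eval-print cycle; a driver would loop over input lines."""
    return str(evaluate(parse(line), env))

env = {'+': lambda a, b: a + b, '*': lambda a, b: a * b}
print(repl_once("(+ 2 (* 3 4))", env))  # -> 14
```

The real REPL additionally handles loading files, error reporting via `metta_debug.pl`, and output formatting via `metta_printer.pl`, none of which this sketch attempts.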
Below is a textual representation of the MeTTa system architecture, showing the relationships between components:
[ User Interface ]
|
V
[ REPL (metta_repl.pl) ] -- [ REPL Script (repl.default.metta) ]
|
V
[ Parser (metta_parser.pl) ] -- [ Converter (metta_convert.pl) ]
|
V
[ Interpreter (metta_interp.pl) ] -- [ Compiler (metta_compiler.pl) ]
| |
V V
[ Evaluator (metta_eval.pl) ] [ Compilation Templates (metta_comp_templates.pl) ]
|
V
[ Runtime Environment ]
|
|-- [ Execution Space (metta_space.pl) ]
|-- [ Substitution (metta_subst.pl) ]
|-- [ Type System (metta_types.pl) ]
|-- [ Concurrency (metta_threads.pl) ]
|
[ Core Libraries ]
|
|-- [ corelib.metta ]
|-- [ metta_corelib.pl ]
|
[ Extensions and Integrations ]
|
|-- [ Logic Programming ]
| |-- [ metta_ontology.pfc.pl ]
| |-- [ metta_pfc_base.pl ]
| |-- [ metta_pfc_support.pl ]
|
|-- [ Python Integration ]
|-- [ metta_python.pl ]
|-- [ metta_python_override.py ]
|-- [ metta_python_patcher.py ]
|-- [ metta_python_proxy.py ]
The MeTTa language system is structured to facilitate ease of use, extensibility, and powerful features for logical reasoning and interoperability. Here's a brief recap:
- Front-End: Parses and prepares code for execution.
- Core Libraries: Provide essential functions and definitions.
- Interpreter/Compiler: Execute or compile code for running programs.
- Runtime Environment: Manages execution contexts and supports runtime features.
- REPL: Offers an interactive environment for code execution.
- Debugging/Testing: Tools for ensuring code correctness.
- Extensions: Enhances MeTTa with logic programming and Python interoperability.
- Server Support: Allows MeTTa to function as a networked service.
- Utilities: Additional support functions and compatibility layers.
- Initialization: Handles startup routines and environment setup.
Note: The descriptions and architecture are based on standard naming conventions and typical functionalities associated with similar files in programming language implementations. For precise implementation details, please refer to the actual source code and official documentation.
The other directories and files in the project provide additional functionality and support.
`src`: Main Source Code of the Project
`src/ext`: External Modules Interfacing with External Libraries
These modules provide integration with external libraries and tools, extending the capabilities of the Metta system.
- `bhv_binding.py`: Implements behavior bindings, potentially for integrating AI behavior models.
- `dasgate.py`: Gateway to the Distributed Annotation System (DAS), used for accessing and integrating biological data.
- `kwargsme.py`: Experiments with PyTorch keyword arguments, potentially for machine learning applications.
- `metta_repl.py`: Interactive REPL for Metta written in Python, providing an alternative interface.
- `neurospace.py`: Defines neural spaces for simulations, aiding in neural network modeling.
- `numme.py`: Numerical experiments using NumPy, supporting mathematical computations.
- `parse_torch_func_signatures.py`: Parses PyTorch function signatures, facilitating integration with PyTorch.
- `parsing_exceptions.py`: Handles parsing exceptions, improving error handling during code parsing.
- `r.py`: Utility script for resolution tasks, possibly related to logic programming.
- `resolve.py`: Implements resolution logic, essential for logical inference mechanisms.
- `sql_space.py`: Provides SQL space integration, enabling database interactions.
- `tm_test.py`: Test script for PyTorch models, useful for machine learning development.
- `torchme.py`: Script interfacing with PyTorch, aiding in deep learning tasks.
`src/main`: Contains the Last Known Good Version of the Source Code
This directory holds a stable version of the source code, ensuring that a working copy is available for reference or rollback.
`src/metta_jupyter_kernel.py`: Implements a Jupyter Kernel for the Metta Language
Allows Metta code to be executed within Jupyter notebooks, facilitating interactive development and data analysis.
`src/mettalog`: Python Metta Logic Library for Logical Reasoning
This library provides logical reasoning capabilities within the Metta environment.
- `logic.py`: Core logic functions that implement fundamental logical operations.
- `reasoning.py`: Reasoning algorithms that enable inference and deduction.
- `utils.py`: Utility functions supporting the logic library.
`docs`: Contains Project Documentation, Including User Guides and API References
Contents
- `user_guide.md`: Comprehensive guide for users, covering installation, usage, and best practices.
- `api_reference.md`: Detailed API documentation, outlining functions, classes, and modules.
- `developer_guide.md`: Guidelines for developers contributing to the project, including coding standards and development workflows.
`setup.py`: Build Script Specifying How to Package and Install Project Components
This script ensures that the project components are correctly packaged and installed, managing dependencies and configurations.
`notebooks`: Contains Jupyter Notebooks for Experimentation and Data Analysis
Contents
- `experiment1.ipynb`: Analysis of dataset X, showcasing data processing capabilities.
- `visualization.ipynb`: Data visualization examples, illustrating how to create visual representations of data.
- `from_das`: Demonstrations of data retrieval from the Distributed Annotation System (DAS).
  - `das_example.ipynb`: Using DAS data, demonstrating how to access and utilize biological data.
- `images`: Visual assets used in notebooks, such as charts and diagrams.
`scripts`: Utility Scripts for Data Processing, Testing, and Environment Setup
Files
- `Defragmenter.py`: Defragments data files for optimization, improving performance.
- `generate_allure_environment.py`: Generates configurations for Allure reports, aiding in test reporting.
- `generate_allure_executor.py`: Creates executor configurations for Allure, facilitating test execution tracking.
- `into_junit.py`: Converts test results into JUnit format, standardizing test reporting.
`library`: Reusable Code Modules and Libraries
`graphml`: Scripts Related to the GraphML Format
- `ext_loader_graphml.pl`: Parses GraphML files, enabling the import of graph data.
- `tests`: Test cases for the GraphML loader, ensuring correct functionality.
`genome`: Prolog Scripts for Genomic Data Processing
Modules focused on processing genomic data, integrating with bioinformatics databases.
- `chado_xml_loader.pl`: Loads genomic data from Chado XML files, used in biological databases.
- `das_classic_loader.pl`: Loads data using the DAS classic protocol, facilitating access to genomic annotations.
- `ext_loader_csv.pl`: Loads data from CSV files, a common format for biological data.
- `ext_loader_fasta.pl`: Loads sequences from FASTA files, essential for genetic sequence data.
- `ext_loader_gff.pl`: Parses GFF files, another format for genomic annotations.
- `ext_loader_json.pl`: Processes genomic data in JSON format.
- `ext_loader_obo.pl`: Handles OBO files, used for ontologies in biology.
- `ext_loader_tsv.pl`: Loads data from TSV files.
- `flybase_convert.pl`: Converts FlyBase data, a database of Drosophila genetics.
- `flybase_induced_types.pl`: Generates induced types from FlyBase data, aiding in data classification.
- `flybase_learn.pl`: Implements learning algorithms using FlyBase data.
- `flybase_loader.pl`: Loads FlyBase data into the system.
- `flybase_main.pl`: Main entry point for processing FlyBase data.
- `flybase_scheme.pl`: Defines data structures for FlyBase data.
`tests`: Various Test Suites to Ensure Correctness and Stability
Test Suites
- `baseline_compat`: Tests for baseline compatibility, ensuring fundamental functionalities work as expected.
- `compiler_baseline`: Tests for compiler baseline functionality.
- `direct_comp`: Tests for direct compilation processes without intermediate steps.
- `extended_compat`: Tests for compatibility with extended features and modules.
- `features`: Tests focused on specific features of the Metta language and system.
- `flybase`: Tests related to the processing of FlyBase data.
- `more-anti-regression`: Additional regression tests to catch unintended changes.
- `nars_interp`: Tests for the NARS (Non-Axiomatic Reasoning System) interpreter integration.
- `nars_w_comp`: Tests for NARS with compiler integrations, ensuring reasoning capabilities.
- `performance`: Performance measurement tests to benchmark system efficiency.
- `python_compat`: Tests for Python compatibility, verifying the integration between Metta and Python.
- `timing`: Scripts and tests focused on measuring execution times and performance metrics.
This detailed overview provides an in-depth look into the architecture and workflow of the Metta language processing system. Starting from the REPL interface, which allows users to interact with the interpreter and load files, the system efficiently manages code addition, parsing, forward chaining for logical inference, compilation, and execution. The inclusion of utilities and support modules enhances the system's capabilities, offering integration with external languages like Python and providing tools for debugging and testing.
Understanding each component's role and how they interact enhances appreciation of the system's workflow and capabilities. This comprehensive guide serves as a valuable resource for both users and developers navigating and utilizing the Metta language processing system.
For further clarification, refer to the `docs` directory or contact the development team.