Releases: logikon-ai/logikon
v0.2.0

v0.1.0
logikon: Analytics for Reasoning Traces
We're thrilled to announce the inaugural release of logikon – a versatile Python package and framework designed for in-depth analysis of natural language reasoning.
Primary Use Case: Observing AI Reasoners and Agents
Easily score the reasoning traces of your Large Language Model (LLM), including intermediate steps generated in deliberative prompting, with just one additional line of code.
```python
# LLM generation (assumes `llm` is any configured LLM client
# exposing a predict() method, e.g. a LangChain model)
prompt = "Vim or Emacs? Reason carefully before submitting your choice."
completion = llm.predict(prompt)

# Analyze and score reasoning 🚀
import logikon

score = logikon.score(prompt=prompt, completion=completion)
# >>> print(score.info())
# argmap_size: 13
# n_root_nodes: 3
# global_balance: -.23
```
Check out the quickstart guide in quickstart.ipynb for a seamless setup.
Metrics and Artifacts
The README gives an overview of logikon's artifacts and reasoning quality scores. We'll gradually build more thorough documentation in the future.
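To build a rough intuition for a score like `global_balance`, here is a toy sketch of a signed support/attack balance over an argument map. This is an illustrative assumption about what such a metric could look like, not logikon's actual implementation; the `Edge` type and the normalization are invented for this example.

```python
# Toy illustration (NOT logikon's implementation): a signed balance of
# supporting vs. attacking arguments that target the map's root claims.
from dataclasses import dataclass


@dataclass
class Edge:
    source: str
    target: str
    valence: int  # +1 = supports the target, -1 = attacks it


def global_balance(edges, root_nodes):
    """Return a value in [-1, 1]: positive if support dominates,
    negative if attacks dominate, 0.0 for an empty map."""
    relevant = [e for e in edges if e.target in root_nodes]
    if not relevant:
        return 0.0
    return sum(e.valence for e in relevant) / len(relevant)


edges = [
    Edge("a1", "claim", +1),
    Edge("a2", "claim", -1),
    Edge("a3", "claim", -1),
    Edge("a4", "a1", -1),  # targets a non-root node, ignored here
]
print(global_balance(edges, {"claim"}))  # -1/3: attacks slightly dominate
```

A slightly negative value, as in the `-.23` shown above, would then indicate that attacking considerations marginally outweigh supporting ones in the generated reasoning.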
Examples
Explore the examples to gain insights into the diverse capabilities of logikon, and learn how it can empower your natural language reasoning projects.
Work in Progress
🚧 logikon is in an early development stage:
- the package is subject to change at any time;
- results can vary due to changes in methods, pipelines, or underlying models;
- current speed and quality of evaluation results are not representative of the future product.
Community Collaboration
By releasing logikon this early, we commit to developing it with, and specifically for, the GenAI community. Your feedback, questions, bug reports, and suggestions for improvement are highly valued and encouraged.
Feel free to reach out and engage with us. Together, let's shape the future of AI natural language reasoning!
Best regards,
The Logikon Team
v0.0.1-dev1: version

v0.0.1-dev0: semantic version format