Ket.ai is an overengineered Telegram bot that works as a chatbot and system status monitor. It is built on the Pyrogram
library and provides commands for generating responses through external APIs and local LLMs. The bot is designed to be easily configurable and extensible, with support for custom commands and system information reporting.
- System Information: Commands to check CPU usage, RAM usage, and system temperature.
- Text Generation: Utilizes the Ollama API for generating responses based on user input.
- Debug and Status Modes: Includes configurations for debug mode and system status reporting.
- Command Handling: Custom commands for different functionalities, including help and start commands (a minimal handler sketch follows this list).
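To make the command handling and system reporting concrete, here is a minimal, hypothetical sketch of a Pyrogram handler that reports CPU and RAM usage with `psutil`. The session name, credentials, command name, and reply format are placeholders, not the project's actual code.

```python
import psutil
from pyrogram import Client, filters
from pyrogram.types import Message

# Placeholder credentials; the real bot reads these from config.json.
app = Client("example_session", api_id=12345, api_hash="...", bot_token="...")

@app.on_message(filters.command("status"))
async def status_handler(client: Client, message: Message):
    cpu = psutil.cpu_percent(interval=1)      # CPU usage sampled over 1 second
    ram = psutil.virtual_memory().percent     # RAM usage as a percentage
    await message.reply_text(f"CPU: {cpu}%\nRAM: {ram}%")

app.run()
```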
- Python 3
- GNU+Linux
- Ollama (for local LLM models; see the API sketch after this list)
- ffmpeg and flac (for speech-to-text)
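Ollama exposes an HTTP API on localhost (port 11434 by default). The sketch below shows one way the bot could request a completion from it; the helper name, model name, and prompt are examples only, not the project's actual code.

```python
import requests

def generate(prompt: str, model: str = "llama3") -> str:
    """Request a completion from a local Ollama instance via /api/generate."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},  # model name is an example
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

print(generate("Explain what a Telegram bot does in one sentence."))
```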
- Start Command: `/start` to get an introduction and the available commands.
- Help Command: `/help` provides information on how to use the bot and its commands.
- Status Command: `/status` to get the current system status, including CPU usage, RAM usage, and more.
- Custom Commands: `{DataConfig.GEN_COMMANDS}` for generating responses based on the specified commands.
- Sum Command: `/sum` pass a YouTube video URL to get a summary of the video (a sketch of this flow follows the list).
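As an illustration of what the `/sum` flow could look like (not the project's actual implementation), the sketch below pulls a video transcript with `youtube_transcript_api`, an assumed dependency, and feeds it to the `generate()` helper from the Ollama sketch above.

```python
from youtube_transcript_api import YouTubeTranscriptApi

def summarize_video(video_id: str) -> str:
    """Fetch a YouTube video's transcript and ask the local LLM to summarize it."""
    # youtube_transcript_api is an assumed dependency; the real bot may use another method.
    transcript = YouTubeTranscriptApi.get_transcript(video_id)
    text = " ".join(chunk["text"] for chunk in transcript)
    return generate(f"Summarize this video transcript in a few sentences:\n\n{text}")
```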
Installation steps:
- Clone the repository: `git clone https://github.com/ket0x4/ketard-ai.git && cd ketard-ai`
- Create a virtual environment: `python -m venv venv`
- Activate the virtual environment: `source venv/bin/activate`
- Install the dependencies: `pip install -r requirements.txt`
- Create a configuration file and fill in the required variables: `cp sample_config.json config.json`
- Configure the bot by editing the `config.json` file with the appropriate values for `BOT_NAME`, `API_ID`, `API_HASH`, `BOT_TOKEN`, ...
- Run the bot: `bash start`
Ensure that your `config.json` is correctly set up with the necessary API credentials and the settings that control the bot's behavior; a hedged sample is sketched below.
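For orientation, a minimal `config.json` might look like the snippet below. Only the keys named in the steps above are shown, the values are placeholders, and the actual sample_config.json may define additional keys (the `...` above) not listed here.

```json
{
  "BOT_NAME": "YourBotName",
  "API_ID": 1234567,
  "API_HASH": "your_api_hash",
  "BOT_TOKEN": "123456789:your_bot_token"
}
```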
TODO:
- Add `/sum` command
- Async `/sum` command
- Support other YouTube URLs
- Add speech-to-text support
- Check API response before sending
- Fix async `/status` command
- Add blacklist support
- Log prompts and responses to the database
- Split long messages
- Delete status message after sending prompt response
- Add reply support
- Refactor code
- Remove repeated code
- Add `TR` language support to `/sum` command
- Better `/help` message
- Add `/start` command
- Make LLM backend configurable
- Add `/model` command for changing the LLM model
- Add `/debug` command for enabling debug mode
- Fix `/update` command
This project is licensed under the terms of the GNU General Public License v3.0.