# Eliza

Eliza Banner

As seen powering @DegenSpartanAI and @MarcAIndreessen

  • Multi-agent simulation framework
  • Add as many unique characters as you want with characterfile
  • Full-featured Discord and Twitter connectors, with Discord voice channel support
  • Full conversational and document RAG memory
  • Can read links and PDFs, transcribe audio and videos, summarize conversations, and more
  • Highly extensible - create your own actions and clients to extend Eliza's capabilities
  • Supports open source and local models (default configured with Nous Hermes Llama 3.1B)
  • Supports OpenAI for cloud inference on a light-weight device
  • "Ask Claude" mode for calling Claude on more complex queries
  • 100% TypeScript

## Getting Started

Prerequisites (MUST): Node.js and pnpm installed.

### Edit the .env file

  • Copy .env.example to .env and fill in the appropriate values
  • Edit the TWITTER environment variables to add your bot's username and password

### Edit the character file

  • Check out `src/core/defaultCharacter.ts` - you can modify this
  • You can also load custom characters with `pnpm start --characters="path/to/your/character.json"` and run multiple bots at the same time.

After setting up the .env file and character file, you can start the bot with the following command:

```
pnpm i
pnpm start
```

## Customising Eliza

### Adding custom actions

To avoid git clashes in the core directory, we recommend adding custom actions to a `custom_actions` directory and then registering them in `elizaConfig.yaml`. See `elizaConfig.example.yaml` for an example.
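
As a sketch, a registration entry might look like the following (the action name and field names here are hypothetical; treat `elizaConfig.example.yaml` as the authoritative reference for the actual schema):

```yaml
# elizaConfig.yaml — hypothetical entry; check elizaConfig.example.yaml
# for the real field names before copying this.
actions:
  - name: myCustomAction
    path: ./custom_actions/myCustomAction.ts
```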

### Running with different models

#### Run with Llama

You can run Llama 70B or 405B models by setting the `XAI_MODEL` environment variable to `meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo` or `meta-llama/Meta-Llama-3.1-405B-Instruct`.

#### Run with Grok

You can run Grok models by setting the `XAI_MODEL` environment variable to `grok-beta`.

#### Run with OpenAI

You can run OpenAI models by setting the `XAI_MODEL` environment variable to `gpt-4o-mini` or `gpt-4o`.
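
All three backends are selected the same way; for example, in your shell (or as a line in `.env`):

```shell
# Select the inference backend for this session; any of the model IDs
# listed above works here.
export XAI_MODEL="gpt-4o-mini"
echo "Model backend: $XAI_MODEL"
```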

## Additional Requirements

You may need to install Sharp. If you see an error when starting up, try installing it with the following command:

```
pnpm install --include=optional sharp
```

## Environment Setup

You will need to add environment variables to your .env file to connect to various platforms:

```
# Required environment variables
DISCORD_APPLICATION_ID=
DISCORD_API_TOKEN= # Bot token
OPENAI_API_KEY=sk-* # OpenAI API key, starting with sk-
ELEVENLABS_XI_API_KEY= # API key from elevenlabs

# ELEVENLABS SETTINGS
ELEVENLABS_MODEL_ID=eleven_multilingual_v2
ELEVENLABS_VOICE_ID=21m00Tcm4TlvDq8ikWAM
ELEVENLABS_VOICE_STABILITY=0.5
ELEVENLABS_VOICE_SIMILARITY_BOOST=0.9
ELEVENLABS_VOICE_STYLE=0.66
ELEVENLABS_VOICE_USE_SPEAKER_BOOST=false
ELEVENLABS_OPTIMIZE_STREAMING_LATENCY=4
ELEVENLABS_OUTPUT_FORMAT=pcm_16000

TWITTER_DRY_RUN=false
TWITTER_USERNAME= # Account username
TWITTER_PASSWORD= # Account password
TWITTER_EMAIL= # Account email
TWITTER_COOKIES= # Account cookies

X_SERVER_URL=
XAI_API_KEY=
XAI_MODEL=

# For asking Claude stuff
ANTHROPIC_API_KEY=

WALLET_SECRET_KEY=EXAMPLE_WALLET_SECRET_KEY
WALLET_PUBLIC_KEY=EXAMPLE_WALLET_PUBLIC_KEY

BIRDEYE_API_KEY=

SOL_ADDRESS=So11111111111111111111111111111111111111112
SLIPPAGE=1
RPC_URL=https://api.mainnet-beta.solana.com
HELIUS_API_KEY=

## Telegram
TELEGRAM_BOT_TOKEN=

TOGETHER_API_KEY=
```

## Local Inference Setup

### CUDA Setup

If you have an NVIDIA GPU, you can install CUDA to speed up local inference dramatically.

```
pnpm install
npx --no node-llama-cpp source download --gpu cuda
```

Make sure that you've installed the CUDA Toolkit, including cuDNN and cuBLAS.

### Running locally

Add `XAI_MODEL` and set it to one of the options above from Run with Llama. You can leave `X_SERVER_URL` and `XAI_API_KEY` blank; the model is downloaded from Hugging Face and queried locally.
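
A minimal `.env` fragment for local inference might look like this (model ID taken from the Run with Llama section above):

```
XAI_MODEL=meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo
X_SERVER_URL=
XAI_API_KEY=
```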

## Clients

### Discord Bot

For help setting up your Discord bot, see the discord.js guide: https://discordjs.guide/preparations/setting-up-a-bot-application.html

## Development

### Testing

To run the test suite:

```
pnpm test           # Run tests once
pnpm test:watch     # Run tests in watch mode
```

For database-specific tests:

```
pnpm test:sqlite   # Run tests with SQLite
pnpm test:sqljs    # Run tests with SQL.js
```

Tests are written using Jest and can be found in `src/**/*.test.ts` files. The test environment is configured to:

  • Load environment variables from `.env.test`
  • Use a 2-minute timeout for long-running tests
  • Support ESM modules
  • Run tests in sequence (`--runInBand`)

To create a new test, add a `.test.ts` file adjacent to the code you're testing.