This starter shows how to build a composable Simli interaction that runs in a Next.js app. The project consists of a Next.js front end that uses the Simli SDK (`simli-client`) and a back-end `server.js` that handles the interaction with other services such as speech-to-text (STT), large language models (LLMs), and text-to-speech (TTS).
Start by signing up and getting your API key from Simli.com. Then rename `.env_sample` to `.env` and paste in your API keys:
```
NEXT_PUBLIC_SIMLI_API_KEY="API key from simli.com"
NEXT_PUBLIC_ELEVENLABS_API_KEY="API key from elevenlabs.io"
NEXT_PUBLIC_DEEPGRAM_API_KEY="API key from deepgram.com"
GROQ_API_KEY="API key from groq.com"
```
If you want to try Simli but don't have API access to these third parties, ask in Discord and we can help you out with that (Discord Link).
First, make sure you have Node.js installed:

```bash
sudo apt install nodejs
```
Then, in your project folder, run:

```bash
npm install --save-dev npm-run-all
```
To run the back end and front end together, run the following command:

```bash
npm run start
```
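For reference, a `start` script wired up with `npm-run-all` might look like the sketch below. The script names (`dev`, `server`) are assumptions for illustration; check the project's actual `package.json` for the real ones.

```json
{
  "scripts": {
    "dev": "next dev",
    "server": "node server.js",
    "start": "npm-run-all --parallel dev server"
  }
}
```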
You can swap out the character by finding one you like in the docs, or by creating your own (coming soon!).
You can of course replace Deepgram and ElevenLabs with AI services of your own preference, or even build your own. The only requirement for Simli to work is that audio is sent in PCM16 format at a 16 kHz sample rate. If you're having trouble getting clean audio, feel free to ask for help in Discord.
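As a sketch of that audio requirement: Web Audio and many TTS pipelines hand you Float32 samples in the range [-1, 1], which must be converted to 16-bit signed PCM before being sent to Simli. The helper name below is illustrative, and resampling to 16 kHz is a separate step not shown here.

```javascript
// Convert Float32 samples in [-1, 1] to 16-bit signed PCM,
// the sample format Simli expects. Clamping first avoids
// integer overflow on out-of-range input.
function floatTo16BitPCM(float32Samples) {
  const pcm16 = new Int16Array(float32Samples.length);
  for (let i = 0; i < float32Samples.length; i++) {
    const s = Math.max(-1, Math.min(1, float32Samples[i]));
    // Negative values scale to -32768, positive to 32767.
    pcm16[i] = s < 0 ? s * 0x8000 : s * 0x7fff;
  }
  return pcm16;
}

// Example: silence, full-scale positive, and full-scale negative samples.
const out = floatTo16BitPCM(Float32Array.from([0, 1, -1]));
```

The resulting `Int16Array`'s underlying buffer can then be sent to Simli as raw PCM16 bytes.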
[Simli] [Elevenlabs] [Deepgram] [Groq]
An easy way to deploy your avatar interaction is to use the Vercel Platform.