Talk with a Live2D model.
Runs locally in the browser, or connects to OpenAI, Ollama, etc. through a backend proxy.
Try it online: https://live2d-ai-chat.hitorisama.org/
- show Live2D model 🆗
- automatically change the model's expression
- automatically change the model's motion 🆗
- speech to text 🆗 (Web Speech API)
- text to speech 🆗 (browser: vits-web; backend: node-edge-tts)
- style of speech
- subtitles for AI and user 🆗
- long-term memory
- custom chat model
- speaking first / finding topics 🆗
- changeable model, expression, and motion
- other functions: playing games, singing, searching Google, etc.
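As a rough illustration of how the long-term memory feature above could work, here is a minimal sketch of a rolling chat-history buffer with keyword recall. All names (`ChatMemory`, `remember`, `recall`) are illustrative assumptions, not the project's actual API:

```javascript
// Hypothetical sketch of a rolling chat-history buffer for long-term memory.
// Not the project's real implementation -- names are illustrative only.
class ChatMemory {
  constructor(maxTurns = 20) {
    this.maxTurns = maxTurns;
    this.turns = [];
  }

  // Store one message, evicting the oldest when the buffer is full.
  remember(role, text) {
    this.turns.push({ role, text });
    if (this.turns.length > this.maxTurns) this.turns.shift();
  }

  // Return remembered messages containing a keyword, newest first.
  recall(keyword) {
    return this.turns
      .filter(t => t.text.toLowerCase().includes(keyword.toLowerCase()))
      .reverse();
  }
}

const memory = new ChatMemory(3);
memory.remember("user", "I like cats");
memory.remember("assistant", "Cats are great!");
memory.remember("user", "What games do you play?");
memory.remember("assistant", "I can play word games.");
// The first message was evicted, so only one "cats" turn remains.
console.log(memory.recall("cats").length);
```

A real implementation would likely persist turns to storage and use embedding-based retrieval rather than keyword matching, but the capped buffer shows the basic shape.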
To run locally:
- install Ollama and pull a model you like
- install Node.js, pnpm, and bun (optional)
- git clone https://github.com/zoollcar/live2d-AI-chat
- cd live2d-AI-chat && pnpm install && cd backend && pnpm install
- run the backend: cd backend && cp .env.local.example .env.local && node index.js
- run the app: cd live2d-AI-chat && pnpm run dev
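The backend connection is configured through `.env.local` (copied from `.env.local.example` in the step above). The variable names below are illustrative assumptions only; check `.env.local.example` for the actual keys the project reads:

```
# Hypothetical values -- see .env.local.example for the real variable names
OPENAI_API_KEY=sk-...
# Ollama exposes an OpenAI-compatible endpoint at this address by default
OPENAI_BASE_URL=http://localhost:11434/v1
```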
To build the desktop app:
- install Node.js, pnpm, and bun
- git clone https://github.com/zoollcar/live2d-AI-chat
- cd live2d-AI-chat && pnpm install && cd backend && bun install
- build the backend and embed it into the frontend: cd backend && bun run build:windows
- cd live2d-AI-chat && pnpm run tauri:build
Default configuration:
- frontend LLM model: LLMChatWebLLM
- frontend TTS model: vitsWeb
- backend config: .env.local.example
- Live2D model: Tianyelulu