Bring Claude's Artifacts feature to ChatGPT
Demo video: artifacts-react-4k.mp4
Clone this repository:

```bash
git clone https://github.com/ozgrozer/chatgpt-artifacts.git
```

Install dependencies:

```bash
npm install
```
Duplicate `.env.example` as `.env` and add your OpenAI API key:
```bash
cp .env.example .env
vim .env
```
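The app reads the key from the `OPENAI_API_KEY` environment variable (see `/pages/api/chat.js`), so your `.env` should look roughly like this (placeholder value shown, assuming `.env.example` uses the same variable name):

```bash
# OpenAI API key read by /pages/api/chat.js — replace the placeholder with your own key
OPENAI_API_KEY=your-openai-api-key
```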
Build the app:

```bash
npm run build
```

Start the app:

```bash
npm start
```
To make it work with local LLMs like Llama 3 or Gemma 2 (served through Ollama), you just need a small update in the code. Open the `/pages/api/chat.js` file and make the following changes:
```js
// change this
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
})

// to this
const openai = new OpenAI({
  apiKey: 'ollama',
  baseURL: 'http://127.0.0.1:11434/v1'
})
```

```js
// change this
const stream = await openai.chat.completions.create({
  stream: true,
  model: 'gpt-4o',
  messages: conversations[conversationId]
})

// to this
const stream = await openai.chat.completions.create({
  stream: true,
  model: 'llama3',
  messages: conversations[conversationId]
})
```
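For the setup above to work, Ollama has to be installed and running locally (the `baseURL` points at Ollama's default port, 11434), and the model you reference has to be downloaded first, for example:

```bash
# pull the model referenced in the code above so Ollama can serve it at http://127.0.0.1:11434/v1
ollama pull llama3
```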
To make it work with Groq, you just need to get a Groq API key and make a small update in the code. Open the `/pages/api/chat.js` file and make the following changes:
```js
// change this
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
})

// to this
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: 'https://api.groq.com/openai/v1'
})
```

```js
// change this
const stream = await openai.chat.completions.create({
  stream: true,
  model: 'gpt-4o',
  messages: conversations[conversationId]
})

// to this
const stream = await openai.chat.completions.create({
  stream: true,
  model: 'llama3-70b-8192',
  messages: conversations[conversationId]
})
```
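Note that the Groq setup above still reads `process.env.OPENAI_API_KEY`, so (assuming you keep the variable name unchanged) your Groq API key goes into the same slot in `.env`:

```bash
# Groq API key, read through the existing OPENAI_API_KEY variable
OPENAI_API_KEY=your-groq-api-key
```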
To make it work with Azure OpenAI, create a resource in the Azure Portal, create a deployment in Azure OpenAI Studio, and note your API key, API version, and endpoint. Then open the `/pages/api/chat.js` file and make the following changes:
```js
// change this
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
})

// to this (change the API version if yours is different)
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  defaultQuery: { 'api-version': '2023-03-15-preview' },
  defaultHeaders: { 'api-key': process.env.OPENAI_API_KEY },
  baseURL: 'https://<RESOURCE_NAME>.openai.azure.com/openai/deployments/<DEPLOYMENT_NAME>'
})
```

```js
// change your model here
const stream = await openai.chat.completions.create({
  stream: true,
  model: 'gpt-4o',
  messages: conversations[conversationId]
})
```
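As with Groq, the Azure setup reads the key from `process.env.OPENAI_API_KEY`. Replace `<RESOURCE_NAME>` and `<DEPLOYMENT_NAME>` with the resource and deployment you created, and put your Azure OpenAI key in `.env`; a minimal sketch, assuming the variable name is kept unchanged:

```bash
# Azure OpenAI API key, read through the existing OPENAI_API_KEY variable
OPENAI_API_KEY=your-azure-openai-api-key
```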