- 95227a7: Add query endpoint
- 27d2499: Bump the LlamaCloud library and fix breaking changes (Python).
- f9a057d: Add support for multimodal indexes (e.g. from LlamaCloud)
- aedd73d: bump: chat-ui
- fe90a7e: chore: bump ai v4
- 02b2473: Show streaming errors in Python, optimize system prompts for tool usage and set the weather tool as default for the Agentic RAG use case
- 63e961e: Use auto_routed retriever mode for LlamaCloudIndex
- 28c8808: Add fly.io deployment
- 0a7dfcf: Generate NEXT_PUBLIC_CHAT_API for NextJS backend to specify alternative backend
- 8b371d8: Set pydantic version to <2.10 to avoid incompatibility with llama-index.
- 30fe269: Deactivate DuckDuckGo tool for TS
- 30fe269: Replace DuckDuckGo with Wikipedia tool for the agentic template
- fc5b266: Improve DX for Python template (use one deployment instead of two)
- f8f97d2: Add support for python 3.13
- 00f0b3a: fix: don't include user message in chat history
- 4663dec: chore: bump react19 rc
- 44b34fb: chore: update eslint 9, nextjs 15, react 19
- 6925676: feat: use latest chat UI
- 282eaa0: Ensure that the index and document store are created when uploading a file with no available index.
- 6edea6a: Optimize generated workflow code for Python
- 8431b78: Optimize Typescript multi-agent code
- 8431b78: Add form filling use case (Typescript)
- 2b8aaa8: Add support for local models via Hugging Face
- b9570b2: Fix: use generic LLMAgent instead of OpenAIAgent (adds support for Gemini and Anthropic for Agentic RAG)
- 1fe21f8: Fix the highlight.js issue with the Next.js static build
- 00009ae: feat: import pdf css
- 9172fed: feat: bump LITS 0.8.2
- 78ccde7: feat: use llamaindex chat-ui for nextjs frontend
- ed59927: Add form filling use case (Python)
- 4a83469: Add multi-agent financial report for Typescript (and update LITS to 0.7.10)
- fa80378: Make DocumentInfo work with relative URLs
- 0182368: Fix the streaming issue to prevent the UI from hanging.
- 2209409: Add financial report as the default use case in the multi-agent template (Python).
- 384a136: Fix import error if the artifact tool is selected
- 99b8247: Simplify and unify handling of file uploads
- 6d1b6b9: Update README.md for pro mode
- f3577c5: Fix event streaming being blocked
- f3577c5: Add upload file to sandbox (artifact and code interpreter)
- 7562cb4: Simplified default questions and added pro mode
- 0a69fe0: fix: missing params when initializing Astra vector store
- 98a82b0: docs: chroma env variables
- 3d41488: feat: use selected LlamaCloud index for multi-agent
- 75e1f61: Fix not being able to query public documents from LlamaCloud
- 88220f1: Fix workflow not stopping when the user presses the stop generation button
- 75e1f61: Fix Typescript templates not being able to upload files to LlamaCloud
- 88220f1: Bump [email protected]
- cd3fcd0: bump: use LlamaIndexTS 0.6.18
- 6335de1: Fix LlamaCloud selector not using the configured values from the environment (Python)
- 0e78ba4: Fix: programmatically ensure index for LlamaCloud
- 0e78ba4: Fix .env not being loaded on poetry run generate
- 7f4ac22: Remove the need to run the generate script for LlamaCloud
- 5263bde: Use selected LlamaCloud index in multi-agent template
- 16e6124: Bump package for LlamaTrace observability
- 3790ca0: Add multi-agent task selector for TS template
- d18f039: Add e2b code artifact tool for the FastAPI template
- 5a7216e: feat: implement artifact tool in TS
- 04ddebc: Add publisher agent to multi-agents for generating documents (PDF and HTML)
- 04ddebc: Allow tool selection for multi-agents (Python and TS)
- 70f7dca: feat: add test deps for llamaparse
- ef070c0: Add multi agents template for Typescript
- 7c2a3f6: fix: postgres import
- cb8d535: Fix issue where only one agent event was produced
- 0213fe0: Update dependencies for vector stores and add e2e test to ensure that they work as expected.
- 0031e67: Bump llama-index to 0.11.11 for the multi-agent template
- 505b8e9: bump: use latest ai package version
- cf3ec97: Dynamically select model for Groq
- 8c1087f: feat: enhance style for markdown
- adc40cf: fix: crash when sending annotations after Vercel AI update
- 38a8be8: fix: filter in mongo vector store
- 917e862: Fix errors in building the frontend
- b6da3c2: Ensure the generation script always works
- 8105c5c: Add env config for next questions feature
- 6a409cb: Bump web and database reader packages
- 435109f: Add multi-agents template based on workflows
- bedde2b: Change metadata filters to use already existing documents in LlamaCloud Index
- 5cd12fa: Use one callback manager per request
- 5cd12fa: Bump llama_index version to 0.11.1
- fd4abb3: Fix to use filename for uploaded documents in NextJS
- 2f8feab: Simplify CLI interface
- 4fa2b76: feat: implement citation for TS
- 8f670a9: Allow relative URL in documents
- 57e7638: Use the retrieval defaults from LlamaCloud
- 8ce4a85: Add UI for extractor template
- 3fb93c7: Use LlamaCloud pipeline for data ingestion in TS (private file uploads and generate script)
- bd5e39a: Fix error where files in subfolders of 'data' were not displayed
- 9fd832c: Add in-text citation references
- 2b7a5d8: Fix: private file upload not working in Python without LlamaCloud
- 81ef7f0: Use LlamaCloud pipeline for data ingestion (private file uploads and generate script)
- c49a5e1: Add error handling for generating the next question
- c49a5e1: Fix wrong api key variable in Azure OpenAI provider
- d746c75: Add Weaviate vector store (Typescript)
- 3ec5163: Add Weaviate vector database support (Python)
- 04a9c71: Cluster nodes by document
- 09e3022: Add support for LlamaTrace (Python)
- c06ec4f: Fix imports for MongoDB
- b6dd7a9: Always send chat data when submitting a message
- 8890e27: Let user change indexes in LlamaCloud projects
- 9a09e8c: Fix Vercel deployment
- c5c7eee: Make components reusable for chat-llamaindex
- f43399c: Add metadata filters to context chat engine (Typescript)
- c67daeb: fix: add missing private=false setting in the default generate.py
- 43474a5: Configure LlamaCloud organization ID for Python
- cf11b23: Add Azure code interpreter for Python and TS
- fd9fb42: Add Azure OpenAI as model provider
- 5c13646: Fix starter questions not working in the Python backend
- 6bd76fb: Add template for structured extraction
- b0becaa: Add e2e testing for llamacloud datasource
- df9cca5: Upgrade pdf viewer
- bd4714c: Filter private documents for Typescript (Using MetadataFilters) and update to LlamaIndexTS 0.5.7
- 58e6c15: Add LlamaParse support for the private file uploader
- 455ab68: Display files in sources using LlamaCloud indexes.
- 23b7357: Use gpt-4o-mini as default model
- 0900413: Add suggestions for next questions.
- 624c721: Update to LlamaIndex 0.10.55
- df96159: Use Qdrant FastEmbed as local embedding provider
- 32fb32a: Support uploading document files: PDF, DOCX, TXT
- d1026ea: Support Mistral as LLM and embedding model
- a221cfc: Use LlamaParse for all the file types that it supports (if activated)
- 9ecd061: Add new template for a multi-agents app
- a0aab03: Add T-System's LLMHUB as a model provider
- 64732f0: Fix the issue of images not showing with the sandbox URL from OpenAI's models
- aeb6fef: Use LlamaCloud for chat
- f2c3389: chore: update to llamaindex 0.4.3
- 5093b37: Remove non-working file selectors for Linux
- b3c969d: Add image generator tool
- aa69014: Fix NextJS for TS 5.2
- 48b96ff: Add DuckDuckGo search tool
- 9c9decb: Reuse function tool instances and improve e2b interpreter tool for Python
- 02ed277: Add Groq as a model provider
- 0748f2e: Remove hard-coded Gemini supported models
- 9112d08: Add OpenAPI tool for Typescript
- 8f03f8d: Add OLLAMA_REQUEST_TIMEOUT variable to config Ollama timeout (Python)
- 8f03f8d: Apply nest_asyncio for LlamaParse
- a42fa53: Add CSV upload
- 563b51d: Fix Vercel streaming (python) to stream data events instantly
- d60b3c5: Add E2B code interpreter tool for FastAPI
- 956538e: Add OpenAPI action tool for FastAPI
- cd50a33: Add interpreter tool for TS using e2b.dev
- 260d37a: Add system prompt env variable for TS
- bbd5b8d: Fix postgres connection leaking issue
- bb53425: Support HTTP proxies by setting the GLOBAL_AGENT_HTTP_PROXY env variable
- 69c2e16: Fix streaming for Express
- 7873bfb: Update Ollama provider to run with the base URL from the environment variable
- 56537a1: Display PDF files in source nodes
- 84db798: feat: support displaying LaTeX in chat markdown
- 0bc8e75: Use ingestion pipeline for dedicated vector stores (Python only)
- cb1001d: Add ChromaDB vector store
- 416073d: Directly import vector stores to work with NextJS
- 056e376: Add support for displaying tool outputs (including weather widget as example)
- 7bd3ed5: Support Anthropic and Gemini as model providers
- 7bd3ed5: Support new agents from LITS 0.3
- cfb5257: Display events (e.g. retrieving nodes) per chat message
- f1c3e8d: Add Llama3 and Phi3 support using Ollama
- a0dec80: Use `gpt-4-turbo` model as default. Upgrade Python llama-index to 0.10.28
- 753229d: Remove asking for AI models and use defaults instead (OpenAI's GPT-4 Vision Preview and Embeddings v3). Use the `--ask-models` CLI parameter to select models.
- 1d78202: Add observability for Python
- 6acccd2: Use poetry run generate to generate embeddings for FastAPI
- 9efcffe: Use Settings object for LlamaIndex configuration
- 418bf9b: refactor: use tsx instead of ts-node
- 1be69a5: Add Qdrant support
- 625ed4d: Support Astra VectorDB
- 922e0ce: Remove UI question (use shadcn as default). Use the `html` UI by calling create-llama with the `--ui html` parameter
- ce2f24d: Update loaders and tools config to YAML format (for Python)
- e8db041: Let user select multiple datasources (URLs, files and folders)
- c06d4af: Add nodes to the response (Python)
- 29b17ee: Allow using agents without any data source
- 665c26c: Add redirect to documentation page when accessing the base URL (FastAPI)
- 78ded9e: Add Dockerfile templates for Typescript and Python
- 99e758f: Merge non-streaming and streaming templates into one
- b3f2685: Add support for agent generation for Typescript
- 2739714: Use a database (MySQL or PostgreSQL) as a data source
- 56faee0: Added Windows e2e tests
- 60ed8fe: Added missing environment variable config for URL data source
- 60ed8fe: Fixed tool usage by freezing llama-index package versions
- 3af6328: Add support for llamaparse using Typescript
- dd92b91: Add fetching LLM and embedding models from the server
- bac1b43: Add Milvus vector database
- edd24c2: Add observability with openllmetry
- 403fc6f: Minor bug fixes to improve DX (missing .env value and updated error messages)
- 0f79757: Ability to download community submodules
- 89a49f4: Add more config variables to .env file
- fdf48dd: Add "Start in VSCode" option to postInstallAction
- fdf48dd: Add devcontainers to generated code
- 2d29350: Add LlamaParse option when selecting a pdf file or a folder (FastAPI only)
- b354f23: Add embedding model option to create-llama (FastAPI only)
- 09d532e: feat: generate llama pack project from llama index
- cfdd6db: feat: add pinecone support to create llama
- ef25d69: upgrade llama-index package to version v0.10.7 for create-llama app
- 50dfd7b: update fastapi for CVE-2024-24762
- d06a85b: Add option to create an agent by selecting tools (Google, Wikipedia)
- 7b7329b: Added latest turbo models for GPT-3.5 and GPT-4
- ba95ca3: Use condense plus context chat engine for FastAPI as default
- c680af6: Fixed issues with locating templates path
- 6dd401e: Add an option to provide a URL and chat with the website data (FastAPI only)
- e9b87ef: Select a folder as data source and support more file types (.pdf, .doc, .docx, .xls, .xlsx, .csv)
- 27d55fd: Add an option to provide a URL and chat with the website data
- 3a29a80: Add node_modules to gitignore in Express backends
- fe03aaa: feat: generate llama pack example
- 88d3b41: fix packaging
- fa17f7e: Add an option that allows the user to run the generated app
- 9e5d8e1: Add an option to select a local PDF file as data source
- a73942d: Fix: Bundle mongo dependency with NextJS
- 9492cc6: Feat: Added option to automatically install dependencies (for Python and TS)
- f74dea5: Feat: Show images in chat messages using GPT4 Vision (Express and NextJS only)
- 8e124e5: feat: support showing image on chat message
- 2e6b36e: fix: re-organize file structure
- 2b356c8: fix: relative path incorrect
- Added PostgreSQL vector store (for Typescript and Python)
- Improved async handling in FastAPI
- 9c5e22a: Added cross-env so frontends with Express/FastAPI backends work under Windows
- 5ab65eb: Bring Python templates to feature parity with TS templates
- 9c5e22a: Added vector DB selector to create-llama (starting with MongoDB support)
- 2aeb341:
  - Added option to create a new project based on community templates
  - Added OpenAI model selector for NextJS projects
  - Added GPT4 Vision support (and file upload)
  - Bugfixes (thanks @marcusschiesser)
- acfe232: Deployment fixes (thanks @seldo)
- 8cdb07f: Fix Next deployment (thanks @seldo and @marcusschiesser)
- 9f9f293: Added more to README and made it easier to switch models (thanks @seldo)
- 4431ec7: Label bug fix (thanks @marcusschiesser)
- 25257f4: Fix issue where it doesn't find OpenAI Key when running npm run generate (#182) (thanks @RayFernando1337)
- 031e926: Update create-llama readme (thanks @logan-markewich)
- 91b42a3: change version (thanks @marcusschiesser)
- e2a6805: Hello Create Llama (thanks @marcusschiesser)