Releases · cloudera/CML_AMP_RAG_Studio
1.6.0-beta
Also make the horizontal menu items green (#95)
1.5.0
Release 1.5.0 is out!
Notable changes:
- Provide better status and error handling when uploading and summarizing documents
- Support persisting the inference model selection for a chat session
- Enable querying the LLM model without referencing the knowledge base
- Support Markdown files (and support rendering MD for chunk previews)
- Support Python versions <=3.12.0
- Upgrade Docling to the latest version (see the parsing sketch after this list)
- Add support for JPG, PNG, and JPEG images when using advanced parsing
- Allow users to ask queries without waiting for suggested questions to be generated
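The Docling upgrade ties into the Markdown support above: advanced parsing can export a parsed document as Markdown, which can then be chunked and rendered in chunk previews. A minimal sketch of that conversion step, assuming the current Docling `DocumentConverter` API and a hypothetical input path (illustrative only, not the app's actual pipeline code):

```python
from docling.document_converter import DocumentConverter

# Hypothetical input file; any format Docling supports (PDF, DOCX, images, ...) works here.
converter = DocumentConverter()
result = converter.convert("sample_report.pdf")

# Export the parsed document as Markdown, e.g. for chunking and chunk previews.
markdown_text = result.document.export_to_markdown()
print(markdown_text[:500])
```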
1.5.0-Beta-1
Merge pull request #89 from cloudera/mob/main: Support Python versions <=3.12
1.5.0-beta
Merge pull request #83 from cloudera/mob/main: Document Upload Error Handling and Support JPEG and PNG
1.5-beta
Markdown support + UI for non-knowledge-base answers (#82)
- Markdown file handler, small refactorings; make file deletion immediately submit a request to the reconciler to delete the file
- Delete an empty data source with an empty global storage
- Maybe ask to delete from docstore
- Fix deleting from the doc store
- Markdown rendering
- WIP on Markdown tables
- Now we're thinking with styled-components
- CSS styling of the Markdown tables
- Update lock file
- Use Markdown for Docling PDF; minor UI changes to source card
- README updates
- WIP on icon for no source nodes
- Now we're thinking with colors
- Styling/coloration of the not-a-KB-response response
- Now we're thinking with inference models
- Set inference model when chatting with the model directly
- It ain't round
- Fill a little more of the "circle"
- It's still not a circle, but it's a little bigger
- Mock the model API for the test

Co-authored-by: Elijah Williams <[email protected]>
Co-authored-by: Michael Liu <[email protected]>
1.4.2
Mob/main (#79)
- Fix the batch embedding response processing for CAII
- Update FE to data source ids
- Now we're thinking with panic
- Fix bug with S3 path when the prefix is not provided (sketch below)
- Add in exception handling for more than one KB
- Fix docstring for qdrant.size()
- Fix evals
- Fix shadowing

Co-authored-by: jwatson <[email protected]>
Co-authored-by: Michael Liu <[email protected]>
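One of the fixes above concerns building S3 paths when no prefix is configured. A hypothetical helper illustrating the idea (not the repo's actual code): join only the non-empty segments so a missing prefix doesn't produce a leading or doubled slash.

```python
def build_s3_key(prefix: str | None, *segments: str) -> str:
    """Join an optional prefix with path segments into an S3 object key,
    skipping empty pieces so a missing prefix leaves no stray slashes."""
    parts = [p.strip("/") for p in (prefix or "", *segments)]
    return "/".join(p for p in parts if p)

# With a prefix configured:
print(build_s3_key("rag-studio", "data-source-1", "report.pdf"))  # rag-studio/data-source-1/report.pdf
# Without a prefix:
print(build_s3_key(None, "data-source-1", "report.pdf"))          # data-source-1/report.pdf
```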
1.4.1
Merge pull request #77 from cloudera/mob/fix-model-status: Fix model status for Bedrock
1.4.0
Release 1.4.0 is out! ⛷️
Major changes include:
- CAII Model Discovery: we no longer require users to specify an inference and embedding CAII model. Instead, users can now choose from any of the CAII models available in their environment.
- Switched to Bedrock Converse to support more Bedrock models (see the sketch after this list)
- Support for saving the inference model in a chat session
- Support for saving the embedding and summarization model for a knowledge base. As part of this change, users can now disable summarization by not selecting a model.
- Added the Cohere Command R+ v1 inference model
- Added the Cohere Embed Multilingual v3 embedding model
- Removed the Llama 3.1 405b inference model
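The Bedrock Converse switch is what enables the broader model support: Converse gives a single request/response shape across Bedrock model families. A minimal sketch using boto3's `bedrock-runtime` client; the region and the use of the Cohere Command R+ model ID here are illustrative assumptions, not the app's configuration:

```python
import boto3

# Region and model choice are assumptions for illustration.
client = boto3.client("bedrock-runtime", region_name="us-west-2")

response = client.converse(
    modelId="cohere.command-r-plus-v1:0",
    messages=[{"role": "user", "content": [{"text": "Give me a one-sentence summary of RAG."}]}],
    inferenceConfig={"maxTokens": 256, "temperature": 0.2},
)

# Converse normalizes the response shape regardless of the underlying model family.
print(response["output"]["message"]["content"][0]["text"])
```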
1.4.0-beta
Merge pull request #58 from cloudera/mob/main: Provide Sessions with Inference Model Persistence
1.3.0
Visualization, PowerPoint, misc. cleanup (#49)
- WIP on visualizing dataset contents and vectors (see the UMAP sketch after this list)
- Remove Docling
- Make datasets represent individual docs
- Hi-lite
- Make the hover point size bigger for stronger highlighting
- Highlight even stronger!
- Refactor and add a question entry box to show where the question lies in the vector space
- A bit of refactoring and fix unused imports
- Force mypy into happiness
- Update release version to dev-testing
- Fix types
- Disable input while loading
- Mark loading while viz is loading
- Clear up some warnings
- Make the active tab sticky on the DS management page
- Small refactoring
- Small refactor
- WIP on visualization cleanup
- Now we're thinking with UMAP tooltips
- Styling
- Viz layout, continued
- Now we're thinking with dependency hell
- Add dependencies
- Add the last dependency needed for PowerPoint parsing
- Add other PowerPoints
- Turn on the no-KB query toggle
- Tidy up the chunk metadata handling
- Use models for models
- Update uv.lock from main
- Delete log file

Co-authored-by: Michael Liu <[email protected]>
Co-authored-by: actions-user <[email protected]>
Co-authored-by: Elijah Williams <[email protected]>
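The visualization work in this release plots chunk embeddings, plus the embedded question, in a shared 2D space. A minimal sketch of that projection step with umap-learn, using random vectors as stand-ins for real embeddings (dimensions and variable names are illustrative assumptions):

```python
import numpy as np
import umap  # umap-learn

# Stand-ins for real embeddings: one row per chunk, plus the embedded question.
chunk_vectors = np.random.rand(200, 1024)
question_vector = np.random.rand(1, 1024)

# Project everything into 2D together so distances are comparable in the plot.
reducer = umap.UMAP(n_components=2, random_state=42)
points = reducer.fit_transform(np.vstack([chunk_vectors, question_vector]))

chunk_points = points[:-1]   # scatter-plotted as the data source contents
question_point = points[-1]  # highlighted to show where the question lies in the vector space
```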