
Make the embedding generation model we use configurable and use inference API #538

Open · dineshyv opened this issue Nov 27, 2024 · 1 comment

@dineshyv (Contributor)

🚀 Describe the new functionality needed

Right now, embedding generation is hardcoded to a specific model. We want it to go through the inference API instead, so the embedding model becomes configurable.
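
A minimal sketch of the shape this could take, routing embedding generation through an inference interface with the model read from config. All names here (`EmbeddingConfig`, `InferenceAPI`, `generate_embeddings`, and the `embeddings` method signature) are hypothetical illustrations, not the actual Llama Stack APIs:

```python
# Hypothetical sketch: route embedding generation through the inference API
# instead of a hardcoded model. All names below are illustrative.

from dataclasses import dataclass
from typing import List, Protocol


@dataclass
class EmbeddingConfig:
    # The embedding model comes from configuration rather than being hardcoded.
    embedding_model: str = "all-MiniLM-L6-v2"


class InferenceAPI(Protocol):
    # Assumed interface: an inference backend that can produce embeddings
    # for a batch of texts with a caller-specified model.
    async def embeddings(self, model: str, contents: List[str]) -> List[List[float]]:
        ...


async def generate_embeddings(
    inference: InferenceAPI, config: EmbeddingConfig, texts: List[str]
) -> List[List[float]]:
    # Delegate to the inference API with the configured model, so swapping
    # embedding models is a config change rather than a code change.
    return await inference.embeddings(model=config.embedding_model, contents=texts)
```

With this shape, any provider that implements the inference interface can serve embeddings, and the model choice lives entirely in configuration.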

💡 Why is this needed? What if we don't build it?

Needed so users can choose the embedding model that is correct for their use case. Without it, everyone is locked to the hardcoded default model.

Other thoughts

No response

@dineshyv (Contributor, Author)

cc: @raghotham, @ashwinb
