
Feat/corcel+llama+gemini #252

Merged
merged 13 commits into main from feat/add-corcel-llama on Oct 4, 2024

Conversation

dvilelaf
Collaborator

@dvilelaf dvilelaf commented Sep 22, 2024

Proposed changes

Adds corcel, llama and gemini request tools that support both prediction and completion.

Fixes

N/A

Types of changes

What types of changes does your code introduce? (A breaking change is a fix or feature that would cause existing functionality and APIs to not work as expected.)
Put an x in the box that applies

  • Non-breaking fix (non-breaking change which fixes an issue)
  • Breaking fix (breaking change which fixes an issue)
  • Non-breaking feature (non-breaking change which adds functionality)
  • Breaking feature (breaking change which adds functionality)
  • Refactor (non-breaking change which changes implementation)
  • Messy (mixture of the above - requires explanation!)

Checklist

Put an x in the boxes that apply.

  • I have read the CONTRIBUTING doc
  • I am making a pull request against the main branch (left side), and my branch is based off main
  • Lint and unit tests pass locally with my changes
  • I have added tests that prove my fix is effective or that my feature works

Further comments

N/A

@dvilelaf dvilelaf changed the title Feat/add corcel llama Feat/corcel+llama Sep 22, 2024
@dvilelaf dvilelaf requested a review from 0xArdi September 22, 2024 17:59
Comment on lines 68 to 72
llm = Llama.from_pretrained(
    repo_id=AVAILABLE_MODELS[model]["repo_id"],
    filename=AVAILABLE_MODELS[model]["filename"],
    verbose=False,
)
Collaborator Author

@0xArdi this tool runs small LLM models on the CPU. When this command is run for the first time, it downloads the model (~4GB) to a local directory, which means that the first execution could take 2-3 minutes depending on the connection. Would this be a problem for the mech?

Also, inference can take up to 1 minute depending on the prompts and uses a lot of CPU and RAM.
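One way to keep the cold-start cost to a single hit per process is to cache the loaded model across requests. Here is a minimal sketch using `llama-cpp-python`'s `Llama.from_pretrained` (the call shown in the diff); the `AVAILABLE_MODELS` entries below are hypothetical placeholders, not the PR's actual model registry:

```python
from functools import lru_cache

# Hypothetical registry mirroring the AVAILABLE_MODELS mapping in the diff;
# these repo_id/filename values are placeholders, not the PR's actual models.
AVAILABLE_MODELS = {
    "example-7b": {
        "repo_id": "example-org/example-7b-GGUF",
        "filename": "example-7b.Q4_K_M.gguf",
    },
}


@lru_cache(maxsize=None)
def get_llm(model: str):
    """Load a model once per process and reuse it on later calls; the first
    call still pays the multi-gigabyte download described above."""
    from llama_cpp import Llama  # deferred import: only paid when a tool runs

    spec = AVAILABLE_MODELS[model]
    return Llama.from_pretrained(
        repo_id=spec["repo_id"],
        filename=spec["filename"],
        verbose=False,
    )
```

This does not remove the RAM/CPU spike during inference, but it avoids re-downloading and re-loading the weights on every request within the same container.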

Collaborator

We have had issues before with models in memory, as they can lead to spikes in resource usage, resulting in the container being killed by k8s. So unless this tool is required to be deployed, I'd say let's not deploy it.

That being said, with the introduction of the mech-marketplace, what I said above no longer holds, and anyone can choose to run it.

Collaborator Author


Ok, let's not deploy it for now then, but I guess we can merge it. What's that mech-marketplace thing?

@dvilelaf dvilelaf changed the title Feat/corcel+llama Feat/corcel+llama+gemini Sep 30, 2024
@dvilelaf
Collaborator Author

Not sure what is happening with the Windows lock check. It seems to stall while installing tomte.

@dvilelaf
Collaborator Author

dvilelaf commented Oct 1, 2024

@0xArdi I'm getting a linter failure on an already existing component:

Processing /home/david/Valory/repos/mech/packages/nickcom007/customs/prediction_request_sme_lite/component.yaml
Failed!
sys:1: ResourceWarning: unclosed file <_io.TextIOWrapper name='/dev/null' mode='w' encoding='UTF-8'>
ERROR: InvocationError for command /home/david/Valory/repos/mech/.tox/check-packages/bin/autonomy check-packages (exited with code 1)
__________________________________________________________________________________________ summary ___________________________________________________________________________________________
ERROR:   check-packages: commands failed

@dvilelaf dvilelaf mentioned this pull request Oct 1, 2024
10 tasks
@0xArdi 0xArdi merged commit d07b30b into main Oct 4, 2024
6 of 7 checks passed
@dvilelaf dvilelaf deleted the feat/add-corcel-llama branch October 7, 2024 08:36