
Example: Vertex AI Gen Eval service #97

Merged
merged 7 commits into from
Sep 23, 2024
Conversation

philschmid
Member

No description provided.

Member

@alvarobartt left a comment

Hi here @philschmid! Thanks for this PR! Just a couple of nits/questions:

  • Should we use "we" over "you"? I recall we agreed in the past to use e.g. "you need to run ..." instead of "we need to run ...".

  • The pipeline that automatically updates the examples listing is still WIP; while the examples will be auto-generated within the docs, the tables still need a manual update (I'll try to push the CI for that later today, as at the moment it's just a local script).

  • Should we maybe create another category besides training and inference, e.g. benchmarking or evaluation? Not needed within this PR, just flagging it.

Besides those concerns, we're good to merge! 🤗

@alvarobartt
Member

P.S. Do you have good or decent experience with any Jupyter Notebook review tool? Reviewing notebooks within this repository won't be unusual, as most of the Vertex AI examples will run as Jupyter Notebooks; so maybe it makes sense to add one if you've had a good experience with it? I haven't, TBH, but asking just in case you have a recommendation!

@philschmid
Member Author

Should we use "we" over "you"? I recall we mentioned in the past to use e.g. "you need to run ..." instead of "we need to run ..."?

I always use "we" in my examples.

The pipeline that automatically updates the examples listing is still WIP, and whilst the examples will be auto-generated within the docs; the tables would still need manual update (will try to push the CI for that later today as atm is just a local script)

What do I have to update?

Should we maybe create another category besides training and inference for e.g. benchmarking or evaluation? Not needed within this PR just flagging that.

Haha good question. Evaluation might work?

any Jupyter Notebook reviewer tool?

Never used one. Feel free to experiment.

@alvarobartt
Member

alvarobartt commented Sep 23, 2024

I always use "we" in my examples

Then I'm happy to later align on comms and use "we" over "you" 👍🏻

What do i have to update?
Haha good question. Evaluation might work?

Assuming that we create an evaluation category for the evaluation/benchmarking-related examples, first you should include an initial Markdown block within the Jupyter Notebook, e.g.:

---
title: Evaluate open LLMs with Vertex AI and Gemini
type: evaluation
---

Then update the following tables:

P.S. Sorry for the extra manual work; I'll try to automate it properly so that those tables are generated automatically, as the script I had was not consistent enough yet!
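The script alvarobartt mentions isn't shown in this thread, but the workflow it would automate follows from the comment above: read the frontmatter block from each notebook's first Markdown cell and emit a table row for the docs listing. A minimal, stdlib-only sketch (the function names and the table layout are assumptions, not the repository's actual script, and the frontmatter parsing is naive on purpose rather than a full YAML parser):

```python
import json
import re

# Matches a `--- ... ---` frontmatter block at the start of a Markdown cell.
FRONTMATTER_RE = re.compile(r"^---\s*\n(.*?)\n---\s*$", re.DOTALL | re.MULTILINE)

def parse_frontmatter(markdown: str) -> dict:
    """Naively parse `key: value` lines from a frontmatter block (no PyYAML)."""
    match = FRONTMATTER_RE.search(markdown)
    if not match:
        return {}
    meta = {}
    for line in match.group(1).splitlines():
        key, sep, value = line.partition(":")
        if sep:
            meta[key.strip()] = value.strip()
    return meta

def notebook_metadata(notebook_json: str) -> dict:
    """Extract frontmatter from the first Markdown cell of a .ipynb document."""
    nb = json.loads(notebook_json)
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "markdown":
            # Notebook cell sources are stored as a list of line strings.
            return parse_frontmatter("".join(cell["source"]))
    return {}

def table_row(meta: dict, path: str) -> str:
    """Render one Markdown row for the examples table (hypothetical layout)."""
    return f"| [{meta['title']}]({path}) | {meta['type']} |"
```

For the frontmatter shown above, `table_row` would produce a row linking the notebook title to its path, with `evaluation` in the category column.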

@HuggingFaceDocBuilderDev
Collaborator

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

Member

I think there was a mismatch between your main and the upstream main branch, and this content has been split into different files; so you should only add the table entry to the https://github.com/huggingface/Google-Cloud-Containers/blob/main/docs/source/resources.mdx file, while this file should remain unchanged.

P.S. Sorry for the inconvenience!

@philschmid philschmid merged commit 8d672f8 into main Sep 23, 2024
2 checks passed
@philschmid philschmid deleted the eval-example branch September 23, 2024 08:30