From b5c307f306ae91833cc721f85fe066b9eecd8965 Mon Sep 17 00:00:00 2001
From: Zi Wang
Date: Wed, 27 Sep 2023 21:48:44 -0700
Subject: [PATCH] adding additional setup

---
 website/docs/docs/build/python-models.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/website/docs/docs/build/python-models.md b/website/docs/docs/build/python-models.md
index c489b4e3333..a2bfcd9dd00 100644
--- a/website/docs/docs/build/python-models.md
+++ b/website/docs/docs/build/python-models.md
@@ -678,6 +678,8 @@ models:
 
 **Installing packages**: If you are using a Dataproc Cluster (as opposed to Dataproc Serverless), you can add third-party packages while creating the cluster with the [Spark BigQuery connector initialization action](https://github.com/GoogleCloudDataproc/initialization-actions/tree/master/connectors#bigquery-connectors). If you are using Dataproc Serverless, you can build your own [custom container image](https://cloud.google.com/dataproc-serverless/docs/guides/custom-containers#python_packages) with the packages you need.
 
+**Additional setup**: The user or role must have adequate IAM permissions to trigger a job through a Dataproc cluster or Dataproc Serverless.
+
 **Docs:**
 - [Dataproc overview](https://cloud.google.com/dataproc/docs/concepts/overview)
 - [PySpark DataFrame syntax](https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.DataFrame.html)
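
The added **Additional setup** line refers to IAM permissions without naming a role. As a hedged illustration (not part of the patch itself), a principal could be granted Dataproc job-submission rights with the gcloud CLI; the project ID and service account below are placeholders:

```bash
# Hypothetical setup sketch: grant the Dataproc Editor role so the principal
# can trigger jobs on a Dataproc cluster or submit Dataproc Serverless batches.
# PROJECT_ID and the service-account email are placeholders, not from the patch.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:dbt-runner@PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/dataproc.editor"
```

A narrower custom role (for example, one limited to `dataproc.batches.create` for Serverless-only use) may be preferable to the broad editor role, depending on the project's security posture.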