
feat: Automated regeneration of Dataflow client (#10909)
Auto-created at 2024-03-12 01:37:20 +0000 using the toys pull request generator.
yoshi-code-bot authored Mar 12, 2024
1 parent 5dd352f commit 2ed8062
Showing 36 changed files with 1,055 additions and 134 deletions.
92 changes: 10 additions & 82 deletions clients/dataflow/lib/google_api/dataflow/v1b3/api/projects.ex
@@ -159,7 +159,7 @@ defmodule GoogleApi.Dataflow.V1b3.Api.Projects do
end

@doc """
List the jobs of a project across all regions.
List the jobs of a project across all regions. **Note:** This method doesn't support filtering the list of jobs by name.
## Parameters
@@ -179,7 +179,7 @@ defmodule GoogleApi.Dataflow.V1b3.Api.Projects do
* `:upload_protocol` (*type:* `String.t`) - Upload protocol for media (e.g. "raw", "multipart").
* `:filter` (*type:* `String.t`) - The kind of filter to use.
* `:location` (*type:* `String.t`) - The [regional endpoint] (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job.
* `:name` (*type:* `String.t`) - Optional. The job name. Optional.
* `:name` (*type:* `String.t`) - Optional. The job name.
* `:pageSize` (*type:* `integer()`) - If there are many jobs, limit response to at most this many. The actual number of jobs returned will be the lesser of max_responses and an unspecified server-defined limit.
* `:pageToken` (*type:* `String.t`) - Set this to the 'next_page_token' field of a previous response to request additional results in a long list.
* `:view` (*type:* `String.t`) - Deprecated. ListJobs always returns summaries now. Use GetJob for other JobViews.
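The `pageSize`/`pageToken` parameters above follow the standard list-pagination contract: pass the `next_page_token` from one response as the `pageToken` of the next request until the token is absent. A minimal sketch in Python (`fetch_page` is a hypothetical stand-in for the actual HTTP call):

```python
def list_all_jobs(fetch_page):
    """Collect every job by following next_page_token until exhausted.

    `fetch_page` is a hypothetical callable that takes a page token
    (or None for the first page) and returns a response dict shaped
    like ListJobsResponse.
    """
    jobs, token = [], None
    while True:
        resp = fetch_page(token)
        jobs.extend(resp.get("jobs", []))
        token = resp.get("nextPageToken")
        if not token:  # a missing or empty token means the list is complete
            return jobs
```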
@@ -455,7 +455,7 @@ defmodule GoogleApi.Dataflow.V1b3.Api.Projects do
end

@doc """
List the jobs of a project. To list the jobs of a project in a region, we recommend using `projects.locations.jobs.list` with a [regional endpoint] (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints). To list all jobs across all regions, use `projects.jobs.aggregated`. Using `projects.jobs.list` is not recommended, as you can only get the list of jobs that are running in `us-central1`.
List the jobs of a project. To list the jobs of a project in a region, we recommend using `projects.locations.jobs.list` with a [regional endpoint] (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints). To list all jobs across all regions, use `projects.jobs.aggregated`. Using `projects.jobs.list` is not recommended, because you can only get the list of jobs that are running in `us-central1`. `projects.locations.jobs.list` and `projects.jobs.list` support filtering the list of jobs by name. Filtering by name isn't supported by `projects.jobs.aggregated`.
## Parameters
@@ -475,7 +475,7 @@ defmodule GoogleApi.Dataflow.V1b3.Api.Projects do
* `:upload_protocol` (*type:* `String.t`) - Upload protocol for media (e.g. "raw", "multipart").
* `:filter` (*type:* `String.t`) - The kind of filter to use.
* `:location` (*type:* `String.t`) - The [regional endpoint] (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job.
* `:name` (*type:* `String.t`) - Optional. The job name. Optional.
* `:name` (*type:* `String.t`) - Optional. The job name.
* `:pageSize` (*type:* `integer()`) - If there are many jobs, limit response to at most this many. The actual number of jobs returned will be the lesser of max_responses and an unspecified server-defined limit.
* `:pageToken` (*type:* `String.t`) - Set this to the 'next_page_token' field of a previous response to request additional results in a long list.
* `:view` (*type:* `String.t`) - Deprecated. ListJobs always returns summaries now. Use GetJob for other JobViews.
@@ -623,6 +623,7 @@ defmodule GoogleApi.Dataflow.V1b3.Api.Projects do
* `:uploadType` (*type:* `String.t`) - Legacy upload protocol for media (e.g. "media", "multipart").
* `:upload_protocol` (*type:* `String.t`) - Upload protocol for media (e.g. "raw", "multipart").
* `:location` (*type:* `String.t`) - The [regional endpoint] (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job.
* `:updateMask` (*type:* `String.t`) - The list of fields to update relative to Job. If empty, only RequestedJobState will be considered for update. If the FieldMask is not empty and RequestedJobState is none/empty, the fields specified in the update mask will be the only ones considered for update. If both RequestedJobState and update_mask are specified, an error will be returned, as we cannot update both state and mask.
* `:body` (*type:* `GoogleApi.Dataflow.V1b3.Model.Job.t`) -
* `opts` (*type:* `keyword()`) - Call options
@@ -662,6 +663,7 @@ defmodule GoogleApi.Dataflow.V1b3.Api.Projects do
:uploadType => :query,
:upload_protocol => :query,
:location => :query,
:updateMask => :query,
:body => :body
}
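The new `updateMask` parameter interacts with the job's requested state as documented above: an empty mask means only RequestedJobState is considered, a non-empty mask with no requested state means only the masked fields are considered, and supplying both is an error. An illustrative Python sketch of that rule (the function name and return shape are assumptions, not part of the API):

```python
def resolve_update(update_mask, requested_state):
    """Mirror the documented updateMask rules (illustrative only).

    Returns the list of fields the server would consider for update,
    or raises ValueError where the API documents an error.
    """
    if update_mask and requested_state:
        # Both specified: the API documents this as an error case.
        raise ValueError("cannot update both state and masked fields")
    if not update_mask:
        # Empty mask: only RequestedJobState is considered, if present.
        return ["requestedState"] if requested_state else []
    # Non-empty mask, no requested state: only the masked fields count.
    return update_mask.split(",")
```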

@@ -1556,7 +1558,7 @@ defmodule GoogleApi.Dataflow.V1b3.Api.Projects do
end

@doc """
List the jobs of a project. To list the jobs of a project in a region, we recommend using `projects.locations.jobs.list` with a [regional endpoint] (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints). To list all jobs across all regions, use `projects.jobs.aggregated`. Using `projects.jobs.list` is not recommended, as you can only get the list of jobs that are running in `us-central1`.
List the jobs of a project. To list the jobs of a project in a region, we recommend using `projects.locations.jobs.list` with a [regional endpoint] (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints). To list all jobs across all regions, use `projects.jobs.aggregated`. Using `projects.jobs.list` is not recommended, because you can only get the list of jobs that are running in `us-central1`. `projects.locations.jobs.list` and `projects.jobs.list` support filtering the list of jobs by name. Filtering by name isn't supported by `projects.jobs.aggregated`.
## Parameters
@@ -1576,7 +1578,7 @@ defmodule GoogleApi.Dataflow.V1b3.Api.Projects do
* `:uploadType` (*type:* `String.t`) - Legacy upload protocol for media (e.g. "media", "multipart").
* `:upload_protocol` (*type:* `String.t`) - Upload protocol for media (e.g. "raw", "multipart").
* `:filter` (*type:* `String.t`) - The kind of filter to use.
* `:name` (*type:* `String.t`) - Optional. The job name. Optional.
* `:name` (*type:* `String.t`) - Optional. The job name.
* `:pageSize` (*type:* `integer()`) - If there are many jobs, limit response to at most this many. The actual number of jobs returned will be the lesser of max_responses and an unspecified server-defined limit.
* `:pageToken` (*type:* `String.t`) - Set this to the 'next_page_token' field of a previous response to request additional results in a long list.
* `:view` (*type:* `String.t`) - Deprecated. ListJobs always returns summaries now. Use GetJob for other JobViews.
@@ -1740,6 +1742,7 @@ defmodule GoogleApi.Dataflow.V1b3.Api.Projects do
* `:quotaUser` (*type:* `String.t`) - Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
* `:uploadType` (*type:* `String.t`) - Legacy upload protocol for media (e.g. "media", "multipart").
* `:upload_protocol` (*type:* `String.t`) - Upload protocol for media (e.g. "raw", "multipart").
* `:updateMask` (*type:* `String.t`) - The list of fields to update relative to Job. If empty, only RequestedJobState will be considered for update. If the FieldMask is not empty and RequestedJobState is none/empty, the fields specified in the update mask will be the only ones considered for update. If both RequestedJobState and update_mask are specified, an error will be returned, as we cannot update both state and mask.
* `:body` (*type:* `GoogleApi.Dataflow.V1b3.Model.Job.t`) -
* `opts` (*type:* `keyword()`) - Call options
@@ -1780,6 +1783,7 @@ defmodule GoogleApi.Dataflow.V1b3.Api.Projects do
:quotaUser => :query,
:uploadType => :query,
:upload_protocol => :query,
:updateMask => :query,
:body => :body
}

@@ -2626,82 +2630,6 @@ defmodule GoogleApi.Dataflow.V1b3.Api.Projects do
|> Response.decode(opts ++ [struct: %GoogleApi.Dataflow.V1b3.Model.ListSnapshotsResponse{}])
end

@doc """
Validates a GoogleSQL query for Cloud Dataflow syntax. Will always confirm the given query parses correctly, and if able to look up schema information from DataCatalog, will validate that the query analyzes properly as well.
## Parameters
* `connection` (*type:* `GoogleApi.Dataflow.V1b3.Connection.t`) - Connection to server
* `project_id` (*type:* `String.t`) - Required. The ID of the Cloud Platform project that the job belongs to.
* `location` (*type:* `String.t`) - The [regional endpoint] (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request.
* `optional_params` (*type:* `keyword()`) - Optional parameters
* `:"$.xgafv"` (*type:* `String.t`) - V1 error format.
* `:access_token` (*type:* `String.t`) - OAuth access token.
* `:alt` (*type:* `String.t`) - Data format for response.
* `:callback` (*type:* `String.t`) - JSONP
* `:fields` (*type:* `String.t`) - Selector specifying which fields to include in a partial response.
* `:key` (*type:* `String.t`) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* `:oauth_token` (*type:* `String.t`) - OAuth 2.0 token for the current user.
* `:prettyPrint` (*type:* `boolean()`) - Returns response with indentations and line breaks.
* `:quotaUser` (*type:* `String.t`) - Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
* `:uploadType` (*type:* `String.t`) - Legacy upload protocol for media (e.g. "media", "multipart").
* `:upload_protocol` (*type:* `String.t`) - Upload protocol for media (e.g. "raw", "multipart").
* `:query` (*type:* `String.t`) - The sql query to validate.
* `opts` (*type:* `keyword()`) - Call options
## Returns
* `{:ok, %GoogleApi.Dataflow.V1b3.Model.ValidateResponse{}}` on success
* `{:error, info}` on failure
"""
@spec dataflow_projects_locations_sql_validate(
Tesla.Env.client(),
String.t(),
String.t(),
keyword(),
keyword()
) ::
{:ok, GoogleApi.Dataflow.V1b3.Model.ValidateResponse.t()}
| {:ok, Tesla.Env.t()}
| {:ok, list()}
| {:error, any()}
def dataflow_projects_locations_sql_validate(
connection,
project_id,
location,
optional_params \\ [],
opts \\ []
) do
optional_params_config = %{
:"$.xgafv" => :query,
:access_token => :query,
:alt => :query,
:callback => :query,
:fields => :query,
:key => :query,
:oauth_token => :query,
:prettyPrint => :query,
:quotaUser => :query,
:uploadType => :query,
:upload_protocol => :query,
:query => :query
}

request =
Request.new()
|> Request.method(:get)
|> Request.url("/v1b3/projects/{projectId}/locations/{location}/sql:validate", %{
"projectId" => URI.encode(project_id, &URI.char_unreserved?/1),
"location" => URI.encode(location, &URI.char_unreserved?/1)
})
|> Request.add_optional_params(optional_params_config, optional_params)
|> Request.library_version(@library_version)

connection
|> Connection.execute(request)
|> Response.decode(opts ++ [struct: %GoogleApi.Dataflow.V1b3.Model.ValidateResponse{}])
end

@doc """
Creates a Cloud Dataflow job from a template. Do not enter confidential information when you supply string values using the API.
5 changes: 1 addition & 4 deletions clients/dataflow/lib/google_api/dataflow/v1b3/connection.ex
@@ -31,10 +31,7 @@ defmodule GoogleApi.Dataflow.V1b3.Connection do
"https://www.googleapis.com/auth/compute",

# View your Google Compute Engine resources
"https://www.googleapis.com/auth/compute.readonly",

# See your primary Google Account email address
"https://www.googleapis.com/auth/userinfo.email"
"https://www.googleapis.com/auth/compute.readonly"
],
otp_app: :google_api_dataflow,
base_url: "https://dataflow.googleapis.com/"
2 changes: 1 addition & 1 deletion clients/dataflow/lib/google_api/dataflow/v1b3/metadata.ex
@@ -20,7 +20,7 @@ defmodule GoogleApi.Dataflow.V1b3 do
API client metadata for GoogleApi.Dataflow.V1b3.
"""

@discovery_revision "20221025"
@discovery_revision "20240303"

def discovery_revision(), do: @discovery_revision
end
@@ -0,0 +1,49 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# NOTE: This file is auto generated by the elixir code generator program.
# Do not edit this file manually.

defmodule GoogleApi.Dataflow.V1b3.Model.Base2Exponent do
@moduledoc """
Exponential buckets where the growth factor between buckets is `2**(2**-scale)`. e.g. for `scale=1` growth factor is `2**(2**(-1))=sqrt(2)`. `n` buckets will have the following boundaries. - 0th: [0, gf) - i in [1, n-1]: [gf^(i), gf^(i+1))
## Attributes
* `numberOfBuckets` (*type:* `integer()`, *default:* `nil`) - Must be greater than 0.
* `scale` (*type:* `integer()`, *default:* `nil`) - Must be between -3 and 3. This forces the growth factor of the bucket boundaries to be between `2^(1/8)` and `256`.
"""

use GoogleApi.Gax.ModelBase

@type t :: %__MODULE__{
:numberOfBuckets => integer() | nil,
:scale => integer() | nil
}

field(:numberOfBuckets)
field(:scale)
end

defimpl Poison.Decoder, for: GoogleApi.Dataflow.V1b3.Model.Base2Exponent do
def decode(value, options) do
GoogleApi.Dataflow.V1b3.Model.Base2Exponent.decode(value, options)
end
end

defimpl Poison.Encoder, for: GoogleApi.Dataflow.V1b3.Model.Base2Exponent do
def encode(value, options) do
GoogleApi.Gax.ModelBase.encode(value, options)
end
end
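The `Base2Exponent` moduledoc defines the growth factor as `2**(2**-scale)`, with bucket 0 covering `[0, gf)` and bucket `i` covering `[gf^i, gf^(i+1))`. That boundary rule can be computed directly; a small Python sketch (function name is our own, only the formula comes from the doc):

```python
def base2_exponent_bounds(number_of_buckets, scale):
    """Compute histogram bucket boundaries per the Base2Exponent doc:
    growth factor gf = 2**(2**-scale); bucket 0 is [0, gf),
    bucket i (1 <= i <= n-1) is [gf**i, gf**(i+1)).
    """
    if number_of_buckets <= 0:
        raise ValueError("numberOfBuckets must be greater than 0")
    if not -3 <= scale <= 3:
        raise ValueError("scale must be between -3 and 3")
    gf = 2.0 ** (2.0 ** -scale)
    bounds = [(0.0, gf)]
    for i in range(1, number_of_buckets):
        bounds.append((gf ** i, gf ** (i + 1)))
    return bounds
```

For `scale=0` the growth factor is 2, so three buckets span `[0,2)`, `[2,4)`, `[4,8)`; for `scale=1` it is `sqrt(2)`, matching the example in the moduledoc.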
@@ -0,0 +1,49 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# NOTE: This file is auto generated by the elixir code generator program.
# Do not edit this file manually.

defmodule GoogleApi.Dataflow.V1b3.Model.BucketOptions do
@moduledoc """
`BucketOptions` describes the bucket boundaries used in the histogram.
## Attributes
* `exponential` (*type:* `GoogleApi.Dataflow.V1b3.Model.Base2Exponent.t`, *default:* `nil`) - Bucket boundaries grow exponentially.
* `linear` (*type:* `GoogleApi.Dataflow.V1b3.Model.Linear.t`, *default:* `nil`) - Bucket boundaries grow linearly.
"""

use GoogleApi.Gax.ModelBase

@type t :: %__MODULE__{
:exponential => GoogleApi.Dataflow.V1b3.Model.Base2Exponent.t() | nil,
:linear => GoogleApi.Dataflow.V1b3.Model.Linear.t() | nil
}

field(:exponential, as: GoogleApi.Dataflow.V1b3.Model.Base2Exponent)
field(:linear, as: GoogleApi.Dataflow.V1b3.Model.Linear)
end

defimpl Poison.Decoder, for: GoogleApi.Dataflow.V1b3.Model.BucketOptions do
def decode(value, options) do
GoogleApi.Dataflow.V1b3.Model.BucketOptions.decode(value, options)
end
end

defimpl Poison.Encoder, for: GoogleApi.Dataflow.V1b3.Model.BucketOptions do
def encode(value, options) do
GoogleApi.Gax.ModelBase.encode(value, options)
end
end
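`BucketOptions` carries either an `exponential` or a `linear` boundary spec. The two fields read as alternatives (a oneof-style choice — an assumption on our part, not stated outright in the moduledoc); a hedged Python sketch of building such a payload:

```python
def bucket_options(exponential=None, linear=None):
    """Build a BucketOptions-shaped dict.

    Assumes exponential and linear are mutually exclusive alternatives,
    so exactly one must be supplied (illustrative only).
    """
    if (exponential is None) == (linear is None):
        raise ValueError("set exactly one of exponential or linear")
    if exponential is not None:
        return {"exponential": exponential}
    return {"linear": linear}
```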
@@ -0,0 +1,46 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# NOTE: This file is auto generated by the elixir code generator program.
# Do not edit this file manually.

defmodule GoogleApi.Dataflow.V1b3.Model.DataSamplingConfig do
@moduledoc """
Configuration options for sampling elements.
## Attributes
* `behaviors` (*type:* `list(String.t)`, *default:* `nil`) - List of given sampling behaviors to enable. For example, specifying behaviors = [ALWAYS_ON] samples in-flight elements but does not sample exceptions. Can be used to specify multiple behaviors, such as behaviors = [ALWAYS_ON, EXCEPTIONS] for both periodic sampling and exception sampling. If DISABLED is in the list, then sampling will be disabled and the other given behaviors ignored. Ordering does not matter.
"""

use GoogleApi.Gax.ModelBase

@type t :: %__MODULE__{
:behaviors => list(String.t()) | nil
}

field(:behaviors, type: :list)
end

defimpl Poison.Decoder, for: GoogleApi.Dataflow.V1b3.Model.DataSamplingConfig do
def decode(value, options) do
GoogleApi.Dataflow.V1b3.Model.DataSamplingConfig.decode(value, options)
end
end

defimpl Poison.Encoder, for: GoogleApi.Dataflow.V1b3.Model.DataSamplingConfig do
def encode(value, options) do
GoogleApi.Gax.ModelBase.encode(value, options)
end
end
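The `behaviors` semantics above (DISABLED overrides everything else, ordering and duplicates don't matter) can be captured in a few lines; a Python sketch of the documented resolution rule, with a function name of our own choosing:

```python
def effective_behaviors(behaviors):
    """Resolve a DataSamplingConfig behaviors list per its doc:
    DISABLED anywhere in the list turns sampling off entirely;
    otherwise ordering is irrelevant and duplicates collapse."""
    if behaviors is None or "DISABLED" in behaviors:
        return set()  # sampling disabled, other behaviors ignored
    return set(behaviors)
```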
