`PipelineResources` in a pipeline are the set of objects that are going to be used as inputs to a `Task` and can be output by a `Task`. A `Task` can have multiple inputs and outputs. For example:
- A `Task`'s input could be a GitHub source which contains your application code.
- A `Task`'s output can be your application container image which can then be deployed in a cluster.
- A `Task`'s output can be a jar file to be uploaded to a storage bucket.
Note: PipelineResources have not been promoted to Beta in tandem with Pipeline's other CRDs. This means that the level of support for `PipelineResources` remains Alpha and there are effectively no guarantees about the type's future. A number of known issues with the type have caused Tekton's developers to reassess it. For Beta-supported alternatives to PipelineResources, see the v1alpha1 to v1beta1 migration guide, which lists each PipelineResource type and a suggested option for replacing it.

For more information on why PipelineResources are remaining alpha, see the description of their problems, along with next steps, below.
To define a configuration file for a `PipelineResource`, you can specify the following fields:

- Required:
  - `apiVersion` - Specifies the API version, for example `tekton.dev/v1alpha1`.
  - `kind` - Specifies the `PipelineResource` resource object.
  - `metadata` - Specifies data to uniquely identify the `PipelineResource` object, for example a `name`.
  - `spec` - Specifies the configuration information for your `PipelineResource` resource object.
  - `type` - Specifies the `type` of the `PipelineResource`.
- Optional:
  - `description` - Description of the Resource.
  - `params` - Parameters which are specific to each type of `PipelineResource`.
  - `optional` - Boolean flag to mark a resource optional (by default, `optional` is set to `false`, making resources mandatory).
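Putting these fields together, a minimal `PipelineResource` manifest looks like the following sketch (the `name`, `description`, and `params` values here are illustrative, not taken from a real setup):

```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: example-resource        # any unique name (illustrative)
spec:
  type: git                     # one of the supported resource types
  description: "Example git resource"  # optional
  params:                       # params are specific to the chosen type
    - name: url
      value: https://github.com/example/repo.git  # hypothetical repo
```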
Resources can be used in Tasks and Conditions.

Input resources, like source code (git) or artifacts, are dumped at path `/workspace/task_resource_name` within a mounted volume and are available to all steps of your `Task`. The path that the resources are mounted at can be overridden with the `targetPath` field. Steps can use the `path` variable substitution key to refer to the local path to the mounted resource.
`Task` and `Condition` specs can refer to resource params as well as predefined variables such as `path` using the variable substitution syntax below, where `<name>` is the resource's name and `<key>` is one of the resource's params.

For an input resource in a `Task` spec:

```shell
$(resources.inputs.<name>.<key>)
```

Or for an output resource:

```shell
$(outputs.resources.<name>.<key>)
```

In a `Condition` spec, input resources can be accessed by:

```shell
$(resources.<name>.<key>)
```

The `path` key is predefined and refers to the local path to a resource on the mounted volume:

```shell
$(resources.inputs.<name>.path)
```
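For example, a step can use the `path` key to operate on the files of a fetched resource; a sketch, assuming a resource named `workspace` and an illustrative image:

```yaml
steps:
  - name: list-sources
    image: ubuntu          # illustrative image
    command: ["ls"]
    args:
      # expands to the resource's mount path, /workspace/workspace by default
      - "$(resources.inputs.workspace.path)"
```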
The optional field `targetPath` can be used to initialize a resource in a specific directory. If `targetPath` is set, the resource will be initialized under `/workspace/targetPath`. If `targetPath` is not specified, the resource will be initialized under `/workspace`. The following example demonstrates how a git input repository could be initialized in `$GOPATH` to run tests:
```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: task-with-input
  namespace: default
spec:
  resources:
    inputs:
      - name: workspace
        type: git
        targetPath: go/src/github.com/tektoncd/pipeline
  steps:
    - name: unit-tests
      image: golang
      command: ["go"]
      args:
        - "test"
        - "./..."
      workingDir: "/workspace/go/src/github.com/tektoncd/pipeline"
      env:
        - name: GOPATH
          value: /workspace/go
```
When specifying input and output `PipelineResources`, you can optionally specify `paths` for each resource. `paths` will be used by the `TaskRun` as the resource's new source paths, i.e., the resource is copied from the specified list of paths. The `TaskRun` expects the folder and contents to already be present in the specified paths. The `paths` feature can be used to provide extra files or an altered version of existing resources before the execution of steps.

The output resource includes the name and reference to the pipeline resource and, optionally, `paths`. `paths` will be used by the `TaskRun` as the resource's new destination paths, i.e., the resource is copied entirely to the specified paths. The `TaskRun` will be responsible for the creation of required directories and content transition. The `paths` feature can be used to inspect the results of a `TaskRun` after the execution of steps.

The `paths` feature for input and output resources is heavily used to pass the same version of resources across tasks in the context of a `PipelineRun`.
In the following example, a `Task` and `TaskRun` are defined with an input resource, an output resource, and a step which builds a war artifact. After the execution of the `TaskRun` (`volume-taskrun`), the `custom` volume will have the entire resource `java-git-resource` (including the war artifact) copied to the destination path `/custom/workspace/`.
```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: volume-task
  namespace: default
spec:
  resources:
    inputs:
      - name: workspace
        type: git
    outputs:
      - name: workspace
  steps:
    - name: build-war
      image: objectuser/run-java-jar #https://hub.docker.com/r/objectuser/run-java-jar/
      command: jar
      args: ["-cvf", "projectname.war", "*"]
      volumeMounts:
        - name: custom-volume
          mountPath: /custom
```
```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: volume-taskrun
  namespace: default
spec:
  taskRef:
    name: volume-task
  resources:
    inputs:
      - name: workspace
        resourceRef:
          name: java-git-resource
    outputs:
      - name: workspace
        paths:
          - /custom/workspace/
        resourceRef:
          name: java-git-resource
  podTemplate:
    volumes:
      - name: custom-volume
        emptyDir: {}
```
When resources are bound inside a `TaskRun`, they can include extra information in the `TaskRun` Status.ResourcesResult field. This information can be useful for auditing the exact resources used by a `TaskRun` later. Currently the Image and Git resources use this mechanism.

For an example of what this output looks like:
```yaml
resourcesResult:
  - key: digest
    value: sha256:a08412a4164b85ae521b0c00cf328e3aab30ba94a526821367534b81e51cb1cb
    resourceRef:
      name: skaffold-image-leeroy-web
```
The `description` field is optional and can be used to provide a description of the Resource.

By default, a resource is declared as mandatory unless `optional` is set to `true` for that resource. Resources declared as `optional` in a `Task` do not have to be specified in a `TaskRun`.
```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: task-check-optional-resources
spec:
  resources:
    inputs:
      - name: git-repo
        type: git
        optional: true
```
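Because `git-repo` is marked optional, a `TaskRun` can simply leave it out; a minimal sketch (the `TaskRun` name here is hypothetical):

```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: run-without-git-repo   # hypothetical name
spec:
  taskRef:
    name: task-check-optional-resources
  # no resources.inputs entry for git-repo is needed
```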
Similarly, resources declared as `optional` in a `Pipeline` do not have to be specified in a `PipelineRun`.
```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: pipeline-build-image
spec:
  resources:
    - name: workspace
      type: git
      optional: true
  tasks:
    - name: check-workspace
      ...
```
You can refer to different examples demonstrating usage of optional resources in `Task`, `Condition`, and `Pipeline`.
The `git` resource represents a git repository that contains the source code to be built by the pipeline. Adding the `git` resource as an input to a `Task` will clone this repository and allow the `Task` to perform the required actions on the contents of the repo.

To create a git resource using the `PipelineResource` CRD:
```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: wizzbang-git
  namespace: default
spec:
  type: git
  params:
    - name: url
      value: https://github.com/wizzbangcorp/wizzbang.git
    - name: revision
      value: master
```
Params that can be added are the following:

- `url`: represents the location of the git repository. You can use this to change the repo, e.g. to use a fork.
- `revision`: Git revision (branch, tag, commit SHA or ref) to clone. You can use this to control what commit or branch is used. `git checkout` is used to switch to the revision, and will result in a detached HEAD in most cases. Use `refspec` along with `revision` if you want to check out a particular branch without a detached HEAD. If no revision is specified, the resource inspects the remote repository to determine the correct default branch.
- `refspec`: (Optional) specify a git refspec to pass to git-fetch. Note that if this field is specified, it must specify all refs, branches, tags, or commits required to check out the specified `revision`. An additional fetch will not be run to obtain the contents of the revision field. If no refspec is specified, the value of the `revision` field will be fetched directly. The refspec is useful in manipulating the repository in several cases:
  - when the server does not support fetches via the commit SHA (i.e. does not have `uploadpack.allowReachableSHA1InWant` enabled) and you want to fetch and checkout a specific commit hash from a ref chain.
  - when you want to fetch several other refs alongside your revision (for instance, tags).
  - when you want to check out a specific branch; the revision and refspec fields can work together to set the destination of the incoming branch and switch to the branch.

  Examples:
  - Check out a specified revision commit SHA1 after fetching ref (detached):
    - `revision`: cb17eba165fe7973ef9afec20e7c6971565bd72f
    - `refspec`: refs/smoke/myref
  - Fetch all tags alongside refs/heads/master and switch to the master branch (not detached):
    - `revision`: master
    - `refspec`: "refs/tags/*:refs/tags/* +refs/heads/master:refs/heads/master"
  - Fetch the develop branch and switch to it (not detached):
    - `revision`: develop
    - `refspec`: refs/heads/develop:refs/heads/develop
  - Fetch refs/pull/1009/head into the master branch and switch to it (not detached):
    - `revision`: master
    - `refspec`: refs/pull/1009/head:refs/heads/master
- `submodules`: defines if the resource should initialize and fetch the submodules; value is either `true` or `false`. If not specified, this defaults to `true`.
- `depth`: performs a shallow clone where only the most recent commit(s) will be fetched. This setting also applies to submodules. If set to `'0'`, all commits will be fetched. If not specified, the default depth is 1.
- `sslVerify`: defines if `http.sslVerify` should be set to `true` or `false` in the global git config. Defaults to `true` if omitted.
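As a concrete sketch, the following resource combines several of these params to fetch tags alongside `master` and check out `master` without a detached HEAD (the resource name and repository URL are illustrative):

```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: wizzbang-git-with-refspec   # hypothetical name
spec:
  type: git
  params:
    - name: url
      value: https://github.com/wizzbangcorp/wizzbang.git
    - name: revision
      value: master
    - name: refspec
      value: "refs/tags/*:refs/tags/* +refs/heads/master:refs/heads/master"
    - name: depth
      value: "1"        # shallow clone (the default)
    - name: submodules
      value: "false"    # skip submodule initialization
```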
When used as an input, the Git resource includes the exact commit fetched in the `resourceResults` section of the `taskRun`'s status object:

```yaml
resourceResults:
  - key: commit
    value: 6ed7aad5e8a36052ee5f6079fc91368e362121f7
    resourceRef:
      name: skaffold-git
```
The `url` parameter can be used to point at any git repository, for example to use a GitHub fork at master:

```yaml
spec:
  type: git
  params:
    - name: url
      value: https://github.com/bobcatfish/wizzbang.git
```
The `revision` can be any git commit-ish (revision). You can use this to create a git `PipelineResource` that points at a branch, for example:

```yaml
spec:
  type: git
  params:
    - name: url
      value: https://github.com/wizzbangcorp/wizzbang.git
    - name: revision
      value: some_awesome_feature
```
To point at a pull request, you can use the pull request's branch:

```yaml
spec:
  type: git
  params:
    - name: url
      value: https://github.com/wizzbangcorp/wizzbang.git
    - name: revision
      value: refs/pull/52525/head
```
The `httpProxy` and `httpsProxy` parameters can be used to proxy non-SSL/SSL requests, for example to use an enterprise proxy server for SSL requests:

```yaml
spec:
  type: git
  params:
    - name: url
      value: https://github.com/bobcatfish/wizzbang.git
    - name: httpsProxy
      value: "my-enterprise.proxy.com"
```
The `noProxy` parameter can be used to opt out of proxying, for example, to not proxy HTTP/HTTPS requests to `no.proxy.com`:

```yaml
spec:
  type: git
  params:
    - name: url
      value: https://github.com/bobcatfish/wizzbang.git
    - name: noProxy
      value: "no.proxy.com"
```
Note: `httpProxy`, `httpsProxy`, and `noProxy` are all optional, but no validation is performed if all three are specified.
The `pullRequest` resource represents a pull request event from a source control system.

Adding the Pull Request resource as an input to a `Task` will populate the workspace with a set of files containing generic pull request related metadata such as base/head commit, comments, and labels. The payloads will also contain links to raw service-specific payloads where appropriate.

Adding the Pull Request resource as an output of a `Task` will update the source control system with any changes made to the pull request resource during the pipeline.
Example file structure:

```text
/workspace/
/workspace/<resource>/
/workspace/<resource>/labels/
/workspace/<resource>/labels/<label>
/workspace/<resource>/status/
/workspace/<resource>/status/<status>
/workspace/<resource>/comments/
/workspace/<resource>/comments/<comment>
/workspace/<resource>/head.json
/workspace/<resource>/base.json
/workspace/<resource>/pr.json
```
More details:

- Labels are empty files, named after the desired label string.
- Statuses describe pull request statuses. They are represented as a set of json files.
- References (head and base) describe Git references. They are represented as a set of json files.
- Comments describe pull request comments. They are represented as a set of json files. Add a file or modify the `Body` field in an existing json comment file to interact with the PR. Files with a json extension will be parsed as such. The content of any comment file(s) with other/no extensions will be treated as the body field of the comment.
- Other pull request information can be found in `pr.json`. This is a read-only resource. Users should use other subresources (labels, comments, etc.) to interact with the PR.
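For instance, a step can interact with the PR simply by writing files into the resource directory; a sketch, assuming a pull request resource named `pr` and an illustrative image:

```yaml
steps:
  - name: comment-and-label
    image: ubuntu            # illustrative image
    command: ["bash", "-c"]
    args:
      - |
        # Add a comment: the file contents become the comment body.
        echo "Tests passed!" > /workspace/pr/comments/test-result
        # Add a label: labels are empty files named after the label string.
        touch /workspace/pr/labels/approved
```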
For an example of the output this resource provides, see the example.
To create a pull request resource using the `PipelineResource` CRD:

```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: wizzbang-pr
  namespace: default
spec:
  type: pullRequest
  params:
    - name: url
      value: https://github.com/wizzbangcorp/wizzbang/pulls/1
  secrets:
    - fieldName: authToken
      secretName: github-secrets
      secretKey: token
---
apiVersion: v1
kind: Secret
metadata:
  name: github-secrets
type: Opaque
data:
  token: github_personal_access_token_secret # in base64 encoded form
```
Params that can be added are the following:

- `url`: represents the location of the pull request to fetch.
- `provider`: represents the SCM provider to use. This will be "guessed" based on the url if not set. Valid values are `github` or `gitlab` today.
- `insecure-skip-tls-verify`: represents whether to skip verification of certificates from the git server. Valid values are `"true"` or `"false"`, the default being `"false"`.
The following status codes are available to use for the Pull Request resource: https://godoc.org/github.com/jenkins-x/go-scm/scm#State

The `pullRequest` resource will look for GitHub or GitLab OAuth authentication tokens in spec secrets with a field name called `authToken`.

URLs should be of the form: https://github.com/tektoncd/pipeline/pull/1

The PullRequest resource works with self-hosted or enterprise GitHub/GitLab instances. Simply provide the pull request URL and set the `provider` parameter. If you need to skip certificate validation, set the `insecure-skip-tls-verify` parameter to `"true"`.
```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: wizzbang-pr
  namespace: default
spec:
  type: pullRequest
  params:
    - name: url
      value: https://github.example.com/wizzbangcorp/wizzbang/pulls/1
    - name: provider
      value: github
```
An `image` resource represents an image that lives in a remote repository. It is usually used as a `Task` output for `Tasks` that build images. This allows the same `Tasks` to be used to generically push to any registry.

Params that can be added are the following:

- `url`: The complete path to the image, including the registry and the image tag.
- `digest`: The image digest which uniquely identifies a particular build of an image with a particular tag. While this can be provided as a parameter, there is not yet a way to update this value after an image is built, but this is planned in #216.
For example:

```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: kritis-resources-image
  namespace: default
spec:
  type: image
  params:
    - name: url
      value: gcr.io/staging-images/kritis
```
To surface the image digest in the output of the `taskRun`, the builder tool should produce this information in an OCI Image Layout `index.json` file. This file should be placed at a location as specified in the task definition under the default resource directory, or the specified `targetPath`. If there is only one image in the `index.json` file, the digest of that image is exported; otherwise, the digest of the whole image index is exported. For example, this build-push task defines the `outputImageDir` for the `builtImage` resource in `/workspace/builtImage`:
```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-push
spec:
  resources:
    inputs:
      - name: workspace
        type: git
    outputs:
      - name: builtImage
        type: image
        targetPath: /workspace/builtImage
  steps: ...
```
If no value is specified for `targetPath`, it will default to `/workspace/output/{resource-name}`.

Please check the builder tool used for how to pass this path to create the output file.

The `taskRun` will include the image digest and URL in the `resourcesResult` field that is part of the `taskRun.Status`, for example:
```yaml
status:
  ...
  resourcesResult:
    - key: "digest"
      value: "sha256:eed29cd0b6feeb1a92bc3c4f977fd203c63b376a638731c88cacefe3adb1c660"
      resourceRef:
        name: skaffold-image-leeroy-web
  ...
```
If the `index.json` file is not produced, the image digest will not be included in the `taskRun` output.
A `cluster` resource represents a Kubernetes cluster other than the current cluster Tekton Pipelines is running on. A common use case for this resource is to deploy your application/function on different clusters.

The resource will use the provided parameters to create a kubeconfig file that can be used by other steps in the pipeline `Task` to access the target cluster. The kubeconfig will be placed in `/workspace/<your-cluster-name>/kubeconfig` on your `Task` container.
The Cluster resource has the following parameters:

- `url` (required): Host URL of the master node.
- `username` (required): the user with access to the cluster.
- `password`: to be used for clusters with basic auth.
- `namespace`: The namespace to target in the cluster.
- `token`: to be used for authentication; if present, it will be used ahead of the password.
- `insecure`: to indicate the server should be accessed without verifying the TLS certificate.
- `cadata` (required): holds PEM-encoded bytes (typically read from a root certificates bundle).
- `clientKeyData`: contains PEM-encoded data from a client key file for TLS.
- `clientCertificateData`: contains PEM-encoded data from a client cert file for TLS.
Note: Since only one authentication technique is allowed per user, either a `token` or a `password` should be provided; if both are provided, the `password` will be ignored.

`clientKeyData` and `clientCertificateData` are only required if `token` or `password` is not provided for authentication to the cluster.
The following example shows the syntax and structure of a `cluster` resource:

```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: test-cluster
spec:
  type: cluster
  params:
    - name: url
      value: https://10.10.10.10 # url to the cluster master node
    - name: cadata
      value: LS0tLS1CRUdJTiBDRVJ.....
    - name: token
      value: ZXlKaGJHY2lPaU....
```
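Conceptually, the resource assembles these parameters into a standard kubeconfig. A rough sketch of what the generated `/workspace/test-cluster/kubeconfig` might contain follows; the exact layout is produced by the resource, and all values here are illustrative:

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: test-cluster
    cluster:
      server: https://10.10.10.10          # from the url param
      certificate-authority-data: <cadata> # from the cadata param
users:
  - name: test-cluster
    user:
      token: <token>                       # from the token param (or password for basic auth)
contexts:
  - name: test-cluster
    context:
      cluster: test-cluster
      user: test-cluster
current-context: test-cluster
```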
For added security, you can add the sensitive information in a Kubernetes Secret and populate the kubeconfig from it.

For example, create a secret like the following example:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: target-cluster-secrets
data:
  cadatakey: LS0tLS1CRUdJTiBDRVJUSUZ......tLQo=
  tokenkey: ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbX....M2ZiCg==
```
and then apply the secrets to the cluster resource (note that the `secretKey` values must match the keys in the Secret's `data` field):

```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: test-cluster
spec:
  type: cluster
  params:
    - name: url
      value: https://10.10.10.10
    - name: username
      value: admin
  secrets:
    - fieldName: token
      secretKey: tokenkey
      secretName: target-cluster-secrets
    - fieldName: cadata
      secretKey: cadatakey
      secretName: target-cluster-secrets
```
Example usage of the `cluster` resource in a `Task`, using variable substitution:
```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: deploy-image
  namespace: default
spec:
  resources:
    inputs:
      - name: workspace
        type: git
      - name: dockerimage
        type: image
      - name: test-cluster
        type: cluster
  steps:
    - name: deploy
      image: image-with-kubectl
      command: ["bash"]
      args:
        - "-c"
        - kubectl --kubeconfig
          /workspace/$(resources.inputs.test-cluster.name)/kubeconfig --context
          $(resources.inputs.test-cluster.name) apply -f /workspace/service.yaml
```
To use the `cluster` resource with Google Kubernetes Engine, you should use the `cadata` authentication mechanism.

To determine the `cadata`, you can use the following `gcloud` command:

```shell
gcloud container clusters describe <cluster-name> --format='value(masterAuth.clusterCaCertificate)'
```
To create a secret with this information, you can use:

```shell
CADATA=$(gcloud container clusters describe <cluster-name> --format='value(masterAuth.clusterCaCertificate)')
kubectl create secret generic cluster-ca-data --from-literal=cadata=$CADATA
```
To retrieve the URL, you can use this `gcloud` command:

```shell
gcloud container clusters describe <cluster-name> --format='value(endpoint)'
```
Then to use these in a resource, reference the `cadata` from the secret you created above, and use the IP address from the `gcloud` command as your `url` (prefixed with https://):

```yaml
spec:
  type: cluster
  params:
    - name: url
      value: https://<ip address determined above>
  secrets:
    - fieldName: cadata
      secretName: cluster-ca-data
      secretKey: cadata
```
The `storage` resource represents blob storage that contains either an object or a directory. Adding the storage resource as an input to a `Task` will download the blob and allow the `Task` to perform the required actions on the contents of the blob.

Only the blob storage type Google Cloud Storage (GCS) is supported as of now, via the GCS storage resource and the BuildGCS storage resource.
The `gcs` storage resource points to a Google Cloud Storage blob.

To create a GCS type of storage resource using the `PipelineResource` CRD:
```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: wizzbang-storage
  namespace: default
spec:
  type: storage
  params:
    - name: type
      value: gcs
    - name: location
      value: gs://some-bucket
    - name: dir
      value: "y" # This can have any value to be considered "true"
```
Params that can be added are the following:

- `location`: represents the location of the blob storage.
- `type`: represents the type of blob storage. For the GCS storage resource this value should be set to `gcs`.
- `dir`: represents whether the blob storage is a directory or not. By default a storage artifact is not considered a directory.
  - If the artifact is a directory, then the `-r` (recursive) flag is used to copy all files under the source directory to a GCS bucket. E.g.: `gsutil cp -r source_dir/* gs://some-bucket`
  - If an artifact is a single file like a zip or tar, then the copy will be only 1 level deep (not recursive). It will not trigger a copy of sub-directories in the source directory. E.g.: `gsutil cp source.tar gs://some-bucket.tar`
Private buckets can also be configured as storage resources. To access GCS private buckets, service accounts with correct permissions are required. The `secrets` field on the storage resource is used for configuring this information. Below is an example of how to create a storage resource with a service account.

- Refer to the official documentation on how to create service accounts and configure IAM permissions to access buckets.

- Create a Kubernetes secret from a downloaded service account json key:

  ```shell
  kubectl create secret generic bucket-sa --from-file=./service_account.json
  ```

- To access the GCS private bucket, the environment variable `GOOGLE_APPLICATION_CREDENTIALS` should be set, so apply the above created secret to the GCS storage resource under the `fieldName` key:

  ```yaml
  apiVersion: tekton.dev/v1alpha1
  kind: PipelineResource
  metadata:
    name: wizzbang-storage
    namespace: default
  spec:
    type: storage
    params:
      - name: type
        value: gcs
      - name: location
        value: gs://some-private-bucket
      - name: dir
        value: "y"
    secrets:
      - fieldName: GOOGLE_APPLICATION_CREDENTIALS
        secretName: bucket-sa
        secretKey: service_account.json
  ```
The `build-gcs` storage resource points to a Google Cloud Storage blob like the GCS storage resource, either in the form of a `.zip` archive, or based on the contents of a source manifest file. In addition to fetching a `.zip` archive, BuildGCS also unzips it.

A Source Manifest File is a JSON object listing other objects in Cloud Storage that should be fetched. The format of the manifest is a mapping of the destination file path to the location in Cloud Storage where the file's contents can be found. The `build-gcs` resource can also do incremental uploads of sources via the Source Manifest File.

To create a `build-gcs` type of storage resource using the `PipelineResource` CRD:
```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: build-gcs-storage
  namespace: default
spec:
  type: storage
  params:
    - name: type
      value: build-gcs
    - name: location
      value: gs://build-crd-tests/rules_docker-master.zip
    - name: artifactType
      value: Archive
```
Params that can be added are the following:

- `location`: represents the location of the blob storage.
- `type`: represents the type of blob storage. For BuildGCS, this value should be set to `build-gcs`.
- `artifactType`: represents the type of `gcs` resource. Right now, we support the following types:
  - `ZipArchive`:
    - ZipArchive indicates that the resource fetched is an archive file in the zip format.
    - It unzips the archive and places all the files in the directory, which is set at runtime.
    - `Archive` is also supported and is equivalent to `ZipArchive`, but is deprecated.
  - `TarGzArchive`:
    - TarGzArchive indicates that the resource fetched is a gzipped archive file in the tar format.
    - It unarchives the tarball and places all the files in the directory, which is set at runtime.
  - `Manifest`:
    - Manifest indicates that the resource should be fetched using a source manifest file.
Private buckets other than the ones accessible by a TaskRun Service Account cannot be configured as `storage` resources for the `build-gcs` storage resource right now. This is because the container image `gcr.io/cloud-builders/gcs-fetcher` does not support configuring secrets.
The `cloudevent` resource represents a cloud event that is sent to a target URI upon completion of a `TaskRun`. The `cloudevent` resource sends Tekton-specific events; the body of the event includes the entire `TaskRun` spec plus status. The types of events defined for now are:

- dev.tekton.event.task.unknown
- dev.tekton.event.task.successful
- dev.tekton.event.task.failed

`cloudevent` resources are useful to notify a third party upon the completion and status of a `TaskRun`. In combination with the Tekton triggers project, they can be used to link `Task`/`PipelineRuns` asynchronously.
To create a CloudEvent resource using the `PipelineResource` CRD:

```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: event-to-sink
spec:
  type: cloudEvent
  params:
    - name: targetURI
      value: http://sink:8080
```
The content of an event is, for example:

```text
Context Attributes,
  SpecVersion: 0.2
  Type: dev.tekton.event.task.successful
  Source: /apis/tekton.dev/v1beta1/namespaces/default/taskruns/pipeline-run-api-16aa55-source-to-image-task-rpndl
  ID: pipeline-run-api-16aa55-source-to-image-task-rpndl
  Time: 2019-07-04T11:03:53.058694712Z
  ContentType: application/json
Transport Context,
  URI: /
  Host: my-sink.default.my-cluster.containers.appdomain.cloud
  Method: POST
Data,
  {
    "taskRun": {
      "metadata": {...},
      "spec": {
        "inputs": {...},
        "outputs": {...},
        "serviceAccount": "default",
        "taskRef": {
          "name": "source-to-image",
          "kind": "Task"
        },
        "timeout": "1h0m0s"
      },
      "status": {...}
    }
  }
```
The short answer is that they're not ready to be given a Beta level of support by Tekton's developers. The long answer is, well, longer:

- **Their behaviour can be opaque.** They're implemented as a mixture of injected Task Steps, volume configuration and type-specific code in Tekton Pipeline's controller. This means errors from `PipelineResources` can manifest in quite a few different ways and it's not always obvious whether an error directly relates to `PipelineResource` behaviour. This problem is compounded by the fact that, while our docs explain each Resource type's "happy path", there never seems to be enough info available to explain error cases sufficiently.
- **When they fail they're difficult to debug.** Several PipelineResources inject their own Steps before a `Task`'s Steps. It's extremely difficult to manually insert Steps before them to inspect the state of a container before they run.
- **There aren't enough of them.** The six types of existing PipelineResources only cover a tiny subset of the possible systems and side-effects we want to support with Tekton Pipelines.
- **Adding extensibility to them makes them really similar to `Tasks`:**
  - User-definable `Steps`? This is what `Tasks` provide.
  - User-definable params? `Tasks` already have these.
  - User-definable "resource results"? `Tasks` have `Task` Results.
  - Sharing data between Tasks using PVCs? `workspaces` provide this for `Tasks`.
- **They make `Tasks` less reusable.**
  - A `Task` has to choose the `type` of `PipelineResource` it will accept.
  - If a `Task` accepts a `git` `PipelineResource` then it's not able to accept a `gcs` `PipelineResource` from a `TaskRun` or `PipelineRun`, even though both the `git` and `gcs` `PipelineResources` fetch files. They should technically be interchangeable: all they do is write files from somewhere remote onto disk. Yet with the existing `PipelineResources` implementation they aren't interchangeable.
They also present challenges from a documentation perspective:

- Their purpose is ambiguous and it's difficult to articulate what the CRD is precisely for.
- Four of the types interact with external systems (git, pull-request, gcs, gcs-build).
- Five of them write files to a Task's disk (git, pull-request, gcs, gcs-build, cluster).
- One tells the Pipelines controller to emit CloudEvents to a specific endpoint (cloudEvent).
- One writes config to disk for a `Task` to use (cluster).
- One writes a digest in one `Task` and then reads it back in another `Task` (image).
- Perhaps the one thing you can say about the `PipelineResource` CRD is that it can create side-effects for your `Tasks`.
So what are PipelineResources still good for? We think we've identified some of the most important things:

- You can augment `Task`-only workflows with `PipelineResources` that, without them, can only be done with `Pipelines`.
  - For example, let's say you want to check out a git repo for your Task to test. You have two options. First, you could use a `git` PipelineResource and add it directly to your test `Task`. Second, you could write a `Pipeline` that has a `git-clone` `Task` which checks out the code onto a PersistentVolumeClaim `workspace` and then passes that PVC `workspace` to your test `Task`. For a lot of users the second workflow is totally acceptable but for others it isn't. Some of the most notable reasons we've heard are:
    - Some users simply cannot allocate storage on their platform, meaning `PersistentVolumeClaims` are out of the question.
    - Expanding a single `Task` workflow into a `Pipeline` is labor-intensive and feels unnecessary.
- Despite it being difficult to explain the whole CRD clearly, each individual `type` is relatively easy to explain.
  - For example, users can build a pretty good "hunch" for what a `git` `PipelineResource` is without really reading any docs.
- Configuring CloudEvents to be emitted by the Tekton Pipelines controller.
  - Work is ongoing to get notifications support into the Pipelines controller, which should hopefully be able to replace the `cloudEvents` `PipelineResource`.
- Work is ongoing to get notifications support into the Pipelines controller which should hopefully be able to replace the
For each of these there is some amount of ongoing work or discussion. It may be that `PipelineResources` can be redesigned to fix all of their problems, or it could be that the best features of `PipelineResources` can be extracted for use everywhere in Tekton Pipelines. So, given this state of affairs, `PipelineResources` are being kept out of beta until those questions are resolved.
For Beta-supported alternatives to PipelineResources see the v1alpha1 to v1beta1 migration guide which lists each PipelineResource type and a suggested option for replacing it.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License.