- Abstract
- Status of this document
- Overview
- Project Components
- Specification Details
- Core Concepts
- Workflow Definition
- Workflow Instance
- Workflow Model
- Workflow Data
- Workflow Functions
- Workflow Expressions
- Workflow Definition Structure
- Workflow States
- Related State Definition
- Function Definition
- Event Definition
- Auth Definition
- Correlation Definition
- OnEvents Definition
- Action Definition
- Subflow Action
- FunctionRef Definition
- EventRef Definition
- SubFlowRef Definition
- Error Definition
- Retry Definition
- Transition Definition
- Switch State Data Conditions
- Switch State Event Conditions
- Parallel State Branch
- Parallel State Handling Exceptions
- Start Definition
- Schedule Definition
- Cron Definition
- End Definition
- ProducedEvent Definition
- Transitions
- Additional Properties
- Workflow Error Handling
- Workflow Timeouts
- Workflow Compensation
- Workflow Versioning
- Workflow Constants
- Workflow Secrets
- Workflow Metadata
- Extensions
- Use Cases
- Examples
- Comparison to other workflow languages
- References
- License
The Serverless Workflow project defines a vendor-neutral and declarative workflow language, targeting the Serverless computing technology domain.
This document represents the current state of the specification. It includes all features so far released as well as all features planned to be added in the next release.
You can find all specification releases here. You can find the specification roadmap here.
Workflows allow us to capture and organize business requirements in a unified manner. They can bridge the gap between how we express and model business logic.
A key component of workflows is the domain-specific language (DSL) we use to model our business logic and solutions. Selecting the appropriate workflow language for our business and technology domains is an important decision.
Serverless Workflow focuses on defining a vendor-neutral, platform-independent, and declarative workflow language that targets the serverless computing technology domain. It can be used to significantly bridge the gap between your unique business domain and the target technology domain.
The lack of a common way to define and model workflows means that we must constantly re-learn how to write them. This also limits the potential for common libraries, tooling and infrastructure to aid workflow modeling and execution across different platforms. Overall, this hinders both the portability and the productivity that can be achieved through workflow orchestration.
Serverless Workflow addresses the need for a community-driven, vendor-neutral, and platform-independent workflow language specification that targets the serverless computing technology domain.
Having and using a specification-based workflow language allows us to model our workflows once and deploy them onto many different container/cloud platforms, expecting the same execution results.
For more information on the history, development and design rationale behind the specification, see the Serverless Workflow Wiki.
The Serverless Workflow language takes advantage of well-established standards such as CloudEvents, OpenAPI, gRPC, and GraphQL.
The specification has multiple components:
- Definitions of the workflow language. This is defined via the Workflow JSON Schema. You can use both JSON and YAML formats to model your workflows.
- Software Development Kits (SDKs) for both Go and Java, and we plan to add them for more languages in the future.
- Set of Workflow Extensions which allow users to define additional, non-execution-related workflow information. This information can be used to improve workflow performance. Some example workflow extensions include Key Performance Indicators (KPIs), Simulation, Tracing, etc.
- Technology Compatibility Kit (TCK) to be used as a specification conformance tool for runtime implementations.
The following sections provide detailed descriptions of all parts of the Serverless Workflow language.
This section describes some of the core Serverless Workflow concepts:
A workflow definition is a single artifact written in the Serverless Workflow language. It consists of the core Workflow Definition Structure and the Workflow Model. It defines a blueprint used by runtimes for its execution.
A business solution can be composed of any number of related workflow definitions. Their relationships are explicitly modeled with the Serverless Workflow language (for example by using SubFlowRef Definition in actions).
Runtimes can instantiate workflow definitions for a particular set of data inputs or events, forming workflow instances.
A workflow instance represents a single workflow execution according to the given workflow definition. Instances should be kept isolated, but may still have access to other running instances.
Depending on their workflow definition, workflow instances can be short-lived or can execute for days, weeks, or longer.
Each workflow instance should have a unique identifier, which should remain unchanged throughout its execution.
Workflow definitions can describe how and when workflow instances should be created via the `start` property. This property is described in detail in the start definition section. For example, instance creation can be defined for some set of data, but other ways are also possible: you can enforce instance creation upon arrival of certain events with a starting EventState, as well as on a defined schedule.
Workflow instance termination is also explicitly described in the workflow definition. By default, instances should be terminated once there are no active workflow paths (all active paths reach a state containing the default end definition). Other ways, such as using the `terminate` property of the end definition to terminate instance execution, or defining a `workflowExecTimeout` property, are also possible.

This default behavior can be changed by setting the `keepActive` workflow property to `true`. In this case the only way to terminate a workflow instance is for its control flow to explicitly end with a "terminate" end definition, or for the defined `workflowExecTimeout` time to be reached.
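For example, a minimal sketch of a workflow definition that is kept alive to collect event data (the identifier, timeout value, and states are illustrative placeholders):

{
  "id": "eventCollector",
  "name": "Event Collector Workflow",
  "version": "1.0",
  "specVersion": "0.7",
  "keepActive": true,
  "timeouts": {
    "workflowExecTimeout": "PT24H"
  },
  "start": "CollectEvents",
  "states": [
    ...
  ]
}

With this definition, instances are not terminated when all execution paths complete; they end only via a "terminate" end definition or once the defined `workflowExecTimeout` expires.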
The Serverless Workflow language is composed of:
- Function definitions - Reusable functions that can declare services that need to be invoked, or expressions to be evaluated.
- Event definitions - Reusable declarations of events that need to be consumed to start or continue workflow instances, trigger function/service execution, or be produced during workflow execution.
- Retry definitions - Reusable retry definitions. Can specify retry strategies for service invocations during workflow execution.
- State definitions - Definitions of states, the building blocks of workflow control flow logic. States can reference the reusable function, event and retry definitions.
Serverless Workflow data is represented in JSON format. Data flow and execution logic go hand in hand, meaning as workflow execution follows the workflow definition logic, so does the workflow data:
The initial Workflow data input is passed to the workflow starting state as its data input. When a state finishes its execution, its data output is passed as data input to the next state that should be executed.
When workflow execution ends, the last executed workflow state's data output becomes the final Workflow data output.
States can filter their data inputs and outputs using State Data filters.
States can also consume events as well as invoke services. These event payloads and service invocation results can be filtered using Event data filters and Action data filters.
Data filters use workflow expressions for selecting and manipulating state data input and output, action inputs and results, and event payloads.
Multiple filters can be combined to gain a high level of control over your workflow state data. You can find an example of that in this section.
Data from consumed events and action execution results is added/merged to state data. Reference the data merging section to learn about the merging rules that should be applied.
The initial data input into a workflow instance. Must be a valid JSON object. If no input is provided, the default data input should be an empty JSON object:
{ }
Workflow data input is passed to the workflow starting state as its data input.
States in a workflow can receive data (data input) and produce a data result (data output). The state's data input is typically the previous state's data output. When a state completes its execution, its data output is passed as data input to the state it transitions to. There are two rules to consider here:
- If the state is the workflow starting state, its data input is the workflow data input.
- When workflow execution ends, the data output of the last executed state becomes the workflow data output.
Each workflow execution should produce a data output. The workflow data output is the data output of the last executed workflow state.
Parameter | Description | Type | Required |
---|---|---|---|
input | Workflow expression to filter the state's data input | string | no |
output | Workflow expression that filters the state's data output | string | no |
Click to view example definition
JSON:
{
"stateDataFilter": {
"input": "${ .orders }",
"output": "${ .provisionedOrders }"
}
}

YAML:
stateDataFilter:
input: "${ .orders }"
output: "${ .provisionedOrders }"
State data filters can be used to filter the state's data input and output.
The state data filter's `input` property expression is applied when the workflow transitions to the current state and receives its data input. It can be used to select only the data that is needed and disregard what is not needed. If `input` is not defined or does not select any parts of the state's data input, its data input is not filtered.

The state data filter's `output` property expression is applied right before the state transitions to the next state defined. It filters the state's data output to be passed as data input to the transitioning state. If the current state is the workflow end state, the filtered state's data output becomes the workflow data output. If `output` is not defined or does not select any parts of the state's data output, its data output is not filtered.

Results of the `input` expression should become the state data. Results of the `output` expression should become the state data output.

For more information on this you can reference the data merging section.
Let's take a look at some examples of state filters. For our examples let's say the data input to our state is as follows:
{
"fruits": [ "apple", "orange", "pear" ],
"vegetables": [
{
"veggieName": "potato",
"veggieLike": true
},
{
"veggieName": "broccoli",
"veggieLike": false
}
]
}
For the first example, our state only cares about fruits data, and we want to disregard the vegetables. To do this we can define a state filter:
{
"stateDataFilter": {
"input": "${ {fruits: .fruits} }"
}
}
The state data output then would include only the fruits data:
{
"fruits": [ "apple", "orange", "pear"]
}
For our second example, let's say that we are interested only in the vegetables that are "veggie-like". Here we have two ways of filtering our data, depending on whether actions within our state need access to all vegetables, or only the ones that are "veggie-like".
The first way would be to use both "input", and "output":
{
"stateDataFilter": {
"input": "${ {vegetables: .vegetables} }",
"output": "${ {vegetables: .vegetables[] | select(.veggieLike == true)} }"
}
}
The state's data input filter selects all the vegetables from the main data input. Once all actions have been performed, before the state transition or workflow execution completion (if this is an end state), the "output" of the state filter selects only the vegetables which are "veggie-like".
The second way would be to directly filter only the "veggie like" vegetables with just the data input path:
{
"stateDataFilter": {
"input": "${ {vegetables: .vegetables[] | select(.veggieLike == true)} }"
}
}
Parameter | Description | Type | Required |
---|---|---|---|
fromStateData | Workflow expression that filters state data that can be used by the action | string | no |
results | Workflow expression that filters the action's data results | string | no |
toStateData | Workflow expression that selects a state data element to which the action results should be added/merged into. If not specified denotes the top-level state data element | string | no |
Click to view example definition
JSON:
{
"actionDataFilter": {
"fromStateData": "${ .language }",
"results": "${ .results.greeting }",
"toStateData": "${ .finalgreeting }"
}
}

YAML:
actionDataFilter:
fromStateData: "${ .language }"
results: "${ .results.greeting }"
toStateData: "${ .finalgreeting }"
Action data filters can be used inside Action definitions. Each action can define this filter which can:
- Filter the state data to select only the data that can be used within function definition arguments using its `fromStateData` property.
- Filter the action results to select only the result data that should be added/merged back into the state data using its `results` property.
- Select the part of state data which the action data results should be added/merged to using the `toStateData` property.
To give an example, let's say we have an action which returns a list of breads and pasta types. For our workflow, we are only interested in the breads and not the pasta.
Action results:
{
"breads": ["baguette", "brioche", "rye"],
"pasta": [ "penne", "spaghetti", "ravioli"]
}
We can use an action data filter to filter only the breads data:
{
"actions":[
{
"functionRef": "breadAndPastaTypesFunction",
"actionDataFilter": {
"results": "${ {breads: .breads} }"
}
}
]
}
The `results` expression will filter the action results, which would then be:
{
"breads": [
"baguette",
"brioche",
"rye"
]
}
Now let's take a look at a similar example (same expected action results) and assume our current state data is:
{
"itemsToBuyAtStore": [
]
}
and have the following action definition:
{
"actions":[
{
"functionRef": "breadAndPastaTypesFunction",
"actionDataFilter": {
"results": "${ [ .breads[0], .pasta[1] ] }",
"toStateData": "${ .itemsToBuyAtStore }"
}
}
]
}
In this case, our `results` expression selects the first bread and the second element of the pasta array. The `toStateData` expression then selects the `itemsToBuyAtStore` array of the state data to add/merge these results into. With this, after our action executes, the state data would be:
{
"itemsToBuyAtStore": [
"baguette",
"spaghetti"
]
}
Parameter | Description | Type | Required |
---|---|---|---|
data | Workflow expression that filters the event data (payload) | string | no |
toStateData | Workflow expression that selects a state data element to which the event payload should be added/merged into. If not specified denotes the top-level state data element | string | no |
Click to view example definition
JSON:
{
"eventDataFilter": {
"data": "${ .data.results }"
}
}

YAML:
eventDataFilter:
data: "${ .data.results }"
Event data filters can be used to filter consumed event payloads. They can be used to:
- Filter the event payload to select only the data that should be added/merged into the state data using its `data` property.
- Select the part of state data into which the event payload should be added/merged using the `toStateData` property.
Allows event data to be filtered and added to or merged with the state data. All events have to be in the CloudEvents format, and event data filters can filter both context attributes and the event payload (data) using the `data` property.

Note that the data input to event data filters depends on the `dataOnly` property of the associated Event definition. If this property is not defined (it has a default value of `true`), event data filter expressions are evaluated against the event payload (the CloudEvents `data` attribute only). If it is set to `false`, the expressions should be evaluated against the entire CloudEvent (including its context attributes).
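For example, consider an event definition that sets `dataOnly` to `false` (a minimal sketch; the event name, type, and source are illustrative):

{
  "events": [
    {
      "name": "OrderEvent",
      "type": "org.orders",
      "source": "ordersSource",
      "dataOnly": false
    }
  ]
}

An event data filter associated with this event could then evaluate its expressions against the whole CloudEvent, for instance selecting the "source" context attribute and merging it into state data:

{
  "eventDataFilter": {
    "data": "${ .source }",
    "toStateData": "${ .eventSource }"
  }
}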
Event states can take advantage of all defined data filters. In the example below, we define a workflow with a single event state and show how data filters can be combined.
{
"id": "GreetCustomersWorkflow",
"name": "Greet Customers when they arrive",
"version": "1.0",
"specVersion": "0.7",
"start": "WaitForCustomerToArrive",
"states":[
{
"name": "WaitForCustomerToArrive",
"type": "event",
"onEvents": [{
"eventRefs": ["CustomerArrivesEvent"],
"eventDataFilter": {
"data": "${ .customer }",
"toStateData": "${ .customerInfo }"
},
"actions":[
{
"functionRef": {
"refName": "greetingFunction",
"arguments": {
"greeting": "${ .spanish } ",
"customerName": "${ .customerInfo.name } "
}
},
"actionDataFilter": {
"fromStateData": "${ .hello }",
"results": "${ .greetingMessageResult }",
"toStateData": "${ .finalCustomerGreeting }"
}
}
]
}],
"stateDataFilter": {
"input": "${ .greetings } ",
"output": "${ .finalCustomerGreeting }"
},
"end": true
}
],
"events": [{
"name": "CustomerArrivesEvent",
"type": "customer-arrival-type",
"source": "customer-arrival-event-source"
}],
"functions": [{
"name": "greetingFunction",
"operation": "http://my.api.org/myapi.json#greeting"
}]
}
The workflow data input when starting workflow execution is assumed to include greetings in different languages:
{
"greetings": {
"hello": {
"english": "Hello",
"spanish": "Hola",
"german": "Hallo",
"russian": "Здравствуйте"
},
"goodbye": {
"english": "Goodbye",
"spanish": "Adiós",
"german": "Auf Wiedersehen",
"russian": "Прощай"
}
}
}
The workflow data input then becomes the data input of the starting workflow state.
We also assume for this example that the CloudEvent that our event state consumes includes the following data (payload):
{
"customer": {
"name": "John Michaels",
"address": "111 Some Street, SomeCity, SomeCountry",
"age": 40
}
}
Here is a sample diagram showing our workflow. Each numbered step on this diagram shows a certain defined point during workflow execution at which data filters are invoked, and corresponds to the numbered items below.
(1) Workflow execution starts: Workflow data is passed to our "WaitForCustomerToArrive" event state as data input. Workflow executes its starting state, namely the "WaitForCustomerToArrive" event state.
The event state stateDataFilter is invoked to filter its data input. The filter's "input" expression is evaluated and selects only the "greetings" data. The rest of the state data input should be disregarded.
At this point our state data should be:
{
"hello": {
"english": "Hello",
"spanish": "Hola",
"german": "Hallo",
"russian": "Здравствуйте"
},
"goodbye": {
"english": "Goodbye",
"spanish": "Adiós",
"german": "Auf Wiedersehen",
"russian": "Прощай"
}
}
(2) CloudEvent of type "customer-arrival-type" is consumed: Once the event is consumed, the "eventDataFilter" is triggered. Its "data" expression selects the "customer" object from the event's data. The "toStateData" expression says that we should add/merge this selected event data to the state data in its "customerInfo" property. If this property exists it should be merged, if it does not exist, one should be created.
At this point our state data contains:
{
"hello": {
"english": "Hello",
"spanish": "Hola",
"german": "Hallo",
"russian": "Здравствуйте"
},
"goodbye": {
"english": "Goodbye",
"spanish": "Adiós",
"german": "Auf Wiedersehen",
"russian": "Прощай"
},
"customerInfo": {
"name": "John Michaels",
"address": "111 Some Street, SomeCity, SomeCountry",
"age": 40
}
}
(3) Event state performs its actions: Before the first action is executed, its actionDataFilter is invoked. Its "fromStateData" expression filters the current state data to select only the data that should be available to the action's arguments. In this example it selects the "hello" property from the current state data. At this point the action is executed. We assume that for this example "greetingFunction" returns:
{
"execInfo": {
"execTime": "10ms",
"failures": false
},
"greetingMessageResult": "Hola John Michaels!"
}
After the action is executed, the actionDataFilter "results" expression is evaluated to filter the results returned from the action execution. In this case, we select only the "greetingMessageResult" element from the results.
The action filter's "toStateData" expression then defines that we want to add/merge this action result to state data under the "finalCustomerGreeting" element.
At this point, our state data contains:
{
"hello": {
"english": "Hello",
"spanish": "Hola",
"german": "Hallo",
"russian": "Здравствуйте"
},
"goodbye": {
"english": "Goodbye",
"spanish": "Adiós",
"german": "Auf Wiedersehen",
"russian": "Прощай"
},
"customerInfo": {
"name": "John Michaels",
"address": "111 Some Street, SomeCity, SomeCountry",
"age": 40
},
"finalCustomerGreeting": "Hola John Michaels!"
}
(4) Event State Completes Execution:
When our event state finishes its execution, the state's stateDataFilter "output" expression is executed to filter the state data and create the final state data output.
Because our event state is also an end state, its data output becomes the final workflow data output. Namely:
{
"finalCustomerGreeting": "Hola John Michaels!"
}
Consumed event data (payload) and action execution results should be merged into the state data. Event and action data filters can be used to give more details about this operation.
By default, with no data filters specified, when an event is consumed, its entire data section (payload) should be merged to the state data. Merging should be applied to the entire state data JSON element.
In case of event and action filters, their "toStateData" property can be defined to select a specific element of the state data against which merging should be performed. If this element does not exist, a new one should be created first.
When merging, the state data element and the data (payload)/action result should have the same type, meaning that you should not merge arrays with objects or objects with arrays etc.
Merging elements of type object should be done by inserting all the key-value pairs from both objects into a single combined object. If both objects contain a value for the same key, the value from the event data/action results should "win". To give an example, let's say we have the following state data:
{
"customer": {
"name": "John",
"address": "1234 street",
"zip": "12345"
}
}
and we have the following event payload that needs to be merged into the state data:
{
"customer": {
"name": "John",
"zip": "54321"
}
}
After merging the state data should be:
{
"customer": {
"name": "John",
"address": "1234 street",
"zip": "54321"
}
}
Merging array types should be done by concatenating them into a larger array including unique elements of both arrays. To give an example, merging:
{
"customers": [
{
"name": "John",
"address": "1234 street",
"zip": "12345"
},
{
"name": "Jane",
"address": "4321 street",
"zip": "54321"
}
]
}
into state data:
{
"customers": [
{
"name": "Michael",
"address": "6789 street",
"zip": "6789"
}
]
}
should produce state data:
{
"customers": [
{
"name": "Michael",
"address": "6789 street",
"zip": "6789"
},
{
"name": "John",
"address": "1234 street",
"zip": "12345"
},
{
"name": "Jane",
"address": "4321 street",
"zip": "54321"
}
]
}
Merging number types should be done by overwriting the data from events data/action results into the merging element of the state data. For example merging action results:
{
"age": 30
}
into state data:
{
"age": 20
}
would produce state data:
{
"age": 30
}
Merging string types should be done by overwriting the data from events data/action results into the merging element of the state data.
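Following the same rule, merging the action result:
{
  "name": "Anna"
}
into state data:
{
  "name": "John"
}
would produce state data:
{
  "name": "Anna"
}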
Workflow functions are reusable definitions for service invocations and/or expression evaluation. They can be referenced by their domain-specific names inside workflow states.
Reference the following sections to learn more about workflow functions:
- Using functions for RESTful service invocations
- Using functions for gRPC service invocation
- Using functions for GraphQL service invocation
- Using functions for expression evaluations
Functions can be used to describe services and their operations that need to be invoked during workflow execution. They can be referenced by states' action definitions to clearly define when the service operations should be invoked during workflow execution, as well as the data parameters passed to them if needed.
Note that with Serverless Workflow, we can also define service invocations via events. To learn more about that, please reference the event definitions section, as well as the actions definitions eventRef property.
Because of an overall lack of a common way to describe different services and their operations, many workflow languages have typically chosen to define custom function definitions. This approach, however, often runs into issues such as lack of portability, limited capabilities, as well as forcing non-workflow-specific information, such as service authentication, to be added inside the workflow language.
To avoid these issues, the Serverless Workflow specification mandates that details about RESTful services and their operations be described using the OpenAPI Specification. OpenAPI is a language-agnostic standard that describes discovery of RESTful services. This allows the Serverless Workflow language to describe RESTful services in a portable way, as well as workflow runtimes to utilize OpenAPI tooling and APIs to invoke service operations.
Here is an example function definition for a RESTful service operation.
{
"functions": [
{
"name": "sendOrderConfirmation",
"operation": "file://confirmationapi.json#sendOrderConfirmation"
}
]
}
It can, as previously mentioned, be referenced during workflow execution when the invocation of this service is desired. For example:
{
"states": [
{
"name":"SendConfirmState",
"type":"operation",
"actions":[
{
"functionRef": "sendOrderConfirmation"
}],
"end": true
}]
}
Note that the referenced function definition type in this case must be `rest` (the default type).
For more information about functions, reference the Functions definitions section.
Similar to defining invocations of operations on RESTful services, you can also use the workflow functions definitions that follow the remote procedure call (RPC) protocol. For RPC invocations, the Serverless Workflow specification mandates that they are described using gRPC, a widely used RPC system. gRPC uses Protocol Buffers to define messages, services, and the methods on those services that can be invoked.
Let's look at an example of invoking a service method using RPC. For this example let's say we have the following gRPC protocol buffer definition in a myuserservice.proto file:
service UserService {
rpc AddUser(User) returns (google.protobuf.Empty) {
option (google.api.http) = {
post: "/api/v1/users"
body: "*"
};
}
rpc ListUsers(ListUsersRequest) returns (stream User) {
option (google.api.http) = {
get: "/api/v1/users"
};
}
rpc ListUsersByRole(UserRole) returns (stream User) {
option (google.api.http) = {
get: "/api/v1/users/role"
};
}
rpc UpdateUser(UpdateUserRequest) returns (User) {
option (google.api.http) = {
patch: "/api/v1/users/{user.id}"
body: "*"
};
}
}
In our workflow definition, we can then use function definitions:
{
"functions": [
{
"name": "listUsers",
"operation": "file://myuserservice.proto#UserService#ListUsers",
"type": "rpc"
}
]
}
Note that the `operation` property has the following format:

<URI_to_proto_file>#<Service_Name>#<Service_Method_Name>

Note that the referenced function definition type in this case must be `rpc`.
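For illustration, the "listUsers" function could then be invoked from a state action, just like a "rest" type function (a minimal sketch; the state name is hypothetical):

{
  "states": [
    {
      "name": "GetUsers",
      "type": "operation",
      "actions": [
        {
          "functionRef": "listUsers"
        }
      ],
      "end": true
    }
  ]
}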
For more information about functions, reference the Functions definitions section.
If you want to use GraphQL services, you can also invoke them using a similar syntax to the above methods.
We'll use the following GraphQL schema definition to show how that would work with both a query and a mutation:
type Query {
pets: [Pet]
pet(id: Int!): Pet
}
type Mutation {
createPet(pet: PetInput!): Pet
}
type Treat {
id: Int!
}
type Pet {
id: Int!
name: String!
favoriteTreat: Treat
}
input PetInput {
id: Int!
name: String!
favoriteTreatId: Int
}
In our workflow definition, we can then use a function definition for the `pet` query field as such:
{
"functions": [
{
"name": "getOnePet",
"operation": "https://example.com/pets/graphql#query#pet",
"type": "graphql"
}
]
}
Note that the `operation` property has the following format for the `graphql` type:

<url_to_graphql_endpoint>#<literal "mutation" or "query">#<mutation_or_query_field>

In order to invoke this query, we would use the following `functionRef` parameters:
{
"refName": "getOnePet",
"arguments": {
"id": 42
},
"selectionSet": "{ id, name, favoriteTreat { id } }"
}
Which would return the following result:
{
"pet": {
"id": 42,
"name": "Snuffles",
"favoriteTreat": {
"id": 9001
}
}
}
Likewise, we would use the following function definition:
{
"functions": [
{
"name": "createPet",
"operation": "https://example.com/pets/graphql#mutation#createPet",
"type": "graphql"
}
]
}
With the following parameters for the `functionRef`:
{
"refName": "createPet",
"arguments": {
"pet": {
"id": 43,
"name":"Sadaharu",
"favoriteTreatId": 9001
}
},
"selectionSet": "{ id, name, favoriteTreat { id } }"
}
Which would execute the mutation, creating the object and returning the following data:
{
"pet": {
"id": 43,
"name": "Sadaharu",
"favoriteTreat": {
"id": 9001
}
}
}
Note you can include expressions in both `arguments` and `selectionSet`:
{
"refName": "getOnePet",
"arguments": {
"id": "${ .petId }"
},
"selectionSet": "{ id, name, age(useDogYears: ${ .isPetADog }) { dateOfBirth, years } }"
}
Expressions must be evaluated before executing the operation.
Note that GraphQL Subscriptions are not supported at this time.
For more information about functions, reference the Functions definitions section.
In addition to defining RESTful, RPC and GraphQL services and their operations, workflow functions definitions can also be used to define expressions that should be evaluated during workflow execution.
Defining expressions as part of function definitions has the benefit of being able to reference them by their logical name through workflow states where expression evaluation is required.
Expression functions must declare their `type` parameter to be `expression`.
Let's take a look at an example of such definitions:
{
"functions": [
{
"name": "isAdult",
"operation": ".applicant | .age >= 18",
"type": "expression"
},
{
"name": "isMinor",
"operation": ".applicant | .age < 18",
"type": "expression"
}
]
}
Here we define two reusable expression functions. Expressions in Serverless Workflow can be evaluated against the workflow, or workflow state data. Note that different data filters play a big role as to which parts of the workflow data are being evaluated by the expressions. Reference the State Data Filters section for more information on this.
Our expression function definitions can now be referenced by workflow states when they need to be evaluated. For example:
{
"states":[
{
"name":"CheckApplicant",
"type":"switch",
"dataConditions": [
{
"name": "Applicant is adult",
"condition": "${ fn:isAdult }",
"transition": "ApproveApplication"
},
{
"name": "Applicant is minor",
"condition": "${ fn:isMinor }",
"transition": "RejectApplication"
}
],
"defaultCondition": {
"transition": "RejectApplication"
}
}
]
}
Our expression functions can also be referenced and executed as part of state action execution. Let's say we have the following workflow definition:
{
"name": "simpleadd",
"functions": [
{
"name": "Increment Count Function",
"type": "expression",
"operation": ".count += 1 | .count"
}
],
"start": "Initialize Count",
"states": [
{
"name": "Initialize Count",
"type": "inject",
"data": {
"count": 0
},
"transition": "Increment Count"
},
{
"name": "Increment Count",
"type": "operation",
"actions": [
{
"functionRef": "Increment Count Function",
"actionFilter": {
"toStateData": "${ .count }"
}
}
],
"end": true
}
]
}
The starting inject state "Initialize Count" injects the count element into our state data, which then becomes the state data input of our "Increment Count" operation state. This state defines an invocation of the "Increment Count Function" expression function defined in our workflow definition.
This triggers the evaluation of the defined expression. The input of this expression is by default the current state data. Just like with "rest" and "rpc" type functions, expression functions also produce a result. In this case the result of the expression is just the number 1. The action's data filter then assigns this result to the state data element "count" and the state data becomes:
{
"count": 1
}
Note that the used function definition type in this case must be `expression`.
For more information about functions, reference the Functions definitions section.
For more information about workflow expressions, reference the Workflow Expressions section.
Workflow model parameters can use expressions to select/manipulate workflow and/or state data.
Note that different data filters play a big role as to which parts of the state's data are to be used when the expression is evaluated. Reference the State Data Filtering section for more information about state data filters.
By default, all workflow expressions should be defined using the jq version 1.6 syntax. You can find more information on jq in its manual.
Serverless Workflow does not mandate the use of jq and it's possible to use an expression language of your choice, with the restriction that a single one must be used for all expressions in a workflow definition. If a different expression language needs to be used, make sure to set the workflow `expressionLang` property to identify it to runtime implementations.
Note that using a non-default expression language could lower the portability of your workflow definitions across multiple container/cloud platforms.
All workflow expressions in this document, specification examples as well as comparisons examples are written using the default jq syntax.
Workflow expressions have the following format:
${ expression }
Where `expression` can be either an in-line expression, or a reference to a defined expression function definition.

To reference a defined expression function definition, the expression must have the following format:

${ fn:myExprFuncName }

Where `fn` is the namespace of the defined expression functions and `myExprFuncName` is the unique expression function name.
To show some expression examples, let's say we have the following state data:
{
"applicant": {
"name": "John Doe",
"age" : 26,
"address" : {
"streetAddress": "Naist street",
"city" : "Nara",
"postalCode" : "630-0192"
},
"phoneNumbers": [
{
"type" : "iPhone",
"number": "0123-4567-8888"
},
{
"type" : "home",
"number": "0123-4567-8910"
}
]
}
}
In our workflow model we can define our reusable expression function:
{
"functions": [
{
"name": "IsAdultApplicant",
"operation": ".applicant | .age > 18",
"type": "expression"
}
]
}
We will get back to this function definition in just a bit, but for now let's take a look at using an inline expression that sets an input parameter inside an action. For example:
{
"actions": [
{
"functionRef": {
"refName": "confirmApplicant",
"parameters": {
"applicantName": "${ .applicant.name }"
}
}
}
]
}
In this case our input parameter `applicantName` would be set to "John Doe".
Expressions can also be used to select and manipulate state data; this is particularly useful for state data filters.
For example let's use another inline expression:
{
"stateDataFilter": {
"output": "${ .applicant | {applicant: .name, contactInfo: { email: .email, phone: .phoneNumbers }} }"
}
}
This would set the data output of the particular state to:
{
"applicant": "John Doe",
"contactInfo": {
"email": "[email protected]",
"phone": [
{
"type": "iPhone",
"number": "0123-4567-8888"
},
{
"type": "home",
"number": "0123-4567-8910"
}
]
}
}
Switch state conditions require expressions to resolve to a boolean value (true / false).
We can now get back to our previously defined "IsAdultApplicant" expression function and reference it:
{
"dataConditions": [ {
"condition": "${ fn:IsAdultApplicant }",
"transition": "StartApplication"
}]
}
As previously mentioned, expressions are evaluated against certain subsets of data. For example, the `arguments` param of the functionRef definition can evaluate expressions only against the data that is available to the action it belongs to.

One thing to note here are the top-level workflow definition parameters. Expressions defined in them can only be evaluated against the initial workflow data input.
For example let's say that we have a workflow data input of:
{
"inputVersion" : "1.0.0"
}
we can use this expression in the workflow "version" parameter:
{
"id": "MySampleWorkflow",
"name": "Sample Workflow",
"version": "${ .inputVersion }",
"specVersion": "0.7"
}
which would set the workflow version to "1.0.0". Note that the workflow "id" property value is not allowed to use an expression. The workflow definition "id" must be a constant value.
Parameter | Description | Type | Required |
---|---|---|---|
id | Workflow unique identifier | string | yes if key not defined |
key | Domain-specific workflow identifier | string | yes if id not defined |
name | Workflow name | string | yes |
description | Workflow description | string | no |
version | Workflow version | string | no |
annotations | List of helpful terms describing the workflow's intended purpose, subject areas, or other important qualities | array | no |
dataInputSchema | Used to validate the workflow data input against a defined JSON Schema | string or object | no |
constants | Workflow constants | string or object | no |
secrets | Workflow secrets | string or array | no |
start | Workflow start definition | string or object | yes |
specVersion | Serverless Workflow specification release version | string | yes |
expressionLang | Identifies the expression language used for workflow expressions. Default value is "jq" | string | no |
timeouts | Defines the workflow default timeout settings | object | no |
keepActive | If "true", workflow instances are not terminated when there are no active execution paths. An instance can be terminated with a "terminate end definition" or by reaching the defined "workflowExecTimeout" | boolean | no |
auth | Workflow authentication definitions | array or string | no |
events | Workflow event definitions. | array or string | no |
functions | Workflow function definitions. Can be either inline function definitions (if array) or URI pointing to a resource containing json/yaml function definitions (if string) | array or string | no |
retries | Workflow retries definitions. Can be either inline retries definitions (if array) or URI pointing to a resource containing json/yaml retry definitions (if string) | array or string | no |
states | Workflow states | array | yes |
metadata | Metadata information | object | no |
Click to view example definition
JSON:
{
"id": "sampleWorkflow",
"version": "1.0",
"specVersion": "0.7",
"name": "Sample Workflow",
"description": "Sample Workflow",
"start": "MyStartingState",
"states": [],
"functions": [],
"events": [],
"retries":[]
}

YAML:
id: sampleWorkflow
version: '1.0'
specVersion: '0.7'
name: Sample Workflow
description: Sample Workflow
start: MyStartingState
states: []
functions: []
events: []
retries: []
Defines the top-level structure of a serverless workflow model. The following figure describes the main workflow definition blocks.

The `id` property defines the unique, domain-specific workflow identifier, for example "orders", "payment", etc.

The `key` property defines the unique, domain-specific workflow identifier. It can be used when the `id` property is auto-generated, by a content-management system for example. In these cases, you can specify the `key` property to be the domain-specific identifier of the workflow definition. The `id` and `key` properties are mutually exclusive, meaning you cannot define both.
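For example, a workflow definition that uses `key` instead of `id` might look as follows (a minimal sketch; the values are illustrative):

{
  "key": "orders",
  "name": "Orders Workflow",
  "version": "1.0",
  "specVersion": "0.7",
  "start": "ReceiveOrder",
  "states": [
    ...
  ]
}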
The `name` property is the workflow logical name.

The `description` property can be used to give further information about the workflow.

The `version` property can be used to provide a specific workflow version.

The `annotations` property defines a list of helpful terms describing the workflow's intended purpose, subject areas, or other important qualities, for example "machine learning", "monitoring", "networking", etc.
The `dataInputSchema` property can be used to validate the workflow data input against a defined JSON Schema. This check should be done before any states are executed. `dataInputSchema` can have two different types. If it is an object type, it has the following definition:
"dataInputSchema": {
"schema": "URL_to_json_schema",
"failOnValidationErrors": false
}
Its `schema` property is a URI which points to the JSON schema used to validate the workflow data input. Its `failOnValidationErrors` property determines if workflow execution should continue in case of validation errors. The default value of `failOnValidationErrors` is `true`.
If `dataInputSchema` has the string type, it has the following definition:
"dataInputSchema": "URL_to_json_schema"
In this case the `failOnValidationErrors` default value of `true` is assumed.

The `dataInputSchema` property validates the workflow data input. In the case of a starting Event state, it is not used to validate its event payloads.
The `secrets` property allows you to use sensitive information, such as passwords, OAuth tokens, ssh keys, etc., inside your Workflow expressions. It has two possible types, `string` or `array`. If `string` type, it is a URI pointing to a JSON or YAML document which contains an array of names of the secrets, for example:
"secrets": "file://workflowsecrets.json"
If `array` type, it defines an array (of string types) which contains the names of the secrets, for example:
"secrets": ["MY_PASSWORD", "MY_STORAGE_KEY", "MY_ACCOUNT"]
For more information about Workflow secrets, reference the Workflow Secrets section.
The `constants` property can be used to define Workflow constants values which are accessible in Workflow Expressions. It has two possible types, `string` or `object`. If `string` type, it is a URI pointing to a JSON or YAML document which contains an object of global definitions, for example:
"constants": "file://workflowconstants.json"
If `object` type, it defines a JSON object which contains the constants definitions, for example:
{
"AGE": {
"MIN_ADULT": 18
}
}
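Such constants could then be referenced from workflow expressions, for example in a switch state data condition (a sketch, assuming constants are exposed to expressions via the `$CONSTANTS` namespace):

{
  "dataConditions": [
    {
      "condition": "${ .applicant.age >= $CONSTANTS.AGE.MIN_ADULT }",
      "transition": "StartApplication"
    }
  ]
}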
For more information see the Workflow Constants section.
The `start` property defines the workflow starting information. For more information see the start definition section.
The `specVersion` property is used to set the Serverless Workflow specification release version the workflow markup adheres to. It has to follow the specification release versions (excluding the leading "v"), meaning that for the release version v0.6 its value should be set to "0.6".
The `expressionLang` property can be used to identify the expression language used for all expressions in the workflow definition. The default value of this property is "jq". You should set this property if you chose to define workflow expressions with an expression language / syntax other than the default.
The `timeouts` property is used to define the default workflow timeouts for workflow, state, action, and branch execution. For more information about timeouts and their use cases see the Workflow Timeouts section.
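For example, a top-level `timeouts` definition could look as follows (a minimal sketch; the durations are illustrative, and `workflowExecTimeout` may also be defined as an object for finer control):

{
  "timeouts": {
    "workflowExecTimeout": "PT1H",
    "stateExecTimeout": "PT10M",
    "actionExecTimeout": "PT5M"
  }
}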
The `auth` property can be either an inline auth definition array, or a URI reference to a resource containing an array of auth definitions. If defined in a separate resource file (JSON or YAML), `auth` definitions can be re-used by multiple workflow definitions. Auth definitions can be used to define authentication that should be used to access the resource defined in the `operation` property of the function definitions. If we have the following function definition:
If we have the following function definition:
{
"functions": [
{
"name": "HelloWorldFunction",
"operation": "https://secure.resources.com/myapi.json#helloWorld",
"authRef": "My Basic Auth"
}
]
}
The `authRef` property is used to reference an authentication definition in the `auth` property; that authentication should be applied when accessing the https://secure.resources.com/myapi.json OpenAPI definition file.
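The corresponding auth definition referenced by name could look as follows (a minimal sketch; the credential values are placeholders):

{
  "auth": [
    {
      "name": "My Basic Auth",
      "scheme": "basic",
      "properties": {
        "username": "someuser",
        "password": "somepassword"
      }
    }
  ]
}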
The `functions` property can be either an in-line function definition array, or a URI reference to a resource containing an array of function definitions. The referenced resource can be used by multiple workflow definitions. Here is an example of using an external resource for function definitions:
- Workflow definition:
{
"id": "sampleWorkflow",
"version": "1.0",
"specVersion": "0.7",
"name": "Sample Workflow",
"description": "Sample Workflow",
"start": "MyStartingState",
"functions": "http://myhost:8080/functiondefs.json",
"states":[
...
]
}
- Function definitions resource:
{
"functions": [
{
"name":"HelloWorldFunction",
"operation":"file://myapi.json#helloWorld"
}
]
}
The referenced resource must conform to the specification's Workflow Functions JSON Schema.
The `events` property can be either an in-line event definition array, or a URI reference to a resource containing an array of event definitions. The referenced resource can be used by multiple workflow definitions. Here is an example of using an external resource for event definitions:
- Workflow definition:
{
"id": "sampleWorkflow",
"version": "1.0",
"specVersion": "0.7",
"name": "Sample Workflow",
"description": "Sample Workflow",
"start": "MyStartingState",
"events": "http://myhost:8080/eventsdefs.json",
"states":[
...
]
}
- Event definitions resource:
{
"events": [
{
"name": "ApplicantInfo",
"type": "org.application.info",
"source": "applicationssource",
"correlation": [
{
"contextAttributeName": "applicantId"
}
]
}
]
}
The referenced resource must conform to the specification's Workflow Events JSON Schema.
The `retries` property can be either an in-line retry definition array, or a URI reference to a resource containing an array of retry definitions. The referenced resource can be used by multiple workflow definitions. For more information about using and referencing retry definitions see the Workflow Error Handling section.
The `keepActive` property allows you to change the default behavior of workflow instances. By default, as described in the Core Concepts section, a workflow instance is terminated once there are no more active execution paths, one of its active paths ends in a "terminate" end definition, or when its `workflowExecTimeout` time is reached.

Setting the `keepActive` property to "true" changes this default behavior: a workflow instance created from this workflow definition can only be terminated if one of its active paths ends in a "terminate" end definition, or when its `workflowExecTimeout` time is reached. This allows you to explicitly model workflows where an instance should be kept alive, to collect (event) data for example.

You can reference the specification examples to see the `keepActive` property in action.
Workflow states define building blocks of the workflow execution instructions. They define the control flow logic instructions on what the workflow is supposed to do. Serverless Workflow defines the following Workflow States:
Name | Description | Consumes events? | Produces events? | Executes actions? | Handles errors/retries? | Allows parallel execution? | Makes data-based transitions? | Can be workflow start state? | Can be workflow end state? |
---|---|---|---|---|---|---|---|---|---|
Event | Define events that trigger action execution | yes | yes | yes | yes | yes | no | yes | yes |
Operation | Execute one or more actions | no | yes | yes | yes | yes | no | yes | yes |
Switch | Define data-based or event-based workflow transitions | no | yes | no | yes | no | yes | yes | no |
Delay | Delay workflow execution | no | yes | no | yes | no | no | yes | yes |
Parallel | Causes parallel execution of branches (set of states) | no | yes | no | yes | yes | no | yes | yes |
Inject | Inject static data into state data | no | yes | no | yes | no | no | yes | yes |
ForEach | Parallel execution of states for each element of a data array | no | yes | no | yes | yes | no | yes | yes |
Callback | Manual decision step. Executes a function and waits for callback event that indicates completion of the manual decision | yes | yes | yes | yes | no | no | yes | yes |
Parameter | Description | Type | Required |
---|---|---|---|
id | Unique state id | string | no |
name | State name | string | yes |
type | State type | string | yes |
exclusive | If "true", consuming one of the defined events causes its associated actions to be performed. If "false", all of the defined events must be consumed in order for actions to be performed. Default is "true" | boolean | no |
onEvents | Define the events to be consumed and optional actions to be performed | array | yes |
timeouts | State specific timeout settings | object | no |
stateDataFilter | State data filter definition | object | no |
transition | Next transition of the workflow after all the actions have been performed | object | yes (if end is not defined) |
onErrors | States error handling and retries definitions | array | no |
end | Is this state an end state | object | no |
compensatedBy | Unique name of a workflow state which is responsible for compensation of this state | String | no |
metadata | Metadata information | object | no |
Click to view example definition
JSON:
{
"name": "MonitorVitals",
"type": "event",
"exclusive": true,
"onEvents": [{
"eventRefs": ["HighBodyTemperature"],
"actions": [{
"functionRef": {
"refName": "sendTylenolOrder",
"arguments": {
"patientid": "${ .patientId }"
}
}
}]
},
{
"eventRefs": ["HighBloodPressure"],
"actions": [{
"functionRef": {
"refName": "callNurse",
"arguments": {
"patientid": "${ .patientId }"
}
}
}]
},
{
"eventRefs": ["HighRespirationRate"],
"actions": [{
"functionRef": {
"refName": "callPulmonologist",
"arguments": {
"patientid": "${ .patientId }"
}
}
}]
}
],
"end": {
"terminate": true
}
}

YAML:
name: MonitorVitals
type: event
exclusive: true
onEvents:
- eventRefs:
- HighBodyTemperature
actions:
- functionRef:
refName: sendTylenolOrder
arguments:
patientid: "${ .patientId }"
- eventRefs:
- HighBloodPressure
actions:
- functionRef:
refName: callNurse
arguments:
patientid: "${ .patientId }"
- eventRefs:
- HighRespirationRate
actions:
- functionRef:
refName: callPulmonologist
arguments:
patientid: "${ .patientId }"
end:
terminate: true
Event states await one or more events and perform actions when they are received. If defined as the workflow starting state, the event state definition controls when the workflow instances should be created.
The `exclusive` property determines if the state should wait for any of the defined events in the `onEvents` array, or if all defined events must be present for their associated actions to be performed. The following two figures illustrate the `exclusive` property:
If `exclusive` is set to "true" and the Event state is a workflow starting state, the occurrence of any of the defined events would start a new workflow instance.
If `exclusive` is set to "false" and the Event state is a workflow starting state, the occurrence of all defined events would start a new workflow instance.
In order to consider only events that are related to each other, we need to set the `correlation` property in the workflow events definitions. This allows us to set up event correlation rules against the events' extension context attributes.

If the Event state is not a workflow starting state, the `timeout` property can be used to define the time duration from the invocation of the event state. If the defined event, or events, have not been received during this time, the state should transition to the next state or can end the workflow execution (if it is an end state).

The `timeouts` property can be used to define state specific timeout settings. Event states can define the `stateExecTimeout`, `actionExecTimeout`, and `eventTimeout` properties. For more information about Event state specific event timeout settings reference the Event Timeout Definition section. For more information about workflow timeouts reference the Workflow Timeouts section.
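For example, an event state could define its state-specific timeouts as follows (a minimal sketch with illustrative durations):

"timeouts": {
  "stateExecTimeout": "PT2H",
  "actionExecTimeout": "PT10M",
  "eventTimeout": "PT30M"
}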
Parameter | Description | Type | Required |
---|---|---|---|
id | Unique state id | string | no |
name | State name | string | yes |
type | State type | string | yes |
actionMode | Should actions be performed sequentially or in parallel | string | no |
actions | Actions to be performed | array | yes |
timeouts | State specific timeout settings | object | no |
stateDataFilter | State data filter | object | no |
onErrors | States error handling and retries definitions | array | no |
transition | Next transition of the workflow after all the actions have been performed | object | yes (if end is not defined) |
compensatedBy | Unique name of a workflow state which is responsible for compensation of this state | String | no |
usedForCompensation | If true, this state is used to compensate another state. Default is "false" | boolean | no |
metadata | Metadata information | object | no |
end | Is this state an end state | object | no |
Click to view example definition
JSON:
{
"name": "RejectApplication",
"type": "operation",
"actionMode": "sequential",
"actions": [
{
"functionRef": {
"refName": "sendRejectionEmailFunction",
"arguments": {
"customer": "${ .customer }"
}
}
}
],
"end": true
}

YAML:
name: RejectApplication
type: operation
actionMode: sequential
actions:
- functionRef:
refName: sendRejectionEmailFunction
arguments:
customer: "${ .customer }"
end: true
Operation state defines a set of actions to be performed in sequence or in parallel. Once all actions have been performed, a transition to another state can occur.
The `timeouts` property can be used to define state specific timeout settings. Operation states can define the `stateExecTimeout` and `actionExecTimeout` settings. For more information on workflow timeouts reference the Workflow Timeouts section.
Parameter | Description | Type | Required |
---|---|---|---|
id | Unique state id | string | no |
name | State name | string | yes |
type | State type | string | yes |
dataConditions or eventConditions | Defined if the Switch state evaluates conditions and transitions based on state data, or arrival of events. | array | yes (one) |
stateDataFilter | State data filter | object | no |
onErrors | States error handling and retries definitions | array | no |
timeouts | State specific timeout settings | object | no |
defaultCondition | Default transition of the workflow if there is no matching data conditions or event timeout is reached. Can be a transition or end definition | object | yes |
compensatedBy | Unique name of a workflow state which is responsible for compensation of this state | String | no |
usedForCompensation | If true, this state is used to compensate another state. Default is "false" | boolean | no |
metadata | Metadata information | object | no |
Click to view example definition
JSON:
{
"name":"CheckVisaStatus",
"type":"switch",
"eventConditions": [
{
"eventRef": "visaApprovedEvent",
"transition": "HandleApprovedVisa"
},
{
"eventRef": "visaRejectedEvent",
"transition": "HandleRejectedVisa"
}
],
"eventTimeout": "PT1H",
"defaultCondition": {
"transition": "HandleNoVisaDecision"
}
}

YAML:
name: CheckVisaStatus
type: switch
eventConditions:
- eventRef: visaApprovedEvent
transition: HandleApprovedVisa
- eventRef: visaRejectedEvent
transition: HandleRejectedVisa
eventTimeout: PT1H
defaultCondition:
transition: HandleNoVisaDecision
Switch states can be viewed as workflow gateways: they can direct transitions of a workflow based on certain conditions. There are two types of conditions for switch states: data-based conditions and event-based conditions.
These are exclusive, meaning that a switch state can define one or the other condition type, but not both.
At times multiple defined conditions can be evaluated to `true` by runtime implementations. Conditions defined first take precedence over conditions defined later. This is backed by the fact that arrays/sequences are ordered in both JSON and YAML. For example, let's say there are two `true` conditions: A and B, defined in that order. Because A was defined first, its transition will be executed, not B's.
In the case of data-based conditions, the switch state controls workflow transitions based on the state's data. If no defined conditions can be matched, the state transition is taken based on the defaultCondition property. This property can be either a transition to another workflow state, or an end definition, meaning a workflow end.
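For illustration, here is a minimal sketch of a data-based switch state; the state names and the condition expression are assumptions for this example:
name: CheckApplication
type: switch
dataConditions:
  - condition: "${ .applicant | .age >= 18 }"   # must evaluate to true or false
    transition: StartApplication
defaultCondition:
  transition: RejectApplication                 # taken when no condition matches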
For event-based conditions, a switch state acts as a workflow wait state. It halts workflow execution until one of the referenced events arrives, and then makes a transition depending on that event definition. If the events defined in the event-based conditions do not arrive before the state's eventTimeout property expires, state transitions are based on the defined defaultCondition property.
The timeouts property can be used to define state-specific timeout settings. Switch states can define the stateExecTimeout setting. If eventConditions is defined, the switch state can also define the eventTimeout property. For more information on workflow timeouts, reference the Workflow Timeouts section.
Parameter | Description | Type | Required |
---|---|---|---|
id | Unique state id | string | no |
name | State name | string | yes |
type | State type | string | yes |
timeDelay | Amount of time (ISO 8601 format) to delay when in this state. For example: "PT15M" (delay 15 minutes), or "P2DT3H4M" (delay 2 days, 3 hours and 4 minutes) | string | yes |
timeouts | State specific timeout settings | object | no |
stateDataFilter | State data filter | object | no |
onErrors | States error handling and retries definitions | array | no |
transition | Next transition of the workflow after the delay | object | yes (if end is not defined) |
compensatedBy | Unique name of a workflow state which is responsible for compensation of this state | String | no |
usedForCompensation | If true, this state is used to compensate another state. Default is "false" | boolean | no |
end | Is this state an end state | object | no |
Click to view example definition
JSON | YAML |
---|---|
{
"name": "WaitForCompletion",
"type": "delay",
"timeDelay": "PT5S",
"transition": "GetJobStatus"
} |
name: WaitForCompletion
type: delay
timeDelay: PT5S
transition: GetJobStatus |
Delay state waits for a certain amount of time before transitioning to the next state. The amount of delay is specified by the timeDelay property in ISO 8601 format.
The timeouts property allows you to define state-specific timeouts. It can be used to define the stateExecTimeout. For more information on workflow timeouts, see the Workflow Timeouts section.
Parameter | Description | Type | Required |
---|---|---|---|
id | Unique state id | string | no |
name | State name | string | yes |
type | State type | string | yes |
branches | List of branches for this parallel state | array | yes |
completionType | Option types on how to complete branch execution. Default is "allOf" | enum | no |
numCompleted | Used when completionType is set to "atLeast" to specify the minimum number of branches that must complete in order for the state to transition/end | string or number | no |
timeouts | State specific timeout settings | object | no |
stateDataFilter | State data filter | object | no |
onErrors | States error handling and retries definitions | array | no |
transition | Next transition of the workflow after all branches have completed execution | object | yes (if end is not defined) |
compensatedBy | Unique name of a workflow state which is responsible for compensation of this state | String | no |
usedForCompensation | If true, this state is used to compensate another state. Default is "false" | boolean | no |
metadata | Metadata information | object | no |
end | Is this state an end state | object | no |
Click to view example definition
JSON | YAML |
---|---|
{
"name":"ParallelExec",
"type":"parallel",
"completionType": "allOf",
"branches": [
{
"name": "Branch1",
"actions": [
{
"functionRef": {
"refName": "functionNameOne",
"arguments": {
"order": "${ .someParam }"
}
}
}
]
},
{
"name": "Branch2",
"actions": [
{
"functionRef": {
"refName": "functionNameTwo",
"arguments": {
"order": "${ .someParam }"
}
}
}
]
}
],
"end": true
} |
name: ParallelExec
type: parallel
completionType: allOf
branches:
- name: Branch1
actions:
- functionRef:
refName: functionNameOne
arguments:
order: "${ .someParam }"
- name: Branch2
actions:
- functionRef:
refName: functionNameTwo
arguments:
order: "${ .someParam }"
end: true |
Parallel state defines a collection of branches that are executed in parallel. A parallel state can be seen as a state which splits up the current workflow instance execution path into multiple ones, one for each branch. These execution paths are performed in parallel and are joined back into the current execution path depending on the defined completionType parameter value.
The "completionType" enum specifies the different ways of completing branch execution:
- allOf: All branches must complete execution before the state can transition/end. This is the default value in case this parameter is not defined in the parallel state definition.
- atLeast: State can transition/end once at least the specified number of branches have completed execution. In this case you must also specify the numCompleted property to define this number (see the sketch following this list).
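A minimal sketch of the atLeast completion type; the branch and function names are assumptions. Here the state can transition/end as soon as one of the two branches completes:
name: ParallelExec
type: parallel
completionType: atLeast
numCompleted: 1          # continue once at least one branch has completed
branches:
  - name: Branch1
    actions:
      - functionRef: functionNameOne
  - name: Branch2
    actions:
      - functionRef: functionNameTwo
end: true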
Exceptions may occur during the execution of Parallel state branches; this is described in detail in the Parallel State Handling Exceptions section.
The timeouts property can be used to set state-specific timeout settings. Parallel states can define the stateExecTimeout and branchExecTimeout timeout settings. For more information on workflow timeouts, reference the Workflow Timeouts section.
Parameter | Description | Type | Required |
---|---|---|---|
id | Unique state id | string | no |
name | State name | string | yes |
type | State type | string | yes |
data | JSON object which can be set as state's data input and can be manipulated via filter | object | yes |
stateDataFilter | State data filter | object | no |
transition | Next transition of the workflow after injection has completed | object | yes (if end is set to false) |
timeouts | State specific timeout settings | object | no |
onErrors | States error handling and retries definitions | array | no |
compensatedBy | Unique name of a workflow state which is responsible for compensation of this state | String | no |
usedForCompensation | If true, this state is used to compensate another state. Default is "false" | boolean | no |
metadata | Metadata information | object | no |
end | Is this state an end state | object | no |
Click to view example definition
JSON | YAML |
---|---|
{
"name":"Hello",
"type":"inject",
"data": {
"result": "Hello"
},
"transition": "World"
} |
name: Hello
type: inject
data:
result: Hello
transition: World |
Inject state can be used to inject static data into the state's data input. Inject state does not perform any actions. It is very useful for debugging, for example, as you can test/simulate workflow execution with pre-set data that would typically be dynamic in nature (e.g., function calls, events).
The inject state's data property allows you to statically define a JSON object which gets added to the state's data input. You can use the filter property to control the state's data output to the transition state.
Here is a typical example of how to use the inject state to add static data into its data input, which is then passed as data output to the transition state:
JSON | YAML |
---|---|
{
"name":"SimpleInjectState",
"type":"inject",
"data": {
"person": {
"fname": "John",
"lname": "Doe",
"address": "1234 SomeStreet",
"age": 40
}
},
"transition": "GreetPersonState"
} |
name: SimpleInjectState
type: inject
data:
person:
fname: John
lname: Doe
address: 1234 SomeStreet
age: 40
transition: GreetPersonState |
The data output of the "SimpleInjectState", which is then passed as input to the transition state, would be:
{
"person": {
"fname": "John",
"lname": "Doe",
"address": "1234 SomeStreet",
"age": 40
}
}
If the inject state already receives a data input from the previous transition state, the inject data should be merged with its data input.
You can also use the filter property to filter the state data after data is injected. Let's say we have:
JSON | YAML |
---|---|
{
"name":"SimpleInjectState",
"type":"inject",
"data": {
"people": [
{
"fname": "John",
"lname": "Doe",
"address": "1234 SomeStreet",
"age": 40
},
{
"fname": "Marry",
"lname": "Allice",
"address": "1234 SomeStreet",
"age": 25
},
{
"fname": "Kelly",
"lname": "Mill",
"address": "1234 SomeStreet",
"age": 30
}
]
},
"stateDataFilter": {
"output": "${ {people: [.people[] | select(.age < 40)]} }"
},
"transition": "GreetPersonState"
} |
name: SimpleInjectState
type: inject
data:
people:
- fname: John
lname: Doe
address: 1234 SomeStreet
age: 40
- fname: Marry
lname: Allice
address: 1234 SomeStreet
age: 25
- fname: Kelly
lname: Mill
address: 1234 SomeStreet
age: 30
stateDataFilter:
output: "${ {people: [.people[] | select(.age < 40)]} }"
transition: GreetPersonState |
In this case the state's data output would include only people whose age is less than 40:
{
"people": [
{
"fname": "Marry",
"lname": "Allice",
"address": "1234 SomeStreet",
"age": 25
},
{
"fname": "Kelly",
"lname": "Mill",
"address": "1234 SomeStreet",
"age": 30
}
]
}
You can easily change your output path during testing; for example, change the expression to:
${ {people: [.people[] | select(.age >= 40)]} }
This allows you to test whether your workflow behaves properly for cases when there are people whose age is greater than or equal to 40.
The timeouts property can be used to define state-specific timeout settings. Inject states can define the stateExecTimeout property. For more information on workflow timeouts, reference the Workflow Timeouts section.
Parameter | Description | Type | Required |
---|---|---|---|
id | Unique state id | string | no |
name | State name | string | yes |
type | State type | string | yes |
inputCollection | Workflow expression selecting an array in the state data | string | yes |
outputCollection | Workflow expression specifying an array in the state data to which the results of each iteration should be added | string | no |
iterationParam | Name of the iteration parameter that can be referenced in actions/workflow. For each parallel iteration, this param should contain a unique element of the inputCollection array | string | yes |
max | Specifies the upper bound on how many iterations may run in parallel | string or number | no |
actions | Actions to be executed for each of the elements of inputCollection | array | yes |
timeouts | State specific timeout settings | object | no |
stateDataFilter | State data filter definition | object | no |
onErrors | States error handling and retries definitions | array | no |
transition | Next transition of the workflow after state has completed | object | yes (if end is not defined) |
compensatedBy | Unique name of a workflow state which is responsible for compensation of this state | String | no |
usedForCompensation | If true, this state is used to compensate another state. Default is "false" | boolean | no |
metadata | Metadata information | object | no |
end | Is this state an end state | object | no |
Click to view example definition
JSON | YAML |
---|---|
{
"name": "ProvisionOrdersState",
"type": "foreach",
"inputCollection": "${ .orders }",
"iterationParam": "singleorder",
"outputCollection": "${ .provisionresults }",
"actions": [
{
"functionRef": {
"refName": "provisionOrderFunction",
"arguments": {
"order": "${ .singleorder }"
}
}
}
]
} |
name: ProvisionOrdersState
type: foreach
inputCollection: "${ .orders }"
iterationParam: "singleorder"
outputCollection: "${ .provisionresults }"
actions:
- functionRef:
refName: provisionOrderFunction
arguments:
order: "${ .singleorder }" |
ForEach states can be used to execute actions for each element of a data set. Each iteration of the ForEach state should be executed in parallel.
You can use the max property to set the upper bound on how many iterations may run in parallel. The default value of the max property is zero, which places no limit on the number of parallel executions.
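As a sketch, building on the ProvisionOrdersState example above with an assumed max value, a ForEach state that caps parallelism at two concurrent iterations could look like:
name: ProvisionOrdersState
type: foreach
inputCollection: "${ .orders }"
iterationParam: singleorder
outputCollection: "${ .provisionresults }"
max: 2                    # at most two iterations run in parallel (assumed value)
actions:
  - functionRef:
      refName: provisionOrderFunction
      arguments:
        order: "${ .singleorder }"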
The inputCollection property is a workflow expression which selects an array in the state's data. All iterations are performed against the data elements of this array. If this array does not exist, the runtime should throw an error. This error can be handled inside the state's onErrors definition.
The outputCollection property is a workflow expression which selects an array in the state data to which the results of each iteration should be added. If this array does not exist, it should be created.
The iterationParam property defines the name of the iteration parameter passed to each parallel execution of the ForEach state. It should contain a unique element of the inputCollection array and be passed as data input to the defined actions/workflow. iterationParam should be created for each iteration, so it can be referenced/used in the defined actions / workflow data input.
The actions property defines the actions to be executed in each state iteration.
Let's take a look at an example:
In this example the data input to our workflow is an array of orders:
{
"orders": [
{
"orderNumber": "1234",
"completed": true,
"email": "[email protected]"
},
{
"orderNumber": "5678",
"completed": true,
"email": "[email protected]"
},
{
"orderNumber": "9910",
"completed": false,
"email": "[email protected]"
}
]
}
and our workflow is defined as:
JSON | YAML |
---|---|
{
"id": "sendConfirmWorkflow",
"name": "SendConfirmationForCompletedOrders",
"version": "1.0",
"specVersion": "0.7",
"start": "SendConfirmState",
"functions": [
{
"name": "sendConfirmationFunction",
"operation": "file://confirmationapi.json#sendOrderConfirmation"
}
],
"states": [
{
"name":"SendConfirmState",
"type":"foreach",
"inputCollection": "${ [.orders[] | select(.completed == true)] }",
"iterationParam": "completedorder",
"outputCollection": "${ .confirmationresults }",
"actions":[
{
"functionRef": {
"refName": "sendConfirmationFunction",
"arguments": {
"orderNumber": "${ .completedorder.orderNumber }",
"email": "${ .completedorder.email }"
}
}
}],
"end": true
}]
} |
id: sendConfirmWorkflow
name: SendConfirmationForCompletedOrders
version: '1.0'
specVersion: '0.7'
start: SendConfirmState
functions:
- name: sendConfirmationFunction
operation: file://confirmationapi.json#sendOrderConfirmation
states:
- name: SendConfirmState
type: foreach
inputCollection: "${ [.orders[] | select(.completed == true)] }"
iterationParam: completedorder
outputCollection: "${ .confirmationresults }"
actions:
- functionRef:
refName: sendConfirmationFunction
arguments:
orderNumber: "${ .completedorder.orderNumber }"
email: "${ .completedorder.email }"
end: true |
The workflow data input containing order information is passed to the SendConfirmState ForEach state. The ForEach state defines an inputCollection property which selects all orders that have the completed property set to true.
For each element of the array selected by inputCollection, a JSON object defined by iterationParam should be created, containing a unique element of inputCollection, and passed as the data input to the parallel executed actions.
So for this example, we would have two parallel executions of the sendConfirmationFunction, the first one having data:
{
"completedorder": {
"orderNumber": "1234",
"completed": true,
"email": "[email protected]"
}
}
and the second:
{
"completedorder": {
"orderNumber": "5678",
"completed": true,
"email": "[email protected]"
}
}
The results of each parallel action execution are stored as elements in the state data array defined by the outputCollection property.
The timeouts property can be used to set state-specific timeout settings. ForEach states can define the stateExecTimeout and actionExecTimeout settings. For more information on workflow timeouts, reference the Workflow Timeouts section.
Parameter | Description | Type | Required |
---|---|---|---|
id | Unique state id | string | no |
name | State name | string | yes |
type | State type | string | yes |
action | Defines the action to be executed | object | yes |
eventRef | References a unique callback event name in the defined workflow events | string | yes |
timeouts | State specific timeout settings | object | no |
eventDataFilter | Callback event data filter definition | object | no |
stateDataFilter | State data filter definition | object | no |
onErrors | States error handling and retries definitions | array | no |
transition | Next transition of the workflow after callback event has been received | object | yes |
end | Is this state an end state | object | no |
compensatedBy | Unique name of a workflow state which is responsible for compensation of this state | String | no |
usedForCompensation | If true, this state is used to compensate another state. Default is "false" | boolean | no |
metadata | Metadata information | object | no |
Click to view example definition
JSON | YAML |
---|---|
{
"name": "CheckCredit",
"type": "callback",
"action": {
"functionRef": {
"refName": "callCreditCheckMicroservice",
"arguments": {
"customer": "${ .customer }"
}
}
},
"eventRef": "CreditCheckCompletedEvent",
"timeouts": {
"stateExecTimeout": "PT15M"
},
"transition": "EvaluateDecision"
} |
name: CheckCredit
type: callback
action:
functionRef:
refName: callCreditCheckMicroservice
arguments:
customer: "${ .customer }"
eventRef: CreditCheckCompletedEvent
timeouts:
stateExecTimeout: PT15M
transition: EvaluateDecision |
Serverless orchestration can at times require manual steps/decisions to be made. While some work performed in a serverless workflow can be executed automatically, some decisions must involve manual steps (e.g., human decisions). The Callback state allows you to explicitly model manual decision steps during workflow execution.
The action property defines a function call that triggers an external activity/service. Once the action executes, the callback state will wait for a CloudEvent (defined via the eventRef property), which indicates the completion of the manual decision by the called service.
Note that the called decision service is responsible for emitting the callback CloudEvent indicating the completion of the decision and including the decision results as part of the event payload. This event must be correlated to the workflow instance using the callback event's context attribute defined in the correlation property of the referenced Event Definition.
Once the completion (callback) event is received, the Callback state completes its execution and transitions to the next defined workflow state or completes workflow execution in case it is an end state.
The callback event payload is merged with the Callback state data and can be filtered via the "eventDataFilter" definition.
If the defined callback event is not received within the defined eventTimeout period, the state should transition to the next state, or end workflow execution if it is an end state.
The timeouts property defines state-specific timeout settings. Callback states can define the stateExecTimeout, actionExecTimeout, and eventTimeout properties. For more information on workflow timeouts, reference the Workflow Timeouts section.
Parameter | Description | Type | Required |
---|---|---|---|
name | Unique function name | string | yes |
operation | If type is rest, <path_to_openapi_definition>#<operation_id>. If type is rpc, <path_to_grpc_proto_file>#<service_name>#<service_method>. If type is graphql, <url_to_graphql_endpoint>#<literal "mutation" or "query">#<query_or_mutation_name>. If type is expression, defines the workflow expression. | string | no |
type | Defines the function type. Is either rest, rpc, graphql, or expression. Default is rest | enum | no |
authRef | References an auth definition name to be used to access the resource defined in the operation parameter | string | no |
metadata | Metadata information. Can be used to define custom function information | object | no |
Click to view example definition
JSON | YAML |
---|---|
{
"name": "HelloWorldFunction",
"operation": "https://hellworldservice.api.com/api.json#helloWorld"
} |
name: HelloWorldFunction
operation: https://hellworldservice.api.com/api.json#helloWorld |
The name property defines a unique name of the function definition.
The type property defines the function type. Its value can be rest, rpc, graphql, or expression. The default value is rest.
Depending on the function type, the operation property can be:
- If type is rest, a combination of the function/service OpenAPI definition document URI and the particular service operation that needs to be invoked, separated by a '#'. For example https://petstore.swagger.io/v2/swagger.json#getPetById.
- If type is rpc, a combination of the gRPC proto document URI and the particular service name and service method name that needs to be invoked, separated by a '#'. For example file://myuserservice.proto#UserService#ListUsers.
- If type is graphql, a combination of the GraphQL endpoint URL, the literal "mutation" or "query", and the particular mutation or query name that needs to be invoked, separated by '#'. For example https://my-graphql-service.com/graphql#query#listUsers.
- If type is expression, defines the expression syntax. Take a look at the workflow expressions section for more information on this.
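To make these formats concrete, here is a sketch of function definitions of several types; the rest and rpc entries reuse the URIs listed above, while the expression function's name and expression are assumptions:
functions:
  - name: getPetFunction            # rest: OpenAPI document URI '#' operation id
    operation: https://petstore.swagger.io/v2/swagger.json#getPetById
  - name: listUsersFunction         # rpc: proto URI '#' service '#' method
    type: rpc
    operation: file://myuserservice.proto#UserService#ListUsers
  - name: isAdultExpression         # expression: an inline workflow expression
    type: expression
    operation: "${ .applicant | .age >= 18 }"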
The authRef property references the name of a defined workflow auth definition. It is used to provide authentication info to access the resource defined in the operation property.
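For example, a sketch pairing a function definition with the auth definition it references; the names and endpoint are assumptions, and the auth definitions are assumed to be listed under the workflow's auth property:
auth:
  - name: myBasicAuth
    scheme: basic
    properties:
      username: serviceAccount     # plain values for illustration only; real
      password: notARealPassword   # definitions would reference workflow secrets
functions:
  - name: secureFunction
    operation: https://secure.example.com/api.json#doWork
    authRef: myBasicAuth           # used to access the OpenAPI resource above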
The metadata property allows users to define custom information in function definitions. This allows you, for example, to define functions that describe execution of a command on a Docker image:
functions:
- name: whalesayimage
metadata:
image: docker/whalesay
command: cowsay
Note that using metadata for cases such as above heavily reduces the portability of your workflow markup.
Function definitions themselves do not define data input parameters. Parameters can be defined via the arguments property of functionRef definitions inside actions.
Parameter | Description | Type | Required |
---|---|---|---|
name | Unique event name | string | yes |
source | CloudEvent source | string | yes if kind is set to "consumed", otherwise no |
type | CloudEvent type | string | yes |
kind | Defines whether the event is consumed or produced by the workflow. Default is consumed | enum | no |
correlation | Define event correlation rules for this event. Only used for consumed events | array | no |
dataOnly | If true (default value), only the Event payload is accessible to consuming Workflow states. If false, both event payload and context attributes should be accessible | boolean | no |
metadata | Metadata information | object | no |
Click to view example definition
JSON | YAML |
---|---|
{
"name": "ApplicantInfo",
"type": "org.application.info",
"source": "applicationssource",
"kind": "consumed",
"correlation": [
{
"contextAttributeName": "applicantId"
}
]
} |
name: ApplicantInfo
type: org.application.info
source: applicationssource
kind: consumed
correlation:
- contextAttributeName: applicantId |
Used to define events and their correlations. These events can be either consumed or produced during workflow execution as well as can be used to trigger function/service invocations.
The Serverless Workflow specification mandates that all events conform to the CloudEvents specification. This is to assure consistency and portability of the events format used.
The name property defines the name of the event, which must be unique inside the workflow definition. This event name can then be referenced within function and state definitions.
The source property matches this event definition with the source property of the CloudEvent required attributes.
The type property matches this event definition with the type property of the CloudEvent required attributes.
The kind property defines this event as either consumed or produced. In terms of the workflow, this means it is either an event that triggers workflow instance creation or continuation of workflow instance execution (consumed), or an event that the workflow instance creates during its execution (produced). The default value (if not specified) of the kind property is consumed.
Note that for produced event definitions, implementations must provide the value of the CloudEvent source attribute. In this case (i.e., when the kind property is set to produced), the source property of the event definition is not required. Otherwise (i.e., when the kind property is set to consumed), the source property must be defined in the event definition.
Event correlation plays a big role in large event-driven applications. Correlating one or more events with a particular workflow instance can be done by defining the event correlation rules within the correlation property. This property is an array of correlation definitions.
The CloudEvents specification allows users to add Extension Context Attributes, and the correlation definitions can use these attributes to define clear matching event correlation rules. Extension context attributes are not part of the event payload, so they are serialized the same way as other standard required attributes. This means that the event payload does not have to be inspected by implementations in order to read and evaluate the defined correlation rules.
Let's take a look at an example. Here we have two events that have an extension context attribute called "patientId" (as well as "department", which will be used in further examples below):
{
"specversion" : "1.0",
"type" : "com.hospital.patient.heartRateMonitor",
"source" : "hospitalMonitorSystem",
"subject" : "HeartRateReading",
"id" : "A234-1234-1234",
"time" : "2020-01-05T17:31:00Z",
"patientId" : "PID-12345",
"department": "UrgentCare",
"data" : {
"value": "80bpm"
}
}
and
{
"specversion" : "1.0",
"type" : "com.hospital.patient.bloodPressureMonitor",
"source" : "hospitalMonitorSystem",
"subject" : "BloodPressureReading",
"id" : "B234-1234-1234",
"time" : "2020-02-05T17:31:00Z",
"patientId" : "PID-12345",
"department": "UrgentCare",
"data" : {
"value": "110/70"
}
}
We can then define a correlation rule, through which all consumed events from the "hospitalMonitorSystem" source with the "com.hospital.patient.heartRateMonitor" type that have the same value of the patientId context attribute are to be correlated to the created workflow instance:
{
"events": [
{
"name": "HeartRateReadingEvent",
"source": "hospitalMonitorSystem",
"type": "com.hospital.patient.heartRateMonitor",
"kind": "consumed",
"correlation": [
{
"contextAttributeName": "patientId"
}
]
}
]
}
If a workflow instance is created (e.g., via Event state) by consuming a "HeartRateReadingEvent" event, all other consumed events from the defined source and with the defined type that have the same "patientId" as the event that triggered the workflow instance should then also be associated with the same instance.
You can also correlate multiple events together. In the following example, we assume that the workflow consumes two different event types, and we want to make sure that both are correlated, as in the above example, with the same "patientId":
{
"events": [
{
"name": "HeartRateReadingEvent",
"source": "hospitalMonitorSystem",
"type": "com.hospital.patient.heartRateMonitor",
"kind": "consumed",
"correlation": [
{
"contextAttributeName": "patientId"
}
]
},
{
"name": "BloodPressureReadingEvent",
"source": "hospitalMonitorSystem",
"type": "com.hospital.patient.bloodPressureMonitor",
"kind": "consumed",
"correlation": [
{
"contextAttributeName": "patientId"
}
]
}
]
}
Event correlation can be based on equality (values of the defined "contextAttributeName" must be equal), but it can also be based on comparing it to custom defined values (string, or expression). For example:
{
"events": [
{
"name": "HeartRateReadingEvent",
"source": "hospitalMonitorSystem",
"type": "com.hospital.patient.heartRateMonitor",
"kind": "consumed",
"correlation": [
{
"contextAttributeName": "patientId"
},
{
"contextAttributeName": "department",
"contextAttributeValue" : "UrgentCare"
}
]
}
]
}
In this example, we have two correlation rules defined: The first one is on the "patientId" CloudEvent context attribute, meaning again that all consumed events from this source and type must have the same "patientId" to be considered. The second rule says that these events must all have a context attribute named "department" with the value of "UrgentCare".
This allows developers to write orchestration workflows that are specifically targeted to patients that are in the hospital urgent care unit, for example.
The dataOnly property deals with what event data is accessible by the consuming workflow states. If its value is true (the default value), only the event payload is accessible to consuming workflow states. If false, both the event payload and the context attributes should be accessible.
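A short sketch of an event definition that exposes context attributes to consuming states; the name, source, and type values are assumptions modeled on the hospital examples above:
name: MonitorReadingEvent
source: hospitalMonitorSystem
type: com.hospital.patient.heartRateMonitor
kind: consumed
dataOnly: false      # consuming states can also read the event's context attributes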
Auth definitions can be used to define authentication information that should be applied to resources defined in the operation property of function definitions. It is not used as authentication information for the function invocation, but just to access the resource containing the function invocation information.
Parameter | Description | Type | Required |
---|---|---|---|
name | Unique auth definition name | string | yes |
scheme | Auth scheme, can be "basic", "bearer", or "oauth2". Default is "basic" | enum | no |
properties | Auth scheme properties. Can be one of "Basic properties definition", "Bearer properties definition", or "OAuth2 properties definition" | object | yes |
The name property defines the unique auth definition name.
The scheme property defines the auth scheme to be used. Can be "basic", "bearer", or "oauth2". The default is "basic".
The properties property defines the auth scheme information. It can be one of "Basic properties definition", "Bearer properties definition", or "OAuth2 properties definition".
See here for more information about Basic Authentication scheme.
The Basic properties definition can have two types, either string or object. If string type, it defines a workflow expression referencing a workflow secret that contains all the needed Basic auth information. If object type, it has the following properties:
Parameter | Description | Type | Required |
---|---|---|---|
username | String or a workflow expression. Contains the user name | string | yes |
password | String or a workflow expression. Contains the user password | string | yes |
metadata | Metadata information | object | no |
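A sketch of the string form, assuming the $SECRETS expression access described in the Workflow Secrets section and an assumed secret name:
name: myBasicAuthDef
scheme: basic
properties: "${ $SECRETS.basicAuthInfo }"   # secret holding the Basic auth info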
See here for more information about Bearer Authentication scheme.
Parameter | Description | Type | Required |
---|---|---|---|
token | String or a workflow expression. Contains the token information | string | yes |
metadata | Metadata information | object | no |
See here for more information about OAuth2 Authentication scheme.
Parameter | Description | Type | Required |
---|---|---|---|
authority | String or a workflow expression. Contains the authority information | string | no |
grantType | Defines the grant type. Can be "password", "clientCredentials", or "tokenExchange" | enum | yes |
clientId | String or a workflow expression. Contains the client identifier | string | yes |
clientSecret | Workflow secret or a workflow expression. Contains the client secret | string | no |
scopes | Array containing strings or workflow expressions. Contains the OAuth2 scopes | array | no |
username | String or a workflow expression. Contains the user name. Used only if grantType is "password" | string | no |
password | String or a workflow expression. Contains the user password. Used only if grantType is "password" | string | no |
audiences | Array containing strings or workflow expressions. Contains the OAuth2 audiences | array | no |
subjectToken | String or a workflow expression. Contains the subject token | string | no |
requestedSubject | String or a workflow expression. Contains the requested subject | string | no |
requestedIssuer | String or a workflow expression. Contains the requested issuer | string | no |
metadata | Metadata information | object | no |
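And a sketch of an OAuth2 auth definition using the clientCredentials grant type; the authority, client id, secret, and scope values are assumptions:
name: myOAuth2Def
scheme: oauth2
properties:
  authority: https://auth.example.com   # assumed token authority
  grantType: clientCredentials
  clientId: my-client-id
  clientSecret: notARealSecret          # would normally reference a workflow secret
  scopes:
    - read:orders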
Parameter | Description | Type | Required |
---|---|---|---|
contextAttributeName | CloudEvent Extension Context Attribute name | string | yes |
contextAttributeValue | CloudEvent Extension Context Attribute value | string | no |
Click to view example definition
JSON | YAML |
---|---|
{
"correlation": [
{
"contextAttributeName": "patientId"
},
{
"contextAttributeName": "department",
"contextAttributeValue" : "UrgentCare"
}
]
} |
correlation:
- contextAttributeName: patientId
- contextAttributeName: department
contextAttributeValue: UrgentCare |
Used to define event correlation rules. Only usable for consumed event definitions.
The contextAttributeName property defines the name of the CloudEvent extension context attribute.
The contextAttributeValue property defines the value that the defined CloudEvent extension context attribute must match.
Parameter | Description | Type | Required |
---|---|---|---|
eventRefs | References one or more unique event names in the defined workflow events | array | yes |
actionMode | Specifies how actions are to be performed (in sequence or in parallel). Default is "sequential" | string | no |
actions | Actions to be performed | array | no |
eventDataFilter | Event data filter definition | object | no |
Click to view example definition
JSON | YAML |
---|---|
{
"eventRefs": ["HighBodyTemperature"],
"actions": [{
"functionRef": {
"refName": "sendTylenolOrder",
"arguments": {
"patientid": "${ .patientId }"
}
}
}]
} |
eventRefs:
- HighBodyTemperature
actions:
- functionRef:
refName: sendTylenolOrder
arguments:
patientid: "${ .patientId }" |
OnEvents definition allows you to define which actions are to be performed for the one or more event definitions referenced in the eventRefs property.
The actionMode property defines whether the defined actions need to be performed sequentially or in parallel.
The actions property defines a list of actions to be performed.
When specifying the onEvents definition it is important to consider the Event state's exclusive property, because it determines how 'onEvents' is interpreted. Let's look at the following JSON definition of 'onEvents' to show this:
{
"onEvents": [{
"eventRefs": ["HighBodyTemperature", "HighBloodPressure"],
"actions": [{
"functionRef": {
"refName": "SendTylenolOrder",
"arguments": {
"patient": "${ .patientId }"
}
}
},
{
"functionRef": {
"refName": "CallNurse",
"arguments": {
"patient": "${ .patientId }"
}
}
}
]
}]
}
Depending on the value of the Event state's exclusive property, this definition can mean two different things:
- If exclusive is set to "true", the consumption of either the HighBodyTemperature or HighBloodPressure events will trigger action execution.
- If exclusive is set to "false", the consumption of both the HighBodyTemperature and HighBloodPressure events will trigger action execution.
Parameter | Description | Type | Required |
---|---|---|---|
name | Unique action name | string | no |
functionRef | References a reusable function definition | object | yes if eventRef & subFlowRef are not defined |
eventRef | References trigger and result reusable event definitions | object | yes if functionRef & subFlowRef are not defined |
subFlowRef | References a workflow to be invoked | object or string | yes if eventRef & functionRef are not defined |
actionDataFilter | Action data filter definition | object | no |
Click to view example definition
JSON | YAML |
---|---|
{
"name": "Finalize Application Action",
"functionRef": {
"refName": "finalizeApplicationFunction",
"arguments": {
"applicantid": "${ .applicantId }"
}
}
} |
name: Finalize Application Action
functionRef:
refName: finalizeApplicationFunction
arguments:
applicantid: "${ .applicantId }" |
Actions specify invocations of services or other workflows during workflow execution. Service invocation can be done in two different ways:
- Reference a function definition by its unique name using the functionRef property.
- Reference produced and consumed event definitions via the eventRef property.
In the event-based scenario, a service or a set of services we want to invoke are not exposed via a specific resource URI, for example, but can only be invoked via events. The eventRef defines the referenced produced event via its triggerEventRef property and a consumed event via its resultEventRef property.
The timeout property defines the amount of time to wait for function execution to complete, or for the consumed event referenced by resultEventRef to become available. It is described in ISO 8601 format, so for example "PT2M" would mean the maximum time for the function to complete its execution is two minutes. Possible invocation timeouts should be handled via the state's onErrors definition.
Often you want to group your workflows into small logical units that solve a particular business problem and can be reused in multiple other workflow definitions.
Reusable workflows are referenced by their id property via the SubFlow action's workflowId parameter.
For the simple case, subFlowRef can be a string containing the id of the sub-workflow to invoke. If you want to specify other parameters, the subFlowRef object type should be used instead.
Each referenced workflow receives the SubFlow action's data as workflow data input. Referenced sub-workflows must declare their own function and event definitions.
FunctionRef definition can have two types, either string or object. If string, it defines the name of the referenced function. This can be used as a short-cut definition when you don't need to define any parameters, for example:
"functionRef": "myFunction"
If you need to define parameters in your functionRef definition, you can define it with its object type, which has the following properties:
Parameter | Description | Type | Required |
---|---|---|---|
refName | Name of the referenced function | string | yes |
arguments | Arguments (inputs) to be passed to the referenced function | object | yes if function type is graphql , otherwise no |
selectionSet | Used if function type is graphql. String containing a valid GraphQL selection set | string | yes if function type is graphql, otherwise no |
Click to view example definition
JSON | YAML |
---|---|
{
"refName": "finalizeApplicationFunction",
"arguments": {
"applicantid": "${ .applicantId }"
}
} |
refName: finalizeApplicationFunction
arguments:
applicantid: "${ .applicantId }" |
The refName property is the name of the referenced function.
The arguments property defines the arguments that are to be passed to the referenced function. Here is an example of using the arguments property:
{
"refName": "checkFundsAvailabe",
"arguments": {
"account": {
"id": "${ .accountId }"
},
"forAmount": "${ .payment.amount }",
"insufficientMessage": "The requested amount is not available."
}
}
Parameter | Description | Type | Required |
---|---|---|---|
triggerEventRef | Reference to the unique name of a produced event definition | string | yes |
resultEventRef | Reference to the unique name of a consumed event definition | string | yes |
data | If string type, an expression which selects parts of the states data output to become the data (payload) of the event referenced by triggerEventRef. If object type, a custom object to become the data (payload) of the event referenced by triggerEventRef. | string or object | no |
contextAttributes | Add additional event extension context attributes to the trigger/produced event | object | no |
Click to view example definition
JSON | YAML |
---|---|
{
"eventRef": {
"triggerEventRef": "MakeVetAppointment",
"data": "${ .patientInfo }",
"resultEventRef": "VetAppointmentInfo"
}
} |
eventRef:
triggerEventRef: MakeVetAppointment
data: "${ .patientInfo }"
resultEventRef: VetAppointmentInfo |
References produced and consumed event definitions via the triggerEventRef and resultEventRef properties, respectively.
The data property can have two types: string or object. If it is of string type, it is an expression that can select parts of state data to be used as the payload of the event referenced by triggerEventRef. If it is of object type, you can define a custom object to be the event payload.
The contextAttributes property allows you to add one or more extension context attributes to the trigger/produced event.
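A sketch of an eventRef that adds an extension context attribute to the produced event, building on the vet-appointment example above; the attribute name and expression are assumptions:
eventRef:
  triggerEventRef: MakeVetAppointment
  data: "${ .patientInfo }"
  contextAttributes:
    patientId: "${ .patientInfo.id }"   # added to the produced CloudEvent
  resultEventRef: VetAppointmentInfo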
SubFlowRef definition can have two types, namely string or object.
If string type, it defines the unique id of the sub-workflow to be invoked. This short-hand definition can be used if sub-workflow lookup is done only by its id property, not its version property, and if the default value of waitForCompletion is assumed.
"subFlowRef": "mySubFlowId"
If you need to define the waitForCompletion or version properties, you can use its object type:
Parameter | Description | Type | Required |
---|---|---|---|
workflowId | Sub-workflow unique id | string | yes |
waitForCompletion | If workflow execution must wait for sub-workflow to finish before continuing (default is true) | boolean | no |
version | Sub-workflow version | string | no |
Click to view example definition
JSON | YAML |
---|---|
{
"workflowId": "handleApprovedVisaWorkflowID",
"version": "2.0"
} |
workflowId: handleApprovedVisaWorkflowID
version: '2.0' |
The workflowId property defines the unique ID of the sub-workflow to be invoked.
The version property defines the unique version of the sub-workflow to be invoked. If this property is defined, runtimes should match both the id and the version properties defined in the sub-workflow definition.
The waitForCompletion property defines if the SubFlow action should wait until the referenced reusable workflow has completed its execution. If it is set to "true" (the default value), SubFlow action execution must wait until the referenced workflow has completed its execution. In this case the workflow data output of the referenced workflow will be used as the result data of the action. If it is set to "false", the parent workflow can continue its execution as soon as the referenced sub-workflow has been invoked (fire-and-forget). In this case, the referenced (child) workflow data output will be ignored and the result data of the action will be an empty JSON object ({}).
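For example, a fire-and-forget sketch (the workflow id is an assumption):
subFlowRef:
  workflowId: auditLogWorkflow   # invoked child workflow
  waitForCompletion: false       # parent continues immediately; child output ignored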
Parameter | Description | Type | Required |
---|---|---|---|
error | Domain-specific error name, or '*' to indicate all possible errors | string | yes |
code | Error code. Can be used in addition to the name to help runtimes resolve to technical errors/exceptions. Should not be defined if error is set to '*' | string | no |
retryRef | Defines the unique retry strategy definition to be used | string | no |
transition or end | Transition to next state to handle the error, or end workflow execution if this error is encountered | object | yes |
Click to view example definition
JSON | YAML |
---|---|
{
"error": "Item not in inventory",
"transition": "IssueRefundToCustomer"
} |
error: Item not in inventory
transition: IssueRefundToCustomer |
Error definitions describe errors that can occur during workflow execution and how to handle them.
The error property defines the domain-specific name of the error. Users can also set the name to *, which is a wildcard specifying "all" errors in the case where no other error definitions are defined, or "all other" errors if there are other errors defined within the same state's onErrors definition.
The code property can be used in addition to the error name to help runtimes resolve the defined domain-specific error to the actual technical errors/exceptions that may happen during runtime execution.
The transition property defines the transition to the next workflow state in cases when the defined error happens during runtime execution. If transition is not defined, you can instead define the end property, which will end workflow execution at that point.
The retryRef property is used to define the retry strategy to be used for this particular error. For more information, see the Workflow Error Handling section.
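As a sketch of how these pieces fit into a state's onErrors definition, assuming a retry strategy named TimeoutRetryStrat (like the one defined in the next section) and made-up error and state names:
onErrors:
  - error: Service timeout error     # domain-specific error name
    code: "504"                      # assumed technical error code
    retryRef: TimeoutRetryStrat      # retry strategy for this error
    transition: HandleTimeoutState   # next state when this error occurs
  - error: "*"                       # all other errors
    end: true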
Parameter | Description | Type | Required |
---|---|---|---|
name | Unique retry strategy name | string | yes |
delay | Time delay between retry attempts (ISO 8601 duration format) | string | no |
maxAttempts | Maximum number of retry attempts. Value of 0 means no retries are performed | string or number | no |
maxDelay | Maximum amount of delay between retry attempts (ISO 8601 duration format) | string | no |
increment | Static duration which will be added to the delay between successive retries (ISO 8601 duration format) | string | no |
multiplier | Float value by which the delay is multiplied before each attempt. For example: "1.2" meaning that each successive delay is 20% longer than the previous delay. For example, if delay is 'PT10S', then the delay between the first and second attempts will be 10 seconds, and the delay before the third attempt will be 12 seconds. | float or string | no |
jitter | If float type, maximum amount of random time added or subtracted from the delay between each retry relative to total delay (between 0.0 and 1.0). If string type, absolute maximum amount of random time added or subtracted from the delay between each retry (ISO 8601 duration format) | float or string | no |
Click to view example definition
JSON | YAML |
---|---|
{
"name": "TimeoutRetryStrat",
"delay": "PT2M",
"maxAttempts": 3,
"jitter": "PT0.001S"
} |
name: TimeoutRetryStrat
delay: PT2M
maxAttempts: 3
jitter: PT0.001S |
Defines the states retry policy (strategy). This is an explicit definition and can be reused across multiple defined workflow state errors.
The name property specifies the unique name of the retry definition (strategy). This unique name can be referenced by workflow states' error definitions.
The delay property specifies the initial time delay between retry attempts (ISO 8601 duration format).
The increment property specifies a static duration which will be added to the delay between successive retries.
To explain this better, let's say we have the following retry definition:
{
"name": "Timeout Errors Strategy",
"delay": "PT10S",
"increment": "PT2S",
"maxAttempts": 4
}
which means that we will retry up to 4 times after waiting with increasing delay between attempts; in this example 10, 12, 14, and 16 seconds between retries.
The multiplier property specifies the value by which the interval time is increased for each of the retry attempts.
To explain this better, let's say we have the following retry definition:
{
"name": "Timeout Errors Strategy",
"delay": "PT10S",
"multiplier": 2,
"maxAttempts": 4
}
which means that we will retry up to 4 times after waiting with increasing delay between attempts; in this example 10, 20, 40, and 80 seconds between retries.
The maxAttempts property determines the maximum number of retry attempts allowed and is a positive integer value.
The jitter property is important to prevent certain scenarios where clients are retrying in sync, possibly causing or contributing to a transient failure precisely because they're retrying at the same time. Adding a typically small, bounded random amount of time to the period between retries serves the purpose of attempting to prevent these retries from happening simultaneously, possibly reducing total time to complete requests and overall congestion. How this value is used in the exponential backoff algorithm is left up to implementations.
jitter may be specified as a percentage relative to the total delay. For example, if delay is 2 seconds, increment is 2 seconds, and we're at the third attempt, there will be a delay of 6 seconds. If we set jitter to 0.3, then a random amount of time between 0 and 1.8 (totalDelay * jitter == 6 * 0.3) will be added or subtracted from the delay.
Alternatively, jitter may be defined as an absolute value specified as an ISO 8601 duration. This way, the maximum amount of random time added is fixed and will not increase as new attempts are made.
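A sketch of a retry definition using relative (float) jitter, matching the arithmetic above; the strategy name is an assumption:
name: JitteredRetryStrat
delay: PT2S          # initial delay of 2 seconds
increment: PT2S      # each retry adds 2 seconds to the delay
jitter: 0.3          # up to 30% of the total delay added or subtracted
maxAttempts: 4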
The maxDelay property determines the maximum amount of delay that is desired between retry attempts, and is applied after increment, multiplier, and jitter.
To explain this better, let's say we have the following retry definition:
{
"name": "Timeout Errors Strategy",
"delay": "PT10S",
"maxDelay": "PT100S",
"multiplier": 4,
"jitter": "PT1S",
"maxAttempts": 4
}
which means that we will retry up to 4 times after waiting with increasing delay between attempts; in this example we might observe the following series of delays:
- 11s (min(maxDelay, delay +/- rand(jitter)) => min(100, 10 + 1))
- 43s (min(maxDelay, (11s * multiplier) +/- rand(jitter)) => min(100, (11 * 4) - 1))
- 100s (min(maxDelay, (43s * multiplier) +/- rand(jitter)) => min(100, (43 * 4) + 0))
- 100s (min(maxDelay, (100s * multiplier) +/- rand(jitter)) => min(100, (100 * 4) - 1))
For more information, refer to the Workflow Error Handling section.
Transition definition can have two types, either string or object. If string, it defines the name of the state to transition to. This can be used as a short-cut definition when you don't need to define any other parameters, for example:
"transition": "myNextState"
If you need to define additional parameters in your transition definition, you can define it with its object type, which has the following properties:
Parameter | Description | Type | Required |
---|---|---|---|
nextState | Name of the state to transition to next | string | yes |
compensate | If set to true, triggers workflow compensation before this transition is taken. Default is false | boolean | no |
produceEvents | Array of producedEvent definitions. Events to be produced before the transition takes place | array | no |
Click to view example definition
JSON | YAML |
---|---|
{
"produceEvents": [{
"eventRef": "produceResultEvent",
"data": "${ .result.data }"
}],
"nextState": "EvalResultState"
} |
produceEvents:
- eventRef: produceResultEvent
data: "${ .result.data }"
nextState: EvalResultState |
The nextState property defines the name of the state to transition to next.
The compensate property allows you to trigger compensation before the transition (if set to true).
The produceEvents property allows you to define a list of events to produce before the transition happens.
Transitions allow you to move from one state (control-logic block) to another. For more information see the Transitions section.
Parameter | Description | Type | Required |
---|---|---|---|
name | Data condition name | string | no |
condition | Workflow expression evaluated against state data. Must evaluate to true or false | string | yes |
transition or end | Defines what to do if condition is true. Transition to another state, or end workflow | object | yes |
metadata | Metadata information | object | no |
Click to view example definition
JSON | YAML |
---|---|
{
"name": "Eighteen or older",
"condition": "${ .applicant | .age >= 18 }",
"transition": "StartApplication"
} |
name: Eighteen or older
condition: "${ .applicant | .age >= 18 }"
transition: StartApplication |
Switch state data conditions specify a data-based condition statement, which causes a transition to another workflow state if evaluated to true.
The condition property of the condition defines an expression (e.g., ${ .applicant | .age > 18 }) which selects parts of the state data input. The condition must evaluate to true or false.
If the condition is evaluated to true, you can specify either the transition or end definitions to decide what to do: transition to another workflow state, or end workflow execution.
Parameter | Description | Type | Required |
---|---|---|---|
name | Event condition name | string | no |
eventRef | References a unique event name in the defined workflow events | string | yes |
transition or end | Defines what to do if condition is true. Transition to another state, or end workflow | object | yes |
eventDataFilter | Event data filter definition | object | no |
metadata | Metadata information | object | no |
Click to view example definition
JSON | YAML |
---|---|
{
"name": "Visa approved",
"eventRef": "visaApprovedEvent",
"transition": "HandleApprovedVisa"
} |
name: Visa approved
eventRef: visaApprovedEvent
transition: HandleApprovedVisa |
Switch state event conditions specify events that the switch state must wait for. Each condition can reference one workflow-defined event. Upon arrival of this event, the associated transition is taken.
The eventRef property references a name of one of the defined workflow events.
If the referenced event is received, you can specify either the transition or end definitions to decide what to do: transition to another workflow state, or end workflow execution.
The eventDataFilter property can be used to filter event data when it is received.
Parameter | Description | Type | Required |
---|---|---|---|
name | Branch name | string | yes |
actions | Actions to be executed in this branch | array | yes |
timeouts | Branch specific timeout settings | object | no |
Click to view example definition
JSON | YAML |
---|---|
{
"name": "Branch1",
"actions": [
{
"functionRef": {
"refName": "functionNameOne",
"arguments": {
"order": "${ .someParam }"
}
}
},
{
"functionRef": {
"refName": "functionNameTwo",
"arguments": {
"order": "${ .someParamTwo }"
}
}
}
]
} |
name: Branch1
actions:
- functionRef:
refName: functionNameOne
arguments:
order: "${ .someParam }"
- functionRef:
refName: functionNameTwo
arguments:
order: "${ .someParamTwo }" |
Each branch receives the same copy of the Parallel state's data input.
A branch can define either actions or a workflow id of the workflow that needs to be executed. The workflow id defined cannot be the same as the id of the workflow where the branch is defined.
The timeouts property can be used to set branch-specific timeout settings. Parallel state branches can set the actionExecTimeout and branchExecTimeout timeout properties. For more information on workflow timeouts, reference the Workflow Timeouts section.
Exceptions can occur during the execution of Parallel state branches.
By default, exceptions that are not handled within branches stop branch execution and are propagated to the Parallel state, where they should be handled with its onErrors definition.
If a parallel state's branch defines actions, all exceptions that arise from executing these actions are propagated to the parallel state and can be handled with the parallel state's onErrors definition.
If a parallel state's branch defines a subflow action, the called (child) workflow can choose to handle exceptions on its own. All unhandled exceptions from the called workflow execution, however, are propagated back to the parallel state and can be handled with the parallel state's onErrors definition.
Note that once an error propagated to the parallel state from a branch is handled by the state's onErrors definition (i.e., its associated transition is taken), no further errors from the branches of this parallel state should be considered, as the workflow control-flow logic has already moved to a different state.
For more information, see the Workflow Error Handling section.
Can be either string or object type. If string type, it defines the name of the workflow starting state.
"start": "MyStartingState"
In this case it's assumed that the schedule property is not defined.
If the start definition is of type object, it has the following structure:
Parameter | Description | Type | Required |
---|---|---|---|
stateName | Name of the starting workflow state | string | yes |
schedule | Define the recurring time intervals or cron expressions at which workflow instances should be automatically started. | object | yes |
Click to view example definition
JSON | YAML |
---|---|
{
"stateName": "MyStartingstate",
"schedule": "2020-03-20T09:00:00Z/2020-03-20T15:00:00Z"
} |
stateName: MyStartingstate
schedule: 2020-03-20T09:00:00Z/2020-03-20T15:00:00Z |
Start definition explicitly defines how/when workflow instances should be created and what the workflow starting state is.
The start definition can be either string or object type. If string type, it defines the name of the workflow starting state. If object type, it provides the ability to set the workflow starting state name as well as the schedule property.
The schedule property allows you to define scheduled workflow instance creation. Scheduled starts offer two choices: you can define a recurring time interval or a cron-based schedule at which workflow instances should be created (automatically).
Cron-based scheduled starts allow you to specify periodically started workflow instances based on a cron definition. They can handle absolute time intervals (i.e., not calculated with respect to some particular point in time). One use case for cron-based scheduled starts is a workflow that performs periodical data batch processing. In this case we could use the cron definition
0 0/5 * * * ?
to define that a workflow instance from the workflow definition should be created every 5 minutes, starting at the full hour.
Here are some more examples of cron expressions and their meanings:
* * * * * - Create workflow instance at the top of every minute
0 * * * * - Create workflow instance at the top of every hour
0 */2 * * * - Create workflow instance every 2 hours
0 9 8 * * - Create workflow instance at 9:00:00AM on the eighth day of every month
See here to get more information on defining cron expressions.
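As a sketch, a start definition that applies the batch-processing cron from above (the state name is hypothetical):
{
   "start": {
      "stateName": "ProcessDataBatch",
      "schedule": {
         "cron": "0 0/5 * * * ?"
      }
   }
}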
One thing to discuss when dealing with cron-based scheduled starts is the case when the workflow starting state is an Event. Event states define that workflow instances are triggered by the existence of the defined event(s). Defining cron-based scheduled starts for the runtime implementations would mean that there needs to be an event service that issues the needed events at the defined times to trigger workflow instance creation.
Schedule definition can have two types, either string or object.
If string type, it defines a time interval describing when the workflow instance should be automatically created.
This can be used as a short-cut definition when you don't need to define any other parameters, for example:
{
"schedule": "R/PT2H"
}
If you need to define the cron or the timezone parameters in your schedule definition, you can define it with its object type, which has the following properties:
Parameter | Description | Type | Required |
---|---|---|---|
interval | A recurring time interval expressed in the derivative of ISO 8601 format specified below. Declares that workflow instances should be automatically created at the start of each time interval in the series. | string | yes if cron not defined |
cron | Cron expression defining when workflow instances should be automatically created | string or object | yes if interval not defined |
timezone | Timezone name used to evaluate the interval & cron-expression. If the interval specifies a date-time w/ timezone then proper timezone conversion will be applied. (default: UTC). | string | no |
JSON | YAML |
---|---|
{
"cron": "0 0/15 * * * ?"
} |
cron: 0 0/15 * * * ? |
The interval property uses a derivative of the ISO 8601 recurring time interval format to describe a series of consecutive time intervals for workflow instances to be automatically created at the start of. Unlike full ISO 8601, this derivative format does not allow expression of an explicit number of recurrences or identification of a series by the date and time at the start and end of its first time interval.
There are three ways to express a recurring interval:
- R/<Start>/<Duration>: Defines the start time and a duration, for example: "R/2020-03-20T13:00:00Z/PT2H", meaning workflow instances will be automatically created every 2 hours starting from March 20th 2020 at 1pm UTC.
- R/<Duration>/<End>: Defines a duration and an end, for example: "R/PT2H/2020-05-11T15:30:00Z", meaning that workflow instances will be automatically created every 2 hours until May 11th 2020 at 3:30pm UTC (i.e., the last instance will be created 2 hours prior to that, at 1:30pm UTC).
- R/<Duration>: Defines a duration only, for example: "R/PT2H", meaning workflow instances will be automatically created every 2 hours. The start time of the first interval may be indeterminate, but should be delayed by no more than the specified duration and must repeat on schedule after that (this effectively supplies the start time "out-of-band", as permitted by ISO 8601-1:2019 section 5.6.1 NOTE 1). Each runtime implementation should document how the start time for a duration-only interval is established.
The cron property uses a cron expression to describe a repeating interval upon which a workflow instance should be created automatically. For more information see the cron definition section.
The timezone property is used to define a time zone name against which to evaluate the cron or interval expression. If not specified, it should default to the UTC time zone. See here for a list of timezone names. For ISO 8601 date time values in interval or cron.validUntil, runtimes should treat timezone as the 'local time' (UTC if interval is not defined by the user).
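To illustrate, a sketch of a schedule definition combining a recurring interval with a timezone (the values are arbitrary):
{
   "schedule": {
      "interval": "R/PT2H",
      "timezone": "America/New_York"
   }
}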
Note that when the workflow starting state is an Event, defining cron-based scheduled starts for the runtime implementations would mean that there needs to be an event service that issues the needed events at the defined times to trigger workflow instance creation.
Cron definition can have two types, either string or object.
If string type, it defines the cron expression describing when the workflow instance should be created (automatically).
This can be used as a short-cut definition when you don't need to define any other parameters, for example:
{
"cron": "0 15,30,45 * ? * *"
}
If you need to define the validUntil parameter in your cron definition, you can define it with its object type, which has the following properties:
Parameter | Description | Type | Required |
---|---|---|---|
expression | Cron expression describing when the workflow instance should be created (automatically) | string | yes |
validUntil | Specific date and time (ISO 8601 format) when the cron expression is no longer valid | string | no |
JSON | YAML |
---|---|
{
"expression": "0 15,30,45 * ? * *",
"validUntil": "2021-11-05T08:15:30-05:00"
} |
expression: 0 15,30,45 * ? * *
validUntil: '2021-11-05T08:15:30-05:00' |
The expression property is a cron expression which defines when workflow instances should be created (automatically).
The validUntil property defines a date and time (using ISO 8601 format). When the validUntil time is reached, the cron expression for instance creation of this workflow should no longer be valid.
For example, let's say we have the following cron definition:
{
"expression": "0 15,30,45 * ? * *",
"validUntil": "2021-11-05T08:15:30-05:00"
}
This tells the runtime engine to create an instance of this workflow every hour at minutes 15, 30 and 45, until November 5, 2021, 8:15:30 am, US Eastern Standard Time, as defined by the validUntil property value.
Can be either boolean or object type. If boolean type, it must be set to true, for example:
"end": true
In this case it's assumed that the terminate property has its default value of false, and that the produceEvents and compensate properties are not defined.
If the end definition is of type object, it has the following structure:
Parameter | Description | Type | Required |
---|---|---|---|
terminate | If true, terminates workflow instance execution | boolean | no |
produceEvents | Array of producedEvent definitions. Defines events that should be produced. | array | no |
compensate | If set to true, triggers workflow compensation before workflow execution completes. Default is false | boolean | no |
JSON | YAML |
---|---|
{
"terminate": true,
"produceEvents": [{
"eventRef": "provisioningCompleteEvent",
"data": "${ .provisionedOrders }"
}]
} |
terminate: true
produceEvents:
- eventRef: provisioningCompleteEvent
data: "${ .provisionedOrders }"
|
End definitions are used to explicitly define execution completion of a workflow instance or workflow execution path. A workflow definition must include at least one workflow state. Note that Switch states cannot be declared as workflow end states. Given their conditional evaluation, Switch states must end their execution followed by a transition to another workflow state.
The terminate property, if set to true, completes the workflow instance execution as well as any other active execution paths.
If a terminate end is reached inside a ForEach or Parallel state, the entire workflow instance is terminated.
The produceEvents property allows defining events which should be produced by the workflow instance before workflow stops its execution.
It's important to mention that if the workflow keepActive property is set to true, the only way to complete execution of the workflow instance is if workflow execution reaches a state that defines an end definition with its terminate property set to true, or, if the workflowExecTimeout property is defined, the time defined in its interval is reached.
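A minimal sketch of this pattern, assuming a hypothetical workflow with keepActive set and a terminal state:
{
   "id": "sampleWorkflow",
   "keepActive": true,
   ...
   "states": [
      ...
      {
         "name": "FinalState",
         "type": "operation",
         "actions": [ ... ],
         "end": {
            "terminate": true
         }
      }
   ]
}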
Parameter | Description | Type | Required |
---|---|---|---|
eventRef | Reference to a defined unique event name in the events definition | string | yes |
data | If string type, an expression which selects parts of the states data output to become the data (payload) of the produced event. If object type, a custom object to become the data (payload) of produced event. | string or object | no |
contextAttributes | Add additional event extension context attributes | object | no |
JSON | YAML |
---|---|
{
"eventRef": "provisioningCompleteEvent",
"data": "${ .provisionedOrders }",
"contextAttributes": [{
"buyerId": "${ .buyerId }"
}]
} |
eventRef: provisioningCompleteEvent
data: "${ .provisionedOrders }"
contextAttributes:
- buyerId: "${ .buyerId }" |
Defines the event (CloudEvent format) to be produced when workflow execution completes or during workflow transitions.
The eventRef property must match the name of one of the defined produced events in the events definition.
The data property can have two types, object or string. If of string type, it is an expression that can select parts of state data to be used as the event payload. If of object type, you can define a custom object to be the event payload.
The contextAttributes property allows you to add one or more extension context attributes to the generated event.
Being able to produce events when workflow execution completes or during state transition allows for event-based orchestration communication. For example, completion of an orchestration workflow can notify other orchestration workflows to decide if they need to act upon the produced event, or notify monitoring services of the current state of workflow execution, etc. It can be used to create very dynamic orchestration scenarios.
Serverless workflow states can have one or more incoming and outgoing transitions (from/to other states).
Each state can define a transition definition that is used to determine which state to transition to next.
Implementers can choose to use the state's name property for determining the transition; however, we realize that in most cases this is not an optimal solution, as it can lead to ambiguity. This is why each state also includes an "id" property. Implementers can choose their own id generation strategy to populate the id property for each of the states and use it as the unique state identifier that is to be used as the "nextState" value.
So the options for next state transitions are:
- Use the state name property
- Use the state id property
- Use a combination of name and id properties
Events can be produced during state transitions. The produceEvents property of the transition definition allows you to reference one or more defined produced events in the workflow events definitions.
For each of the produced events, you can select what parts of state data should become the event payload, as shown in the sketch below.
Transitions can trigger compensation via their compensate property. See the Workflow Compensation section for more information.
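A sketch of a transition definition that produces an event while moving to the next state (the state and event names are hypothetical):
{
   "transition": {
      "nextState": "HandleApprovedApplication",
      "produceEvents": [{
         "eventRef": "ApplicationApprovedEvent",
         "data": "${ .applicant }"
      }]
   }
}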
Specifying additional properties, namely properties which are not defined by the specification, is only allowed in the Workflow Definition. Additional properties serve the same purpose as Workflow Metadata. They allow you to enrich the workflow definition with custom information.
Additional properties, just like workflow metadata, should not affect workflow execution. Implementations may choose to use additional properties or ignore them.
It is recommended to use workflow metadata instead of additional properties in the workflow definition.
Let's take a look at an example of additional properties:
{
"id": "myworkflow",
"version": "1.0",
"specVersion": "0.7",
"name": "My Test Workflow",
"start": "My First State",
"loglevel": "Info",
"environment": "Production",
"category": "Sales",
"states": [ ... ]
}
In this example, we specify the loglevel
, environment
, and category
additional properties.
Note the same can be also specified using workflow metadata, which is the preferred approach:
{
"id": "myworkflow",
"version": "1.0",
"specVersion": "0.7",
"name": "Py Test Workflow",
"start": "My First State",
"metadata": {
"loglevel": "Info",
"environment": "Production",
"category": "Sales"
},
"states": [ ... ]
}
Serverless Workflow language allows you to define explicit error handling, meaning you can define what should happen in case of errors inside your workflow model rather than in some generic error handling entity. This allows error handling to become part of your orchestration activities and, as such, part of your business problem solutions.
Each workflow state can define error handling, which is related only to errors that may arise during its execution. Error handling defined in one state cannot be used to handle errors that happened during execution of another state during workflow execution.
Errors that may arise during workflow execution that are not explicitly handled within the workflow definition should be reported by runtime implementations and halt workflow execution.
Within workflow definitions, defined errors are domain specific, meaning they are defined within the actual business domain, rather than by their technical (programming-language-specific) description.
For example, we can define errors such as "Order not found" or "Item not in inventory", rather than having to use terms such as "java.lang.IllegalAccessError" or "response.status == 404", which might make little to no sense in our specific problem domain and may not be portable across various runtime implementations.
In addition to the domain-specific error name, users have the option to also add an optional error code to help runtime implementations with mapping defined errors to concrete underlying technical ones.
Runtime implementations must be able to map the domain-specific error name (and the optional error code) to concrete technical errors that arise during workflow execution.
Errors can be defined via the state's onErrors property. It is an array of error definitions.
Each error definition should have a unique error property. There can be only one error definition which has the error property set to the wildcard character *.
The order of error definitions within onErrors does not matter.
If there is an error definition with the error property set to the wildcard character *, it can mean either "all errors", if it is the only error definition defined, or "all other errors", in the case where other error definitions are defined.
Note that if the error property is set to *, the error definition code property should not be defined. Runtime implementations should warn users in this case.
Let's take a look at an example of each of these two cases:
JSON | YAML |
---|---|
{
"onErrors": [
{
"error": "Item not in inventory",
"transition": "ReimburseCustomer"
},
{
"error": "*",
"transition": "handleAnyOtherError"
}
]
} |
onErrors:
- error: Item not in inventory
transition: ReimburseCustomer
- error: "*"
transition: handleAnyOtherError |
In this example the "Item not in inventory" error is being handled by the first error definition. The second error definition handles "all other" errors that may happen during this state's execution.
On the other hand the following example shows how to handle "all" errors with the same error definition:
JSON | YAML |
---|---|
{
"onErrors": [
{
"error": "*",
"transition": "handleAllErrors"
}
]
} |
onErrors:
- error: "*"
transition: handleAllErrors |
Retries are related to errors. When certain errors are encountered, we might want to retry the state's execution.
We can define retries within the workflow state's error definitions.
This is done by defining the retry strategy as a workflow top-level parameter using its retries array, and then adding a retryRef parameter to the error definition which references these retry strategies for a specific error.
If a defined retry for the defined error is successful, the defined workflow control flow logic of the state should be performed, meaning the workflow can either transition according to the state's transition definition, or end workflow execution in case the state defines an end definition.
If the defined retry for the defined error is not successful, workflow control flow logic should follow the transition definition of the error definition where the retry is defined, to transition to the next state that can handle this problem.
Let's take a look at an example of a top-level retries definition of a workflow:
JSON | YAML |
---|---|
{
"retries": [
{
"name": "Service Call Timeout Retry Strategy",
"delay": "PT1M",
"maxAttempts": 4
}
]
} |
retries:
- name: Service Call Timeout Retry Strategy
delay: PT1M
maxAttempts: 4 |
This defines a reusable retry strategy. It can be referenced by different workflow states if needed to define the retries that need to be performed for some specific errors that might be encountered during workflow execution.
In this particular case we define a retry strategy for "Service Call Timeouts" which says that the state's control-flow logic should be retried up to 4 times, with a 1 minute delay between each retry attempt.
Different states now can use the defined retry strategy. For example:
JSON | YAML |
---|---|
{
"onErrors": [
{
"error": "Inventory service timeout",
"retryRef": "Service Call Timeout Retry Strategy",
"transition": "ReimburseCustomer"
}
]
} |
onErrors:
- error: Inventory service timeout
retryRef: Service Call Timeout Retry Strategy
transition: ReimburseCustomer |
In this example we say that if the "Inventory service timeout" error is encountered, we want to use our defined "Service Call Timeout Retry Strategy", which holds the needed retry information. If the error definition does not include a retryRef property, it means that we do not want to perform retries for the defined error.
When referencing a retry strategy in your state's error definitions, if the maximum number of unsuccessful retries is reached, the workflow should transition to the next state as defined by the error definition's transition property. If one of the performed retries is successful, the state's transition property should be taken, and the one defined in the error definition should be ignored.
In order to issue a retry, the current state execution should be halted first, meaning that in the cases of parallel states, all currently running branches should halt their executions, before a retry can be performed.
It is important to consider one particular case, namely retries defined within event states that are also workflow starting states (have the start property defined). Starting event states trigger an instance of the workflow when the particular event or events they define are consumed. In case of an error which happens during execution of the state, runtimes should not create a new instance of this workflow, nor wait for the defined event or events again. In these cases only the state's actions should be retried, and the received event information used for all of the issued retries.
Workflow timeouts define the maximum times for:
- Workflow execution
- State execution
- Action execution
- Branch execution
- Event consumption time
The specification allows for timeouts to be defined on the top-level workflow definition, as well as in each of the workflow state definitions. Note that the timeout settings defined in states and state branches overwrite the top-level workflow definition for state, action, and branch execution. If they are not defined, then the top-level timeout settings should take effect.
To give an example, let's say that in our workflow definition we define the timeout for state execution:
{
"id": "testWorkflow",
...
"timeouts": {
...
"stateExecTimeout": "PT2S"
}
...
}
This top-level workflow timeout setting defines that the maximum execution time of all defined workflow states is two seconds each.
Now let's say that we have workflow states "A" and "B". State "A" does not define a timeout definition, but state "B" does:
{
"name": "B",
"type": "operation",
...
"timeouts": {
...
"stateExecTimeout": "PT10S"
}
...
}
Since state "A" does not overwrite the top-level stateExecTimeout
, its execution timeout should be inherited from
the top-level timeout definition.
On the other hand, state "B" does define it's own stateExecTimeout
, in which case it would overwrite the default
setting, meaning that it would its execution time has a max limit of ten seconds.
Defining timeouts is not mandatory, meaning that if not defined, all the timeout settings should be assumed to be "unlimited".
Note that the defined workflow execution timeout has precedence over all other defined timeouts. Just to give an extreme example, let's say we set the workflow execution timeout to ten seconds and the state execution timeout to twenty seconds. In this case, if the workflow execution timeout is reached, it should follow the rules of workflow execution timeout and end workflow execution, no matter what the state execution time has been set to.
Let's take a look at all possible timeout definitions:
Workflow timeouts are defined with the top-level timeouts property. It can have two types, string and object.
If string type, it defines a URI that points to a JSON or YAML file containing the workflow timeout definitions.
If object type, it is used to define the timeout definitions in-line and has the following properties:
Parameter | Description | Type | Required |
---|---|---|---|
workflowExecTimeout | Workflow execution timeout (ISO 8601 duration format) | string or object | no |
stateExecTimeout | Default workflow state execution timeout (ISO 8601 duration format) | string or object | no |
actionExecTimeout | Default single actions definition execution timeout (ISO 8601 duration format) | string | no |
branchExecTimeout | Default single branch execution timeout (ISO 8601 duration format) | string | no |
eventTimeout | Default timeout for consuming defined events (ISO 8601 duration format) | string | no |
The eventTimeout property defines the maximum amount of time to wait to consume defined events. If not specified, it should default to "unlimited".
The branchExecTimeout property defines the maximum execution time for a single branch. If not specified, it should default to "unlimited".
The actionExecTimeout property defines the maximum execution time for a single actions definition. If not specified, it should default to "unlimited". Note that an actions definition can include multiple actions.
The stateExecTimeout property defines the maximum execution time for a single workflow state. If not specified, it should default to "unlimited".
The workflowExecTimeout property defines the workflow execution timeout.
It is defined using the ISO 8601 duration format. If not defined, the workflow execution should be given an "unlimited" amount of time to complete.
workflowExecTimeout can have two possible types, either string or object.
If string type, it defines the maximum workflow execution time.
If object type, it has the following format:
Parameter | Description | Type | Required |
---|---|---|---|
duration | Timeout duration (ISO 8601 duration format) | string | yes |
interrupt | If false, the workflow instance is allowed to finish the current execution. If true, the current workflow execution is stopped immediately. Default is false | boolean | no |
runBefore | Name of a workflow state to be executed before workflow instance is terminated | string | no |
JSON | YAML |
---|---|
{
"duration": "PT2M",
"runBefore": "createandsendreport"
} |
duration: PT2M
runBefore: createandsendreport |
The duration property defines the time duration of the execution timeout. Once a workflow instance is created and the defined amount of time is reached, the workflow instance should be terminated.
The interrupt property defines if the currently running instance should be allowed to finish its current execution flow before it needs to be terminated. If set to true, the current instance execution should stop immediately.
The runBefore property defines the name of a workflow state to be executed before the workflow instance is terminated.
States referenced by runBefore (as well as any other states that they transition to) must obey the following rules:
- They should not have any incoming transitions (should not be part of the main workflow control-flow logic)
- They cannot be states marked for compensation (have their usedForCompensation property set to true)
- If it is a single state, it must define an end definition; if it transitions to other states, at least one of them must define it
- They can transition only to states that are also not part of the main control flow logic (and are not marked for compensation)
Runtime implementations should raise compile time / parsing exceptions if any of the rules mentioned above are not obeyed in the workflow definition.
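As a sketch, a workflowExecTimeout that runs a report state before termination; the "CreateReport" state name is hypothetical, and the state obeys the rules above:
{
   "timeouts": {
      "workflowExecTimeout": {
         "duration": "PT1H",
         "runBefore": "CreateReport"
      }
   },
   "states": [
      ...,
      {
         "name": "CreateReport",
         "type": "operation",
         "actions": [ ... ],
         "end": true
      }
   ]
}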
All workflow states can define the timeouts property and can define different timeout settings depending on their state type.
Please reference each workflow state definition for more information on which timeout settings are available for each state type.
Workflow state timeouts cannot define the workflowExecTimeout property.
All workflow states can define the stateExecTimeout property. This property can have two types, namely string and object.
If defined as string type, it defines the total state execution timeout, including any retries as defined in the state's retry policy.
If defined as object type, it has the following properties:
Parameter | Description | Type | Required |
---|---|---|---|
single | Single state execution timeout, not including retries (ISO 8601 duration format) | string | no |
total | Total state execution timeout, including retries (ISO 8601 duration format) | string | yes |
The single property defines a single state execution timeout. This property does not take retries into account.
Each time the state is executed, whether it executes as part of standard control flow logic or as part of a retry, its execution timeout is the value of the single property.
To show an example, let's say that we set the single property to "PT10S", meaning 10 seconds. A workflow state "X" which defines this timeout has a max execution timeout of 10 seconds when first executed. If the state execution is then retried, each individual retry again has a max execution timeout of 10 seconds.
The total property, on the other hand, defines a state execution timeout that takes retries into account.
This means that when this state is executed, its execution timeout is the value of the total property no matter how many retries have to be performed. Whether a state execution includes zero or one hundred retries, the total execution timeout is set by this property.
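A sketch of the object form, combining both properties (the durations are arbitrary):
"timeouts": {
   "stateExecTimeout": {
      "single": "PT10S",
      "total": "PT2M"
   }
}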
Parallel states can define the branchExecTimeout property. If defined on the state level, it applies to each branch of the Parallel state. Note that each parallel state branch can overwrite this setting to define its own branch execution timeout.
If a branch does not define this timeout property, it should be inherited from its state definition's branch timeout setting.
If its state does not define it either, it should be inherited from the top-level workflow branch timeout settings.
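For example, a sketch of a Parallel state where one branch overrides the state-level setting (the names and durations are hypothetical):
{
   "name": "ParallelState",
   "type": "parallel",
   "timeouts": {
      "branchExecTimeout": "PT5M"
   },
   "branches": [
      {
         "name": "ShortBranch",
         "timeouts": {
            "branchExecTimeout": "PT1M"
         },
         "actions": [ ... ]
      },
      {
         "name": "DefaultBranch",
         "actions": [ ... ]
      }
   ]
}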
The Event state timeouts property can be used to specify state-specific timeout settings. For the event state it can contain the eventTimeout property, which is defined using the ISO 8601 duration format.
You can specify for example "PT15M" to represent 15 minutes or "P2DT3H4M" to represent 2 days, 3 hours and 4 minutes.
eventTimeout values should always be represented as durations and not as specific time intervals.
The eventTimeout property needs to be described in detail for Event states, as its behavior depends on whether or not the Event state is a workflow starting state.
If the Event state is a workflow starting state, incoming events may trigger workflow instances. In this case, if the exclusive property is set to true, the eventTimeout property should be ignored.
If the exclusive property is set to false, the defined eventTimeout represents the time between arrival of the specified events. To give an example, consider the following:
{
"states": [
{
"name": "ExampleEventState",
"type": "event",
"exclusive": false,
"timeouts": {
"eventTimeout": "PT2M"
},
"onEvents": [
{
"eventRefs": [
"ExampleEvent1",
"ExampleEvent2"
],
"actions": [
...
]
}
],
"end": {
"terminate": true
}
}
]
}
The first eventTimeout would start once any of the referenced events is consumed. If the second event does not occur within the defined eventTimeout, no workflow instance should be created.
If the event state is not a workflow starting state, the eventTimeout property is relative to the time when the state becomes active. If the defined event conditions (regardless of the value of the exclusive property) are not satisfied within the defined timeout period, the event state should transition to the next state or, in case it is an end state, end the workflow instance without performing any actions.
Compensation deals with undoing or reversing the work of one or more states which have already successfully completed. For example, let's say that we have charged a customer $100 for an item purchase. In case the customer later on decides to cancel this purchase, we need to undo it. One way of doing that is to credit the customer $100.
It's important to understand that compensation with workflows is not the same as, for example, rolling back a transaction (a strict undo). Compensating a workflow state which has successfully completed might involve multiple logical steps and thus is part of the overall business logic that must be defined within the workflow itself. To explain this, let's use our previous example and say that when our customer made the item purchase, our workflow sent her/him a confirmation email. In that case, to compensate this purchase, we cannot just "undo" the confirmation email sent. Instead, we want to send a second email to the customer which includes purchase cancellation information.
Compensation in Serverless Workflow must be explicitly defined by the workflow control flow logic. It cannot be dynamically triggered by initial workflow data, event payloads, results of service invocations, or errors.
Each workflow state can define how it should be compensated via its compensatedBy property.
This property references another workflow state (by its unique name) which is responsible for the actual compensation.
States referenced by compensatedBy (as well as any other states that they transition to) must obey the following rules:
- They should not have any incoming transitions (should not be part of the main workflow control-flow logic)
- They cannot be an event state
- They cannot define an end definition. If they do, it should be ignored
- They must define the usedForCompensation property and set it to true
- They can transition only to states which also have their usedForCompensation property set to true
- They cannot themselves define a compensatedBy property (compensation is not recursive)
Runtime implementations should raise compile time / parsing exceptions if any of the rules mentioned above are not obeyed in the workflow definition.
Let's take a look at an example workflow state which defines its compensatedBy
property, and the compensation
state it references:
JSON | YAML |
---|---|
{
"states": [
{
"name": "NewItemPurchase",
"type": "event",
"onEvents": [
{
"eventRefs": [
"NewPurchase"
],
"actions": [
{
"functionRef": {
"refName": "DebitCustomerFunction",
"arguments": {
"customerid": "${ .purchase.customerid }",
"amount": "${ .purchase.amount }"
}
}
},
{
"functionRef": {
"refName": "SendPurchaseConfirmationEmailFunction",
"arguments": {
"customerid": "${ .purchase.customerid }"
}
}
}
]
}
],
"compensatedBy": "CancelPurchase",
"transition": "SomeNextWorkflowState"
},
{
"name": "CancelPurchase",
"type": "operation",
"usedForCompensation": true,
"actions": [
{
"functionRef": {
"refName": "CreditCustomerFunction",
"arguments": {
"customerid": "${ .purchase.customerid }",
"amount": "${ .purchase.amount }"
}
}
},
{
"functionRef": {
"refName": "SendPurchaseCancellationEmailFunction",
"arguments": {
"customerid": "${ .purchase.customerid }"
}
}
}
]
}
]
} |
states:
- name: NewItemPurchase
type: event
onEvents:
- eventRefs:
- NewPurchase
actions:
- functionRef:
refName: DebitCustomerFunction
arguments:
customerid: "${ .purchase.customerid }"
amount: "${ .purchase.amount }"
- functionRef:
refName: SendPurchaseConfirmationEmailFunction
arguments:
customerid: "${ .purchase.customerid }"
compensatedBy: CancelPurchase
transition: SomeNextWorkflowState
- name: CancelPurchase
type: operation
usedForCompensation: true
actions:
- functionRef:
refName: CreditCustomerFunction
arguments:
customerid: "${ .purchase.customerid }"
amount: "${ .purchase.amount }"
- functionRef:
refName: SendPurchaseCancellationEmailFunction
arguments:
customerid: "${ .purchase.customerid }" |
In this example our "NewItemPurchase" event state waits for a "NewPurchase" event and then debits the customer and sends them a purchase confirmation email. It defines that it is compensated by the "CancelPurchase" operation state, which performs two actions, namely crediting the purchase amount back to the customer and sending them a purchase cancellation email.
As previously mentioned, compensation must be explicitly triggered by the workflows control-flow logic. This can be done via transition and end definitions.
Let's take a look at each:
- Compensation triggered on transition:
JSON | YAML |
---|---|
{
"transition": {
"compensate": true,
"nextState": "NextWorkflowState"
}
} |
transition:
compensate: true
nextState: NextWorkflowState |
Transitions can trigger compensation by specifying the compensate property and setting it to true.
This means that before the transition is executed (the workflow continues its execution to the "NextWorkflowState" in this example), workflow compensation must be performed.
- Compensation triggered by end definition:
JSON | YAML |
---|---|
{
"end": {
"compensate": true
}
} |
end:
compensate: true |
End definitions can trigger compensation by specifying the compensate property and setting it to true.
This means that before the workflow finishes its execution, workflow compensation must be performed. Note that in case the end definition has its produceEvents property set, compensation must be performed before producing the specified events and ending workflow execution.
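A sketch combining both properties, where compensation runs before the event is produced (the event name is hypothetical):
{
   "end": {
      "compensate": true,
      "produceEvents": [{
         "eventRef": "OrderCancelledEvent"
      }]
   }
}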
Now that we have seen how to define and trigger compensation, we need to go into details on how compensation should be executed.
Compensation is performed on all already successfully completed states (that define compensatedBy
) in reverse order.
Compensation is always done in sequential order, and should not be executed in parallel.
Let's take a look at the following workflow image:
In this example let's say our workflow execution is at the "End" state, which defines the compensate property as true, as shown in the previous section. States with a red border, namely "A", "B", "D" and "E", are states which have so far been executed successfully. State "C" has not been executed during workflow execution in our example.
When workflow execution encounters our "End" state, compensation has to be performed. This is done in reverse order:
- State "E" is not compensated as it does not define a compensatedBy state
- State "D" is compensated by executing compensation "D1"
- State "B" is compensated by executing "B1" and then "B2"
- State "C" is not compensated as it was never active during workflow execution
- State "A" is not compensated as it does not define a compensatedBy state
So if we look just at the workflow execution flow, the same workflow could be seen as:
In our example, when compensation triggers, the current workflow data is passed as input to the "D1" state, the first compensation state for our example. The state's data output is then passed as state data input to "B1", and so on.
In some cases when compensation is triggered, some states, such as Parallel and ForEach states, can still be "active", meaning they still might have some async executions that are being performed.
If compensation needs to be performed on such still-active states, the state execution must first be cancelled. After it is cancelled, compensation should be performed.
States that are marked as usedForCompensation can define error handling via their onErrors property just like any other workflow states. In case of unrecoverable errors during their execution (errors not explicitly handled), workflow execution should be stopped, which is the same behavior as when not using compensation.
In any application, regardless of size or type, one thing is for sure: changes happen. Versioning your workflow definitions is an important task to consider. Versions indicate changes or updates of your workflow definitions to the associated execution runtimes.
There are two places in the workflow definition where versioning can be applied:
- Top-level workflow definition version property.
- Actions subflowRef version property.
The Serverless Workflow specification does not mandate a specific versioning strategy for the top-level and actions subflowRef definitions' version properties. It does not mandate the use of a versioning strategy at all. We do recommend, however, that you use a versioning strategy for your workflow definitions, especially in production environments.
To enhance portability when using versioning of your workflow and sub-workflow definitions, we recommend using an existing versioning standard such as SemVer for example.
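For example, a sketch applying SemVer at both levels (the ids and versions are hypothetical):
{
   "id": "orderWorkflow",
   "version": "1.2.0",
   ...
   "states": [
      {
         ...
         "actions": [
            {
               "subFlowRef": {
                  "workflowId": "paymentWorkflow",
                  "version": "2.0.1"
               }
            }
         ]
      }
   ]
}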
Workflow constants are used to define static and immutable data which is available to Workflow Expressions.
Constants can be defined via the Workflow top-level "constants" property, for example:
"constants": {
"Translations": {
"Dog": {
"Serbian": "pas",
"Spanish": "perro",
"French": "chien"
}
}
}
Constants can only be accessed inside Workflow expressions via the $CONST namespace. Runtimes must make constants available to expressions under that namespace.
Here is an example of using constants in Workflow expressions:
{
...,
"constants": {
"AGE": {
"MIN_ADULT": 18
}
},
...
"states":[
{
"name":"CheckApplicant",
"type":"switch",
"dataConditions": [
{
"name": "Applicant is adult",
"condition": "${ .applicant | .age >= $CONST.AGE.MIN_ADULT }",
"transition": "ApproveApplication"
},
{
"name": "Applicant is minor",
"condition": "${ .applicant | .age < $CONST.AGE.MIN_ADULT }",
"transition": "RejectApplication"
}
],
...
},
...
]
}
Note that constants can also be used in expression functions, for example:
{
"functions": [
{
"name": "isAdult",
"operation": ".applicant | .age >= $CONST.AGE.MIN_ADULT",
"type": "expression"
},
{
"name": "isMinor",
"operation": ".applicant | .age < $CONST.AGE.MIN_ADULT",
"type": "expression"
}
]
}
Workflow constant values should only contain static data, meaning that their value should not contain Workflow expressions. Workflow constant data must be immutable. Workflow constants should not have access to Workflow secrets definitions.
Secrets allow you to access sensitive information, such as passwords, OAuth tokens, SSH keys, etc. inside your Workflow Expressions.
You can define the names of secrets via the Workflow top-level "secrets" property, for example:
"secrets": ["MY_PASSWORD", "MY_STORAGE_KEY", "MY_ACCOUNT"]
If secrets are defined in a Workflow definition, runtimes must ensure their values are provided during Workflow execution.
Secrets can be used only in Workflow expressions under the $SECRETS namespace.
This is a reserved namespace that should only be allowed for values defined by the secrets property.
Here is an example on how to use secrets and pass them as arguments to a function invocation:
"secrets": ["AZURE_STORAGE_ACCOUNT", "AZURE_STORAGE_KEY"],
...
{
"refName": "uploadToAzure",
"arguments": {
"account": "${ $SECRETS.AZURE_STORAGE_ACCOUNT }",
"account-key": "${ $SECRETS.AZURE_STORAGE_KEY }",
...
}
}
Note that secrets can also be used in expression functions.
Secrets are immutable, meaning that workflow expressions are not allowed to change their values.
Metadata enables you to enrich the serverless workflow model with information beyond its core definitions. It is intended to be used by clients, such as tools and libraries, as well as users that find this information relevant.
Metadata should not affect workflow execution. Implementations may choose to use metadata information or ignore it. Note, however, that using metadata to control workflow execution can lead to vendor-locked implementations that do not comply with the main goals of this specification, which is to be completely vendor-neutral.
Metadata includes key/value pairs (string types). Both keys and values are completely arbitrary and non-identifying.
Metadata can be added to:
- Workflow Definition
- Function definitions
- Event definitions
- State definitions
- Switch state data and event conditions.
Here is an example of metadata attached to the core workflow definition:
{
"id": "processSalesOrders",
"name": "Process Sales Orders",
"version": "1.0",
"specVersion": "0.7",
"start": "MyStartingState",
"metadata": {
"loglevel": "Info",
"environment": "Production",
"category": "Sales",
"giturl": "github.com/myproject",
"author": "Author Name",
"team": "Team Name",
...
},
"states": [
...
]
}
Some other examples of information that could be recorded in metadata are:
- UI tooling information such as sizing or scaling factors.
- Build, release, or image information such as timestamps, release ids, git branches, PR numbers, etc.
- Logging, monitoring, analytics, or audit repository information.
- Labels used for organizing/indexing purposes, such as "release", "stable", "track", "daily", etc.
The workflow extension mechanism allows you to enhance your model definitions with additional information useful for things like analytics, logging, simulation, debugging, tracing, etc.
Model extensions do not influence control flow logic (workflow execution semantics). They enhance it with extra information that can be consumed by runtime systems or tooling and evaluated with the end goal of overall workflow improvements in terms of time, cost, efficiency, etc.
Serverless Workflow specification provides extensions which can be found here.
Even though users can define their own extensions, it is encouraged to use the ones provided by the specification. We also encourage users to contribute their extensions to the specification so that they can be shared with the rest of the community.
If you have an idea for a new workflow extension, or would like to enhance an existing one, please open a New Extension Request issue in this repository.
You can find different Serverless Workflow use cases here.
You can find many Serverless Workflow examples here.
You can find info on how the Serverless Workflow language compares with other workflow languages here.
You can find a list of other languages, technologies and specifications related to workflows here.
Serverless Workflow specification operates under the Apache License version 2.0.